Our Perspectives - Dr Linda Bendikson

The UACEL Perspectives are opinion pieces that highlight current and topical educational leadership matters. Linda Bendikson writes candidly about her views and offers a fresh perspective on today’s educational leadership challenges. Occasionally, other faculty and team members contribute. This newsletter is published four times a year.

Inquiry - a much-abused word.

I have a confession to make – while I absolutely believe that, to be effective, leaders need to inquire into the link between teaching and learning, I have some real reservations about what is sometimes carried out in the name of Inquiry. What I have seen quite often is teachers variously engaged in so-called ‘Inquiry Projects’, and I am frequently unconvinced that this effort is productive. My reservations relate to the following:

  • Often these ‘projects’ appear to be ‘extra work’ for teachers rather than ‘the main work’: they are frequently focused on peripheral issues, such as how effectively the IT is being used, rather than on the effectiveness of the teaching that is occurring.

  • These are frequently individual activities by classroom teachers that stand apart from the ongoing work of the wider team – the school, syndicate or department.

  • These efforts often appear to be designed with an audience in mind (e.g., to present to a Board or to colleagues) rather than being ‘the way we do our work’. There is nothing essentially wrong with presenting to others in order to discuss and learn, but I sometimes wonder if the presentation becomes the focus rather than the effectiveness of the work.

As you may guess from the comments above, my real concern is that ‘Inquiry’ should be the day-to-day work of teams – not a project or special event, and definitely not an ‘add-on’. It is a constant state of being: it is the work of effective departments and syndicates, carried out in an ongoing way. Inquiry is the business of identifying student learning problems, hypothesising about causes, investigating and testing those causal links, and acting on the findings to improve outcomes. Next, and most importantly, it involves checking that the changes made to teaching, or to the learning environment, are actually making a difference in the short term. It is not about waiting a couple of years for an outcome.

This sounds mind-bogglingly easy, but in practice people find it highly challenging to implement this inquiry cycle, or ‘spiral’ as Helen Timperley and colleagues now prefer to call it (because at every point on the cycle one may have to circle back, test and re-test before moving on). Why is this process so challenging? Firstly, perhaps it is because we are used to deciding on – and jumping to – solutions without testing their efficacy. This is the typical approach to solutions via professional learning and development (PLD): often the choice of PLD precedes any discussion of student needs or any deeper investigation of what the cause of the problem really is. We simply work off our assumptions. For example, the assumption often bandied around is that ‘the students aren’t engaged’, but almost without fail, when leaders we have worked with have tested that assumption with students, they have found it not to be true, or not to be the key problem to solve.

Another reason it is incredibly difficult, and the reason I want to focus on here, is that purposeful inquiry requires us to check whether what we do makes any difference to outcomes for students. I think we often forget to check that crucial factor in the busyness of everyday work, or simply do not know how. This checking for short-term impact is at the heart of inquiry.

I confess that, in one sense, I have become a hater of rubrics, largely because our country seems to be littered with them. It seems that someone is sitting in a back room somewhere producing them in the vain hope that we will all sit on the internet, endlessly self-assessing. And of course, we don’t. But when you know you have a problem, and you want to gather some baseline data in order to assess its gravity, a rubric is the answer – and ones you create yourself can be very powerful. The example below illustrates how some schools have started to measure an outcome they value.

[Figure: an example rubric some schools use to measure a valued outcome]

There are lots of problems with rubrics. The most obvious is that the basis on which I judge where a student’s work or behaviour sits on the rubric may not be the same as the basis on which someone else judges it. This risk can be mitigated with detailed descriptors, but even these are no guarantee that two assessors will agree. Hence the need to moderate judgements by some means, e.g., by justifying your assessment to another person, or by sending samples of work to an expert whose judgement you can check yours against.
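If you want to put a number on how consistent two assessors are, a very simple check will do. The sketch below is purely illustrative – it is written in Python, and the two assessors and all of their scores are invented – but it shows the idea: count how often the assessors awarded the same rubric level, and how often they were within one level of each other.

    # Compare two assessors' judgements of the same pieces of work
    # against a rubric. All scores are invented for illustration.
    assessor_a = [3, 2, 4, 3, 1, 5, 3, 2]   # first assessor's levels
    assessor_b = [3, 3, 4, 2, 1, 5, 3, 2]   # second assessor's levels

    n = len(assessor_a)
    exact = sum(a == b for a, b in zip(assessor_a, assessor_b))
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(assessor_a, assessor_b))

    print(f"Exact agreement:  {exact}/{n} ({exact / n:.0%})")
    print(f"Within one level: {adjacent}/{n} ({adjacent / n:.0%})")

If agreement within one level is high but exact agreement is low, that may be a sign the descriptors need sharpening before the rubric is trusted for baseline data.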

A second problem is that the effective use of rubrics relies on people being driven to measure progress. I do not see a lot of evidence of educationalists sitting around measuring their effectiveness on a rubric related to student outcomes and making a commitment to move to the next level of effectiveness. But this is exactly what is required.

In both New Zealand and Australia I have noticed that school leaders struggle to do this ‘monitoring of outcomes in a timely way’. We talk about the need to do it, but few people seem to know how. Further, few schools appear to take the time to graph results so they can examine the patterns and create worthwhile short-term targets with their teams. Graphing helps analysis enormously – it is sometimes enough to motivate staff because it so blatantly helps us to see the patterns in student achievement, and from that you can readily set the next target. For example, in the graph below the pattern of achievement (on an arbitrary achievement scale) was self-evident, as was a possible target – “Let’s move the 1s and 2s to 3s! Now, what skills do we need to concentrate on teaching to make this happen? Who are these students?” If you do that every five or ten weeks with your team, you are almost bound to make improvements, and the inquiry is constant, ongoing and embedded in your day-to-day work.
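For those who keep the underlying scores in a list or a spreadsheet, producing such a graph takes only a few lines. The sketch below is illustrative only – the scores and the 1-to-5 scale are invented – and uses Python with the matplotlib plotting library to chart the distribution and count the students sitting at levels 1 and 2:

    # Graph a set of rubric scores and count the students below a
    # target level. The scores and the scale are invented.
    from collections import Counter
    import matplotlib.pyplot as plt

    scores = [1, 2, 2, 2, 3, 3, 3, 3, 4, 2, 1, 3, 4, 5, 2, 3]

    counts = Counter(scores)
    levels = sorted(counts)

    plt.bar(levels, [counts[level] for level in levels])
    plt.xlabel("Achievement level (arbitrary scale)")
    plt.ylabel("Number of students")
    plt.title("Baseline distribution of rubric scores")
    plt.show()

    # A possible short-term target: "move the 1s and 2s to 3s".
    below_target = [s for s in scores if s < 3]
    print(f"{len(below_target)} students are currently at level 1 or 2")

Re-running the same few lines every five or ten weeks gives the short-term feedback loop described above.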

[Graph: the pattern of student achievement on an arbitrary scale]

Monitoring outcomes in a timely way is acknowledged as an essential component of the inquiry cycle, of the annual planning cycle, and of the problem-solving cycle. Essentially, I argue, these three concepts are fully aligned – the inquiry cycle is a problem-solving cycle, and the annual planning cycle, carried out competently, enacts that inquiry/problem-solving cycle. All of them require monitoring outcomes – but how do you monitor outcomes as diverse as progress in written language skills and progress in becoming a self-directed learner?

I was heartened to hear our Professor and Dean in the Faculty of Education, Graeme Aitken, say recently: ‘You can measure anything’.  And indeed you can – by using rubrics.  They are an essential component of inquiry because without them we cannot measure short-term outcomes.  We cannot wait for an annual test or examination result.  And if you are truly inquiring into the link between teaching and learning, you need short-term feedback loops on your effectiveness.  So, as you start to plan for 2015, you may want to consider how you can monitor a few valued outcomes throughout the year.  I hope this short article supports you in this. 
