
Podcast Special: John Hattie and Geoff Masters In Conversation


Hello, thank you for downloading this podcast from Teacher magazine – I’m Jo Earp.

The team’s on the road this week at the ACER Research Conference in Sydney, where the theme for 2018 is ‘Teaching practices that make a difference: Insights from research’. In this special episode, we share highlights from the ‘In Conversation’ session on evidence-based teaching practices between Laureate Professor John Hattie and ACER CEO Professor Geoff Masters AO. The facilitator was Tony Mackay AM, of the Centre for Strategic Education in Melbourne, and his first question was where Australia sits in the international educational landscape. Geoff Masters began by pointing out that we know what really matters in the end is what teachers do in their classrooms.

Geoff Masters: But I think if you look internationally, what you see is that school systems differ in the extent to which they understand there’s also a role that they can play in supporting teachers to implement effective practices, or evidence-based practices. And I think the most effective school systems in the world take that very seriously. They don’t just say ‘the problem sits with the teacher in the classroom, they’ll need to work out how to handle this’. They say ‘we have a role to play across the board when it comes to: where our teachers are being recruited from; how they’re being prepared and developed once they’re in the profession; what resources we can provide – including, as John was saying, assessment resources – that are going to support teachers; how as a system can we support teachers with the difficult challenge of addressing the needs of all the students in their classrooms?’. That might mean we need to think differently about how we organise learning, how we approach the curriculum. But I think the best systems in the world are taking that question seriously, seeing that they have a role to play in creating the conditions that make it possible and relatively easy for teachers to implement highly effective practices.

John Hattie: Well, I’m sure the three of us remember the days when you arrived on an aeroplane at Sydney Airport, from international, and before you were allowed to get up out of your seats you had to sit there while all the baggage compartments were open and two men in socks and sandals came through and sprayed. Do you remember that? What were they spraying for? They tried to convince you it was about insects and fruit flies. It wasn’t. It was to keep out American and British ideas. And since we’ve stopped that we look overseas for the answers.

Now, we’ve got 23 jurisdictions in Australia – the Catholics, the independents, the states – so we’ve got 23 different systems and I certainly say as I’ve said to every minister, it should be a badge of honour that during your term as a minister you should not go to Finland, Singapore or Shanghai. Have you got the courage to reliably identify the excellence here in your own district and go there? And so, my question is about courage. I’m not sure we have a lot of it.

… So, I’m not a great fan of looking overseas. I don’t think it’s about that, I think it’s about the courage to recognise the excellence here and grow it. … I don’t know of any other country in the world that has legislated national standards for teachers, Tony. We’re ahead. I think we’ve still got work to do in that area. I think we’re ahead in terms of many of our debates about assessment – particularly the work that’s going through your organisation. We’re a lot more ahead in that than many other places. We’ve learnt a lot about autonomy in the right ways, sometimes the wrong ways.

I’m not a great fan that we go outside. We can learn from them, we can take it, but we have to digest and use it.

Jo Earp: Geoff Masters told the audience there has been quite a significant decline at 15 years of age in the performance of students in Australia, and we need to stop and think about why that’s occurring.

GM: Can we understand what the reasons for that are and what might we do about it? While I agree with John there’s a lot of excellent practice in this country, obviously, I do think we need to be open to learning from the rest of the world as well, having a look to see where effective practices are being implemented internationally.

But, I guess where I would agree with John – and you I think Tony – is that we do have a pretty solid base on which to move forward …

JE: Discussing the Gonski panel report, Tony Mackay said learning progressions were front and centre. Delivering the Karmel Oration earlier in the day, John Hattie talked about three notions of learning progressions – Big P, Middle P and Little P. He explained Big P Progression is where there is a document outlining ‘scope and sequence’; the Middle P Progression is more based on what students actually do when they encounter the curriculum and any topic within it; while Little P Progressions involve modelling progression for each student based on their past performance, and making recommendations, based on this information, as to the next steps that will optimally arrive at the desired destination. Here’s Geoff Masters explaining his view on learning progressions.

GM: … As John said, there’s a challenge ahead of us to clarify what we mean by learning progressions and John’s slide with the Big P, Middle P, Little P is helpful in that.

People are running around interpreting progressions differently. What I’m hearing some people say is that: learning progressions might be good in some areas of the curriculum; or they sit alongside the curriculum in some way; they’re not the curriculum (which they aren’t); or maybe they can be used as an assessment tool for data gathering.

The way that I think about a learning progression is, it’s simply our attempt to be more explicit and to clarify what we understand by progress, or improvement, or growth, within an area of learning. So, that understanding should underpin any curriculum. And people say ‘well, it might be okay in Literacy but it might not work in Science’. Well, in saying that what you’re saying is ‘I don’t know what it means to get a deeper understanding and higher level knowledge and increasing skills in Science’. In any area of learning, where what we’re trying to do is develop deeper understandings or more knowledge, better skills, we must be able to map that out. A learning progression, for me, is just an attempt to do that.

So, I think Gonski was absolutely right in saying it would be useful for us to be clearer about what the nature of progress is. And his reason for saying that was because we know, in schools, students of the same age or of the same year of school are at vastly different levels of attainment. And so it’s important that we have formative assessment tools, if you like, to establish where students are in their learning, so that teachers can think about how best to direct their learning, to challenge every student at an appropriate level.

JE: John Hattie used an analogy from ACER’s Ray Adams.

JH: … He talks about the road map from Melbourne to Sydney. And, if you’re going to draw that map out, most people would come up the Hume Highway. Some would come around the Ocean Road, some would go inland, some would stop in Glenrowan, maybe Gundagai to see the Dog on the Tuckerbox, etcetera. Some would start in different places. We need a kind of map that sort of helps kids understand how to most successfully get to Sydney – acknowledging where they start, what speed they take, what distractions they make – and it’s kind of like a GPS that signals to say ‘you have gone on the wrong route’. ‘Wow, she’s so nice, she never tells you off … she just says “readjusting” or whatever it is.’

Tony Mackay: Do a U-turn.

JH: Yeah, that’s the kind of road map and the kind of Middle P that – Geoff and ACER, for me, is Middle P – that’s for about 70-80 per cent of the kids. Then, as they say, you need to drill down to see divergence and how people are differing. That’s the kind of notion on learning progressions we need to be talking about ...

JE: The conversation then turned to John Hattie’s call for a focus on at least one year’s student learning growth for a year’s input. Here, Geoff Masters gives his response.

GM: … What I would say is we should expect every student to be making excellent progress every year. Now, that begs the question of what ‘excellent progress’ is. But that should be our goal, and it isn’t always our goal. We have students in our schools who are more advanced in their learning for whom the year level expectations are low, they’re pretty middling expectations because they’re so advanced, they’re getting high grades on the year level expectations but they’re not being stretched and extended as well as they could be.

… If you invoke the idea of a year’s worth of growth, that’s a difficult concept. And, as John said, if we’re going to continue with that we need to explore what we mean by it. And part of the reason is that, because the most advanced 10 per cent of students in any year level are five or six years ahead of the least advanced 10 per cent of students, you have students [distributed along] the growth curve. And students who are still way back here at an earlier stage in their learning, you’ll expect them to make faster progress, more progress in a year, simply because they are starting from a lower base. The most advanced kids – because every growth curve is steep for a start and then starts to flatten off – what you might expect in terms of a year’s growth could be different for different students depending on where they are. So, that’s one possible complication in trying to think about that.

JH: Can I say, at that point, two parts of that sentence are as critical as the rest. At least a year’s growth for a year’s input – no matter the starting point. And I worry about those kids that you’re saying are the ones that are brighter, that those are the ones I think we do a lot of damage to because they don’t get …. The mistake, and I understand this is a mistake of interpretation – a year’s growth as a year of a curriculum – I’m not talking about that. So, if you think of it no matter where the starting point, in that sense I think we’re probably in screaming agreement.

GM: Yeah, I think so. All I’m saying is you do want a year’s growth for the most advanced students.

JH: Absolutely.

GM: But in an absolute sense, it might be less than the growth you’d expect of students who are starting from a lower base. Simply because they’re on a steeper part of the growth curve.

JH: I’m not sure I agree with that one.

GM: Okay.

JE: The Gonski report recommends the creation of an online, formative assessment tool to help diagnose a student’s current level of knowledge. Tony Mackay asked the two panellists, given they’re going to be advising on this, what will they be saying?

GM: Well, what I’ll be saying is we shouldn’t be talking about a single formative assessment tool, that’s the first thing. I think that’s implied in the Gonski report, if not stated explicitly.

Once you have developed a map of learning within a learning area, you can then use any number of assessment instruments or assessment processes (as long as they meet quality criteria) to establish where students are in their learning. And we already have, as you said, a number of formative assessment tools of various kinds that schools are using. More than 7000 Australian primary and secondary schools are registered to use the PAT tests online, for example, out of 9444 schools last time I looked. So that’s a pretty significant proportion. That’s already out there. So that’s the first point I’d be making, that the idea of a single formative assessment tool is not the way to go, we need to recognise that there are multiple ways of assessing.

The second thing I think I’d say is that – and I think this is where John and I would probably be in very significant agreement – if we’re going to design an assessment tool it needs to be aligned with our understanding of what progress looks like. Its purpose needs to be to establish where students are in their learning, and to monitor the progress or the growth that they make over time …

JH: At the moment we have a luxury position where ministers are asking us and we can advise them. But, as I certainly would argue very, very strongly – in fact it would be a bottom line for me – if it’s not voluntary we shouldn’t do it; making something compulsory invites all kinds of perverse outcomes. But if it’s voluntary, surely we also have to get some of our Highly Accomplished and Lead Teachers advising, so that we know what they want, as much as they know what we think. I think that’s going to be a critical part of the discussion. Without that, I think we’re going to have a little difficulty.

And so, when you ask teachers what they want: they want resources to help them do the job, they don’t want a single tool, they don’t want another set of data. They want help in the interpretation, they want help in bringing this together. And I think that’s where the concept of a formative assessment ‘something’ comes in – I call it a reporting engine (though people don’t like those words) to get away from the notion that it’s a single tool.

We don’t need to rebuild PAT – it works very well – and there are a lot of systems out there, but how do you bring it together so that schools have resources to answer the questions about what impact is, what growth is, where they are in their progressions? And so we have to listen to how they want us to help them do that.

JE: The panel session also included questions from the audience. One educator said he was a fan of both panellists and Dylan Wiliam’s work on formative assessment, and was looking to reconcile the three. John Hattie started by saying he and Wiliam are good friends.

JH: Certainly Dylan has been quite a critic of my work, and has written about that. And I keep reminding him that it’s fascinating that he has the same story, using the same methods as I did, but he doesn’t like his methods anymore. So I said ‘fine, ignore the methods, the story is what matters’. And we’ve met up quite recently and, scarily, we were in screaming agreement about many of these issues – not the methods. And so I ask him questions about his work. Interestingly, Education Foundation, SVA, came out with a report a couple of weeks ago of a randomised controlled trial of Dylan’s work in England. Very successful, doing the kind of things that we’re talking about here. How do you help teachers have the resources to better understand what their impact is about, who they’re having their impact on, and to what magnitude? And it came out quite well. No surprise, one hopes.

But it’s not easy, it’s not dramatic in terms of getting the impact that you want quickly because it does require a lot of expertise. So, my argument is that we have to invest in the professional learning relating to using that expertise, using these tools. Not the professional learning about the tools, but about how teachers and principals, and kids, work together to make those interpretations. That’s exactly what Dylan’s doing. Paul Black told me many years ago, if you’re going to do this don’t call it ‘formative assessment’, it’s one of those buzz words that everyone misunderstands.

Michael Scriven, when he invented ‘formative’ and ‘summative’, never talked about formative and summative assessment. Any assessment can be formative or summative – it depends when. When the cook tastes the soup it’s formative, when the guests taste the soup it’s summative. One hopes the soup improves. But that’s the notion of how we phrase this to get it right. And so, yes, after quite a few years of violent disagreement, Dylan and I have come together and said ‘hey, we’re going to write some things together about the story, maybe our methods differ’.

JE: Geoff Masters argues the labels of ‘formative’ and ‘summative’ assessment aren’t particularly helpful.

GM: You know, I took courses at the University of Chicago many, many years ago with Benjamin Bloom who introduced those terms into the assessment literature. I’ve never found them useful. I’ve never found the concepts of formative and summative useful throughout my career.

I understand – and I mean Dylan is the opposite, Dylan has built his career around trying to drive a wedge between these things – I understand that teachers make ongoing assessments, they’re constantly assessing and judging, and monitoring how students are going, and I recognise the value of that. But I’m also interested in assessment that occasionally says ‘let me just pause and take stock of where this student is up to in his or her learning. What point have they reached and what progress have they made over time?’. So, it’s not something that’s ongoing, it’s an event in time, if you like, and it is for that purpose.

For me, the fundamental purpose of assessment is to establish and to understand where students are up to in an aspect of their learning, at a point in time. You can then use that ‘formatively’, if you like, I don’t mind the adverb. You can use it ‘formatively’ to plan the next steps, to decide what to do, how to set appropriate targets and challenges for a student’s learning if you want to. Or, you can use it ‘summatively’, if you like, to reflect on the progress that a student has made. I mean, that’s an assessment of learning, of the progress that a student has made – an assessment of the learning that’s occurred.

So, that’s the way that I think about assessment. We love creating dichotomies in this field and pretending that there’s black and white and good and bad, and I think it’s been very unhelpful over the years.

JE: Finally, the panellists gave their views on evidence-based practice, and the Gonski recommendation to create a national evidence institute here in Australia. Here’s Geoff Masters.

GM: A starting point for me would be to say, as John did I think, that we need to think about evidence broadly. And if we move forward with a narrow understanding of what evidence-based practice looks like, that is, if we think that evidence-based practice is simply implementing things that have been demonstrated through randomised control trials to be effective, then we’re adopting a narrow definition of evidence-based practice.

If you go back to the original definition of evidence-based practice in medicine, they make it very clear that it’s the integration of expert clinical practice with external research. And if we’re to have an evidence-based institute, or an evidence-based fund, or an evidence-based anything else, a starting point for me is it needs to recognise that there are what John called two, I might actually call it three, forms of evidence. Because, for me, the first form of evidence is establishing what you’re dealing with. What are your starting points? Then there’s evidence about what are likely to be effective strategies, interventions, that I can adopt. And three, what’s the evidence that it’s making a difference and how can I evaluate my impact?

JH: It’s not going to come out right. The days of gathering evidence are over. My gosh, I’ve got 300 million in my sample. You go to [EEF], you go to What works best – the days of implementing evidence are here. And I would love to see an evidence-based institution that does those three things, but the one that I struggle with the most, particularly as I get older, is I don’t think we’re as good at implementation as we think we are. How do I get schools that are implementing this particular program to talk to other schools that have been implementing this program? What were their enablers and barriers? How do we get them talking to each other about their evidence? So, it’s not a body of stuff up there that’s massaged and blancmanged and sent out to schools. It’s how do we build evidence across the sector?

That’s all for this special episode. To keep listening or to download all of our podcasts for free, whether it’s from our series on School Improvement, Behaviour Management, Global Education, Teaching Methods, Action Research or our monthly podcast The Research Files, just visit acer.ac/teacheritunes or soundcloud.com/teacher-ACER. The full transcript of this podcast is available at teachermagazine.com.au. That’s where you’ll also find the latest articles, videos and infographics for free.

As a teacher, how do you establish where individual students are in their learning and the progress they have made? How does this inform your next steps?

For more articles and podcasts from Research Conference, visit the Teacher archive.

