Scaling effective practices in schools

Making a real difference to student outcomes is every educator's aspiration, but it is difficult work. If we are to scale and sustain good practices to benefit more students, we need to know whether these practices work, so that schools and systems can use the results to decide whether to improve, further expand, or discontinue a program.

For the past four years, Evidence for Learning's (E4L) Learning Impact Fund has used an innovative model to fund and run four independent evaluations of promising programs aiming to improve academic outcomes for students, with a special focus on children from vulnerable backgrounds. There have been three randomised controlled trials (RCTs) and one pilot study.

E4L has also developed a seven-step approach to evaluating a program's effectiveness, based on the UK Early Intervention Foundation's ‘10 steps for evaluation success'. The E4L approach builds evidence cumulatively, so that the results from each stage inform decisions about testing, improving and scaling a program to benefit students and schools.

We all play a critical role in placing the evidence generated in the context of schools and classrooms (Bruniges, 2005; Prendergast & Rickinson, 2019). When robust evidence becomes available, the input of teachers and school leaders who are using and adapting programs to suit the needs of their students is a crucial part of the process.

Here, we draw on our experience of the last four years to share some of what we've learned about scaling effective practices in schools.

Creating an Education Action Plan

Having a clear understanding of what the program or intervention is trying to achieve, and how you are going to actively monitor, evaluate and report on progress, is a critical building block of a quality evaluation.

At the start of every project, E4L convenes a set-up meeting with the program developer and the independent evaluators to create the Education Action Plan. A single strategy, on its own, is unlikely to change student learning outcomes, so it is important to explore how the program's content will be delivered and differentiated – and how the different activities (teaching, learning and assessment) are expected to lead to the desired outcomes.

This is where schools' perceptions of the practicalities of running these activities, in the busy reality of daily school life, really matter. These practicalities include the timeframe, number of sessions, content, resources, professional development, coaching and leadership support.

Developing a school implementation plan

Before implementing a new idea, it is crucial to assess how ready a program is and whether schools can implement the intervention as planned. The goal of an implementation plan is to set out how the activities and approaches will be put into practice in schools by teachers, staff and the community. This includes questions about the ‘how', ‘when' and ‘what' (the ‘active ingredients') that support schools in implementing the program effectively (e.g. staff involvement, timetabling, resources and training).

Schools are complex settings with complex structures. From running the four projects (at varying levels of scale), E4L has learned that schools sometimes struggle to implement educational programs with the recommended level of fidelity (‘as intended by the developers') needed to achieve the program's promise. This can be for many reasons, including competing priorities, curriculum timetabling, resourcing and time demands, and a lack of sufficient training and preparation.

‘Ultimately, it doesn't matter how great an educational idea or intervention is in principle; what really matters is how it manifests itself in the day-to-day work of people in schools,' (Sharples, Albers, Fraser, Deeble & Vaughan, 2019, p.3).

Results from small-scale pilots

A small-scale pilot study is informative because it tells us whether a program can be implemented, whether it should be implemented in more schools with further testing and, if so, how. E4L pilots tend to examine how an intervention is put into practice, how it operates to achieve its intended outcomes, and the factors that influence these processes. They always address three questions:

  1. How feasible is the program?
  2. Is there promising evidence?
  3. Is the program ready for trial?

From our Victorian pilot of Resilient Families Plus (aimed at helping students and parents develop knowledge, skills and support networks that promote students' health and wellbeing), we learnt that schools adapted the program because some elements were already provided by existing school activities or were not feasible or appropriate for the school.

In hindsight, our pilots could have done more to explore the various dimensions of implementation, such as fidelity, dosage, acceptability and adaptation.

While pilots are important for understanding implementation practices, E4L is cautious about what pilot studies can and cannot do. These studies do not provide robust evidence to answer the counterfactual question – that is, what would have happened if students hadn't done the program? It is therefore hard to attribute results to the new intervention itself – for example, schools could already be performing well with their current practices, which might make a new idea look better than it really is. Similarly, a small impact could reflect poor implementation rather than a lack of effectiveness in the intervention itself.
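To make the counterfactual point concrete, here is a minimal simulation sketch. The numbers (a natural gain of about five points over a term, and a program effect of about two points) are hypothetical, not E4L data: when every student receives the program, natural improvement is bundled into the apparent impact, whereas a randomised comparison isolates the program's contribution.

```python
import random

random.seed(1)

# Illustrative numbers only, not E4L data.
NATURAL_GROWTH = 5   # points every student gains over the term anyway
PROGRAM_EFFECT = 2   # extra points attributable to the program

def post_score(pre: float, treated: bool) -> float:
    """Simulate an end-of-term score for one student."""
    growth = NATURAL_GROWTH + (PROGRAM_EFFECT if treated else 0)
    return pre + growth + random.gauss(0, 2)  # a little noise

pre_scores = [random.gauss(50, 10) for _ in range(200)]

# A pilot with no comparison group: every student receives the program,
# so natural growth is bundled into the apparent impact.
gain_no_control = sum(post_score(p, True) - p for p in pre_scores) / len(pre_scores)

# An RCT: students are randomly split into program and control groups,
# and the difference between the groups estimates the program's effect.
treated, control = pre_scores[:100], pre_scores[100:]
gain_treated = sum(post_score(p, True) - p for p in treated) / len(treated)
gain_control = sum(post_score(p, False) - p for p in control) / len(control)

print(f"Apparent gain without a control group: {gain_no_control:.1f} points")
print(f"RCT estimate of the program's effect:  {gain_treated - gain_control:.1f} points")
```

Under these assumptions, the uncontrolled pilot reports a gain of roughly seven points, while the randomised comparison correctly attributes only about two of those points to the program.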

Schools adapting programs to their local context

After a pilot test comes an efficacy trial (to determine whether an intervention produces an expected result under ideal circumstances – that is, with the fidelity described by the program developer). If this trial is a success, the next stage is an effectiveness trial to understand whether the program can maintain its impact in the real world, when used in many different school settings.

The sixth step in E4L's seven-step approach is for schools and systems to decide how replicable the results might be in their local contexts. Schools and systems should ask: ‘will this work in my school/s?', ‘what changes would I need to make to implement this well?', ‘how ready is my school and how ready are my teachers?', ‘what are the active ingredients that are critical for success?', ‘how should I monitor and test if it works in my context?' (Ho, 2019).

In our trials, E4L reports three measures for each program to help in considering these questions. They are:

  • Months of learning – an estimate of the additional months of progress you can expect students to make on average as a result of using the program (translated from an effect size);
  • Security rating – our level of confidence in the results of the trials, from 1 (lowest) to 5 (highest) padlocks; and,
  • Cost to implement – the approximate cost per student of implementing the program over three years.

We also publish detailed process evaluations alongside these measures. This is because, although months of learning tells us the difference a program made, it is critical for educators to understand what happened during implementation and what it might mean for their own context (Ho & Vaughan, 2017).
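As a rough illustration of what ‘translated from an effect size' means, the sketch below maps an effect size onto a months-of-progress band. The bands, thresholds and the months_of_learning helper are hypothetical, for illustration only; they are not E4L's published conversion table, but they show the shape of the translation.

```python
# Hypothetical conversion bands for illustration only -- not E4L's
# published effect-size-to-months table.
ILLUSTRATIVE_BANDS = [
    (0.10, 1),  # effect size up to ~0.10 -> about 1 additional month
    (0.20, 2),
    (0.30, 3),
    (0.45, 4),
    (0.60, 5),
]

def months_of_learning(effect_size: float) -> int:
    """Map an effect size onto additional months of progress (illustrative)."""
    for threshold, months in ILLUSTRATIVE_BANDS:
        if effect_size <= threshold:
            return months
    return 6  # larger effects treated as roughly six months or more

print(months_of_learning(0.17))  # -> 2 additional months, under these assumed bands
```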

In addition, schools and systems should work with program developers to apply prior evidence and ‘know-how' to best tailor the approach.

Scaling up – the journey continues

Scale-ups remain a work in progress. By this stage, we should have evidence on whether the intervention or program can be, and has been, successfully implemented in schools. For educators (and developers), this does not mean that the evaluation journey is over, as we need to verify whether the intervention or program is sustainable at scale.

In our experience, the diversity of contexts increases as a program is taken to scale. For their part, schools and program developers need to continually monitor, assess and make decisions about quality, fidelity and flexibility as the program spreads across more contexts and settings.

In this, the seventh step of our approach, systems and schools should return to pilot testing the intervention or idea when it is implemented in new cultural contexts – nationally and even globally – before adopting it more widely.

References

Bruniges, M. (2005). An evidence-based approach to teaching and learning. Retrieved from https://research.acer.edu.au/research_conference_2005/15

Ho, P. (2019, November 29). Unlocking education's implementation black box. Retrieved from https://evidenceforlearning.org.au/news/unlocking-educations-implementation-black-box/

Ho, P., Cleary, J., & Vaughan, T. (2018, August 8). Change leading to improvement. Retrieved from https://www.teachermagazine.com/articles/change-leading-to-improvement

Ho, P., & Vaughan, T. (2017, November 23). Evidence to practice: Beyond an effect size. Retrieved from https://evidenceforlearning.org.au/news/beyond-an-effect-size/

Ho, P., & Vaughan, T. (2018, April 27). Supporting system change through the Education Action Plan. Retrieved from https://evidenceforlearning.org.au/news/supporting-system-change-through-the-education-action-plan/

Prendergast, S., & Rickinson, M. (2019, April 1). Access to high quality research evidence is a good start, but not enough. Retrieved from https://evidenceforlearning.org.au/news/access-to-high-quality-research-evidence-is-a-good-start-but-not-enough

Sharples, J., Albers, B., Fraser, S., Deeble, M., & Vaughan, T. (2019). Putting evidence to work: A school's guide to implementation. Retrieved from https://evidenceforlearning.org.au/guidance-reports/putting-evidence-to-work-a-schools-guide-to-implementation/

This article highlights several questions school leaders should ask when seeking to adopt an improvement program. Think about your own context and a program you are looking to adopt, or have recently adopted. With a colleague, or the relevant leadership team, consider the following questions:

Will this work in my school?

What changes would I need to make to implement this well?

How ready is my school and how ready are my teachers?

What are the active ingredients that are critical for success?

How should I monitor and test if it works in my context?