Many of us are frustrated by the lack of improvement in pupil outcomes relative to spending. Educational reforms have come and gone, with vast sums of money being spent, but research shows little or no improvement in outcomes. This can be exasperating for those who strive to raise standards in our schools. However, have those responsible for improving standards in education been guilty of blindly adopting new initiatives, or of relying on anecdotal evidence about their quality, without really understanding their true impact?
In order to improve pupil attainment, we should be considering a more evidence-based approach to school improvement. We wouldn’t, after all, prescribe medicines to our children without rigorously testing their safety and efficacy first. So isn’t it time that we put reliable evidence at the very forefront of our plans for school development?
That is exactly my intention as the Education Elf. I will be bringing you reliable research, policy and guidance to help you keep up-to-date and make evidence-based decisions about education.
I was pleased to see my ethos shared by the authors of ‘The Teaching and Learning Toolkit’ from Durham University. They have reviewed the best available evidence from the last 30 years about initiatives and approaches currently in action in our schools. The teaching profession should use this toolkit to inform future school improvements. It can enable schools to target spending more effectively, and use pupil premium funds in a more strategic, cost-effective manner.
What they did
The research team, led by Professor Steve Higgins, selected the following 21 subjects for inclusion:
- Ability grouping
- After school programmes
- Arts participation
- Block scheduling and timetabling
- Early years intervention
- Feedback
- Homework
- Individualised instruction
- Information and communication technologies (ICT)
- Learning styles
- Meta-cognitive and self-regulation strategies (‘Learning to Learn’ strategies)
- One-to-one tuition
- Parental involvement
- Peer tutoring
- Performance pay
- Phonics
- Reducing class size
- School uniforms
- Sports participation
- Summer schools
- Teaching assistants
Although thousands of studies have been published on these topics over recent years, the vast majority of them suffer from methodological weaknesses that make their findings unreliable. The authors applied strict inclusion criteria to isolate the best available evidence within each topic. Readers who are interested in the different types of study can find out more on my About page.
The authors present the impact of each approach, e.g. ability grouping. Impact is generally given as the potential gain in academic achievement (measured mainly through curriculum tests and examinations). Where possible the authors highlight gains in other areas such as attendance, aspiration or behaviour, but these references are limited. In addition to impact, information is included about the typical costs of each approach, their applicability to specific subjects and key stages, as well as the reliability of the evidence used. This allows school leaders to consider new initiatives in terms of potential impact and cost-effectiveness. Furthermore, for each approach the authors provide some valuable suggestions for maximising the impact on learning.
What they found
In terms of academic performance, providing children with effective feedback is shown to have a very high impact on learning. Other high-impact approaches include ‘learning to learn’ strategies, peer tutoring and early-years intervention.
Some strategies were shown to have no impact on academic performance, whilst there is evidence that others can have a negative impact on learning. In summary, some of the less effective approaches include performance related pay for teachers, use of teaching assistants in class, ability grouping and school uniforms.
The Education Elf’s view
Teachers are constantly bombarded with people promoting the next big thing to raise standards in education. With limited time to distinguish quality research from the myriad of poor-quality studies, teachers often place their trust in claims made by colleagues, government and policy makers. The teaching profession should welcome this toolkit, as it can empower individuals and schools to gauge the most likely outcome of an initiative being implemented in their own settings. Teachers want what is best for the children in their schools, and although evidence-based practice has not been high on the educational agenda, this toolkit will hopefully enable school leaders to understand that ‘evidence’ is not a dirty word, but a necessary partner for effective decision making.
The authors themselves are careful to stress that the toolkit should be used to inform practice, not dictate it. They are clear that the impact data represent average outcomes. This does not mean that an outstanding classroom teacher taking on a new initiative cannot achieve greater outcomes for their pupils than the toolkit data suggest. School leaders will, therefore, need to consider a range of relevant factors before scaffolding school improvement initiatives; the evidence in the toolkit should be one factor in such considerations.
Since the publication of the Durham group’s initial research, back in May 2011, some teachers have been sceptical about the findings reported. This may be because the research challenges many of the assumptions that teachers have made over the decades, for example, that smaller class sizes or the use of a learning support assistant (LSA) in class improve academic outcomes for children. Some teachers may have had positive personal experience of particular approaches, or have witnessed others claiming positive outcomes, but the evidence can paint a very different picture of the average impact. Isn’t it about time that we all wiped the dirt from the word ‘evidence’, and showed greater respect for the value it can add to future school improvements? High quality evidence, used alongside well-considered professional judgements, should enable educators and elves alike to plough through the mayhem of misinformation and finally see the wood for the trees.
Links
The Teaching and Learning Toolkit. Education Endowment Foundation, July 2012.
Higgins S, Kokotsaki D, Coe R. The Teaching and Learning Toolkit (pdf). Education Endowment Foundation, July 2012.
Higgins S, Kokotsaki D, Coe R. The Teaching and Learning Toolkit – technical appendices (pdf). Education Endowment Foundation, July 2012.
Like the EPPI reviews (http://eppi.ioe.ac.uk/cms/), the Teaching and Learning Toolkit is an excellent resource. However, I think it is a mistake to approach all educational research through a positivist scientific paradigm. Educational research is essentially about giving sense and meaning to complex social, cultural and cognitive interactions. What counts as evidence and what doesn’t is also culturally and politically constructed. It is easy to dismiss educational research as rubbish, and some of it is – but then so is a lot of scientific research. The idea that there are right answers and quick fixes, and that evidence is an unproblematic, incontestable concept, does not reflect the real complexity of the social and biological interplay of our lives.
Thank you for your comments. A recent study by Dekker et al. suggests that teachers who are enthusiastic about the potential of using neuroscience in their practice find it difficult to distinguish between scientific findings and ‘neuromyths’. Unfortunately, some people and companies have taken advantage of the fact that the average teacher is unable to distinguish between poor and high quality evidence. Brain Gym and VAK (visual, auditory, kinaesthetic) learning styles are but two examples of widely used initiatives that make claims beyond any reasonable evidence, often leaving the audience baffled (and sadly won over) by pseudo-scientific nonsense.
As the Education Elf, I hope to help teachers sort the wheat from the chaff. Clearly, different sorts of question require different types of research methodology to answer them reliably. Questions about the safety and effectiveness of an intervention are best answered by randomised controlled trials. Questions about diagnosis are best answered through prospective cohort studies. Questions about how common a specific problem is are best answered by local and current random sample surveys or censuses. It is important for researchers to understand and use scientific methodologies appropriately to answer their specific questions.