A Response to Ben Goldacre’s Building Evidence Into Education Report. Part 1
March 29, 2013
Earlier this month, Ben Goldacre’s government-backed report into the use of evidence in education was published. In this, and a subsequent post, I will respond to some of the key points and arguments.
Firstly, I will highlight the part that is to be welcomed. The suggestion is made that teaching could become an evidence based profession and that, as in medicine, evidence would aid informed decision-making. It is suggested that having expertise based on a grasp of the evidence would allow the profession to be more, rather than less, autonomous. It is suggested that teachers, by identifying the important questions from the frontline, could be the driving force in setting research priorities. There are practical suggestions, such as training teachers in research methods, finding ways to disseminate research findings and helping teachers to work with researchers.
This vision is a welcome change from the current dynamic between researchers and teachers. In my experience, teachers tend to encounter research in two unhelpful ways. Disastrously, dubious research is presented as the source of the latest initiatives and fads, as a reason to overrule professional judgement and embrace an idea suggested and endorsed by somebody who has either never taught full time or long since fled the classroom. In this model, research serves the interests and ideas of researchers and is used as a method of advocacy for, or an excuse for enforcement of, the latest fad. I can think of few bad ideas in teaching that weren’t, at some point, presented as the product of definitive research. Sometimes the research doesn’t really exist (e.g. Brain Gym). Sometimes the research itself is the worthless work of propagandists (e.g. Jo Boaler’s work on maths teaching). Sometimes the research is reputable but the interpretation is worthless (see countless forms of nonsense claiming to be based on Carol Dweck’s work). The effect of this relationship has been to lower teacher autonomy and to make teachers instinctively sceptical of academics. Few teachers will change their ideas simply because of research, which is something of an irony given that our ideas as a profession often simply reflect the work of some long since discredited researcher from a previous generation. A lot of the bad teaching methods enforced by OFSTED may have their origins in this kind of relationship between quack researchers and teachers.
The other relationship we see between teaching and research is that which broadly goes under the title of “action research”. Under this heading, some poor teacher who has foolishly decided to embark upon a master’s degree in their spare time is persuaded to carry out their own research project. Typically, this will be statistically worthless, involve lots of questionnaires and be considered worthwhile only if it shows interest in some current initiative or gimmick. While this may give the teacher some insight into their own situation, it is unlikely ever to produce generalisable research results or anything more persuasive than the personal opinion of any other member of the teaching profession.
Overall, the “research architecture” suggested in the report is the most useful contribution to debate. The idea of a teaching profession setting the questions, and researchers investigating them, seems to be turning an upside down situation the right way round.
The less helpful discussion prompted by the report is that about Randomised Controlled Trials (RCTs), which are experiments conducted by applying different interventions to different people selected at random, and comparing the results. These have been particularly effective in medicine, where they are used to evaluate new drugs and other interventions. In the debate over RCTs that has followed the report, I have tended to see from the anti-RCT side responses which rule out evidence or RCTs entirely, either arbitrarily or for a reason related to a genuine difficulty, but without any proper analysis of how great that difficulty is. From the pro-RCT side I have tended to see arguments which amount to little more than a confidence that problems which were overcome in medicine can be overcome here, that more trials can overcome the difficulties, and the claim (without analysis) that the advantages will outweigh the costs. Goldacre acknowledges that qualitative research may be useful in explaining why certain interventions are effective (something I tend to doubt), but not that there are quantitative alternatives to RCTs that may prove more practical in many educational contexts.
I’m happy to accept most of the points in the report justifying RCTs as the best way to test an intervention. However, I feel that a lot of the debate in and around the paper on RCTs seems to ignore, or put off answering, some absolutely crucial questions about RCTs. These are mainly about which hypotheses are to be tested and what level of resources is to be devoted to testing them. I realise it can be argued that these are debates for further down the road, and that the first step is simply to accept the principle of RCTs. However, I think that if we fail to look at these questions first, then we end up talking at cross-purposes for most of the discussion. What we test, and how much we can spend testing it, shapes both the usefulness and the ethics of RCTs.
In my next blogpost I will consider some of the questions that need to be answered in order to evaluate Goldacre’s case for increased use of RCTs.