I gave a talk on fluency in mathematics in March at Pedagoo London (my first public appearance) and again the weekend before last at the La Salle Education maths conference. This is based on those talks and so, inevitably, it is one in a series of posts. Part one can be found here.
At both talks I asked the audience to discuss reasons they may have been wary about giving explanations, using prolonged practice or focusing on fluency in maths. I also asked about reasons others had used to tell them not to teach that way. I then asked whether they came up with the same answers I did. Most of the following featured to some degree:
The issue of whether students' ability to answer questions should take second place to understanding of some sort is not a new one in mathematics. Tom Lehrer satirised it in the 1960s.
The real problem is that understanding is not well-defined. I found four definitions for it and wrote about them here, and for these talks I added another possible definition: that it could mean “knowing the connections between topics”. It is only with these multiple definitions of “understanding” to confuse matters that explanation and practice can be sidelined. When “understanding” is tied down to a specific meaning, we invariably find that a clear explanation and the building up of fluent knowledge are the best way to promote understanding.
While problems might well have a part to play later on in practising the application of knowledge, it is unclear why anyone would imagine problem-solving is useful for the acquisition of new knowledge. It distracts from what we should be thinking about (the thing we need to know) and is likely to tax working memory unnecessarily, making retention more difficult. While problem-solving approaches to maths have been continually fashionable, their effectiveness is far from proven. Hattie (2009), who summarised research using effect sizes, where 0.4 represents average effectiveness, found problem-based learning to have a very poor effect size of 0.15:
This is a very similar point, in that in practice it also involves engaging students in activities from which new knowledge is meant to result. Again it is very fashionable. And again, the same objections apply: it is likely to distract from what is to be learnt and to limit retention. Again, Hattie found an unimpressive effect size:
It is not always clear what “engagement” means, as I discussed here. If it is used to mean enjoyment, then there are serious questions to be asked about the assumption that students learn best when enjoying themselves. There is too much research on that question to reference here (hopefully I will get round to blogging about it soon), but while anxiety or hysteria won’t be great for learning, there is strong evidence against the general principle that we learn more effectively the happier we are.
If I’ve described anything else as “fashionable”, then this tops the tree. There is no clear evidence that groupwork is as ineffective as problem-solving or discovery learning, but Hattie’s effect size of 0.41 for cooperative learning indicates it is unexceptional as a method. I would not seek to prevent groupwork, but I find its aggressive promotion unwarranted.
There is a lot of psychology literature on how people work in groups. Different conditions make it more or less effective. Its potential negative effect on motivation, known as “the Ringelmann Effect”, is long-established, and certain types of thinking are thought to be less effective in groups. The widespread belief that groupwork is an essential part of good teaching seems implausible and provides little reason to crowd out other methods of teaching.
Update 7/9/2014: It was pointed out to me on Twitter that Hattie also quotes one meta-analysis that takes subject into account and found an effect size of 0.01 for cooperative learning in maths.
The first time I gave this talk I could only confirm that OFSTED did seem opposed to developing fluency and this was the one point on which I could not deny the objection. The details are described here and I showed some of this video to underline the point:
When I gave the talk last week, I could be more optimistic. Inspectors should not be requiring a particular way to teach and shouldn’t be grading you individually anyway. The old subject survey guidance that laid out a depressingly trendy vision of how to grade maths lessons is now gone. Hopefully, this demon has been slain.
The idea here is that technological change has changed the nature of schooling or society. I’ve used these before, but in a week where the TES published this, it is worth remembering just how long claims that traditional teaching is doomed have been going on. Here is some of the usual rhetoric:
The idea that our schools should remain content with equipping children with a body of knowledge is absurd and frightening. Tomorrow’s adults will be faced with problems about the nature of which we can today have no conception. They will have to cope with the jobs not yet invented.
…we find ourselves in a rapidly changing and unpredictable culture. It seems almost impossible to foresee the particular ways in which it will change in the near future or the particular problems which will be paramount in five or ten years.
Books will soon be obsolete in the public schools…Our school system will be completely changed inside of ten years.
These quotations are actually from 1966, 1956 and 1913 respectively. (Sources for the latter two can be found by searching my blog; the first is from this book.)
The philosophical arguments about independence and autonomy can be found here. I would now add that if independence from the teacher were so important, it would be hard to explain the effectiveness of Direct Instruction, a method of teaching very much focused on the role of the teacher:
9) Research and Training
Probably the biggest reason for the poor view of fluency many maths teachers have is that we’ve often been trained by those who would deny its importance. We’ve also often been told that this is justified by the research. A proper takedown of the shambles that is maths education research, here and in the US, would take a whole blogpost, but you can get a flavour of it by looking at the work of Jo Boaler, the criticisms of it, and the methods she uses to silence critics. This does not resemble scientific research or academic debate in any way. The field is mainly propaganda for groupwork, mixed-ability teaching and the methods of “fuzzy maths”. There are now the first signs of debate, and I was able to mention a few papers that improve on the research methods and challenge the orthodoxy, but it seems early days, and too much of what I have found is by economists and not published in maths education journals, despite seeming to be of much higher quality than the studies that do get published:
- Duflo, Dupas & Kremer (2008) – RCT on setting;
- Berlinski and Busso (2013) – Unpublished RCT on pedagogy;
- Haeck, Lefebvre and Merrigan (2014) – Analysis of Canadian maths reform;
- Schwerdt & Wuppermann (2011) – Compared lecturing with problem-solving.
Continued in part 3