Fluency in Mathematics: Part 2

October 6, 2014

I gave a talk on fluency in mathematics in March at Pedagoo London (my first public appearance) and again the weekend before last at the La Salle Education maths conference. This is based on those talks and so, inevitably, it is one in a series of posts. Part one can be found here.


“And this was the greatest number of words I could fit on a single Powerpoint slide”.

At both talks I asked the audience to discuss reasons they may have been wary about giving explanations, using prolonged practice or focusing on fluency in maths. I also asked about reasons others had used to tell them not to teach that way. I then asked whether they came up with the same answers I did. Most of the following featured to some degree:

1) Understanding

The issue of whether students' ability to answer questions should take second place to understanding of some sort is not a new one in mathematics. Tom Lehrer satirised it in the 70s.

The real problem is that understanding is not well-defined. I found four definitions for it and wrote about them here, and for these talks I added another possible definition: that it could mean “knowing the connections between topics”. It is only with these multiple definitions of “understanding” to confuse matters that explanation and practice can be sidelined. When “understanding” is tied down to a specific meaning we invariably find that a clear explanation and the building up of fluent knowledge are the best way to promote understanding.

2)  Problem-Solving

While problems might well have a part to play later on in practising the application of knowledge, it is unclear why anyone would imagine problem-solving is useful for the acquisition of new knowledge. It causes a distraction from what we should be thinking about (the thing we need to know) and is likely to tax working memory unnecessarily, making retention more difficult. While problem-solving approaches to maths have been continually fashionable, their effectiveness is far from proven. Hattie (2009), who summarised research using effect sizes, where 0.4 showed average effectiveness, found problem-based learning to have a very poor effect size of 0.15:


3) Discovery/Inquiry

This is a very similar point, in that in practice it also involves engaging students in activities where new knowledge is meant to result. Again it is very fashionable. And again, the same objections apply: it is likely to distract from what is to be learnt and to limit retention. Again, Hattie found an unimpressive effect size:


4) Engagement/Enjoyment

It is not always clear what “engagement” means, as I discussed here. If it is used to mean enjoyment, then there are serious questions to be asked about the assumption that students learn best when enjoying themselves. There is too much research on that question to reference here (hopefully I will get round to blogging about it soon), but while anxiety or hysteria won’t be great for learning, there’s strong evidence against the general principle that we learn more effectively the happier we are.

5) Groupwork/Discussion

If I’ve described anything else as “fashionable” then this tops that tree. There’s no clear evidence that groupwork is as ineffective as problem-solving or discovery, but Hattie’s effect size of 0.41 for cooperative learning would indicate it is unexceptional as a method. I would not seek to prevent groupwork, but I find its aggressive promotion unwarranted.


There is a lot of psychology literature on how people work in groups, and different conditions make it more or less effective. Its potential negative effect on motivation, known as “the Ringelmann Effect”, is long-established, and certain types of thinking are thought to be less effective in groups. The widespread belief that groupwork is an essential part of good teaching seems implausible, and provides little reason to crowd out other methods of teaching.

Update 7/9/2014: It was pointed out to me on Twitter that Hattie also quotes one meta-analysis that takes subject into account and found an effect size of 0.01 for cooperative learning in maths.


6) OFSTED

The first time I gave this talk I could only confirm that OFSTED did seem opposed to developing fluency, and this was the one point on which I could not deny the objection. The details are described here and I showed some of this video to underline the point:

When I gave the talk last week, I could be more optimistic. Inspectors should not be requiring a particular way to teach and shouldn’t be grading you individually anyway. The old subject survey guidance that laid out a depressingly trendy vision of how to grade maths lessons is now gone. Hopefully, this demon has been slain.

7) Technology

The idea here is that technology has changed the nature of schooling or society. I’ve used these quotations before, but in a week where the TES published this, it is worth remembering just how long the claims that traditional teaching is doomed have been going on. Here is some of the usual rhetoric:

The idea that our schools should remain content with equipping children with a body of knowledge is absurd and frightening. Tomorrow’s adults will be faced with problems about the nature of which we can today have no conception. They will have to cope with the jobs not yet invented.


we find ourselves in a rapidly changing and unpredictable culture. It seems almost impossible to foresee the particular ways in which it will change in the near future or the particular problems which will be paramount in five or ten years.


Books will soon be obsolete in the public schools…Our school system will be completely changed inside of ten years.


These quotations are actually from 1966, 1956, and 1913, respectively. (Sources for the latter two can be found by searching my blog, and the first is from this book.)

This does not disprove the argument about technology, but it does seem to shift the burden of proof. A more developed case can be found here,  here and here.

8) Independence

The philosophical arguments about independence and autonomy can be found here. I would now add that if independence from the teacher were so important, it would be hard to explain the effectiveness of Direct Instruction, a method of teaching very much focused on the role of the teacher:


9) Research and Training

Probably the biggest reason for the poor view of fluency many maths teachers have is that we’ve often been trained by those who would deny its importance. We’ve also often been told that this is justified by the research. A proper takedown of the shambles that is maths education research, here and in the US, would take a whole blogpost, but you can get a flavour of it by looking at the work of Jo Boaler, the criticisms of it, and the methods she uses to silence critics. This does not resemble scientific research or academic debate in any way. The field is mainly propaganda for groupwork, mixed ability teaching and the methods of “fuzzy maths”. There do now seem to be the first signs of debate, and I was able to mention a few papers that improve on the research methods and challenge the orthodoxy, but it seems early days, and too much of what I have found is by economists and not published in maths education journals, despite seeming to be of much higher quality than the studies that do get published.

Continued in part 3



  1. […] Continued in Part 2 […]

  2. Hi Andrew,
    I don’t know if John Hattie’s work is now becoming popular in the UK, or whether it has been for a long time. Regardless, I hope that you take the time to peruse the mathematical and statistical techniques behind his analysis. To say the very least, it is disappointingly poor.

    Attempting to summarise the net effects of an entire methodology, pedagogy or teaching ideology with a single number is farcical on the face of it. Given that a great deal of educational research involves misapplication of statistical methods in the first place, followed by Hattie’s concatenation, it is strictly a case of GIGO. The failure to allow for variations in population, environmental effects, or implementation is simply the rancid cherry on top of a rather disgusting sundae.

    Perhaps the easiest way to determine that Hattie’s research is dubious is to investigate its history – look up earlier versions of his work, seeking the table of the most influential educational techniques (greatest effect sizes), and compare them to later versions. The variation is untenable if his model is to have any skill in prediction.

    Perhaps a starting point to evaluate the worth of Hattie’s research is Snook et alia; (http://connection.ebscohost.com/c/opinions/45447992/invisible-learnings-commentary-john-hatties-book-visible-learning-synthesis-over-800-meta-analyses-relating-achievemen).

    Please note, I do not intend in any way to be dismissive of your primary argument, which I (largely) agree with, but I have a concern about the validity of some of the research you are using to underpin it.

    • I am not suggesting that Hattie’s numbers are the end of it. I had to summarise the research on many different things in a few seconds each for a talk, and this seemed, and still seems, a good way to do it. A proper blogpost analysing an individual issue would, at a minimum, list the meta-analyses the effect size is calculated from. I know some figures are less reliable than others. I think the ones I picked are relatively uncontroversial, or, at least, any change would probably strengthen rather than weaken my argument.

      For what it’s worth I do have blogposts coming up on how people misuse these sorts of figures.

  3. Thank you Andrew! This is the best thing I’ve seen on building fluency – and the misconceptions that prevent it – for some time.

  4. Possibly a lot of this is fair enough for teaching maths, but there is quite a lot that is not fair enough for teaching in general. Even in the meta maths I’d say there are a few unjustified assumptions inherent in the analysis. One is the engagement/enjoyment variable treated as some sort of continuous function to correlate with output. Double enjoyment does not result in double the output (or even a 1% increase) therefore it must be unimportant? Maybe not what you meant but that is the way it reads from here.

    Apart from the fact that an enjoyable childhood is something to strive to achieve and does not have to be justified on the grounds of making things worse or better, enjoyment could be a threshold variable. ie once past a certain amount increasing it makes little difference but below it even if output seems for a while unaffected because you coerce the pupil into learning despite being unhappy, the chances are that any long term interest in the subject could be killed. You would have to do some very specific experiments to test that hypothesis but it seems to me just as plausible as assuming enjoying school in general is something divorced from the purpose of education. (Hattie’s irrelevance here tells us something)

    Maybe you just meant enjoyment as in “let’s all go to a theme park instead of having a maths lesson” is not justified because not learning any maths is bad then I agree with you but that is so blatantly obvious it is almost not worth considering. What we want is the right balance of enjoyment and learning not a war between the two.

    As for technology, we don’t have to justify technology on the grounds of increased exam performance. It’s important simply because it reflects life outside school. If it makes things worse then perhaps there is an argument to withdraw it. I dare say I could get the same A level physics results from children using no special equipment except household goods and save the state a stack of cash. No need to build any more school labs or employ technicians, but that is not a very good reason to say pupils should not get experience of spectrometers, travelling microscopes, telescopes or op amps first hand. If the only thing that has value in education is an exam result God help us – and I write exam specs for a living. Important yes but over-riding every other consideration, no.

  5. First, I’m not a mathematician. Second, I think the basic argument you are making largely agrees with what I think about teaching physics. However, I think the first commentator has a good point. For example some versions of Hattie give problem-solving teaching an effect size of 0.61; co-operative v individual learning 0.59, therefore both comparable with DI (http://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/). I don’t have access to original Hattie references here but think this website is quoting accurately. I appreciate all the difficulties of finding good quality research to support arguments in education so don’t think that is the end of the story but, at the moment, I think you could equally use Hattie to argue that problem-solving, co-operative learning, and DI were all equally valid approaches, if you wanted to do so.
    Best wishes.

    • Problem solving teaching is not problem solving learning. It is the teacher modelling how to solve the problem. The co-operative versus individual stat is, if I remember correctly, best ignored as a strange departure from the previous methodology.

      • Okay – I’ll have a look at both of those points. Thanks.

  6. I waited for part 2 to see how you would continue your maths fluency piece; you might have already seen that in part 1 I commented on the fact that understanding and fluency go hand in hand and therefore having *only* emphasis on one is wrong. Of course, depending on context, having *more* emphasis on one might be appropriate. I agree that in some circles this has been and is the case, but not sure what the evidence is that ‘teachers have been told not to teach fluency’ or in point 9 even ‘we’ve often been trained by those who would deny its importance.’.

    Many of the points raised are more general about discovery, problem solving etc. and not much particularly aimed at maths. I can see the point of it, because obviously some more general findings could hold for maths. Replace maths with another subject, fluency with knowledge and presto, every subject can be dealt with. This is a shame, because there are many sources more relevant to maths specifically. I think this is too much of a hotchpotch of things, and misses a lot of points *).

    To supplement 9, one can look at more psychological sources (Rittle-Johnson, Alibali, Kapur, Ollson) but also the maths education research, to me, does not seem so ‘extreme’ as you portray it here. Websofsubstance’s blog has many interesting posts and links, also of the Asian context.

    *)I am aware that Andrew’s stance is that if you leave out (in his eyes perhaps irrelevant) material one is not allowed to just say this but must provide evidence to the contrary. I have always found this to be ‘shift of the burden of proof’ because basically Andrew could find a lot of literature himself by just searching more carefully. Nevertheless some pointers.

  7. *puts pedant’s hat on*

    Tom Lehrer wrote New Math in the early 60s. (It was committed to record in ’65, but he was performing it long before then.)

    *takes pedant’s hat off again*

    • Really?

      So much harder to research a talk than a blog.

      • I’ve gone back to the original Visible Learning now. You are correct about problem-solving teaching 0.61 being different to problem-based learning (PBL) 0.15. The section on problem-solving teaching isn’t Hattie’s clearest but it seems that this is looking at meta-analyses of studies where teachers tried to teach problem-solving strategies to students. It’s a bit of a mixed bag though because some studies seem to have measured creative thinking whereas others measured problem-solving in subjects e.g. maths and science. The maths ones seem to suggest that teaching students to re-state problems, sketch diagrams, try out hunches etc. was most effective but I haven’t gone back to any of the original work. The co-operative-individualistic, now I’ve thought about it, stands at 0.59 but, again you are right because the majority of Hattie’s effect sizes are against ‘teaching as normal’ whereas this is explicitly comparing co-operative learning against an approach deliberately avoiding any collaboration. It might be worth looking at the Johnson studies though because if the individualistic is something which happens normally in classrooms (i.e. working in silence), and co-operative learning was effective then you might well expect this value of 0.59 to be higher than the straight effect size for co-operative learning of 0.41 simply because typical learning involves some collaboration already. On the other hand, I know you are not against group work per se, only its overly strong promotion. Thanks again.

  8. […] Teaching in British schools « Fluency in Mathematics: Part 2 […]

  9. […] somewhat careless comment on Andrew Smith’s blog (which he responded to with a clear demonstration that he knew more about Hattie’s work than […]
