
More About Exams

August 31, 2012

Since my last post, the argument about GCSE English has rumbled on. There have been one or two people daring to suggest that the issue is about standards, but the consensus still seems to be that a great injustice has been done. A few more arguments have come up which I didn’t mention last time, so I will address them here.

1) Grades are meant to be criteria based.

For as long as I can remember, there has been an ongoing attempt to match grades (and levels at key stage 3) to particular learning objectives. However, this has always been a theoretical exercise rather than an accurate description of assessment. It was never the case that a student with a grade C had met all of the grade C (and below) objectives, but not the grade B ones. In some subjects there were huge disparities between grades/levels and learning. This is because exams hinge on scores, not criteria, and because the same objective can be met in an easy or a difficult way. This appears true even in very precise subjects like maths; it is unavoidable in English. Despite all the talk of “what a grade C looks like” and the posters telling you these things on classroom walls, nobody took the criteria very seriously. If they had, there would have been protests every time grades went up without any clear evidence that more objectives were being met. Inevitably, this has only been dredged up now that grades have gone down. It was never the case, even in the years of rampant grade inflation, that examiners could hand out any number of grades as long as the objectives were, in some way, met.

2) If nothing was wrong there wouldn’t be all these complaints.

The claim is made that because a fuss has been kicked up, then there must have been a problem prompting it. At one level this is true. If schools hadn’t expected to be able to hand out vast quantities of grade Cs then there wouldn’t be this problem now. What this does not demonstrate, however, is that the problem is a failure to hand out more grade Cs or some kind of political scandal. The key problem, as I argued here, is a dumbed-down qualification that gave schools the impression that they could get lots of grade Cs. That is a real problem, and the exam boards and regulator are to blame, but it does not indicate that too few students have been given grade Cs or that the schools demanding the grades be investigated have a point.

3) Something could have been done later.

A lot of people arguing that it should have been easier to get higher grades are desperate to claim that although they wanted the exam to be easier and more grades to be handed out, they are not actually arguing for grade inflation. One way to make this case is to suggest that although something should be done about grade inflation, it shouldn’t have been done now. Grades should have continued to inflate for several more years first; perhaps there should have been entirely new exams before inflation was stopped. The problem with this argument is that, given the short term of office of most education secretaries, putting something off may well amount to never doing it. The expectation that grades don’t inflate needed to be introduced as soon as possible, before the political climate changed. And let’s be clear, this has been coming for two years. The government have been talking about it since they got elected. Ofqual have been talking about “comparable outcomes” for well over a year. This is a shock to those who believed it would never happen, but it is not sudden or unexpected.

4) You can’t prove there was ever any grade inflation.

In a way this argument is a relic. Ofqual admit there was grade inflation. The opposition frontbench admit there is grade inflation. There is no serious dispute about this. However, a lot of people have built their careers and their self-image on getting students through ever easier exams. For a few years now every educational fad has been justified by a teacher (often an English teacher, as it happens) claiming “well, it works for me and my classes get really good grades”. For these people it is still hard to admit that there was grade inflation. Some refuse to look at old exam papers, or if they do, simply claim they can’t tell the difference. These people are probably best ignored for simply denying the obvious. Some try to suggest that the rocketing grades are caused by better teaching. However, grades have shot up for more than two decades, and in that time the fashions in teaching have changed in all sorts of different directions, not just in one particular way. It is hard to say that teaching in the last ten years has got better when the biggest trend has been a return to ideas last popular in the early 1990s. Some claim that English might be an exception to the general trend, but the rise in English results suggests otherwise, and the exams taken this year hardly look rigorous.

While people have been desperately trying to get some mileage out of the above arguments, further facts have emerged. Any claim that giving the extra grade Cs would not have caused rampant grade inflation has been discredited by reports that the number of students affected by tightening up grade boundaries may have been close to 67,000, i.e. more than 10% of the cohort. Any claim that the problem was last-minute political interference from Michael Gove, or Ofqual, rather than exam boards moving to rectify a flaw in an exam that was always going to cause trouble, has also turned out to be mistaken. It has emerged that Ofqual had been talking about maintaining “comparable outcomes” (i.e. consistency between years) for some time now and, according to reports in the TES, had been aware of problems with early entry in English since 2009.

So far, those who have been complaining have been reacting to these stories as if they simply confirm their complaints, rather than confirm that something had to be done. But we now have a situation where those complaining that more Cs should have been given out have been wrong again, and again, and again:

  • It was claimed the day before results day that there was a general problem with English results. Then it turned out that actually C grades were down only 1.5%. Then it turned out that this was probably a result of increased entries for iGCSEs rather than the C grade being more difficult.
  • It was claimed that results were down because of deliberate political interference, probably at the last minute. Then it turned out that Ofqual had been saying results would be comparable for months and months and had been aware of problems with the exam since 2009.
  • It was claimed that there must have been so many grade Cs given out in January for controlled assessment that results in June must have been used to compensate. Then it turned out that only about 6% of CAs were actually submitted in January.
  • It was claimed that moving the boundaries wouldn’t have been necessary to maintain standards. Then it turned out that without the move there would have been 67,000 more Cs, i.e. a rise in passes of more than 10% since last year.

I suppose we shouldn’t be surprised that, after two decades of grade inflation, schools would fail to adjust to its coming to an end and get it completely wrong. However, it is shocking that they seem to think it is a scandal that they couldn’t just hand out C grades to 75% of the cohort, and seem to claim that their ability to manipulate results is more important than maintaining standards. No amount of incompetence on the part of the exam boards and Ofqual can actually distract from the fact that schools colluded in that incompetence, right up to the point where they stopped gaining grades from it.

12 comments

  1. Having read the full report, it seems that, as expected, the issue was with the January grade boundaries being generous and not the June ones being harsh. This, along with large numbers of schools expecting very large increases in their results (42% were expecting a 10%+ rise in A*-C grades), led to pupils being given unrealistic predictions.

    It is clear that the modular system has led to difficulties in producing consistent grade boundaries, especially when grades are first awarded. Coupled with the inherent difficulty of getting consistent marking in English, this has created a perfect storm that hopefully will lead to long-term changes in qualifications to avoid situations like this in the future.


  2. A fantastic response. I’ve been arguing this stance on the TES website, but have been told I wanted pupils to fail. What I cannot understand is why people ignore the fact that giving out a further 67,000 grade Cs would devalue the achievements of those who did work hard and achieve a good result, and would also devalue those who sat GCSEs last year and previously. I find the headteachers’ organisations’ stance to be one of protectionism rather than valuing education. They keep spouting the fact that those 67,000 pupils had been accepted onto college courses etc. based upon the grade C. I find this hard to accept. I find it difficult to believe that an extra 67,000 places on courses of a higher standard were available than is generally the case.


  3. Given the rampant target culture in education, it is hardly surprising that schools have colluded in the incompetence of the exam boards and Ofqual. If the incompetence has now been dealt with, am I correct in thinking that the number of C grades given to students will not increase while the number of C grades the government expects schools to get is increasing? If this is correct, then surely it places some schools in an impossible situation?


  4. 4 utterly compelling points that should quash the fuss once and for all.

    I particularly like the distinction between aggregate level performance and the attainment of certain learning objectives.

    I have spent 10 years arguing this distinction to SLT who just don’t ‘get it’. They love their ‘levelling’ posters.

    It’s one reason why APP was flawed too.

    There are lots of ways a student can, for example, understand ‘the phases of the moon’.

    But this learning objective can be ‘reached’ from grade G standard up to A* standard.

    This is why we need terminal exams, examining large sections of the taught syllabus under controlled conditions proving individual, unaided competence in a specific discipline.


  5. This is a perfect football for ideologues to kick around:

    1. long, clever-worded statements can be made to justify any point of view

    2. these make it hard to see clearly what the opposing points of view really are

    3. the “answer” is already known to the holder of a given opinion without needing to argue through logically

    4. there are impressionable people who feel that they are victims of some connivance, and whose innocent best efforts are easily mocked

    5. – and, best of all – they can’t possibly hit back at the political footballers

    Joy unconfined for the clever-clever ones: but beware when the head of Wycombe Abbey School (not known for its hoi polloi intake) gets upset. This is the establishment turning on itself, not Gove’s “trots” attacking the Daily Mail reader.

    Fiddle while it all burns, ye who know best. I can only hope the kids don’t believe a word any of you say when they grow up.


  6. The one thing I don’t understand about the exam boards’ actions is that they ended up with a 1.4 per cent drop in Cs out of a situation where they were faced with an infeasibly steep rise. Why did they set the boundaries to create such a drop? If it was because of the increase in IGCSE candidates, this would have affected higher boundaries more. Was this because of pressure from Ofqual or something else?


  7. Well done. Seems like you’re a lone voice on this issue.

    With regards to college courses, colleges need bums on seats and I somehow think they’ll find a way of making sure everyone gets on a course. In fact more accurate GCSE results ought to make it easier for students to start the right level of course and perhaps reduce the dropout rate.


  8. Having read Ofqual’s report, there is one comment which troubles me on a slightly separate note from the main issues it discusses. Whilst discussing the Controlled Assessment grade boundaries for June, they write:

    “the [AQA] committee had been unanimous in endorsing the controlled assessment boundaries as there was “evidence of significant teacher overmarking”. ”

    Overmarking is not a surprise – it is one of the dangers of the Controlled Assessment system. However, my understanding of the exam board’s systems is that this should have been redressed by moderation, adjusting centres’ marks based on the standard of marking in their submitted sample. To move grade boundaries for the entire assessment based on overmarking risks disadvantaging centres who had marked work more harshly.


    • As I understand it, it could only have been reduced by moderation if over-marked by more than their “tolerance” threshold. Which is why it was significant, as I mentioned last time, that the grade boundaries for CAs moved by an amount which was within those “tolerance” levels.


      • Isn’t that tantamount to an admission that their moderation system is inadequate?


        • Not really – all moderation done by humans has tolerance margins.

          In fact marking of terminal exam scripts also usually has tolerance levels.

          As OA says the fact that the exam board only shifted the boundaries by less than their tolerance levels indicates a very minor shift.


  9. […] Andrew Old’s dissenting view here and here. […]



Comments are closed.