Some Final Words on the English GCSE Farrago
February 21, 2013
It’s probably worth mentioning how the regrading lobby have reacted to the high court judgement that Ofqual acted fairly. The main response I have encountered has been along the lines of “I/We know what a C grade looks like and our students should have got C grades”. This is the same argument that I have highlighted previously (at least once; maybe twice) when it appeared on Geoff Barton’s blog:
I’m merely a humble English teacher, and it took me five attempts to get my O-level Maths, so I can’t do the fancy statistical pyrotechnics that others can.
But I know what a C in English looks like; I know what you have to do to achieve it; and I cannot accept that because some kind of quota system has been created, through the incompetence of distant bureaucrats, our students – my students – should have a D on their certificates rather than the C they deserve. I taught a group of Year 11 students this year and they – like so many across the country – have been let down. Obvious C grade students have been given a D.
After all, to get a C you essentially only need to be able to do three things: write using paragraphs; write using mostly accurate sentences and spelling; and be boring. If you stop being boring you move to a B or higher.
A grade C therefore demonstrates a general level of technical accuracy in the construction of writing and an ability to read that goes some way beneath the surface level of a text. It’s what we ought to be able to expect of more of our students.
And this, over many years, is what I’ve trained students to be able to do, in my own school, on courses and at conferences, and as a guest speaker in many other schools across England.
I know what a C grade is, what it looks like and involves, and I cannot accept that a cohort of up to 60,000 young people (according to ASCL’s calculations) should be denied the grade because of an error that’s not of their own or their teachers’ making.
As this appears to be the last remaining excuse for regrading, I think it’s time to assess it for credibility. Do English teachers have a well-developed sense of what C grade work is like and what a C student looks like? One assumes that this stems from marking all the coursework in the old English GCSE and identifying accurately whether it is the work of a C student. I am prepared to believe that English teachers were good at getting coursework (particularly tasks that had been done before) up to a C grade standard (that’s historically been part of the problem). However, this does not mean that there is a strong understanding of what a C should look like in any given exam or assessment.

For the claim to be credible, we would expect that the work of a C student could be easily identified regardless of whether it is teacher assessed or examiner assessed. We would expect a C grade in coursework to mean something roughly similar to a C grade in written exams. We would expect a consistent picture across all assessments and exams. Is this credible? Not according to the statistics collected by Ofqual (see page 51, although the table appears to have the wrong date on it). According to these, in the old GCSE (the one where English teachers could supposedly easily identify the work of a C grade student) there was a huge variation in the C pass rate between modules.

For the foundation tier, the percentage of students getting a grade C or better in the speaking coursework was 70.9%. The percentage getting a grade C or better in the written coursework was 56.2%. That’s a bit of a discrepancy just between the two parts of the exam marked by teachers. However, for the two written exams, the parts marked by external examiners rather than by the students’ own teachers, the percentages of C grades or better were 4.4% and 4.5%. This is not a misprint. The supposedly easily identifiable C grade was identified more than a dozen times more frequently by teacher assessment than by external examiners.
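For anyone who wants to check the “more than a dozen times” claim, the ratios follow directly from the percentages quoted above (a minimal sketch; the variable names are mine, the figures are from the Ofqual table):

```python
# Foundation-tier percentages of students at grade C or better,
# as quoted above from the Ofqual statistics (page 51).
speaking_coursework = 70.9        # teacher-assessed speaking
written_coursework = 56.2         # teacher-assessed writing
written_exam_papers = [4.4, 4.5]  # the two externally marked written exams

# Gap between the two teacher-assessed components, in percentage points.
print(round(speaking_coursework - written_coursework, 1))  # 14.7

# How many times more often teachers awarded a C in written work
# than external examiners did.
for paper in written_exam_papers:
    print(round(written_coursework / paper, 1))  # 12.8 and 12.5
```

Both ratios come out above twelve, which is where “more than a dozen” comes from.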
Apart from being evidence that the old, coursework-based system relied heavily on getting higher marks in the teacher-assessed parts of the course, this makes a complete mockery of the claim that the work of a C student is easily identifiable. Far from knowing how to identify the work of a C student, teachers were seeing C grades everywhere. This is, to my mind, the fault of a system which forced teachers to manipulate grades, rather than the fault of teachers themselves, but any teacher who claims to have learnt to identify C grade work from their part in this disreputable exercise strains credibility. Similarly, anyone who claims that there are objective criteria which would identify C grade work regardless of the test or assessment in which it appears cannot be taken seriously. The system was rigged. The new GCSE rigged it some more. Things fell apart when the introduction of comparable outcomes stopped anyone getting away with it.
As a final note, in a blogpost Geoff Barton accepted that the fight for a regrade was over. However, he remained determined to ignore the facts to the very end, identifying a series of “unanswered questions” which, to the best of my knowledge, have been clearly answered. Just in case anyone is unaware of those answers, I thought I’d answer them here.
Why did the exams boards get it so wrong?
It is now established that they got it wrong in January because of a lack of information about the comparative performance of the relatively small number of students who took their GCSEs in January.
What has happened as a result?
The GCSE is being reformed to get rid of modules that can be done early and Controlled Assessments that allowed manipulation of grades.
Who has been fired?
The main people responsible were those in charge of exams when the new GCSE was introduced. They had, on the whole, already gone, either through losing power in the general election or through the abolition of the QCDA. If anyone responsible is still in place, feel free to identify them.
Why was Ofqual so slow to respond to concerns they had raised long before the exams took place?
The framework for the exam was already in place. It could not be conveniently changed just because people in Ofqual realised it was ill-judged.
Why did their subsequent report start by blaming the exam boards and then switch to blaming the teachers?
This is fantasy. The structure of the exams has been blamed. The earlier report suggested some problems with what the exam boards and teachers had done; the later report revealed how much of that was a result of an inappropriate structure and perverse incentives. The idea that teachers were blamed by Ofqual was an invention of the media.
Why was Ofqual allowed to investigate itself?
It wasn’t. It investigated what had happened. While conspiracy theorists may have blamed either Michael Gove or Ofqual for everything that happened, there was no actual evidence for any of it, and it was up to Ofqual to find out who had done what and why, not to “investigate” an accusation. Any claim that it failed to get to the bottom of this has now been discredited by a high court judgement confirming all the substantial points Ofqual had made.
How are English teachers supposed to prepare this year’s cohort of GCSE students?
Teach them. Get them to be better at English. You know, what schools should have been doing in the first place instead of gaming the system.
Is the message that, however you do in the examination hall, some faceless bureaucrat will decide your grade according to the superstitious mantra of ‘comparable outcomes’?
It has always been the case that, in the final analysis, grades were decided by the faceless. Now they are being decided in a way likely to make them consistent from year to year, instead of going up every year. By all means make the case for grade inflation if you can, but don’t dismiss a refusal to inflate grades as “superstition”.