
Some Final Words on the English GCSE Farrago

February 21, 2013

It’s probably worth mentioning how the regrading lobby have reacted to the high court judgement that OFQUAL acted fairly. The main response I have encountered has been along the lines of “I/We know what a C grade looks like and our students should have got C grades”. This is the same argument that I have highlighted previously (at least once; maybe twice) when it appeared on Geoff Barton’s blog:

I’m merely a humble English teacher, and it took me five attempts to get my O-level Maths, so I can’t do the fancy statistical pyrotechnics that others can.

But I know what a C in English looks like; I know what you have to do to achieve it; and I cannot accept that because some kind of quota system has been created, through the incompetence of distant bureaucrats, our students – my students – should have a D on their certificates rather than the C they deserve. I taught a group of Year 11 students this year and they – like so many across the country – have been let down. Obvious C grade students have been given a D.

After all, to get a C you essentially only need to be able to do three things: write using paragraphs; write using mostly accurate sentences and spelling; and be boring. If you stop being boring you move to a B or higher.

A grade C therefore demonstrates a general level of technical accuracy in the construction of writing and an ability to read that goes some way beneath the surface level of a text. It’s what we ought to be able to expect of more of our students.

And this, over many years, is what I’ve trained students to be able to do, in my own school, on courses and at conferences, and as a guest speaker in many other schools across England.

I know what a C grade is, what it looks like and involves, and I cannot accept that a cohort of up to 60,000 young people (according to ASCL’s calculations) should be denied the grade because of an error that’s not of their own or their teachers’ making.

As this appears to be the last remaining excuse for regrading, I think it’s time to assess it for credibility. Do English teachers have a well-developed sense of what C grade work is like and what a C student looks like? One assumes that this stems from marking all the coursework in the old English GCSE and accurately identifying whether it is the work of a C student. I am prepared to believe that English teachers were good at getting coursework (particularly tasks that have been done before) up to a C grade standard (that’s historically been part of the problem). However, this does not mean that there is a strong understanding of what a C should look like in any given exam or assessment.

For this to be credible, we would expect that the work of a C student could be easily identified regardless of whether it is teacher assessed or examiner assessed. We would expect a C grade in coursework to mean something roughly similar to a C grade in written exams. We would expect a consistent picture across all assessments and exams. Is this credible? Not according to the statistics collected by OFQUAL (see page 51, although the table appears to have the wrong date on it).

According to these, in the old GCSE (the one where English teachers could easily identify the work of a C grade student) there was a huge variation in the C pass rate between modules. For the foundation tier, the percentage of students getting a grade C or better in the speaking coursework was 70.9%. The percentage getting a grade C or better in the written coursework was 56.2%. That’s a bit of a discrepancy just between the two parts of the exam marked by teachers. However, for the two written exams, the parts marked by external examiners rather than by the students’ own teachers, the percentages getting a grade C or better were 4.4% and 4.5%. This is not a misprint. The supposedly easily identifiable C grade was identified more than a dozen times more frequently by teacher assessment than by external examiners.

Apart from being evidence that the old, coursework-based system was heavily reliant on getting higher marks in the teacher-assessed parts of the course, this makes a complete mockery of the claim that the work of a C student is easily identifiable. Far from knowing how to identify the work of a C student, teachers were seeing C grades everywhere. This is, to my mind, the fault of a system which forced teachers to manipulate grades, rather than the fault of teachers themselves, but any teacher who claims to have learnt to identify C grade work from their part in this disreputable exercise strains credibility. Similarly, anyone who claims that there are objective criteria which would identify C grade work regardless of the test or assessment in which it appears cannot be taken seriously. The system was rigged. The new GCSE rigged it some more. Things fell apart when the introduction of comparable outcomes stopped anyone getting away with it.

As a final note, in a blogpost Geoff Barton accepted the fight for a regrade was over. However, he remained determined to ignore the facts to the very end, identifying a series of “unanswered questions” which, to the best of my knowledge, have been clearly answered. Just in case anyone is unaware of those answers, I thought I’d answer them here.

Why did the exams boards get it so wrong?

It is now established that they got it wrong in January because of a lack of information about the comparative performance of the relatively small number of students who took their GCSEs in January.

What has happened as a result?

The GCSE is being reformed to get rid of modules that can be done early and Controlled Assessments that allowed manipulation of grades.

Who has been fired?

The main people responsible were those in charge of exams when the new GCSE was introduced. They had, on the whole, already gone either due to losing power in the general election or in the abolition of the QCDA. If anyone responsible is still in place, feel free to identify them.

Why was Ofqual so slow to respond to concerns they had raised long before the exams took place?

The framework for the exam was already in place. It could not be conveniently changed just because people in OFQUAL realised it was ill-judged.

Why did their subsequent report start by blaming the exam boards and then switch to blaming the teachers?

This is fantasy. The structure of the exams has been blamed. The earlier report suggested some problems with what the exam boards and teachers had done, the later report revealed how much of that was a result of an inappropriate structure and perverse incentives. The idea that teachers were blamed by OFQUAL was an invention of the media.

Why was Ofqual allowed to investigate itself?

It wasn’t. It investigated what had happened. While conspiracy theorists may have blamed either Michael Gove or OFQUAL for everything that happened, there was no actual evidence for any of it, and it was up to OFQUAL to find out who had done what and why, not to “investigate” an accusation. Any claim that they failed to get to the bottom of this has now been discredited by a high court judgement confirming all the substantial points OFQUAL had made.

How are English teachers supposed to prepare this year’s cohort of GCSE students?

Teach them. Get them to be better at English. You know, what schools should have been doing in the first place instead of gaming the system.

Is the message that however you do in the examination hall, some faceless bureaucrat will decide your grade according to the superstitious mantra of ‘comparable outcomes’?

It has always been the case that, in the final analysis, grades were decided by the faceless. Now they are being decided in a way likely to make them consistent from year to year, instead of going up every year. By all means make the case for grade inflation if you can, but don’t dismiss a refusal to inflate grades as “superstition”.


8 comments

  1. Interestingly comparable outcomes means that, when controlled assessment etc. goes and all work is externally assessed, these same students will be awarded a grade C because the ‘pass rate’ is fixed based on the cohort’s KS2 results*.

    Getting rid of teacher marked work won’t change the grades – it will just shine a light on the standards of exam work required to gain a C that currently exist.

    * there is a mechanism for exam boards departing from the comparable outcomes target but OFQUAL have identified that this is almost impossible to implement in practice.


  2. 4.5%!! That is… almost beyond parody… almost.

    A very fine essay. I’m going to award it an ‘A’

    – subject to moderation of course :)


  3. These people are fantasists. What really happened:

    The exam boards botched the January marking for whatever reason, handing out C grades to those who didn’t really deserve it. English teachers saw this as a way of artificially boosting their C grade levels and piled into it en masse, using the “model” C grade answers they got in January and replicating the work at that level, expecting that that would get a C grade in July.

    Then in July they realised the error in January, decided not to either regrade January or to allow devalued C grades and marked it at a harder level.

    Then the whinging started.


  4. Ofqual did blame teachers – they claimed we overmarked controlled assessments. It is the job of moderators, overseen by exam boards, and presumably Ofqual, to ensure these things are not overmarked.

    I know what a C grade looks like because I examine as well as teach. A C grade in an exam looks very different from a C grade at CW


    • The analysis showed that the average ‘overmark’ was by just over 1 mark. Yet the boundaries were moved by 3 marks, so the ‘overmarking’ only accounts for a little bit of the shift in boundaries.


    • They blamed the over-marking on the system and structure of exams.

      As for the other point, if you admit a C grade was different in different units, how can you claim to know what a C grade looks like in new units?


      • We had the same thing in DiDA (an ICT course). The sample coursework done by the exam board was laughably simplistic and a lot of schools piled into DiDA as a replacement for the old GNVQ Scam (a GNVQ was allegedly ‘worth’ 4 GCSEs).

        When the exams were actually marked teachers discovered that the marking was actually quite sensible and not giving GCSEs for old rope.


  5. I used to be a moderator. Over the years I moderated I guess 20+ centres.

    Some schools were excellent – suitable, well-ordered samples of well-marked work.

    I have also seen dreadful examples with ludicrously over-marked work, with centres unable to follow simple instructions, frequent transcription errors, gaps in portfolios, completely unsuitable work.

    Some were ‘scaled’, i.e. all the kids had their marks decreased or increased. However, if a school over-marked to the cusp of tolerance then nothing happened; they enjoyed the benefits of the over-marking.

    Also, some boards, hilariously, asked for their moderation sample, BEFORE or AT THE SAME TIME, as the submission of teacher assessed marks thus leaving a massive window for outright cheating or even invention of marks.

    I can’t believe it’s taken so long for the scandal to break.

    George – if what you say is true you have made a sterling argument against coursework too.


