Archive for the ‘Commentary’ Category

Good culture mitigates bad behaviour, it doesn’t cure it

September 26, 2020

My last post, Non-Teachers Telling Teachers What to Think, was, as I said at the time, a bit delayed. It was originally prompted by some of the debate about the effects of facemasks. Teachers had been worried that facemasks would be compulsory in lessons and that this would have an impact on behaviour. In practice – at least I think for most of us – facemasks haven’t been compulsory in lessons and they aren’t comfortable enough for kids to choose to wear them all day. My experience is that, where they have been worn, they’ve had relatively little effect on behaviour. Had they been compulsory in lessons, I think there would have been a more significant impact, but I could be wrong.

In that last post, I noted the extent to which those who dismissed concerns about masks were non-teachers, but there was something else I noticed about that discussion. People would claim that if masks had a negative effect on behaviour in a school, then that showed that the culture of behaviour in that school was poor. This was even suggested about schools with an absolutely exemplary reputation for behaviour. And facemasks are not the only discussion where a claim about negative effects on behaviour has been met with a claim about culture. I saw a similar comment yesterday about the effects of kids staying in the same classroom all day due to Covid. There seems to be an assumption that if we can expect to see behaviour get worse, then there’s something wrong with school culture.

I think we should avoid this sort of argument, and I say that as somebody who believes all schools with good behaviour have a good culture. I do think that culture does the most to determine how good behaviour is in schools. How kids behave is, most of the time, how they expect to behave, and much of that time, it’s how their peers behave. The beliefs a child has about how one usually behaves in school seem to have more impact than any behaviour policy, or individual teachers, or set of sanctions, or anything. This is why the best behaviour managers will still struggle in a new school, and why in some schools good behaviour seems almost effortless to achieve.

However, we need to appreciate that unless you have the most exceptional parents, good culture is constructed deliberately by schools. Schools have a good culture of behaviour because bad behaviour is dealt with. They have it because potential bad behaviour is anticipated and prevented. They have it because any attempt to undermine expectations will be thwarted. All those elements of behaviour management that are not as important as culture, like rules, sanctions and teacher consistency, work together to build culture. This is why it is a mistake to think that if a school has a good culture of behaviour, we can stop worrying about behaviour. Schools with the best behaviour are not the ones where you stop worrying about behaviour; they are the ones where you never stop addressing behaviour, not because bad behaviour is common, but because it can always be even rarer.

And this is why I think we should never argue “X won’t be a problem in a school where the culture is really good”. Culture mitigates bad behaviour; it does not cure it. It might mean that a wasp coming into the classroom causes five seconds of distraction, not five minutes, but the way to get that good culture is to keep working on getting it down to five nanoseconds. In schools with bad behaviour, people make decisions that will make behaviour worse without even thinking about it. In schools with great behaviour, people avoid decisions that will make behaviour worse, even if the effect is marginal. Sometimes decisions that will have a negative effect on behaviour are inevitable – living with Covid has certainly forced schools to make tough choices they’d not have made otherwise – but such decisions should never be made thoughtlessly. It doesn’t matter how good a school’s behaviour is, we should never be casual about making it worse, even in a marginal way. If you think your school’s good culture means you don’t have to worry about behaviour, it won’t have that good culture for long. Schools rapidly go from having great behaviour to “good enough” behaviour, and from “good enough” behaviour to poor behaviour. So let’s give teachers who worry about losing ten seconds of teaching time, or having one more interruption in the lesson, some respect. Attentiveness to the potential for behaviour becoming worse is a building block of the great culture that ensures it doesn’t.

Non-Teachers Telling Teachers What to Think

September 19, 2020

I wrote this at the start of term for an education publication that then didn’t use it. Apologies that it’s slightly out of date.

Like a lot of teachers, I found the prospect of returning to school after lockdown daunting. I’d be teaching in unfamiliar parts of my school; the make-up of many classes would be different; break and lunchtime routines would be transformed. In the last few days of the holiday it was announced that face masks might be required in schools. Regardless of whether it is the right policy, it was an extra complication for teachers and, for many, an unwelcome one. A particular concern was how it would affect behaviour if children’s faces were partially covered. As teachers took to the media (and social media) to express their apprehension, I was surprised to see a number of ex-teachers explaining how there would be no behaviour difficulties of note and dismissing concerns. I hoped they were right, and I’m sure some teachers agreed with them. Nevertheless, how could anyone who wasn’t currently experiencing the changes in classroom routines, and the general stress of being about to return to the classroom on a regular basis after months of working from home, possibly judge the impact of last-minute changes? Who are they to tell us our concerns are not reasonable?

Of course, an ad hominem argument is fallacious. Whether somebody is right or wrong does not depend on who they are, but on what they say. Ex-teachers, and even people who have never been teachers, have frequently told me things about teaching and learning that I have found useful and wise. So why do I sometimes lose patience with those opinions expressed from outside of the profession? I think that some people show insufficient respect for the insights of those who still teach.

Firstly, there are those who claim that teachers are wrong about what they experience at work, particularly the problems of the job. They suggest that workload can’t be all that bad with those long leisurely holidays, or that children’s bad behaviour can’t be that serious. I’ve lost count of the number of times people who, having never taught in a challenging school, or having traded the classroom for the office at the first opportunity, have told me their hot takes about behaviour I see every day, or how well they understand the children they no longer encounter.

Secondly, there are those who know how to do the job of teaching better than those who do it. Behaviour worries would melt away if you just shook hands with students at the door. Shakespeare would entrance every student if you just explained how it was like rap music. Simultaneous equations would be grasped in a second if you used graphs that showed how phone companies charge. Indeed, listening to some ex-teachers you’d have to wonder how so many of those apparently infallible and endlessly caring practitioners would ever have come to abandon the classroom.

Thirdly, there are those who claim to speak for teachers. Over the years I’ve read newspaper headlines about what teachers are saying, or even petitions supposedly signed by thousands of teachers, that actually just represented the opinions of educationalists, consultants, or full time trade union activists. Too often, teachers are seen not as individuals, but as a single interest group, supposedly signed up to some simple political idea that actually doesn’t reflect the priorities of anyone in the classroom.

Finally, there are those who wish to take power from teachers. There are influential organisations that were set up to represent teachers but ended up dominated by those who no longer teach. I’ve known some educationalists to be outraged when politicians and policymakers show signs of listening to those still in the classroom rather than non-teaching “experts” in teaching.

I’ll calm down now, because I have learnt loads from governors, advisors, academics and MAT CEOs. I don’t believe for a second that only teachers are worth listening to. But there are definitely times, like now, when the only people who can really know what it’s like to be teaching, are teachers.

Another look at exclusions and SEND

September 12, 2020

A couple of years ago I looked at the rhetoric around permanent exclusions and SEND in this blogpost. As I explained, it is argued that…

…a disproportionate number of excluded pupils have SEND (Special educational needs and disability). This is a favourite fact of those who believe that children are not responsible for their bad behaviour. The impression is given that a child will only behave badly because they have SEND, and that schools then cruelly exclude them rather than supporting them with their SEND. Some get so carried away with the idea that they will talk about badly behaved children and the disabled as if they were interchangeable. One Australian article on exclusions actually illustrated the connection between SEND and exclusions with a picture of a young person in a wheelchair, as if those with physical disabilities were likely to be excluded.

A lot of this is designed to fool politicians, or parents, who may have no idea how the SEND system works. They may imagine a precise, objective system of identifying a coherent category of genuine needs and disabilities in a small minority on the basis of scientific evidence in order to assist them in ways that have been shown to work. Having made this mistake it would be easy to assume that there is no reason why students with SEND would be disproportionately represented in the exclusion figures, unless they were the victims of prejudice or their bad behaviour resulted from their SEND in a way that suggests it was not their fault. This then allows the anti-exclusion lobby to claim that exclusions are a form of discrimination against the disabled, an issue of social justice, and very probably illegal.

Roughly speaking, those who wish to obstruct or prevent permanent exclusions argue that the situation looks like this.

I argued that there were two problems with this picture.

  1. The labelling of students with SEND is not a precise process of diagnosis which identifies a meaningful difference between SEND and non-SEND students. It covers a fairly arbitrary category of students, pretty much anyone who needs extra help for any reason. The one exception to this is those whose difficulties are due to not speaking English as a first language, which is considered to be distinct from all other difficulties and given a different category.
  2. If a child is badly behaved, and particularly if they are at risk of exclusion, there are lots of incentives to look for SEND and to label them SEND, including types of SEND that are identified mainly from bad behaviour.

Taking this into account, the process looks more like this:

When I wrote that previous post I argued mainly from

  • experience;
  • reports into the SEND system;
  • the rules regarding identifying SEND;
  • and the rules regarding exclusions.

As a whole, these generally seemed to indicate that it was easy for a school to classify a child as SEND if it wanted to, and that there are incentives to label badly behaved children as having SEND. However, while this seems plausible, and plenty of teachers confirmed this was their experience, I didn’t show whether it was supported by the data. I am now able to do this.

My first claim above was about how arbitrary the category of SEND is and, in particular, the extent to which, if you look for SEND in a student, you will find it. The New Labour years saw an expansion of the SEN bureaucracy, and teachers can tell you just how much paperwork they saw produced on students that identified trivial problems, or made amateur diagnoses of fashionable problems, and recommended interventions that were impractical and not evidence-based. FFT Education Datalab looked at the SEN data and, in a blogpost entitled More pupils have special educational needs than you might think, confirmed the scale of the phenomenon. Looking at the cohort of students who were in year 11 in 2016/17, they found that “44% of the cohort had ever been classified as having SEN by the time they reached the end of Year 11”. As most permanent exclusions involve boys, I asked on Twitter what percentage of boys had been classified as SEN at some point, and was told:

So it would appear that for some cohorts it was possible to identify a majority of boys as having special needs at some point, which is a curious definition of “special” in itself. I think this is good evidence for my first point: when you look hard enough for SEND in a child, you will find it.

My second point was about the extent to which badly behaved students are identified as having SEND, rather than students who have SEND being likely to behave badly. We know that excluded students are likely to have SEND. If it is bad behaviour that results in students being labelled SEND, we would expect the categories of SEND most linked to exclusion to be those that are likely to be diagnosed from bad behaviour. We would also expect those students with an EHC Plan, or statement of SEN, i.e. those for whom more evidence of genuine need has been identified, to have a lower risk of exclusion than those just labelled by schools. We would also expect non-specific SEND labels, where a school has decided a child has SEN but has not even identified enough evidence to say what the SEN is, to be well represented among the excluded. If, however, SEND causes bad behaviour, or permanent exclusions discriminate against those with SEND, we would expect a wide variety of SEND categories to be represented among the permanently excluded, and we would expect those with more evidence of genuine SEND (i.e. those with an EHC Plan or statement of SEN), and those with more clearly identified SEND, to be more likely to be excluded.

Fortunately, the Timpson report looked at whether SEND was a risk factor for exclusion after controlling for other factors.

This chart shows the risk of a student without SEND being excluded as a horizontal line, and those categories of SEND that depart significantly from this level of risk are in dark blue. Those categories of SEND with no statistically significant difference in risk from those with no SEND are in light blue.

As you can see, the data shows that having an EHC plan, or statement of SEN, for anything other than “Behavioural, emotional and social difficulties” and “social, emotional and mental health”, the two categories most likely to be diagnosed from extreme poor behaviour, actually lowers the risk of exclusion. For those who are identified by schools as having SEND, but without an EHCP/statement, the very high odds of exclusion are found in those same two categories and the miscellaneous category of “SEN type not recorded”. Although there is a statistically significant higher risk for some other categories of SEN, they are not much higher, given the incentives for diagnosis. This is all far more consistent with the “bad behaviour leads to being labelled SEND” hypothesis than the “having SEND leads to exclusion” hypothesis. For those involved in the debate around this issue, where children who are excluded unfairly for behaviour related to their autism feature prominently in the rhetoric, it is particularly noticeable that children with Autistic Spectrum Disorder do not have a high risk of being permanently excluded. If they have an EHC plan or a statement of SEN, they have less chance of being excluded (everything else being equal) than a student without SEND.

This will not stop the debate. Those who believe that permanent exclusions are never justified will argue that even the most extreme behaviour is a result of “unmet needs”, regardless of the data. It’s impossible to exaggerate the tenuous nature of the reasoning used to portray excluded children as helpless victims, and school leaders as villains. A report on exclusions from the think tank IPPR followed up the claim that SEND is a causal factor in exclusions with the following argument for believing it likely that all excluded students have mental health problems:

In 2015/16, one in fifty children in the general population was recognised as having a social, emotional and mental health need (SEMH). In schools for excluded pupils this rose to one in two. Yet the incidence of mental ill health among excluded pupils is likely to be much higher than these figures suggest. Only half of children with clinically diagnosed conduct disorders and a third of children with similarly diagnosed emotional disorders are recognised in their schools as having special educational needs. This means the proportion of excluded children with mental health problems is likely closer to 100 per cent.

The errors of reasoning in this are incredible. SEMH is not synonymous with “mental health problems”; it’s a category that can include those whose difficulty is that they are badly behaved. “Schools for excluded pupils” here appears to mean Pupil Referral Units (PRUs) which, while they are often attended by excluded pupils, are actually institutions for any students who are unable to attend school, including those who are unable to attend due to SEMH. Therefore, their SEMH figures tell us nothing about the rate of SEMH among excluded children. It is, of course, possible to find out the actual proportion of excluded students with a label of SEMH that year by looking at the figures. In 2015/2016 the number of excluded children labelled as having SEMH was 1,860 out of 6,685, or 27.8% (which is surprisingly low given that poor behaviour is a common reason to label a child with SEMH). The “clinically diagnosed conduct disorders” and “similarly diagnosed emotional disorders” were diagnosed from survey data (collected from parents, teachers and children themselves) by a method that found 6% of young people to have a conduct disorder and 4% to have an emotional disorder, not from direct assessments by clinicians. While the survey did find that a large minority of the former category, and almost two thirds of the latter category, did not have officially recognised Special Educational Needs at that time, it was not referring specifically to either permanently excluded children or children in PRUs, which may be wildly different populations. Any one of these errors (assuming this is just an extremely unlikely series of mistakes, rather than a deliberate intention to deceive) would invalidate the argument; so many errors in one paragraph suggests the IPPR was not too bothered about factual accuracy.

Does it matter that such dodgy data is being used? Well, the IPPR is a well-established and supposedly reputable think tank. The author of this report went on to set up The Difference, a very influential charity that has done a lot to oppose schools’ right to exclude. The one in two figure was quoted as fact, sometimes alongside the 100% figure, by the BBC, Schools Week, The Huffington Post and the Guardian, and was even referred to in a report of the House of Commons Education Committee. Invented and contrived statistics about exclusions can be widely circulated by the media, politicians, charities and think tanks. However, the fact that excluded children often have the label of SEND is not evidence that innocent children with SEND are being unfairly excluded, only evidence that we label the children likely to be excluded as having SEND, and it’s time the public debate reflected this truth, rather than the horror stories of the anti-exclusion lobby.

Teachers on the Edge

September 6, 2020

Making the frontline the centre of the education system

The biggest difference in education is made by those at the frontline: the teachers (including school leaders), lecturers and support staff. They know who they are serving; they have a responsibility to their learners. They can also see more directly what is working and what isn’t. At every other level, and unfortunately sometimes in school leadership, there is a distance between the decisions made and their results in actual classrooms.

At other levels, the education system is its own worst enemy. This is not a whine about the political leadership of education: the politicians, the policy makers and the civil servants. For good or ill, their careers usually cover far more than just education, changing portfolios and moving departments as they progress. Whatever faults they bring to the system, they usually take with them when they go. What I am referring to is the way that parts of the education system itself seem to be perpetually focused on something other than education.

It’s a given that those responsible for tens of thousands of schools and other educational institutions are not trying to shape every single classroom. Whether they do their job well or not, it’s clear that their responsibility is to serve the interests of the public as a whole. It’s also clear that they can consult frontline staff if they wish to, and it’s not obvious that they have any particular reason not to. What concerns me is those parts of the system which seem to have a vested interest in keeping frontline staff out of sight and out of influence. There are parts of the system that tell frontline staff what to do, but do not have to do those frontline jobs themselves, often haven’t done them for years, and often look very uncomfortable if those at the frontline have any say in the matter.

In ITT, education departments in universities overwhelmingly expect those training teachers to teach to be full-time academics and not to be teaching in schools. As a result, ITT staff are often concerned only with the political and pedagogical orthodoxies of educationalists, not what works in schools. They have no ‘skin in the game’. On issues such as mixed ability teaching and the use of exclusion and discipline in schools, university education lecturers typically appear to have attitudes that are militant, extreme and entirely out of touch with teachers. While they would claim their positions are more evidence-informed than those of teachers, there are also some issues, such as phonics, where it is noticeable how often educationalists stand against the evidence.

Frontline staff are not encouraged to have much say over their own professional development. CPD budgets are spent by schools and colleges, not by the individual professionals. While it is only appropriate for schools and colleges to provide some proportion of CPD – after all, schools need to train their staff in school-specific systems and expectations – this has left education workers unable to set their own priorities. As a result, a voluntary “shadow” system of CPD has developed that teachers take part in during their own time and often pay for out of their own pockets. After-school teach meets, BrewED events in pubs, and huge researchED conferences at weekends rely on speakers (often frontline staff themselves) speaking for free and teachers attending in their own time. Sometimes school staff can ask their schools to pay for tickets or travel (although I suspect most don’t), but attendance is on top of the time already spent on days of employer-directed CPD.

A considerable downside to too much employer-directed, and too little self-directed, CPD is that a market for a particular type of consultant has been created. Rather than concentrating on improving the effectiveness of frontline staff, these consultants concentrate on appealing to managers. Teachers find they are given training on how to help the school pass inspections and how to ensure that their response to bad behaviour doesn’t create work for those in charge, rather than being trained on how to teach or manage behaviour more effectively. They may even be employed simply to fill a gap in the schedule for an INSET day, or to give a motivational talk, rather than to provide meaningful professional development. This type of consultant then becomes another vested interest within the system, arguing against effective teaching methods and whole school behaviour systems.

And once you have consultants and educationalists earning a living without providing a benefit to frontline staff, they take an interest in capturing resources intended to serve the frontline. The marginalisation of the frontline is perhaps best illustrated by the way that, in recent years, new institutions have promised to change the balance of power only to replicate what already existed. Two recent examples of institutions funded by the DfE being created to serve the frontline and being captured by interests other than the frontline are:

The Education Endowment Foundation. This was apparently intended to move control over education research away from the ideologically motivated individuals in education academia. Michael Gove claimed it would “provide additional money for those teachers who develop innovative approaches to tackling disadvantage” and that “it is teachers who are bidding for its support and establishing a new research base to inform education policy” [my emphasis]. In practice, its chief executive is an educationalist who has been involved in writing papers on how setting children into ability groups is “symbolic violence” based on the theories of Bourdieu. The EEF is now a law unto itself in the agendas it promotes. It recently squandered funds for research into the effectiveness of setting and mixed ability by failing to compare them directly, and continues to share older research of doubtful provenance instead. And nobody can work out who, other than the opponents of phonics, wanted the EEF to spend money on the latest iteration of Reading Recovery.

The Chartered College of Teaching. This was created by government policy (and government funding) to be an independent teacher-led professional body, “run by teachers, for teachers”. In practice, it is run largely by ex-teachers who already have or had positions of power in education; it is funded by employers; and it is now only too happy to campaign against government policy, even taking its lead from the trade unions. It now holds events in the daytime when most teachers can’t leave school, promotes educational fads, and censors teachers who dare question educationalists.

Another issue is how difficult it is for frontline staff to express opinions. Teachers have been reported to their employers for expressing opinions on social media. Those training to teach have been reported to their training institutions. Without being able to divulge the details of specific cases, it’s hard to prove the trivial nature of such instances. But it doesn’t take long on teacher Twitter to discover that, whereas consultants and educationalists can heap online abuse on anyone they like, teachers find there are professional consequences for even disagreeing with fashionable opinions, and very often those making the complaints are the same consultants and educationalists who have complete freedom of speech themselves.

Finally, the education system promotes and protects the beliefs and interests of those who make the job at the frontline more difficult. Some of this, like the consultants described earlier, appears to be about self-interest. We have organisations that provide training to schools campaigning for the government to ban internal exclusions, suspensions and expulsion, thus creating behaviour problems which require more training for staff. We have organisations that provide mental health services and advice to schools, running public campaigns claiming there is a youth mental health crisis that requires schools to spend more money on mental health services and advice.

To be charitable, it’s not all self-interest, sometimes it’s ideological. When the newly appointed head of Goldsmiths Education department indicates that her department’s programmes focus on “inclusion and social justice in educational settings”, she is no doubt sincere, but it is far from clear why money from the education budget should fund an organisation with such openly political priorities. Similarly, when The Children’s Commissioner joins an online campaign that demonises schools, she is no doubt sincere in her belief that the campaigners are right that schools are cruel and internal exclusion is unnecessary. But it’s far from clear why the government should be funding ideologically motivated attacks on things that are perfectly normal in schools.

Here are my suggestions for changing the system to empower the frontline.

  1. Remove all ITT from university education departments. No teacher needs to be trained by experts in Marxist sociology and critical theory. Remove funds from any organisation, such as the EEF, that is giving power and influence to educationalists to promote their pet theories of learning.
  2. Reduce the number of CPD days controlled by schools, and allow teachers to choose their own CPD for part of that allocation and encourage schools to make this as convenient as possible. Make it harder to make a living providing CPD that teachers don’t want, and easier to make a living providing CPD that teachers would choose for themselves.
  3. Create incentives for those providing teacher training or employer-directed CPD to also teach, whether that’s in the structures or in financial incentives. All parts of the system should be encouraged to audit the extent to which those that shape its policies are currently working at the frontline of education. It would be fascinating to know what proportion of people invited into the DfE to give advice on the education system have worked in a school or college in any capacity other than consultancy in the previous week.
  4. Give teachers a right to freedom of speech. While teachers should not be able to say anything they like about their employers or their students, it is not up to schools to regulate opinions on pedagogy or politics expressed on social media by teachers who are not representing their employer and sometimes not even writing under their own name.
  5. Require every organisation that receives funds directly from the DfE, or indirectly from educational institutions, to refrain from taking part in, or funding, anything close to political activism. Abolish completely any institution, such as the Office of the Children’s Commissioner, that seems to have been set up almost entirely to push an ideological agenda.

The tragedy of grades based on predictions

August 16, 2020

When I wrote about an exam announcement last week it was out of date before I’d finished typing. This post too may now be out of date if the appeals system allows major changes, but I have seen so much false information that I thought I’d better get this out there.

Exams were not sat this year. The decision was made instead to predict what grades would have been given. This is probably the decision that should have been debated. Instead, the debate has centred on how grades were predicted, with much talk of an evil algorithm crushing children’s hopes. Some wished to predict grades deliberately inaccurately in order to allow grade inflation to hide the problems. Because opportunities such as university places and employment are finite, grade inflation doesn’t actually solve any problem. What it does is ensure that when people lose out on opportunities, it will not be clear that this year’s grades were the problem. I argued against the idea that grade inflation solves problems here and will not be going into it again now, but it is worth noting that most disagreement with any opinions I express in this post will come from advocates of using grade inflation to solve problems, rather than anything else. In particular, it needs to be acknowledged that the use of teacher assessment would, on average, have led to more grade inflation.

However, because people seemed to think inaccuracy in grades would justify grade inflation, and because people objected to specific grades when they arrived, there has now been huge debate about how grades were given. Much of this has been ill-informed. 

I intend to explain the following:

  1. How grades are predicted.
  2. Why predicted grades are inaccurate.
  3. What claims about the process are false or unproven.

Normally, I’d split this into 3 posts, but things are moving so fast I assumed people would want all this at once in one long post.

How grades are predicted.

Ofqual produced a statistical model that would predict the likeliest grades for each centre (usually a school or college). This used all the available data (past performance and the past grades of the current cohort) to predict what this year’s performance would have been. This was done in accordance with what previous data showed would predict grades accurately. A lot of comment has assumed that if people are now unhappy with these predictions or individual results, then there must have been a mistake in this statistical model. However, this is not something where one can simply point at things one doesn’t like and say “fix it”. You can test statistical models using old data, e.g. predict 2019 grades from the years before 2019. If you have a model that predicts better than Ofqual’s, then you win: you are right. If you don’t, and you don’t know why the Ofqual model predicts how it does, then you are probably wrong. In the end, proportions of grades were calculated from grades given in recent years, then adjusted in light of GCSE information about current students, and then the number of expected A-levels in each subject at each grade was calculated for each centre. Centres were given information about what happened in this process in their case.
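To make the shape of that process concrete, here is a minimal sketch of centre-level prediction in Python. This is emphatically not Ofqual’s actual model: the adjustment scheme, function names and every number are invented for illustration.

```python
# A toy sketch of centre-level grade prediction. NOT Ofqual's actual model;
# it only illustrates the general idea described above: start from the
# centre's historical grade distribution, nudge it according to the current
# cohort's prior (GCSE) attainment, and convert the result into grade counts.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def predict_centre_grades(historical_share, cohort_adjustment, cohort_size):
    """historical_share: fraction of entries at each grade in recent years.
    cohort_adjustment: hypothetical per-grade shift derived from comparing
    this cohort's GCSE results with those of previous cohorts.
    Returns the expected number of students at each grade."""
    adjusted = [max(s + a, 0.0) for s, a in zip(historical_share, cohort_adjustment)]
    total = sum(adjusted)
    shares = [x / total for x in adjusted]  # renormalise so shares sum to 1
    # Rounding means counts may not sum exactly to cohort_size; a real model
    # would allocate the remainders deterministically.
    return {g: round(p * cohort_size) for g, p in zip(GRADES, shares)}

# Hypothetical centre whose current cohort is slightly stronger than usual.
history = [0.05, 0.15, 0.30, 0.25, 0.15, 0.08, 0.02]
adjustment = [0.01, 0.02, 0.01, -0.01, -0.02, -0.01, 0.00]
print(predict_centre_grades(history, adjustment, cohort_size=60))
```

The point of the sketch is only that a backtested rule of this kind produces a grade distribution for the centre, not grades for named students; that next step is described below.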

Although the model came up with the grades at centre level, which students got which grades was decided by the centres. Centres ranked their students in each subject and grades were given in rank order. Some commentary has overlooked this, talking as if the statistical model decided every student’s grade. It did not. It determined what grades were available to be given (with an exception to be discussed in the next paragraph), not which student should get which grade. As a result, the majority of grades were not changed, and where they were, it would often have been a result of the ranking as well as the statistical model.
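A minimal sketch of that allocation step (again my own illustration, not the official implementation): whatever grades the centre-level prediction makes available are handed out from the top of the centre’s ranking downwards. Names and counts are hypothetical.

```python
def allocate_by_rank(ranked_students, grade_counts):
    """ranked_students: the centre's ranking, best student first.
    grade_counts: how many of each grade are available at centre level,
    e.g. the output of predict_centre_grades in the sketch above.
    Returns a mapping of student -> grade."""
    results = {}
    students = iter(ranked_students)
    for grade in ["A*", "A", "B", "C", "D", "E", "U"]:
        for _ in range(grade_counts.get(grade, 0)):
            student = next(students, None)
            if student is None:
                return results  # no students left to grade
            results[student] = grade
    return results

# Hypothetical cohort of four, with one A*, two As and one B available.
print(allocate_by_rank(["W", "X", "Y", "Z"], {"A*": 1, "A": 2, "B": 1}))
# -> {'W': 'A*', 'X': 'A', 'Y': 'A', 'Z': 'B'}
```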

Finally, there was an exception because of the problem of “small cohorts” taking exams, i.e. where centres had very few students taking a particular exam (or very few had taken it in the past). This is because, where there was less data, it would be harder to predict what grades were likely to be given. Centres had also been asked to predict grades (Centre Assessed Grades or CAGs) for each student, and for the smallest cohorts these were accepted. Slightly larger cohorts were given a compromise between the CAGs and the statistical model, and for cohorts that were larger still, the statistical model alone was used.
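A sketch of what such a compromise could look like follows. The linear weighting and the cohort-size thresholds are placeholders chosen purely for illustration; Ofqual published its own weighting scheme, which this does not reproduce.

```python
def blended_grade(cag, model_grade, cohort_size, small=5, large=15):
    """Blend a CAG with the statistical model's grade by cohort size.
    Grades are on a numeric scale (e.g. U = 0 ... A* = 6) so that they can
    be averaged. Thresholds and weighting here are hypothetical."""
    if cohort_size <= small:
        return cag          # smallest cohorts: CAG accepted in full
    if cohort_size >= large:
        return model_grade  # large cohorts: statistical model alone
    # In between, the weight on the model grows with cohort size.
    w = (cohort_size - small) / (large - small)
    return w * model_grade + (1 - w) * cag

print(blended_grade(cag=5, model_grade=4, cohort_size=10))  # -> 4.5
```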

It is important to understand this process if you think a particular grade is wrong. Without knowing whether the cohort was small, why the statistical model would have predicted what it did, how the distribution was calculated for a centre, and where a student was in the ranking, you do not know how a grade came to be given. For some reason, people have jumped to declare the evils of an “algorithm”. Didn’t get your result? It’s the result of an algorithm.

As a maths teacher, I quite like algorithms. Algorithms are the rules and processes used to solve a problem, perhaps best seen as the recipe for getting an answer. Every year algorithms are used after exams to decide grade boundaries and give grades. A mark scheme is also an algorithm. The alternative to algorithms deciding things is making arbitrary judgements that don’t follow rules. This year is different in that CAGs, a statistical model (also a type of algorithm), and centre rankings have replaced exams. The first thing that people need to do to discuss this sensibly is to stop talking about an algorithm that decided everything. If you mean the statistical model then say “the statistical model”. There are other algorithms involved in the process, but they are more like the algorithms used every year: rules that turn messy information into grades. Nobody should be arguing that the process of giving grades should not happen according to rules. Nobody in an exam board should be making it up as they go along.
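To see why “algorithm” should not be a scare word, consider the grade-boundary lookup applied to marks in any normal year; it is an algorithm in exactly the same sense. The boundary values below are invented for illustration.

```python
def grade_from_mark(mark, boundaries):
    """boundaries: (minimum mark, grade) pairs, highest boundary first.
    An ordinary, uncontroversial algorithm: rules turning marks into grades."""
    for minimum, grade in boundaries:
        if mark >= minimum:
            return grade
    return "U"  # below the lowest boundary

# Hypothetical boundaries for a single paper.
boundaries = [(80, "A*"), (70, "A"), (60, "B"), (50, "C"), (40, "D"), (30, "E")]
print(grade_from_mark(64, boundaries))  # -> B
```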

Why predicted grades are inaccurate.

Predicted grades, whether from teachers or from a statistical model, are not likely to be accurate. That’s why exams are taken every year. The grades given will not have been the same as those that would have been given had exams been sat. Exam results are always influenced by what seem like random factors that nobody can predict (I will discuss this further in the next section). We can reasonably argue over what is the most accurate way to predict grades, but we cannot claim that there is a very accurate method. There are also situations where exam results are very hard to predict. Here is why I think this year’s results will be depressingly inaccurate.

Some students are exceptional. Some will get an A* in a school that’s never had an A*. Some will get a U in a school that’s never had a U. Predicting who these students are is incredibly difficult, and remains difficult even where historic A-level results are adjusted to account for the GCSE data of current students. Students will often have unfairly missed out (or unfairly gained) wherever very high or low grades were on the table (i.e. if students were at the top or the bottom of rankings). This is the most heartbreaking aspect of what’s happened. The exceptional is unpredictable. The statistical model will not pick up on these students. If a school normally gets some Us (or it gets Es but this cohort is weaker than usual) the model will predict Us. If a school doesn’t normally get A*s (or it does but this year’s cohort is weaker than usual) the model will not predict A*s. This will be very inaccurate in practice. You might then think that CAGs should be used to identify these students. However, just as a statistical model won’t pick up an A* or U student where normally there are none, a teacher who has never taught an A* or U student will not be able to be sure they have taught one this time. In the case of a U it might be more obvious, but why even enter a student for the exam if it was completely obvious they’d get a U? The inaccuracy in the CAGs for extreme grades was remarkable. In 2019, 7.7% of grades were A*; in 2020, 13.9% of CAGs were A*. In 2019, 2.5% of grades were Us; in 2020, 0.3% of CAGs were Us. Both the CAGs and the statistical model were likely to be wrong. There’s no easy way to sort this out; it’s a choice between two bad options.

As well as exceptional students, there are exceptional schools. There are schools that do things differently now, and their results will be different. Like exceptional students, these are hard to predict. Ofqual found that looking at the recent trajectory of schools did not tell them which were going to improve and so the statistical model didn’t use that information. Some of us (myself included) are very convinced we work in schools that are on the right track and likely to do better. However, no school is going to claim otherwise and few schools will admit grades are going to get worse, so again, CAGs are not a solution. Because exceptional schools and exceptional students are by their very nature unpredictable, this is where we can expect to find the biggest injustices in predicted grades.

Perhaps the biggest source of poor predictions is the one that people seem to be reluctant to mention. The rankings rely on the ability of centres to compare students. There is little evidence that schools are good at this, and I can guarantee that some schools I’ve worked at would do a terrible job. However, if we removed this part of the process, grades given in line with the statistical model would be ignoring everything that happened during the course. Few people would argue that this should happen, so this hasn’t been debated anywhere near as much as other sources of error. But for individual students convinced their grades are wrong, this is likely to be incredibly important. Despite what I said about the problems with A*s and Us, a lot of students who missed out on their CAG of A* will have done so because they were not highly ranked, and a lot of students who have got Us will have done so because they were ranked bottom and any “error” could be attributable to their school rather than an algorithm. 

Finally, we have the small cohorts problem. There’s no real way round this, although obviously plenty of technical debate is possible about how it should be dealt with. If the cohort was so small that the statistical model would not work, something else needs to be done. The decision was to use CAGs fully or partially, despite the fact that these are likely to have been inflated. Inflated grades are probably better than random ones or ones based on GCSE results. But this is also a source of inaccuracy. It also favours centres with small cohorts in a subject and, therefore, it will allow systematic inaccuracy that will affect some institutions very differently to others. It is the likely reason that CAGs have not been adjusted downwards equally in all types of school. Popular subjects in large sixth forms are likely to have ended up with grades further below CAGs than obscure subjects in small sixth forms.

Which claims about the process are false or unproven

Much of what I have observed of the debate about how grades were given has consisted of calls for grade inflation disguised as complaints about inaccuracy, or emotive tales of students’ thwarted ambitions that assume that this was unfair or unusual without addressing the cause of the specific disappointment. As mentioned above, much debate has blamed everything on an “algorithm” rather than identifying what choices were made and why. Having accepted the problems with predicting grades and acknowledged the suffering caused by inaccuracies, it’s still worth trying to dispense with mistaken, misleading or inaccurate claims that I have seen on social media and heard on the news. Here are the biggest myths about what’s happened.

Myth 1: Exam grades are normally very accurate. A lot of attempts to emphasise the inaccuracies in the statistical model have assumed that there is more precision in exam grades than there actually is. In reality, the difference between a B grade student and a C grade student can be far less than the difference between two B grade students. Some types of exam marking (not maths, obviously) are quite subjective and there is a significant margin of error, making luck a huge factor in what grades are given. Add to that the amount of luck involved in revising the right topics, having a good day or a bad day in the exam, and it’s no wonder grades are hard to predict with accuracy. It’s not comforting to think that a student may miss out on a university offer because of bad luck, but that is not unique to this year; it is normal. The point of exam grades is not to distinguish between a B grade and a C grade, but between a B grade and a D grade or even an E grade. It’s not that every A* grade reflects the top 7.7% of ability; it’s more a way of ensuring that anyone in the top 1%, say, should get an A*. All grades are a matter of probability, not a definitive judgement. That does not make them useless or mean that there are better alternatives to exams, but it does mean everyone should interpret grades carefully every year.

Myth 2: CAGs would have been more accurate.

As mentioned above, CAGs were higher than they should have been, based on the reasonable assumption that a year group with an interrupted year 13 is unlikely to end up far more able than all previous year groups. There’s been a tendency for people to claim that aggregate errors don’t tell us anything about inaccuracies at the level of individual students. This is getting things backwards. It is possible to have inaccuracies for individual students that cancel each other out and aren’t visible at the aggregate level. So you could have half of grades being too high, and half too low, and on average the distribution of grades would seem fair. You could even argue that this happens every year. But this does not work the other way. If, on average, grades were too high, that does tell us something about individual grades. It tells us that they are more likely to be too high than too low. This is reason enough to adjust downwards if you want to make the most accurate predictions.
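The asymmetry is easy to demonstrate with a small simulation, with all numbers invented: unbiased individual errors can be invisible in the aggregate, but an aggregate bias necessarily means individual grades are more likely to be too high than too low.

```python
import random

random.seed(0)
true_grades = [random.randint(0, 6) for _ in range(10_000)]  # 0 = U ... 6 = A*

def summarise(predicted, label):
    too_high = sum(p > t for p, t in zip(predicted, true_grades))
    too_low = sum(p < t for p, t in zip(predicted, true_grades))
    mean_err = sum(p - t for p, t in zip(predicted, true_grades)) / len(true_grades)
    print(f"{label}: mean error {mean_err:+.2f}, {too_high} too high, {too_low} too low")

clamp = lambda g: min(6, max(0, g))

# Unbiased errors: individually wrong, but roughly invisible in the aggregate.
summarise([clamp(t + random.choice([-1, 0, 1])) for t in true_grades], "unbiased")

# Systematically generous errors: the aggregate bias shows up, and any
# individual grade is now more likely to be too high than too low.
summarise([clamp(t + random.choice([0, 0, 1])) for t in true_grades], "generous")
```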

Myth 3: Individual students we don’t know getting unpredicted Us and not getting predicted A*s are examples of how the statistical model was inaccurate.

As argued above, the statistical model is likely to have been inaccurate with respect to the extremes. However, because we know CAGs are also inaccurate, and that bad rankings can also explain anomalies, we cannot blindly accept every story about this from kids we don’t know. I mention this because so much commentary and news coverage has been anecdotal in this way. If there were no disappointed school leavers, that would merely tell us that the results this year were way out compared to what they should have been, because disappointed school leavers are normal when exam grades are given out. Obviously, the better you know a student, the more likely you are to know a grade is wrong, but even then you need to know their ranking and the justification for the grade distribution to know the statistical model is the problem.

Myth 4: The system was particularly unfair on poor bright children.

This myth seems to have come from two sources, so I’ll deal with each in turn.

Firstly, it has been assumed that as schools which normally get no A*s would not be predicted A*s (not quite true), poor bright kids in badly performing schools would have lost out. This misses the fact that, even with little history of getting A*s previously, they might still be predicted if the cohort has better GCSE results than usual, so the error is less likely if the poor bright kid had good GCSEs. It also assumes that it is normal for poor kids to do A-levels in institutions that get no A*s, which is unlikely for big institutions. Additionally, schools are not uniform in their intake. The bright kid at a school full of poor kids who misses out is not necessarily poor; in fact, because disadvantaged kids are likely to get worse results, they often won’t be. Finally, it’s not just low achieving schools whose A* students are hard to predict. While a school that usually gets no A*s in a subject, but would have got one this year, makes for a more dramatic story, the situation of that child is no different to the lowest ranked child in a school that normally gets 20 A*s in a subject and this year would have got 21.

The second cause of this myth is statistics about downgrading from CAGs like these.

Although this really shows that there’s not a huge difference between children of different socioeconomic status (SES), it has been used to claim that poorer students were harder hit by downgrading and, therefore, that it is poor bright kids who will have been hit worse than wealthier bright kids. (Other arguments have looked at type of school, but I’ll deal with that next.) Whether this figure is a result of the problem of small cohorts, or of the fact that it is harder to overestimate higher achieving students, I don’t know. However, we do know the claim that these figures reflect what happened to the highest achieving kids is incorrect. If we look at the top two grades, the proportion of kids who had a high CAG and had them downgraded is smaller for lower SESs (although, because fewer students received those grades overall, the chance of being downgraded given that you had a high CAG would show the opposite pattern).
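That final parenthesis is a subtle point about conditional probability, so here is a worked example with invented numbers: the share of all students downgraded from a high CAG can point one way while the chance of being downgraded given a high CAG points the other.

```python
# Invented numbers purely to illustrate the parenthesis above.
# Group A (higher SES): 1,000 students, 300 with a high CAG, 90 downgraded.
# Group B (lower SES):  1,000 students, 100 with a high CAG, 40 downgraded.
groups = [("A", 1000, 300, 90), ("B", 1000, 100, 40)]

for label, cohort, high_cags, downgraded in groups:
    share_of_all = downgraded / cohort       # downgraded high CAGs as a share of everyone
    given_high_cag = downgraded / high_cags  # chance of downgrade given a high CAG
    print(f"Group {label}: {share_of_all:.1%} of all students, "
          f"{given_high_cag:.1%} of those with a high CAG")

# Group A: 9.0% of all students, 30.0% of those with a high CAG
# Group B: 4.0% of all students, 40.0% of those with a high CAG
```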


Myth 5: The system was deliberately rigged to downgrade the CAGs of some types of students more than others

I suppose it’s probably worth saying that it’s impossible to prove beyond all doubt that this is a myth, but I can note that the evidence is against it. The statistical model should not have discriminated at all. The problem of small cohorts, and the fact that it is easier to over-estimate low-achieving students and harder to over-estimate high-achieving students, seem to provide a plausible explanation of what we can observe about discrepancies in downgrading. Also, if we compare results over time, we would expect those types of institutions which on average had a fall in results last time to have a rise this year. Take those three factors into account and nobody should be surprised to see the following, or to think it sinister (although it would be useful to know to what extent each type of school was affected by downgrading and by small cohort size).

If you see anyone using only one of the above two sets of data, ignoring the change from 2018 to 2019, or deciding to pick and choose which types of centre matter (like comparing independent schools with FE colleges) suspect they are being misleading. Also, recall that these are averages and individual subjects and centres will differ a lot. You cannot pick a single school like, say, Eton and claim it will have done well in avoiding downgrading in all subjects this year.

Now for some general myth-busting.

The evidence shows students were affected by rounding errors. False. Suggestions like this, often used to explain unexpected Us, seem entirely speculative and not necessary to explain why students have got Us.

Some students got higher results in further maths than maths. True. Still a tiny minority, but much higher than normal.

No students at Eton were downgraded. Almost certainly false. This claim, which was all over Twitter, is extremely unlikely: it has been denied anecdotally and there is no evidence for it. We would expect large independent schools to have been downgraded in popular subjects.

Something went wrong on results day. False. Things seem to have gone according to plan. If what happened was wrong it was because it was the wrong plan. Nothing surprising happened at the system level.

Students were denied the grades they needed by what happened. True for some students, but on average there is no reason to think it would have been more common to miss out on an offer than if exams had taken place, and some institutions might become more generous, if they can, due to the reduced reliability of the grades.

Results were given according to a normal distribution. False.

Rankings were changed by the statistical model. False. Or at least if it did happen, it wasn’t supposed to and an error has been made.

The stressful events of this year where exams were cancelled show that we shouldn’t have exams. False. Your logic does not resemble our earth logic.

And one final point. So many of the problems above come down to small cohort size, that next week’s GCSE results should be far more accurate. Fingers crossed. And good luck.

Grade inflation is not the way to resolve an exam kerfuffle

August 13, 2020

This year, it was decided that exams would be cancelled due to COVID-19, and grades for years 11 and 13 in England (and, as I now know from the news, for Higher students in Scotland) would be decided by a mixture of centre assessed grades (CAGs) and a statistical model based on rankings provided by centres. Both elements of this have their limitations, and that is why a combination is necessary. It remains to be seen how effectively this will be done. In England, I suspect it will work well for GCSEs, but I’m not sure about A Levels. In Scotland, the Scottish government gave in to pressure and accepted CAGs as grades, despite them being much higher and results this year now being massively different from previous years. There is a widespread misconception that in normal years exams represent an objective standard and luck plays no role in allocating grades. For people who believe this, this year’s system is completely broken no matter how accurately it might predict what students would have got. Moreover, there is also a belief that when an exam system has a problem, grade inflation is a solution.

I would argue that inaccurate grades create their own problems, and that honesty, by which I mean maximising accuracy in predictions, is the best policy. I am aware that there are unavoidable difficulties. Schools and individuals whose success (or failure) this year is unprecedented will not get the grades they would have got. I’ve also worked in schools where assessment was poor, and I hate to think how their rankings will be compiled. But for large cohorts, CAGs will not be more accurate than a model that corrects the tendency towards over-estimation. It flies in the face of mathematics to deny that if grades are inflated, they are less likely to be accurate, although there appear to be many involved in education who claim a large systematic bias in a single direction is not a source of inaccuracy. It’s been reported that A-level grades at A-A* would have gone up from 27% to 38% if CAGs had been used. Nobody can argue that such grades would have been accurate.

Grade inflation is not a victimless crime. It does have real, negative effects. Firstly, devalued grades take opportunities away from those who have received them in the past, as their grades start to be interpreted according to lower standards. Secondly, inflated grades create inconvenience for employers and educational institutions who will find them harder to interpret. Thirdly, some of those who receive grades they never would have achieved without grade inflation will find themselves in courses and jobs for which they are unsuitable. Fourthly, if the rate of grade inflation is not uniform across the system, some will lose out relative to their peers. This is particularly noticeable in Scotland, where there is evidence that grades were inflated more for some socio-economic groups than others. Finally, students in the following year will lose out if the higher pass rates are not maintained, particularly if students can defer for a year before going to university. I would expect there to be pressure in Scotland to keep the much higher pass rates from this year for next year – although a cynic might wonder whether such pressure is easier to resist further away from an election.

There is also a bigger picture here. This might seem like a one-off event, but this is not the first exam kerfuffle for which some have advocated massive grade inflation as a solution. When a new modular English GCSE exam resulted in grade boundaries moving drastically in 2012, there were those who advocated a huge shift in C grade pass rates. When grades are revalued, the direction is almost always the same: more passes without any underlying improvement in achievement or ability. Recent stability in pass rates is the exception, not the norm. It has only been achieved through a deliberate policy effort to hold the line after years of constant grade inflation. If we discard this policy this year, it will be easier to abandon it in other years too.

Whether or not grading goes well today and next Thursday (and I know some will inevitably lose out compared with exams), we would be fools to give up on maintaining the value of grades.

An additional couple of notes.

Firstly, good luck to all students (and their teachers) getting results today and next week. Secondly, the grade allocation might go completely wrong, but remember, anomalies will be reported from schools even if it goes really well. Don’t jump to conclusions when the first angry school leaders appear on the news or on social media. We won’t know if there’s a problem for certain until somebody checks the maths for those schools, which is easier said than done.

Mock results are not a good prediction of final exam grades

August 12, 2020

The government has announced last-minute plans to let students use their mock exam result as a grade this year following the cancellation of exams. Although I have just heard Nick Gibb say mocks could be used for an appeal, so maybe the proposal is not what we thought. Just in case, I’ll explain now why it would be insane to allow mocks to count, for the following reasons.

  1. There is no consistent system for sitting and recording mock exams, with schools doing drastically different things. Schools would certainly have done them differently if this had been on the cards.
  2. Mock exams don’t have official grade boundaries. Schools just make the boundaries up.
  3. Some schools deliberately play down mock results; some even play them up. It’s completely unfair for such arbitrary decisions to have any effect on students.
  4. Some students with private tutors “accidentally” see the paper before sitting the mock exam. Schools then have to work out how a child surprisingly got almost everything right, sometimes on topics they’ve never studied.
  5. This new system creates a precedent. Schools will want to have dodgy over-inflated mock results on the system in future.
  6. Schools do mocks at completely different times of the year so they are not comparable between schools.
  7. Nobody wanted this. I’d bet Ofqual don’t want this.
  8. Some subjects, like A level English literature and language, have very long exams which might not be practical to do rigorously as mocks. (And let’s not even mention art A-level)
  9. Schools have already submitted teacher-assessed grades. While these are unlikely to be reliable, there is no reason they should be less accurate than mock exams.
  10. Making last minute decisions like this makes the job harder for everyone.

Update: It does appear to be the case that mocks will only be used for appeals. Looks like last night’s announcement was incorrect, thank goodness.

h1

Could Fad CPD Harm Your School?

July 29, 2020

A difficult question for any school leader is how best to use the time allocated for Continuing Professional Development (CPD), with some schools conspicuously getting it wrong and no easy answers as to who gets it right. One tendency I have noticed, which I consider to be a mistake, is to ignore the context of one’s school and the needs of one’s own staff in favour of what is currently fashionable. Sometimes this is just responding to the ideological climate of the moment, but at other times schools can respond to some gimmick that will soon be forgotten, or to something that has just been in the news.

All CPD runs the risk that, even if it seems fine on the day to the people in charge, it might make no difference in the longer term. There is also the ongoing problem of CPD that passes on false information (like learning styles or the predictive power of attachment theory) and bad practices (like Brain Gym or discovery learning). These difficulties are compounded when CPD is based on the latest fad. There simply may not have been time to evaluate the ideas or the effects of the training. At least with something well-established, you can ask prior recipients of the training if it was helpful; with the next big thing in CPD you might turn out to be the school that discovers its effects are unarguably harmful.

There are two current fads I am hearing lots about at the moment that I think are both partially based on myths and also potentially harmful.

1) Mental health training based on pandemic trauma.

There has been an overwhelming amount of nonsense about a mental health crisis in schools following the pandemic. For instance, this article in Schools Week claimed “child development experts are predicting a ‘national disaster’ as lockdown threatens to create a generation with mental health problems.”

Why might the ideas be false?

We have good reason to be sceptical of those claiming that lockdown has traumatised children. There was already a mental health fad in education, and a trauma fad. During the pandemic, a number of people I had previously associated with the idea that schooling causes children to be mentally ill began arguing that a lack of schooling would cause children to be mentally ill. Anyone making claims about the psychological effects of lockdown based on attachment theory, developmental psychology or anything else with no proven record of predicting the prevalence of mental health problems in the real world can be assumed to be a charlatan. Psychiatric epidemiology – the study of the causes of mental disorders in society – is an academic discipline, not a hobby. While the mental health of some children may have been harmed by bereavement, by confinement to a home that was already a psychologically unhealthy environment, or by reduced support for existing mental health conditions, there is good reason to be sceptical of any claims of a Covid mental health crisis.

Why might the training be harmful?

I don’t want to overstate the risks here; as far as I know, nobody has good evidence that even the most extreme and alarmist talk about mental health in a school causes harm. However, we can’t rule out that children’s mental health could be affected by their perception of mental health disorders in their school. We know that suicides can cluster in a community, that there is an ongoing debate about emotional contagion, and that there are studies suggesting some level of peer contagion for depressed states. There is also the nocebo effect: evidence that telling people they will be harmed by something causes them to experience harm. Additionally, even among psychiatrists, there is concern about fad diagnoses. Perhaps worst of all, if teachers and students are told it is normal to have suffered mental health difficulties as a result of lockdown, it might cause teachers to see warning signs of mental illness as “normal”, and students with genuine mental health symptoms to think that everyone has them and that they are not a reason to seek help.

None of these concerns is a reason not to want staff to be aware of potential mental health problems among students, but they are a reason why we shouldn’t just assume that misinformation and panic about mental health are harmless, or that if your intentions are good you won’t make things worse.

What’s the alternative?

School leaders should be aware that schools already contain experts in looking after children. They already have access to training in safeguarding that includes dealing with mental health. The best option is to avoid assuming any radical discontinuity in children’s mental health before and after lockdown until there is good evidence for it. School leaders should make use of their existing expertise, and of their knowledge of their students and their community. They shouldn’t be asking outsiders to tell them how to react to problems that might not even exist in their school. And, of course, there are good practices, such as keeping schools safe, supportive and free from bullying, that will always be best for mental health.

2) Racism training based on unconscious bias.

In the aftermath of the Black Lives Matter protests there seemed to be a rush by some school leaders to abandon all critical thinking regarding racism in society and its causes. A particular focus, perhaps because of its potential for replacing meaningful action with gestures and opportunities to judge one’s peers, has been on the idea that unconscious attitudes are a significant cause of discrimination and unfairness.

Why might the ideas be false? 

There are two main sources of evidence that I have seen presented to educators for the power of unconscious bias. The first is research involving the Implicit-Association Test (IAT), a psychological assessment that is meant to uncover one’s hidden prejudices. The IAT has now been established to be neither a reliable test nor a valid predictor of anyone’s actual behaviour, but it is apparently still common among corporate diversity trainers*. The second source of evidence for unconscious bias I have seen in education debates is less specific, but more open to interpretation. Any and all evidence of racism in society, particularly in outwardly progressive organisations, is presented as evidence of unconscious bias. This can never be ruled out; however, it cannot be assumed. Anyone who sees evidence of prejudice, for instance in how often people are offered job interviews, might be seeing the results of unconscious bias. However, this is only one explanation among several. There may be conscious prejudice, unless you happen to believe that deliberate racism no longer exists. There may be institutional racism, with rules and procedures that lead to racist outcomes. There may be discrimination based on ignorance and misconceptions which, while unintentional, is still very much a matter of conscious beliefs and actions that could be challenged.

Why might the training be harmful?

If discrimination is happening for any of the reasons I just mentioned (conscious racism, institutional racism, ignorance) then blaming it on unconscious bias will make it harder to prevent. Any institution wanting to perpetuate a racist status quo will find a belief in unconscious bias a convenient excuse for taking no action to identify actual racists, reform discriminatory practices, or establish where false beliefs about race are leading to discrimination.

Even if it were not a potential distraction from dealing with actual problems, there is also the possibility that the worst anti-racism training might be harmful in its own right (and this goes beyond training based on unconscious bias). There is evidence that some types of training might have “ironic effects” and that some efforts to address stereotypes might reinforce them. As with mental health training, one should not assume that because one has good intentions, one isn’t making the problem worse. A further complication, which this blogpost discusses, is whether some ideas about race may be so heavily politicised that, if brought into schools and passed on to students, they could conflict with the legal duty to be non-partisan.

What’s the alternative?

Despite theories of “white supremacy” that are intended to explain everything from slavery to the colour of sticking plasters, my experience of teaching in a variety of schools is that actual racism manifests itself in different ways in different schools. There is no single explanatory theory of racism in society that covers every problem, and 99% of the time it seems that if somebody suggests such a theory, it describes the United States more closely than it does England. Forget looking for racism in unconscious minds; find out about racism in your school right now. Is there racist bullying among students? Is there discrimination in pay and progression? Is there anyone, staff or student, who feels less safe and less valued in your school because of their race? Is there an expectation that some ethnic groups cannot be expected to behave or learn? School leaders should know of the problems in their own schools. They should be making sure there is zero tolerance of racism and zero opportunity for racism to spread, even if that means punishments and exclusions for kids, and disciplinary action and dismissal for staff. They shouldn’t be asking outsiders to tell them how to react to problems that might not even exist in their school.

Perhaps the allure of fad CPD has something to do with the widespread belief that schools are there to solve society’s problems rather than to educate. It might be that this makes it almost addictive to look for the latest analysis of social problems and the latest ways to address them. But if the CPD needs of a school are assessed by looking at what’s in a newspaper or being discussed on Twitter, then who is actually addressing the problems and challenges that already exist in that school?

h1

Achievement For All is bad for kids

June 9, 2020

I’m not a huge fan of the Education Endowment Foundation (EEF), partly because they’ve allowed some pretty shoddy research in the past, and partly because they have a history of being partisan on certain issues. However, they do fund RCTs that test education initiatives and, at the very least, that should enable them to spot some popular initiatives that have no effect, or even a negative effect.

The latest emperor with no clothes is Achievement For All, which according to the EEF website

 …is a whole-school improvement programme that aims to improve the academic and social outcomes of primary school pupils. Trained Achievement for All coaches deliver a bespoke two-year programme to schools through monthly coaching sessions which focus on leadership, learning, parental engagement and wider outcomes, in addition to focusing on improving outcomes for a target group of children (which largely consists of the lowest 20% of attainers). The programme has cumulatively reached over 4,000 English schools.

Their evaluation of the programme found that:

In this trial, Achievement for All resulted in negative impacts on academic outcomes for pupils, who received the programme during five terms of Years 5 and 6 (ages 9-11). Children in the treatment schools made 2 months’ less progress in Key Stage 2 reading and maths, compared to children in control schools, in which usual practice continued. The same negative impact was found for children eligible for free school meals. Target children (those children the intervention specifically aimed to support) also made 2 months’ less progress in reading, and 3 months’ less progress in maths. The co-primary outcome finding (whole-group reading, and target children reading) had a very high security rating, 5 out of 5 on the EEF padlock scale.

Given the size of the effects and the consistency of negative findings, these results are noteworthy. Of particular importance is the impact that the programme had on target children, and children eligible for free school meals.

A report in Schools Week filled in some details.

The findings rank AfA as the joint worst-performing of more than 100 projects reviewed by EEF since 2011, with only three other projects earning an impact rating of negative two months.

Of these it is the only one to have the highest possible evidence strength of five – which indicates EEF “have very high confidence in its findings”.

They also reported the laughable response of the founder of AfA, Professor Sonia Blandford:

Blandford pointed out that disadvantaged pupils within the AfA trial schools still “achieved above national expectations, which was our key aim in the intervention”.

She added “it was an error to agree to a trial that attempted to evaluate the effectiveness of our broad and yet bespoke approach through the narrow lens of two school improvement parameters”.

Does this matter? I think it does. Since it started in 2011, it’s entirely possible that 4,000 schools have harmed their students’ learning, or at least wasted resources on something that is more likely to be harmful than helpful. And it’s worth asking how this happened. Probably the single biggest reason this disaster lasted so long is that the DfE endorsed it with a report assessing its positive effects on SEN children, with data collected through:

  • teacher surveys
  • academic sources
  • interviews with strategic people
  • longitudinal case studies of 20 AfA schools
  • mini case studies of 100 pupils and their families
  • AfA events

In other words, the kind of “research” that costs money but that nobody can reasonably believe is a fair way to evaluate an initiative of this kind. So the first thing we can learn from this is that the DfE should not be endorsing projects in this way, particularly when the chances are that some teachers, like Mrs S below, could have given a more accurate evaluation.

But another point is the extent to which the people who run organisations like this become a vested interest, eagerly telling politicians and the public that schools are getting it wrong. There is a huge amount of expertise in the system, among teachers and school leaders. Yet it is staggering how often AfA’s Professor Blandford was a voice in important debates. I have particularly noticed how such people, whose position means they don’t have to deal with the consequences of dangerous and out-of-control schools, seem to dominate the debate on exclusions. Professor Blandford was a particularly loud voice on this issue:

Calling for fewer exclusions in response to the Timpson Review.

In the TES, claiming schools could do without exclusions.

Talking on exclusions at Kingston University.

Addressing LA conferences on reducing exclusions.

Any one of these would have been far better used as an opportunity for a successful school leader to explain why exclusions are necessary. But our education system as a whole promotes the voices of “experts” whose ideas don’t work over the voices of practitioners with a proven track record.

And I won’t ever forget that, as I reported here, back in the days when the Chartered College of Teaching was still pretending it was going to be teacher-led, Professor Blandford was one of the first non-teachers to be given a leadership role that, if promises had been kept, would have gone to a teacher.

I’ve always defended the right of non-teachers to help and advise schools, but we need a system where schools look first to a) practitioner expertise and b) what has been proven to work. Not a system where it’s only after 4,000 schools and nine years that we actually realise we’ve been listening to the wrong people.

h1

Rules and exceptions

May 26, 2020

The weekend’s news was dominated by the story of the prime minister’s special adviser, Dominic Cummings, and his long trip to Durham during lockdown which he justified, at least partially, in the following way:

The rules make clear that when dealing with small children that can be exceptional circumstances and I think that was exceptional circumstances.

I suspect I’m in the majority in not considering this an acceptable interpretation of the rules. However, given that I don’t feel any particular prior animosity to Cummings, and given that I could easily imagine other exceptional circumstances that I would have thought made his actions acceptable, I find myself considering precisely what the problem is.

This also led me back to many debates about rules in schools where the topic of exceptions has come up. To hear some people talk about the evils of “no excuses” or “zero tolerance” behaviour policies, you could be forgiven for thinking that there were schools that make no exceptions to the rules at all. It is more common than it should be to hear people, usually not working in schools, claim that a rule against letting kids out to the loo means kids soiling themselves, and that a rule against letting kids out of the room every time they say they feel sick means it is normal for kids to sit in class vomiting into a bucket. I don’t think these claims describe any schools at all. I think, if anything, there is far more of a constituency of people involved in education who always make exceptions and will justify almost any rule-breaking as something a child couldn’t help doing. I could probably rant for hours about the most ridiculous excuses for breaking rules I’ve heard from kids and, depressingly, from the kind of adult who believes children should be freed from adult authority.

So how do we distinguish between valid and invalid exceptions to rules?

Here are some considerations.

1) Does the exception make the rule pointless?

If making a particular exception to a rule renders the entire rule pointless then it’s not a valid exception. This might seem obvious, but schools often have rules that are rendered pointless by the exceptions. If your break duty is to keep kids out of a school building unless they have a reason to be inside, you may quickly discover that the only kids who don’t have a reason to be inside are those who didn’t realise they weren’t supposed to be in the building.

Giving endless chances before any sanction is given can mean that a rule of “don’t do X” quickly becomes “do X as much as you like, until a teacher tells you to stop”. Not confronting a child’s behaviour because they will respond badly to being confronted, can mean that rules are essentially guidelines to be followed by choice.

2) Is the exception actually exceptional?

Related to the last point, the sheer number of exceptions can make a rule pointless. A rule of “nobody leaves the lesson” becomes pointless if there are exceptions for self-reported medical reasons. In one school I worked in, I taught a large class where a quarter of the students had permission, signed by a member of staff, to leave the classroom for the toilet on medical grounds. After I reported this to the head of year, and he made it marginally harder to get permission without recent parental contact or, in serious cases, a medical note, the number immediately fell to no students at all.

A lot of the debate over strict rules and special needs is in this category. There are people who use a wide variety of SEN as a reason a child should not have to obey rules or as a reason particular rules should not exist. Many people argue as if being labelled SEN alone was a reason not to obey rules, even though in some cohorts 44% of children (51% of boys) have been labelled as SEN at some point in their time at school. A particular concern is those who begin their claims with “Children with autism can’t…”. These claims are virtually never true, given the nature of autism and how different kids with autism are. A special need should always be special and an exception should always be exceptional.

3) Is the exception obvious and uncontroversial?

One thing that always amazes me about debates on strict rules is how willing some people are to believe that schools will not make obvious exceptions. Strangely enough, schools that have a rule that students stand up when a member of staff enters the room don’t actually punish a student in a wheelchair for not following it. Those schools that encourage eye contact when greeting a member of staff cannot be assumed to be attempting to force out those whose SEN might make that difficult. And those who claim that rules against shouting abuse are discriminatory against those with Tourette’s not only don’t understand schools, they don’t understand Tourette’s either. It’s debatable how clearly school rules have to say “this rule doesn’t apply in exceptional circumstances”, as I can only think of one case in my two decades of teaching where an obvious exception wasn’t made (and it was a member of support staff, not a teacher, who refused to make it). But we could probably move the debate on behaviour on a lot if everyone could admit that obvious exceptions are made all the time, and that the controversial bit is about when we stop making exceptions.

4) Were “non-routine decisions” thought through?

A child has a medical note saying they should be let out to use the toilet. They use it every Thursday afternoon, usually as soon as they are given written work.

A child is being investigated for autism. Their paperwork says clearly that they will not be able to cope with being shouted at and will walk out. They walk out one lesson when told firmly they need to get on with the work.

A child uses a homophobic term that is not common these days to address a friend and they are overheard by a member of staff. When they are told that it is homophobic, they are shocked (strangely enough they usually have to hear this from their friends rather than just their teacher) and it is clear that they did not intend to be homophobic and have agreed that what they said was unacceptable.

All these cases are ones where, at the very least, there is room to consider for a moment whether the normal course of action is the appropriate one and/or whether some follow-up action will be needed later. I almost used the term “tough calls” for this section, but actually I’m not sure all the situations described above are “tough” for the experienced teacher; they are just not the sort of routine judgement teachers make without thinking and forget about a second later.

It should be the case that a teacher making a non-routine decision has thought it through and can explain it. This may even require delaying the final decision or making a provisional one (e.g. “I believe this is against the rules; I will look into it before issuing the detention”). It should also be the case that schools make situations requiring that extra thought as rare as possible, by making sure rules are clear and widely understood. This is why it matters if people involved in writing the rules start interpreting them in ways that are unexpected.

5) Were “debatable decisions” considered by the appropriate people and clear guidance given?

If a teacher, even after some thought, is still not 100% sure their decision is right or what it should be, or even if they are confident but think a parent might contact the school asking for its justification, it’s best to speak to somebody else. Firstly, it makes clear that the teacher is aware of the issue; they didn’t make a mistake and then try to cover it up. Secondly, this might bring to light more information, or even a better understanding of the rules, that resolves the problem. Thirdly, it means that if, eventually, the decision is judged to be the wrong one, the teacher is not personally hung out to dry; it becomes the school’s responsibility to get it right. The other side of this is that if a teacher’s decision is judged to be wrong, they should be informed immediately and the reasons explained clearly. I have left schools because it was normal for teachers’ decisions about sanctions to be overruled without the teacher being informed, or because managers would deny responsibility for the things they made teachers do.

I hope these are useful considerations about rules and exceptions in schools. I’ll leave it as an exercise for the reader to decide whether any of these points also apply to the case of Dominic Cummings.
