Teachers on the Edge

September 6, 2020

Making the frontline the centre of the education system

The biggest difference in education is made by those at the frontline: the teachers (including school leaders), lecturers and support staff. They know who they are serving; they have a responsibility to their learners. They can also see more directly what is working and what isn’t. At every other level, and unfortunately sometimes in school leadership, there is a distance between the decisions made and their results in actual classrooms.

At other levels, the education system is its own worst enemy. This is not a whine about the political leadership of education: the politicians, the policy makers and the civil servants. For good or ill, their careers usually cover far more than just education, changing portfolios and moving departments as they progress. Whatever faults they bring to the system they usually take them with them when they go. What I am referring to is the way that parts of the education system itself seem to be perpetually focused on something other than education.

It’s a given that those responsible for tens of thousands of schools and other educational institutions are not trying to shape every single classroom. Whether they do their job well or not, it’s clear that their responsibility is to serve the interests of the public as a whole. It’s also clear that they can consult frontline staff if they wish to, and it’s not obvious that they have any particular reason not to. What concerns me are those parts of the system which seem to have a vested interest in keeping frontline staff out of sight and out of influence. There are parts of the system that tell frontline staff what to do, but that do not have to do those frontline jobs themselves, often haven’t done them for years, and often look very uncomfortable if those at the frontline have any say in the matter.

In ITT, education departments in universities overwhelmingly expect those who train teachers to be full-time academics, not practising teachers in schools. As a result, ITT staff are often concerned only with the political and pedagogical orthodoxies of educationalists, not with what works in schools. They have no ‘skin in the game’. On issues such as mixed ability teaching and the use of exclusion and discipline in schools, university education lecturers typically appear to have attitudes that are militant, extreme and entirely out of touch with teachers. While they would claim their positions are more evidence-informed than those of teachers, there are also issues, such as phonics, where it is noticeable how often educationalists stand against the evidence.

Frontline staff are not encouraged to have much say over their own professional development. CPD budgets are spent by schools and colleges, not by the individual professionals. While it is appropriate for schools and colleges to provide some proportion of CPD (after all, schools need to train their staff in school-specific systems and expectations), this has left education workers unable to set their own priorities. As a result, a voluntary “shadow” system of CPD has developed that teachers take part in during their own time and often pay for out of their own pockets. After-school teach meets, BrewED events in pubs, and huge researchED conferences at weekends rely on speakers (often frontline staff themselves) speaking for free and teachers attending in their own time. Sometimes school staff can ask their schools to pay for tickets or travel (although I suspect most don’t), but attendance is on top of the time already spent on days of employer-directed CPD.

A considerable downside of too much employer-directed, and too little self-directed, CPD is that a market for a particular type of consultant has been created. Rather than concentrating on improving the effectiveness of frontline staff, these consultants concentrate on appealing to managers. Teachers find they are given training on how to help the school pass inspections and how to ensure that their responses to bad behaviour don’t create work for those in charge, rather than being trained on how to teach or manage behaviour more effectively. Consultants may even be employed simply to fill a gap in the schedule for an INSET day, or to give a motivational talk, rather than to provide meaningful professional development. This type of consultant then becomes another vested interest within the system, arguing against effective teaching methods and whole-school behaviour systems.

And once you have consultants and educationalists earning a living without providing a benefit to frontline staff, they take an interest in capturing resources intended to serve the frontline. The marginalisation of the frontline is perhaps best illustrated by the way that, in recent years, new institutions have promised to change the balance of power only to replicate what already existed. Two recent examples of institutions funded by the DfE being created to serve the frontline and being captured by interests other than the frontline are:

The Education Endowment Foundation. This was apparently intended to move control over education research away from the ideologically motivated individuals in education academia. Michael Gove claimed it would “provide additional money for those teachers who develop innovative approaches to tackling disadvantage” and that “it is teachers who are bidding for its support and establishing a new research base to inform education policy” [my emphasis]. In practice, its chief executive is an educationalist who has been involved in writing papers on how setting children into ability groups is “symbolic violence”, based on the theories of Bourdieu. The EEF is now a law unto itself in the agendas it promotes. It recently squandered funds for research into the effectiveness of setting and mixed ability teaching by failing to compare them directly, and continues to share older research of doubtful provenance instead. And nobody can work out who, other than the opponents of phonics, wanted the EEF to spend money on the latest iteration of Reading Recovery.

The Chartered College of Teaching. This was created by government policy (and government funding) to be an independent, teacher-led professional body, “run by teachers, for teachers”. In practice, it is run largely by ex-teachers who already have or had positions of power in education; it is funded by employers; and it is now only too happy to campaign against government policy, even taking its lead from the trade unions. It now holds events in the daytime when most teachers can’t leave school, promotes educational fads and censors teachers who dare to question educationalists.

Another issue is how difficult it is for frontline staff to express opinions. Teachers have been reported to their employers for expressing opinions on social media. Those training to teach have been reported to their training institutions. Without being able to divulge the details of specific cases, it’s hard to prove the trivial nature of such instances. But it doesn’t take long on teacher Twitter to discover that, whereas consultants and educationalists can heap online abuse on anyone they like, teachers find there are professional consequences for even disagreeing with fashionable opinions. Very often those making the complaints are the same consultants and educationalists who enjoy complete freedom of speech themselves.

Finally, the education system promotes and protects the beliefs and interests of those who make the job at the frontline more difficult. Some of this, like the consultants described earlier, appears to be about self-interest. We have organisations that provide training to schools campaigning for the government to ban internal exclusions, suspensions and expulsion, thus creating behaviour problems which require more training for staff. We have organisations that provide mental health services and advice to schools, running public campaigns claiming there is a youth mental health crisis that requires schools to spend more money on mental health services and advice.

To be charitable, it’s not all self-interest, sometimes it’s ideological. When the newly appointed head of Goldsmiths Education department indicates that her department’s programmes focus on “inclusion and social justice in educational settings”, she is no doubt sincere, but it is far from clear why money from the education budget should fund an organisation with such openly political priorities. Similarly, when The Children’s Commissioner joins an online campaign that demonises schools, she is no doubt sincere in her belief that the campaigners are right that schools are cruel and internal exclusion is unnecessary. But it’s far from clear why the government should be funding ideologically motivated attacks on things that are perfectly normal in schools.

Here are my suggestions for changing the system to empower the frontline.

  1. Remove all ITT from university education departments. No teacher needs to be trained by experts in Marxist sociology and critical theory. Remove funds from any organisation, such as the EEF, that is giving power and influence to educationalists to promote their pet theories of learning.
  2. Reduce the number of CPD days controlled by schools, allow teachers to choose their own CPD for part of that allocation, and encourage schools to make this as convenient as possible. Make it harder to make a living providing CPD that teachers don’t want, and easier to make a living providing CPD that teachers would choose for themselves.
  3. Create incentives for those providing teacher training or employer-directed CPD to also teach, whether through the structures of the system or through financial incentives. All parts of the system should be encouraged to audit the extent to which those who shape its policies are currently working at the frontline of education. It would be fascinating to know what proportion of people invited into the DfE to give advice on the education system have worked in a school or college in any capacity other than consultancy in the previous week.
  4. Give teachers a right to freedom of speech. While teachers should not be able to say anything they like about their employers or their students, it is not up to schools to regulate opinions on pedagogy or politics expressed on social media by teachers who are not representing their employer and sometimes not even writing under their own name.
  5. Require every organisation that receives funds directly from the DfE, or indirectly from educational institutions, to refrain from taking part in, or funding, anything close to political activism. Abolish completely any institution, such as the Office of the Children’s Commissioner, that seems to have been set up almost entirely to push an ideological agenda.

The tragedy of grades based on predictions

August 16, 2020

When I wrote about an exam announcement last week it was out of date before I’d finished typing. This post too may now be out of date if the appeals system allows major changes, but I have seen so much false information that I thought I’d better get this out there.

Exams were not sat this year. The decision was made instead to predict what grades would have been given. This is probably the decision that should have been debated. Instead, the debate has centred on how grades were predicted, with much talk of an evil algorithm crushing children’s hopes. Some wished to predict grades deliberately inaccurately in order to allow grade inflation to hide the problems. Because opportunities such as university places and employment are finite, grade inflation doesn’t actually solve any problem. What it does is ensure that when people lose out on opportunities, it will not be clear that this year’s grades were the problem. I argued against the idea that grade inflation solves problems here and will not be going into it again now, but it is worth noting that most disagreement with the opinions I express in this post will come from advocates of using grade inflation to solve problems, rather than from anywhere else. In particular, it needs to be acknowledged that the use of teacher assessment would, on average, have led to more grade inflation.

However, because people seemed to think inaccuracy in grades would justify grade inflation, and because people objected to specific grades when they arrived, there has now been huge debate about how grades were given. Much of this has been ill-informed. 

I intend to explain the following:

  1. How grades are predicted.
  2. Why predicted grades are inaccurate.
  3. Which claims about the process are false or unproven.

Normally, I’d split this into 3 posts, but things are moving so fast I assumed people would want all this at once in one long post.

How grades are predicted.

Ofqual produced a statistical model that would predict the likeliest grades for each centre (usually a school or college). This used all the available data (past performance and past grades of the current cohort) to predict what this year’s performance would have been. This was done in accordance with what previous data showed would predict grades accurately. A lot of comment has assumed that if people are now unhappy with these predictions or individual results, then there must have been a mistake in this statistical model. However, this is not something where one can simply point at things one doesn’t like and say “fix it”. You can test statistical models using old data, e.g. predict 2019 grades from the years before 2019. If you have a model that predicts better than Ofqual’s then you win, you are right. If you don’t, and you don’t know why the Ofqual model predicts how it does, then you are probably wrong. In the end, proportions of grades were calculated from grades given in recent years, then adjusted in light of GCSE information about current students, then the number of expected A-levels in each subject at each grade was calculated for each centre. Centres were given information about what happened in this process in their case.
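The centre-level calculation described in this paragraph can be sketched roughly as follows. Everything here is illustrative: the adjustment rule, thresholds and numbers are my own inventions, and Ofqual’s actual model was considerably more sophisticated.

```python
# Rough, illustrative sketch of the centre-level step: take the centre's
# historical grade proportions, adjust them for the current cohort's
# prior attainment, then convert to whole numbers of grades.
# The adjustment rule and all numbers are invented for illustration.

def expected_grade_counts(historical_proportions, cohort_size, ability_shift=0.0):
    """historical_proportions: {grade: share of entries in recent years},
    listed from highest grade to lowest. A positive ability_shift moves a
    slice of each grade's share one grade upwards (e.g. if this cohort's
    GCSE results were stronger than past cohorts')."""
    grades = list(historical_proportions)  # insertion order: highest first
    adjusted = dict(historical_proportions)
    for upper, lower in zip(grades, grades[1:]):
        moved = ability_shift * historical_proportions[lower]
        adjusted[upper] += moved
        adjusted[lower] -= moved
    return {g: round(p * cohort_size) for g, p in adjusted.items()}

# A centre that historically awarded 20% As, 50% Bs and 30% Cs, with a
# slightly stronger cohort of 20 students this year:
print(expected_grade_counts({"A": 0.2, "B": 0.5, "C": 0.3}, 20, ability_shift=0.1))
# → {'A': 5, 'B': 10, 'C': 5}
```

The point of the sketch is only that the model fixes a grade distribution per centre, informed by history and prior attainment, before any individual student is considered.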

Although the model came up with the grades at centre level, which students got which grades was decided by the centres. Centres ranked their students in each subject, and grades were given in rank order. Some commentary has overlooked this, talking as if the statistical model decided every student’s grade. It did not. It determined what grades were available to be given (with an exception to be discussed in the next paragraph), not which student should get which grade. As a result, the majority of grades were not changed, and where they were, it would often have been a result of the ranking as well as the statistical model.
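The allocation step can be sketched in a few lines: the statistical model fixes how many of each grade a centre can award, and the centre’s own ranking decides which student gets which grade. The names and numbers below are invented.

```python
# Illustrative sketch of the allocation step: grades available at the
# centre are handed out to students strictly in the centre's rank order.

def allocate_by_rank(ranked_students, grade_counts):
    """ranked_students: best first. grade_counts: {grade: number available},
    listed from highest grade to lowest."""
    students = iter(ranked_students)
    results = {}
    for grade, count in grade_counts.items():
        for _ in range(count):
            results[next(students)] = grade
    return results

# Four students ranked by their centre; the model allows one A, two Bs, one C:
print(allocate_by_rank(["Asha", "Ben", "Cal", "Dee"], {"A": 1, "B": 2, "C": 1}))
# → {'Asha': 'A', 'Ben': 'B', 'Cal': 'B', 'Dee': 'C'}
```

Note what this implies: if the ranking is wrong, a student gets the wrong grade even if the centre-level distribution is exactly right.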

Finally, there was an exception because of the problem of “small cohorts” taking exams i.e. where centres had very few students taking a particular exam (or very few had taken it in the past). This is because where there was less data, it would be harder to predict what grades were likely to be given. Centres had also been asked to predict grades (Centre Assessed Grades or CAGs) for each student and for the smallest cohorts these were accepted. Slightly larger cohorts were given a compromise between the CAGs and the statistical model, and for cohorts that were larger still, the statistical model alone was used.
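A sketch of how such a taper might work is below. The thresholds (5 and 15) and the linear blend are assumptions for illustration, not Ofqual’s published parameters.

```python
# Sketch of the small-cohort rule: CAGs in full for the smallest cohorts,
# the statistical model in full for large ones, and a linear compromise
# in between. Thresholds and the blending rule are invented.

def blended_points(cag_points, model_points, cohort_size, small=5, large=15):
    """Grades expressed as points (e.g. A* = 6 ... U = 0) so that a CAG
    and a model prediction can be averaged."""
    if cohort_size <= small:
        cag_weight = 1.0          # CAG accepted in full
    elif cohort_size >= large:
        cag_weight = 0.0          # statistical model used in full
    else:
        cag_weight = (large - cohort_size) / (large - small)
    return cag_weight * cag_points + (1 - cag_weight) * model_points

print(blended_points(5, 3, cohort_size=4))   # → 5.0 (tiny cohort: the CAG)
print(blended_points(5, 3, cohort_size=10))  # → 4.0 (mid-sized: a compromise)
print(blended_points(5, 3, cohort_size=30))  # → 3.0 (large cohort: the model)
```

This is also why, as discussed later, subjects with small cohorts kept more of their (typically inflated) CAGs than popular subjects in large sixth forms.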

It is important to understand this process if you think a particular grade is wrong. Without knowing whether the cohort was small, why the statistical model predicted what it did, how the distribution was calculated for a centre, and where a student was in the ranking, you do not know how a grade came to be given. For some reason, people have jumped to declare the evils of an “algorithm”. Didn’t get your result? It’s the result of an algorithm.

As a maths teacher, I quite like algorithms. Algorithms are the rules and processes used to solve a problem, perhaps best seen as the recipe for getting an answer. Every year, algorithms are used after exams to decide grade boundaries and give grades. A mark scheme is also an algorithm. The alternative to algorithms deciding things is making arbitrary judgements that don’t follow rules. This year is different in that CAGs, a statistical model (also a type of algorithm) and centre rankings have replaced exams. The first thing that people need to do to discuss this sensibly is to stop talking about an algorithm that decided everything. If you mean the statistical model, then say “the statistical model”. There are other algorithms involved in the process, but they are more like the algorithms used every year: rules that turn messy information into grades. Nobody should be arguing that the process of giving grades should not happen according to rules. Nobody in an exam board should be making it up as they go along.
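The familiar grade-boundary step used every year is exactly this kind of algorithm: a fixed rule that turns a mark into a grade. The boundaries below are invented for illustration.

```python
# Grade boundaries as an algorithm: compare a mark against thresholds
# and award the highest grade reached. Boundary values are invented.

def grade_from_mark(mark, boundaries):
    """boundaries: (grade, minimum mark) pairs, highest grade first."""
    for grade, minimum in boundaries:
        if mark >= minimum:
            return grade
    return "U"

BOUNDARIES = [("A*", 80), ("A", 70), ("B", 60), ("C", 50), ("D", 40), ("E", 30)]
print(grade_from_mark(64, BOUNDARIES))  # → B
print(grade_from_mark(12, BOUNDARIES))  # → U
```

Nobody calls this sinister in a normal year, even though it decides every grade; the objection this year is really to the inputs, not to the existence of rules.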

Why predicted grades are inaccurate.

Predicted grades, whether from teachers or from a statistical model, are not likely to be accurate. That’s why exams are taken every year. The grades given will not have been the same as those that would have been given had exams been sat. Exam results are always influenced by what seem like random factors that nobody can predict (I will discuss this further in the next section). We can reasonably argue over what is the most accurate way to predict grades, but we cannot claim that there is a very accurate method. There are also situations where exam results are very hard to predict. Here is why I think this year’s results will be depressingly inaccurate.

Some students are exceptional. Some will get an A* in a school that’s never had an A*. Some will get a U in a school that’s never had a U. Predicting who these students are is incredibly difficult, and it remains difficult even where historic A-level results are adjusted to account for the GCSE data of current students. Students will often have unfairly missed out (or unfairly gained) wherever very high or low grades were on the table (i.e. if students were at the top or the bottom of rankings). This is the most heartbreaking aspect of what’s happened. The exceptional is unpredictable. The statistical model will not pick up on these students. If a school normally gets some Us (or it gets Es but this cohort is weaker than usual), the model will predict Us. If a school doesn’t normally get A*s (or it does but this year’s cohort is weaker than usual), the model will not predict A*s. This will be very inaccurate in practice. You might then think that CAGs should be used to identify these students. However, just as a statistical model won’t pick up an A* or U student where normally there are none, a teacher who has never taught an A* or U student will not be able to be sure they have taught one this time. In the case of a U it might be more obvious, but why even enter a student for the exam if it was completely obvious they’d get a U? The inaccuracy in the CAGs for extreme grades was remarkable. In 2019, 7.7% of grades were A*; in 2020, 13.9% of CAGs were A*. In 2019, 2.5% of grades were Us; in 2020, 0.3% of CAGs were Us. Both the CAGs and the statistical model were likely to be wrong. There’s no easy way to sort this out; it’s a choice between two bad options.

As well as exceptional students, there are exceptional schools. There are schools that do things differently now, and their results will be different. Like exceptional students, these are hard to predict. Ofqual found that looking at the recent trajectory of schools did not tell them which were going to improve and so the statistical model didn’t use that information. Some of us (myself included) are very convinced we work in schools that are on the right track and likely to do better. However, no school is going to claim otherwise and few schools will admit grades are going to get worse, so again, CAGs are not a solution. Because exceptional schools and exceptional students are by their very nature unpredictable, this is where we can expect to find the biggest injustices in predicted grades.

Perhaps the biggest source of poor predictions is the one that people seem to be reluctant to mention. The rankings rely on the ability of centres to compare students. There is little evidence that schools are good at this, and I can guarantee that some schools I’ve worked at would do a terrible job. However, if we removed this part of the process, grades given in line with the statistical model would be ignoring everything that happened during the course. Few people would argue that this should happen, so this hasn’t been debated anywhere near as much as other sources of error. But for individual students convinced their grades are wrong, this is likely to be incredibly important. Despite what I said about the problems with A*s and Us, a lot of students who missed out on their CAG of A* will have done so because they were not highly ranked, and a lot of students who have got Us will have done so because they were ranked bottom and any “error” could be attributable to their school rather than an algorithm. 

Finally, we have the small cohorts problem. There’s no real way round this, although obviously plenty of technical debate is possible about how it should be dealt with. If the cohort was so small that the statistical model would not work, something else needs to be done. The decision was to use CAGs fully or partially, despite the fact that these are likely to have been inflated. Inflated grades are probably better than random ones or ones based on GCSE results. But this is also a source of inaccuracy. It also favours centres with small cohorts in a subject and, therefore, it will allow systematic inaccuracy that will affect some institutions very differently to others. It is the likely reason that CAGs have not been adjusted downwards equally in all types of school. Popular subjects in large sixth forms are likely to have ended up with grades further below CAGs than obscure subjects in small sixth forms.

Which claims about the process are false or unproven.

Much of what I have observed of the debate about how grades were given has consisted of calls for grade inflation disguised as complaints about inaccuracy, or emotive tales of students’ thwarted ambitions that assume that this was unfair or unusual without addressing the cause of the specific disappointment. As mentioned above, much debate has blamed everything on an “algorithm” rather than identifying what choices were made and why. Having accepted the problems with predicting grades and acknowledged the suffering caused by inaccuracies, it’s still worth trying to dispense with mistaken, misleading or inaccurate claims that I have seen on social media and heard on the news. Here are the biggest myths about what’s happened.

Myth 1: Exam grades are normally very accurate. A lot of attempts to emphasise the inaccuracies in the statistical model have assumed that there is more precision in exam grades than there actually is. In reality, the difference between a B grade student and a C grade student can be far less than the difference between two B grade students. Some types of exam marking (not maths, obviously) are quite subjective and carry a significant margin of error, making luck a huge factor in what grades are given. Add to that the amount of luck involved in revising the right topics, or having a good day or a bad day in the exam, and it’s no wonder grades are hard to predict with accuracy. It’s not comforting to think that a student may miss out on a university offer because of bad luck, but that is not unique to this year; it is normal. The point of exam grades is not to distinguish between a B grade and a C grade, but between a B grade and a D grade or even an E grade. It’s not that every A* grade reflects the top 7.7% of ability; it’s more a way of ensuring that anyone in, say, the top 1% should get an A*. All grades are a matter of probability, not a definitive judgement. That does not make them useless or mean that there are better alternatives to exams, but it does mean everyone should interpret grades carefully every year.

Myth 2: CAGs would have been more accurate.

As mentioned above, CAGs were higher than they should have been, based on the reasonable assumption that a year group with an interrupted year 13 is unlikely to end up far more able than all previous year groups. There’s been a tendency for people to claim that aggregate errors don’t tell us anything about inaccuracies at the level of individual students. This is getting things backwards. It is possible to have inaccuracies for individual students that cancel each other out and aren’t visible at the aggregate level. So you could have half of grades being too high, and half too low, and on average the distribution of grades would seem fair. You could even argue that this happens every year. But this does not work the other way. If, on average, grades were too high, that does tell us something about individual grades. It tells us that they are more likely to be too high than too low. This is reason enough to adjust downwards if you want to make the most accurate predictions.
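A toy example with invented numbers makes the asymmetry concrete: errors that cancel leave the aggregate looking fine even though every individual grade is wrong, while a net upward bias at the aggregate level means, in the simplest case, that more grades were too high than too low.

```python
# Toy example (all numbers invented): compare a set of errors that
# cancel with a set that has a systematic upward bias.

def error_summary(errors):
    """errors: awarded grade minus 'true' grade, one entry per student."""
    net = sum(errors)
    too_high = sum(1 for e in errors if e > 0)
    too_low = sum(1 for e in errors if e < 0)
    return net, too_high, too_low

cancelling = [+1, -1, +1, -1]     # half too high, half too low
inflated = [+1, +1, +1, 0, -1]    # a net upward bias

print(error_summary(cancelling))  # → (0, 2, 2): aggregate looks fine, every grade wrong
print(error_summary(inflated))    # → (2, 3, 1): net bias, more too-high grades
```

The cancelling case is what can happen every year; the inflated case is what the CAG figures showed, and it is visible precisely because it does not cancel.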

Myth 3: Individual students we don’t know getting unpredicted Us and not getting predicted A*s are examples of how the statistical model was inaccurate.

As argued above, the statistical model is likely to have been inaccurate with respect to the extremes. However, because we know CAGs are also inaccurate, and that bad rankings can also explain anomalies, we cannot blindly accept every story about this from kids we don’t know. I mention this because so much commentary and news coverage has been anecdotal in this way. If there were no disappointed school leavers, that would merely tell us that the results this year were way out compared to what they should have been, because disappointed school leavers are normal when exam grades are given out. Obviously, the better you know a student, the more likely you are to know a grade is wrong, but even then you need to know their ranking and the justification for the grade distribution to know that the statistical model is the problem.

Myth 4: The system was particularly unfair on poor bright children.

This myth seems to have come from two sources, so I’ll deal with each in turn.

Firstly, it has been assumed that, as schools which normally get no A*s would not be predicted A*s (not quite true), this means poor bright kids in badly performing schools would have lost out. This misses the fact that, even with little history of getting A*s previously, they might still be predicted if the cohort has better GCSE results than usual, so the error is less likely if the poor bright kid had good GCSEs. It also assumes that it is normal for poor kids to do A-levels in institutions that get no A*s, which is unlikely for big institutions. Additionally, schools are not uniform in their intake. The bright kid at a school full of poor kids who misses out is not necessarily poor; in fact, because disadvantaged kids are likely to get worse results, they often won’t be. Finally, it’s not just low-achieving schools whose A* students are hard to predict. While a school that usually gets no A*s in a subject, but which would have got one this year, makes for a more dramatic story, the situation of that child is no different to that of the lowest ranked child in a school that normally gets 20 A*s in a subject and this year would have got 21.

The second cause of this myth comes from statistics about the rates of downgrading from CAGs, broken down by socio-economic status.

Although really this shows there’s not a huge difference between children of different socioeconomic statuses (SES), it has been used to claim that poorer students were harder hit by downgrading and, therefore, that it is poor bright kids who will have been hit worse than wealthier bright kids. (Other arguments have looked at type of school, but I’ll deal with that next.) Whether this difference is a result of the problem of small cohorts, or of the fact that it is harder to overestimate higher achieving students, I don’t know. However, we do know that the claim that these figures reflect what happened to the highest achieving kids is incorrect. If we look at the top two grades, the proportion of kids who had a high CAG and had it downgraded is smaller for lower SES groups (although, because fewer students received those grades overall, the chance of being downgraded given that you had a high CAG would show the opposite pattern).


Myth 5: The system was deliberately rigged to downgrade the CAGs of some types of students more than others

I suppose it’s probably worth saying that it’s impossible to prove beyond all doubt that this is a myth, but I can note that the evidence is against it. The statistical model should not have discriminated at all. The problem of small cohorts, and the fact that it is easier to over-estimate low-achieving students and harder to over-estimate high-achieving students, seem to provide a plausible explanation of what we can observe about discrepancies in downgrading. Also, if we compare results over time, we would expect those types of institution which on average had a fall in results last time to have a rise this year. Take those three factors into account and nobody should be surprised by the figures for downgrading by centre type, or think them sinister (although it would be useful to know to what extent each type of school was affected by downgrading and by small cohort size).

If you see anyone using only one of these two sets of data, ignoring the change from 2018 to 2019, or deciding to pick and choose which types of centre matter (like comparing independent schools with FE colleges), suspect that they are being misleading. Also, recall that these are averages, and individual subjects and centres will differ a lot. You cannot pick a single school, say Eton, and claim it will have done well in avoiding downgrading in all subjects this year.

Now for some general myth-busting.

The evidence shows students were affected by rounding errors. False. Suggestions like this, often used to explain unexpected Us, seem entirely speculative and not necessary to explain why students have got Us.

Some students got higher results in further maths than maths. True. Still a tiny minority, but much higher than normal.

No students at Eton were downgraded. Almost certainly false. This claim, which was all over Twitter, is extremely unlikely: it has been denied anecdotally, and there is no evidence for it. We would expect large independent schools to have been downgraded in popular subjects.

Something went wrong on results day. False. Things seem to have gone according to plan. If what happened was wrong it was because it was the wrong plan. Nothing surprising happened at the system level.

Students were denied the grades they needed by what happened. True for some students, but on average there is no reason to think it would have been more common to miss out on an offer than if exams had taken place, and some institutions might become more generous, if they can, due to the reduced reliability of the grades.

Results were given according to a normal distribution. False.

Rankings were changed by the statistical model. False. Or at least if it did happen, it wasn’t supposed to and an error has been made.

The stressful events of this year where exams were cancelled show that we shouldn’t have exams. False. Your logic does not resemble our earth logic.

And one final point. So many of the problems above come down to small cohort size, that next week’s GCSE results should be far more accurate. Fingers crossed. And good luck.


Grade inflation is not the way to resolve an exam kerfuffle

August 13, 2020

This year, it was decided that exams would be cancelled due to COVID-19, and grades for years 11 and 13 in England (and, as I now know from the news, for Higher students in Scotland) would be decided by a mixture of centre assessed grades (CAGs) and a statistical model based on rankings provided by centres. Both elements of this have their limitations, and that is why a combination is necessary. It remains to be seen how effectively this will be done. In England, I suspect it will work well for GCSEs, but I’m not sure about A-levels. In Scotland, the Scottish government gave in to pressure and accepted CAGs as grades, despite them being much higher, and results this year are now massively different from previous years. There is a widespread misconception that in normal years exams represent an objective standard and luck plays no role in allocating grades. For people who believe this, this year’s system is completely broken no matter how accurately it might predict what students would have got. Moreover, there is also a belief that when an exam system has a problem, grade inflation is a solution.

I would argue that inaccurate grades create their own problems, and that honesty, by which I mean maximising accuracy in predictions, is the best policy. I am aware that there are unavoidable difficulties. Schools and individuals whose success (or failure) this year is unprecedented will not get the grades they would have got. I’ve also worked in schools where assessment was poor, and I hate to think how their rankings will be compiled. But for large cohorts, CAGs will not be more accurate than a model that corrects the tendency towards over-estimation. It flies in the face of mathematics to deny that if grades are inflated, they are less likely to be accurate, although there appear to be many involved in education who claim a large systematic bias in a single direction is not a source of inaccuracy. It’s been reported that A-level grades at A-A* would have gone up from 27% to 38% if CAGs had been used. Nobody can argue that such grades would have been accurate.

Grade inflation is not a victimless crime. It does have real, negative effects. Firstly, devalued grades take opportunities away from those who have received them in the past, as their grades start to be interpreted according to lower standards. Secondly, inflated grades create inconvenience for employers and educational institutions who will find them harder to interpret. Thirdly, some of those who receive grades they never would have achieved without grade inflation will find themselves in courses and jobs for which they are unsuitable. Fourthly, if the rate of grade inflation is not uniform across the system, some will lose out relative to their peers. This is particularly noticeable in Scotland, where there is evidence that grades were inflated more for some socio-economic groups than others. Finally, students in the following year will lose out if the higher pass rates are not maintained, particularly if students can defer for a year before going to university. I would expect there to be pressure in Scotland to keep the much higher pass rates from this year for next year – although a cynic might wonder whether such pressure is easier to resist further away from an election.

There is also a bigger picture here. This might seem like a one-off event, but this is not the first exam kerfuffle for which some have advocated massive grade inflation as a solution. When a new modular English GCSE exam resulted in grade boundaries moving drastically in 2012, there were those who advocated a huge shift in C grade pass rates. When grades are revalued, the direction is almost always the same: more passes without any underlying improvement in achievement or ability. Recent stability in pass rates is the exception, not the norm. It has only been achieved through a deliberate policy effort to hold the line after years of constant grade inflation. If we discard this policy this year, it will be easier to abandon it in other years too.

Whether or not grading goes well today and next Thursday (and I know some will inevitably lose out compared with exams), we would be fools to give up on maintaining the value of grades.

An additional couple of notes.

Firstly, good luck to all students (and their teachers) getting results today and next week. Secondly, the grade allocation might go completely wrong, but remember, anomalies will be reported from schools even if it goes really well. Don’t jump to conclusions when the first angry school leaders appear on the news or on social media. We won’t know if there’s a problem for certain until somebody checks the maths for those schools, which is easier said than done.


Mock results are not a good prediction of final exam grades

August 12, 2020

The government has announced last-minute plans to let students use their mock exam result as a grade this year following the cancellation of exams. However, I have just heard Nick Gibb say mocks could be used for an appeal, so maybe the proposal is not what we thought. Just in case, I’ll explain now why it would be insane to allow mocks to count, for the following reasons.

  1. There is no consistent system for doing and recording mock exam results, with schools doing drastically different things. Schools would certainly have done them differently if this had been on the cards.
  2. Mock exams don’t have boundaries. Schools just make up the boundaries.
  3. Some schools deliberately play down mock results; some even play them up. It’s completely unfair for such arbitrary decisions to have any effect on students.
  4. Some students with private tutors “accidentally” see the paper before sitting the mock exam. Schools then have to sort out how a child surprisingly got almost everything right, sometimes on topics they’ve never studied.
  5. This new system creates a precedent. Schools will want to have dodgy over-inflated mock results on the system in future.
  6. Schools do mocks at completely different times of the year so they are not comparable between schools.
  7. Nobody wanted this. I’d bet Ofqual don’t want this.
  8. Some subjects, like A level English literature and language, have very long exams which might not be practical to do rigorously as mocks. (And let’s not even mention art A-level)
  9. Schools have already submitted teacher assessed grades. While these are unlikely to be reliable, there is no reason they should be less accurate than mock exams.
  10. Making last minute decisions like this makes the job harder for everyone.

Update: It does appear to be the case that mocks will only be used for appeals. Looks like last night’s announcement was incorrect, thank goodness.


Could Fad CPD Harm Your School?

July 29, 2020

A difficult question for any school leader is how best to use the time allocated for Continuing Professional Development (CPD) with some schools conspicuously getting it wrong and no easy answers as to who gets it right. One tendency that I have noticed, which I consider to be a mistake, is to ignore the context of one’s school and the needs of one’s own staff, in favour of what is currently fashionable. Sometimes this is just responding to the ideological climate of the moment, but at other times schools can respond to some gimmick that will soon be forgotten about, or something that has just been in the news.

All CPD runs the risk that, even if it seems fine on the day to the people in charge, it might make no difference in the longer term. There is also the ongoing problem of CPD that passes on false information (like learning styles or the predictive power of attachment theory) and bad practices (like Brain Gym or discovery learning). These difficulties are compounded when CPD is based on the latest fad. There simply may not have been time to evaluate the ideas or the effects of the training. At least with something well-established, you can ask prior recipients of the training if it was helpful; with the next big thing in CPD you might turn out to be the school that discovers its effects are unarguably harmful.

There are two fads I am hearing a lot about at the moment that I think are both partially based on myths and also potentially harmful.

1) Mental health training based on pandemic trauma.

There has been an overwhelming amount of nonsense about a mental health crisis in schools following the pandemic. For instance, this article in Schools Week claimed “child development experts are predicting a ‘national disaster’ as lockdown threatens to create a generation with mental health problems.”

Why might the ideas be false?

We have good reason to be sceptical of those claiming that lockdown has traumatised children. There was already a mental health fad in education, and a trauma fad. During the pandemic, a number of people I had previously associated with the idea that schooling caused children to be mentally ill began arguing that lack of schooling would cause children to be mentally ill. Anyone making claims about the psychological effects of lockdown based on attachment theory, developmental psychology or anything else with no proven record of predicting the prevalence of mental health problems in the real world can be assumed to be a charlatan. Psychiatric epidemiology – the study of the causes of mental disorders in society – is an academic discipline, not a hobby. While the mental health of some children may have been harmed by bereavement, by confinement to a home that was already a psychologically unhealthy environment, or by reduced support for existing mental health conditions, there is good reason to be sceptical of any claims about a Covid mental health crisis.

Why might the training be harmful?

I don’t want to overstate the risks here; as far as I know, nobody has good evidence that even the most extreme and alarmist talk about mental health in a school causes harm. However, we can’t rule out that children’s mental health could be affected by their perception of mental health disorders in their school. We know that suicides can cluster in a community, that there is an ongoing debate about emotional contagion, and that there are studies suggesting some level of peer contagion for depressed states. There is also the nocebo effect: evidence that telling people they will be harmed by something causes them to experience harm. Additionally, even among psychiatrists, there is concern about fad diagnoses. Perhaps worst of all, if teachers and students are told it is normal to have suffered mental health difficulties as a result of lockdown, it might cause teachers to see warning signs of mental illness as “normal”, and students with genuine mental health symptoms to think that everyone has them and they are not a reason to seek help.

None of these concerns are a reason not to want staff to be aware of potential mental health problems among students, but they are a reason why we shouldn’t just assume that misinformation and panic about mental health is harmless, or that if your intentions are good you won’t make things worse.

What’s the alternative?

School leaders should be aware that schools already contain experts in looking after children. They already have access to training in safeguarding that includes dealing with mental health. The best option is to avoid assuming any radical discontinuity in children’s mental health before and after lockdown, until there is good evidence for it. School leaders should make use of their existing expertise, and their knowledge of their students and their community. They shouldn’t be asking outsiders to tell them how to react to problems that might not even exist in their school. And, of course, there are good practices such as keeping schools safe, supportive and free from bullying, that will always be best for mental health.

2) Racism training based on unconscious bias.

In the aftermath of the Black Lives Matter protests there seemed to be a rush by some school leaders to abandon all critical thinking regarding racism in society and its causes. A particular focus, perhaps because of its potential for replacing meaningful action with gestures and opportunities to judge one’s peers, has been on the idea that unconscious attitudes are a significant cause of discrimination and unfairness.

Why might the ideas be false? 

There are two main sources of evidence that I have seen presented to educators as evidence of the power of unconscious bias. The first is from research involving the Implicit-Association Test (IAT), a psychological assessment that is meant to uncover one’s hidden prejudices. The IAT has now been established to be neither a reliable test nor a valid predictor of anyone’s actual behaviour, but it is apparently still common among corporate diversity trainers*. The second source of evidence for unconscious bias I have seen in education debates is less specific, but more open to interpretation. Any and all evidence of racism in society, particularly in outwardly progressive organisations, is presented as evidence of unconscious bias. This can never be ruled out; however, it cannot be assumed either. Anyone who sees evidence of prejudice, for instance in how often people are offered job interviews, might be seeing the results of unconscious bias. However, this is only one explanation among several. There may be conscious prejudice, unless you happen to believe that deliberate racism no longer exists. There may be institutional racism, with rules and procedures that lead to racist outcomes. There may be discrimination based on ignorance and misconceptions which, while unintentional, is still very much a matter of conscious beliefs and actions that could be challenged.

Why might the training be harmful?

If discrimination is happening for any of the reasons I just mentioned (conscious racism, institutional racism, ignorance) then blaming it on unconscious bias will make it harder to prevent. Any institution wanting to perpetuate a racist status quo will find a belief in unconscious bias a convenient excuse for taking no action to identify actual racists, reform discriminatory practices, or identify where false beliefs about race are leading to discrimination.

Even if it was not a potential distraction from dealing with actual problems, there is also a possibility that the worst anti-racism training might be harmful in its own right (and this goes beyond just training based on unconscious bias). There is evidence that some types of training might have “ironic effects” and that some efforts to address stereotypes might reinforce them. As with mental health training, one should not assume that because one has good intentions, one isn’t making the problem worse. A further complication, that this blogpost discusses, is whether some ideas about race may be so heavily politicised that, if brought into schools and passed on to students, they could conflict with the legal duty to be non-partisan.

What’s the alternative?

Despite theories of “white supremacy” that are intended to explain everything from slavery to the colour of sticking plasters, my experience of teaching in a variety of schools is that actual racism manifests itself in different ways in different schools. There is no single explanatory theory of racism in society that covers every problem, and 99% of the time it seems that if somebody suggests such a theory it describes the United States more closely than it does England. Forget looking for racism in unconscious minds; you should be finding out about racism in your school right now. Is there racist bullying among students? Is there discrimination in pay and progression? Is there anyone, staff or student, who feels less safe and less valued in your school because of their race? Is there an expectation that some ethnic groups cannot be expected to behave or learn? School leaders should know of the problems in their own schools. They should be making sure there is zero tolerance of racism and zero opportunity for racism to spread, even if that means punishments and exclusions for kids and disciplinary action and dismissal for staff. They shouldn’t be asking outsiders to tell them how to react to problems that might not even exist in their school.

Perhaps the allure of fad CPD is something to do with the widespread belief that schools are there to solve society’s problems rather than to educate. It might be that this makes it almost addictive to look for the latest analysis of social problems and the latest ways to address them. But if the CPD needs of a school are assessed by looking at what’s in a newspaper or being discussed on Twitter, then who is actually addressing the problems and challenges that already exist in that school?



Two Podcasts Featuring Me

June 19, 2020

I was interviewed for a couple of podcasts last week.

You can find the relevant episode of Greg Ashman’s Filling The Pail podcast here.

You can find the relevant episode of Naylor’s Natter in association with TDT podcast here.


Achievement For All is bad for kids

June 9, 2020

I’m not a huge fan of the Education Endowment Foundation (EEF), partly because they’ve allowed some pretty shoddy research in the past, and partly because they have a history of being partisan on certain issues. However, they do fund RCTs that test education initiatives and, at the very least, that should enable them to spot some popular initiatives that have no effect or even a negative effect.

The latest emperor with no clothes is Achievement For All, which according to the EEF website

 …is a whole-school improvement programme that aims to improve the academic and social outcomes of primary school pupils. Trained Achievement for All coaches deliver a bespoke two-year programme to schools through monthly coaching sessions which focus on leadership, learning, parental engagement and wider outcomes, in addition to focusing on improving outcomes for a target group of children (which largely consists of the lowest 20% of attainers). The programme has cumulatively reached over 4,000 English schools.

Their evaluation of the programme found that:

In this trial, Achievement for All resulted in negative impacts on academic outcomes for pupils, who received the programme during five terms of Years 5 and 6 (ages 9-11). Children in the treatment schools made 2 months’ less progress in Key Stage 2 reading and maths, compared to children in control schools, in which usual practice continued. The same negative impact was found for children eligible for free school meals. Target children (those children the intervention specifically aimed to support) also made 2 months’ less progress in reading, and 3 months’ less progress in maths. The co-primary outcome finding (whole-group reading, and target children reading) had a very high security rating, 5 out of 5 on the EEF padlock scale.

Given the size of the effects and the consistency of negative findings, these results are noteworthy. Of particular importance is the impact that the programme had on target children, and children eligible for free school meals.

A report in Schools Week filled in some details.

The findings rank AfA as the joint worst-performing of more than 100 projects reviewed by EEF since 2011, with only three other projects earning an impact rating of negative two months.

Of these it is the only one to have the highest possible evidence strength of five – which indicates EEF “have very high confidence in its findings”.

They also reported the laughable response of the founder of AfA, Professor Sonia Blandford:

Blandford pointed out that disadvantaged pupils within the AfA trial schools still “achieved above national expectations, which was our key aim in the intervention”.

She added “it was an error to agree to a trial that attempted to evaluate the effectiveness of our broad and yet bespoke approach through the narrow lens of two school improvement parameters”.

Does this matter? I think it does. Since it started in 2011, it’s entirely possible that 4,000 schools have harmed their students’ learning, or at least wasted resources on something that is more likely to be harmful than helpful. And it’s worth asking how. Probably the single biggest reason this disaster lasted for so long is that the DfE endorsed it with a report assessing its positive effects on SEN children, with data collected through:

  • teacher surveys
  • academic sources
  • interviews with strategic people
  • longitudinal case studies of 20 AfA schools
  • mini case studies of 100 pupils and their families
  • AfA events

In other words, the kind of “research” that costs money but nobody can reasonably believe is a fair way to evaluate an initiative of this kind. So the first thing we can learn from this is that the DfE should not be endorsing projects in this way. Particularly when the chances are some teachers, like Mrs S below, could have given a more accurate evaluation.

But another point is the extent to which the people who run organisations like this become a vested interest, eagerly telling politicians and the public that schools are getting it wrong. There is a huge amount of expertise in the system among teachers and school leaders. Yet it is staggering how often AfA’s Professor Blandford was a voice in important debates. I have particularly noticed how such people, whose position means they don’t have to deal with the consequences of dangerous and out-of-control schools, seem to dominate the debate on exclusions. Professor Blandford was a particularly loud voice on this issue:

Calling for fewer exclusions in response to the Timpson Review.

In the TES claiming schools could do without exclusions

Talking on exclusions at Kingston University

Addressing LA conferences on reducing exclusions

Any one of these would have been far better used as an opportunity for a successful school leader to explain why exclusions are necessary. But our education system as a whole promotes the voices of “experts” whose ideas don’t work over the voices of practitioners with a proven track record.

And I won’t ever forget, as I reported here, that back in the days when the Chartered College of Teaching was still pretending it was going to be teacher-led, Professor Blandford was one of the first non-teachers to be given a leadership role that, if promises had been kept, would have gone to a teacher.

I’ve always defended the right of non-teachers to help and advise schools, but we need a system where schools look first to a) practitioner expertise and b) what has been proven to work. Not a system where it’s only after 4000 schools and 9 years that we actually realise that we’ve been listening to the wrong people.


Rules and exceptions

May 26, 2020

The weekend’s news was dominated by the story of the prime minister’s special adviser, Dominic Cummings, and his long trip to Durham during lockdown which he justified, at least partially, in the following way:

The rules make clear that when dealing with small children that can be exceptional circumstances and I think that was exceptional circumstances.

I suspect I’m in the majority in not considering this an acceptable interpretation of the rules. However, given that I don’t feel any particular prior animosity to Cummings, and given that I could easily imagine other exceptional circumstances that I would have thought made his actions acceptable, I find myself considering precisely what the problem is.

This also led me back to many debates about rules in schools where the topic of exceptions has come up. To hear some people talk about the evils of “no excuses” or “zero tolerance” behaviour policies you could be forgiven for thinking that there were schools that make no exception to the rules at all. It is more common than it should be to hear people, usually not working in schools, claim that a rule against letting kids out to the loo means kids soiling themselves, and a rule against letting kids out of the room every time they say they feel sick means that it is normal for kids to sit in class vomiting into a bucket. I don’t think these claims describe any schools at all. I think, if anything, there is far more of a constituency of people involved in education who always make exceptions and will justify almost any rule-breaking as something a child couldn’t help doing. I could probably rant for hours about the most ridiculous excuses for breaking rules I’ve heard from both kids, and, depressingly, from the kind of adult who believes children should be freed from adult authority.

So how do we distinguish between valid and invalid exceptions to rules?

Here are some considerations.

1) Does the exception make the rule pointless?

If making a particular exception to a rule renders the entire rule pointless then it’s not a valid exception. This might seem obvious, but schools often have rules that are rendered pointless by the exceptions. If your break duty is to keep kids out of a school building unless they have a reason to be inside, you may quickly discover that the only kids who don’t have a reason to be inside are those who didn’t realise they weren’t supposed to be in the building.

Giving endless chances before any sanction is given can mean that a rule of “don’t do X” quickly becomes “do X as much as you like, until a teacher tells you to stop”. Not confronting a child’s behaviour because they will respond badly to being confronted, can mean that rules are essentially guidelines to be followed by choice.

2) Is the exception actually exceptional?

Related to the last point, the sheer number of exceptions can make a rule pointless. A rule of “nobody leaves the lesson” becomes pointless if there are exceptions for self-reported medical reasons. In one school I worked in, I taught a large class where a quarter of the students had a medical reason, signed by a member of staff, to leave the classroom to go to the toilets. After I reported this to the head of year, he made it marginally harder to get permission (requiring recent parental contact or, in serious cases, a medical note), and the number immediately fell to zero.

A lot of the debate over strict rules and special needs is in this category. There are people who use a wide variety of SEN as a reason a child should not have to obey rules or as a reason particular rules should not exist. Many people argue as if being labelled SEN alone was a reason not to obey rules, even though in some cohorts 44% of children (51% of boys) have been labelled as SEN at some point in their time at school. A particular concern is those who begin their claims with “Children with autism can’t…”. These claims are virtually never true, given the nature of autism and how different kids with autism are. A special need should always be special and an exception should always be exceptional.

3) Is the exception obvious and uncontroversial?

One thing that always amazes me about debates on strict rules is how willing some people are to believe that schools will not make obvious exceptions. Strangely enough, schools that have a rule that students stand up when a member of staff enters the room don’t actually punish a student in a wheelchair for not following it. Those schools that encourage eye contact when greeting a member of staff cannot be assumed to be attempting to force out those whose SEN might make that difficult. And those who claim that rules against shouting abuse are discriminatory against those with Tourette’s not only don’t understand schools, they don’t understand Tourette’s either. It’s debatable how clearly school rules have to say “this rule doesn’t apply in exceptional circumstances”: I can only think of one case in my two decades of teaching where an obvious exception wasn’t made (and that was a member of support staff, not a teacher, who refused to make it). But we could probably move the debate on behaviour forward a lot if everyone could admit that obvious exceptions are made all the time, and that the controversial bit is when we stop making exceptions.

4) Were “non-routine decisions” thought through?

A child has a medical note saying they should be let out to use the toilet. They use it every Thursday afternoon, usually as soon as they are given written work.

A child is being investigated for autism. Their paperwork says clearly that they will not be able to cope with being shouted at and will walk out. They walk out one lesson when told firmly they need to get on with the work.

A child uses a homophobic term that is not common these days to address a friend and they are overheard by a member of staff. When they are told that it is homophobic, they are shocked (strangely enough they usually have to hear this from their friends rather than just their teacher) and it is clear that they did not intend to be homophobic and have agreed that what they said was unacceptable.

All these cases are ones where, at the very least, there is room to consider for a moment, whether the normal course of action is the appropriate one and/or whether some follow up action will be needed later. I almost used the term “tough calls” for this section, but actually I’m not sure all the situations described above are “tough” for the experienced teacher, they are just not the sort of routine judgement teachers make without thinking and forget about a second later.

It should be the case that a teacher making a non-routine decision has thought it through and can explain it. This may even require delaying the final decision or making a provisional one (e.g. “I believe this is against the rules; I will look into it before issuing the detention”). It should also be the case that schools make situations requiring that extra thought as rare as possible, by making sure rules are clear and widely understood. This is why it matters if people involved in writing the rules start interpreting them in ways that are unexpected.

5) Were “debatable decisions” considered by the appropriate people and clear guidance given?

If a teacher, even after some thought, is still not 100% sure what their decision should be, or if they are confident but think a parent might contact the school asking for the justification for a decision, it’s best to speak to somebody else. Firstly, it makes it clear that the teacher is aware of the issue; they didn’t make a mistake and then try to cover it up. Secondly, this might bring to light more information, or even a better understanding of the rules, that resolves the problem. Thirdly, it means that if, eventually, the decision is judged to be wrong, they are not personally hung out to dry; it becomes the responsibility of the school to get it right. The other side of this is that if a teacher’s decision is judged to be wrong, they should be informed immediately and the reasons explained clearly. I have left schools because it was normal for teachers’ decisions about sanctions to be overruled without the teacher being informed, or where managers would deny responsibility for the things they made teachers do.

I hope these are useful considerations about rules and exceptions in schools. I’ll leave it as an exercise for the reader to decide whether any of these points also apply to the case of Dominic Cummings.


Do schools permanently exclude too often?

March 15, 2020

There’s been a narrative seen in the media in the last few years suggesting that there is a problem with the level of permanent exclusions.

This strikes me as a typical example:

Particularly common is the suggestion that a head with a low level of exclusions has something to teach us (in this case we have every reason to disagree), and the idea that it is reasonable to comment on the level of exclusions without commenting on the level of unacceptable behaviour in schools. There are some things we can say about the level of exclusions without looking at behaviour. We have heard a lot about rising exclusions recently, and you can easily find people complaining that something they personally dislike, like an academic curriculum or schools actually enforcing rules, is causing exclusions to increase. The latest figures show a general rise in permanent exclusions since 2012, when Michael Gove abolished the right to appeal, but this is a rise from a level which was already a historic low and is still lower than in 2006/7.

And if we look even further back, we see just how low permanent exclusions were before the rise.

The latest exclusion figures show that the number of permanent exclusions across the roughly 21,000 state schools in 2017/18 was 7,905. This is less than 1 in every 1,000 pupils, i.e. 0.10%. The rate of permanent exclusions in primary schools was 0.03 per cent; in secondary schools it was 0.20 per cent. As well as having a much lower rate of exclusions, primary schools are smaller, so an individual primary school will exclude far less often than a secondary school. On average, a secondary school excludes 2 students a year, whereas, on average, a primary school will exclude a student once in a period of almost 14 years. Strangely, this is overlooked by the anti-exclusion lobby, and we see some primary heads praised for avoiding exclusions as if it were an achievement rather than normal for primaries.
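The per-school averages follow from the rates by back-of-envelope arithmetic. The school sizes below are my own assumed typical values (roughly 1,000 pupils for a secondary, 240 for a primary), not figures from the exclusion statistics.

```python
# Back-of-envelope check of the per-school exclusion figures.
# The school rolls are assumed typical values, not taken from the statistics.
secondary_rate = 0.0020   # 0.20% of secondary pupils permanently excluded per year
primary_rate = 0.0003     # 0.03% of primary pupils

secondary_size = 1000     # assumed average secondary school roll
primary_size = 240        # assumed average primary school roll

# Expected permanent exclusions per secondary school per year.
exclusions_per_secondary_per_year = secondary_rate * secondary_size

# Expected years between permanent exclusions at an average primary school.
years_per_exclusion_primary = 1 / (primary_rate * primary_size)

print(exclusions_per_secondary_per_year)  # about 2 per year
print(years_per_exclusion_primary)        # about 14 years between exclusions
```

Under these assumed school sizes, the rates reproduce the figures in the paragraph above: about two exclusions a year for an average secondary, and roughly one every fourteen years for an average primary.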

The primary/secondary difference also largely explains statistics like this from the Guardian.

Overall, 85% of all mainstream schools did not expel a single child in 2016-7, while 47 individual secondary schools (0.2% of all schools) expelled more than 10 pupils in the same year.

We would expect the vast majority of schools to be primary schools expelling nobody in a given year, and permanent exclusions to be concentrated in secondary schools.

We also often hear claims that permanent exclusions are given for less serious misbehaviour. Often this is a result of confusing permanent and fixed term exclusions, but where it isn’t, it’s based on implausible and unverified anecdotes or on an interpretation of the way permanent exclusions are categorised.

Those who wish to reduce or prevent exclusions point to the fact that the largest category, “Persistent disruptive behaviour”, is very vague and, perhaps by focusing on the word “disruptive” rather than the word “persistent”, interpret it to cover less serious, or easily preventable, offences.

Examples of this:

The ambiguity here is not really cleared up by the guidance on exclusions, which specifies only that the category of “persistent disruptive behaviour” refers to:

• Challenging behaviour
• Disobedience
• Persistent violation of school rules

Like so much of the debate about exclusions, I find a real disconnect between claims like the above and my experience as a teacher, which is that exclusions are infrequent and always for something that schools cannot tolerate without endangering children’s safety and learning. In my experience, “persistent disruptive behaviour” is not less serious than the other reasons for exclusions, it is just more persistent. Children I’ve encountered who have been excluded for this are usually out of control, and often will have repeatedly committed the offences described in the supposedly more serious categories.

While there is some evidence, to be discussed in a later blogpost, that teachers do not think schools are too quick to exclude, we would struggle to find direct evidence of whether permanent exclusions are used sparingly and only for the most serious behaviour. However, what we can do is look at the scale of the worst behaviour in schools, and ask whether it can sufficiently account for the number of permanent exclusions that take place. Of course, it could be argued that schools are permanently excluding for trivial offences while tolerating extreme offences, but such a claim would, at the very least, be implausible enough to require very strong evidence. If the number of exclusions is low compared with the amount of extreme behaviour taking place, it is unlikely that schools are too quick to exclude, and if the behaviours in the “more serious” categories for exclusions are far more common than the exclusion figures would suggest, it is unlikely that the “less serious” categories such as “other” or “persistent disruptive behaviour” (which between them account for the majority of permanent exclusions) are being used for trivial offences.

These are the numbers of permanent exclusions, by type, for 2017/18.

Permanent exclusions
Physical assault against a pupil 1,037
Physical assault against an adult 845
Verbal abuse/threatening behaviour against a pupil 338
Verbal abuse/threatening behaviour against an adult 652
Bullying 32
Racist abuse 13
Sexual misconduct 100
Drug and alcohol related 643
Damage 77
Theft 40
Persistent disruptive behaviour 2,686
Other 1,442
Total 7,905
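The table can be checked directly: the categories sum to the stated total, and the two categories sometimes claimed to cover trivial offences do indeed account for just over half of all permanent exclusions. A minimal check, using the figures exactly as in the table above:

```python
# Verify the 2017/18 permanent exclusion table: the category counts
# should sum to the stated total of 7,905, and we can compute what
# share "Persistent disruptive behaviour" and "Other" make up.

exclusions = {
    "Physical assault against a pupil": 1037,
    "Physical assault against an adult": 845,
    "Verbal abuse/threatening behaviour against a pupil": 338,
    "Verbal abuse/threatening behaviour against an adult": 652,
    "Bullying": 32,
    "Racist abuse": 13,
    "Sexual misconduct": 100,
    "Drug and alcohol related": 643,
    "Damage": 77,
    "Theft": 40,
    "Persistent disruptive behaviour": 2686,
    "Other": 1442,
}

total = sum(exclusions.values())
vague_share = (exclusions["Persistent disruptive behaviour"]
               + exclusions["Other"]) / total

print(total)                 # matches the stated total of 7,905
print(f"{vague_share:.1%}")  # share of the two "vague" categories
```

Running this confirms the total of 7,905, with “persistent disruptive behaviour” and “other” together making up about 52% of all permanent exclusions.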

Judging the level of bad behaviour in schools is hard to do accurately. Almost any source will be either an estimate or a partial picture. Fortunately, we don’t need to be very precise to see how low the above figures are. Teacher Tapp surveys teachers using a large sample weighted to be representative. They asked teachers about their experience of physical and verbal abuse in the last year.

There are 453,400 (full-time equivalent) teachers in state schools. This means the best estimate we have of the number of teachers experiencing physical abuse from pupils in a year is 95,000. The best estimate for teachers experiencing verbal abuse from pupils would be 249,000. Both terms are defined by the respondents, and obviously no sample is perfect, but this is enough to give some idea of the gap between the permanent exclusion figures and the likely number of incidents serious enough for teachers to consider them verbal or physical abuse. Permanent exclusions for physical assault against an adult, and for verbal abuse/threatening behaviour against an adult, were 845 and 652 respectively in the last year on record, out of 7,905 permanent exclusions in total. The mismatch between these figures and those derived from a representative sample of teachers makes a mockery of the idea that schools are quick to permanently exclude, and of the idea that we can assume, without evidence or on the basis of vague categories, that permanent exclusions include lots of trivial offences.
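The scale of that mismatch can be made concrete. The sketch below simply divides the post’s estimates by the teacher and exclusion counts; the implied survey percentages are inferred here from those estimates, not read directly from the survey, so treat them as illustrative:

```python
# Rough comparison of the survey-derived abuse estimates with the
# permanent exclusion figures. The percentages are inferred from the
# post's estimates and the FTE teacher count; illustrative only.

fte_teachers = 453_400
physical_abuse_est = 95_000   # teachers reporting physical abuse (post's estimate)
verbal_abuse_est = 249_000    # teachers reporting verbal abuse (post's estimate)

excl_physical_adult = 845     # exclusions: physical assault against an adult
excl_verbal_adult = 652       # exclusions: verbal abuse/threatening an adult

print(f"Share of teachers reporting physical abuse: {physical_abuse_est / fte_teachers:.0%}")
print(f"Share of teachers reporting verbal abuse: {verbal_abuse_est / fte_teachers:.0%}")
print(f"Teachers affected per exclusion (physical): ~{physical_abuse_est // excl_physical_adult}")
print(f"Teachers affected per exclusion (verbal): ~{verbal_abuse_est // excl_verbal_adult}")
```

On these figures roughly a fifth of teachers report physical abuse and over half report verbal abuse in a year, yet there are on the order of a hundred affected teachers for every permanent exclusion in the corresponding category.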

Another source of evidence is that in recent years journalists have reported on Freedom of Information requests to police forces about crimes reported at schools. These figures are likely to be incomplete, as not all police forces respond; some will include crimes committed by adults in schools, and some include figures from Scotland and Wales. We can also assume that many incidents in schools (I’d say from experience the vast majority) that could technically be considered criminal are not reported to the police. However, if all we are looking for is whether permanent exclusions reflect the frequency of serious incidents in schools, it is worth noting how they compare with the incidents that are reported to the police.

Comparing these with the exclusion figures, we again get the clear impression that both the total number of exclusions, and the exclusions for the “most serious” offences (if one wishes to claim that the “persistent disruptive behaviour” and “other” categories are made up of less serious offences) are far lower than expected. We have a situation where the threshold for permanently excluding for a single offence appears higher than the threshold for reporting the single offence to the police as a crime.

The view that permanent exclusions are currently high seems completely dependent on ignoring the realities of behaviour in schools. When you consider what teachers say they experience, and what is reported to the police, we should instead be asking why the rate of permanent exclusions is so low, and what can be done to make schools safer.




Behaviour is not all about relationships

March 1, 2020

It doesn’t take long in a discussion about behaviour for somebody who should know better to claim that the key to good behaviour is good relationships. It’s true that a bad relationship could undermine you with a class. It’s true that sometimes a good relationship with some of the dominant personalities will have a really positive effect on a class. It’s true that it’s better to have good relationships than bad with your students. However, we should not confuse cause and effect. It’s much easier to have a good relationship with a well behaved class. It’s easy to con yourself that the good behaviour you get from a class is because the class like you, but it’s far more likely that the class like you when your lessons are safe and orderly and you are not having to constantly tell them off.

In my experience, if you visit schools with strict and effective behaviour management, you also see really good relationships between staff and students. This is because relationships thrive where staff and students are happy and flourishing, and that happens best where there are boundaries and students are safe. The opposite of this is the school where good behaviour occurs mainly when students have been “won over”; where strangers and new staff may be treated with contempt, and life is hell for the teacher who isn’t liked.

Most of the teachers with the strongest relationships with students have earned them the right way: through firm discipline and commitment to their students’ well-being. But in an environment where winning over students is a prerequisite for an absence of abuse and defiance, there will be some adults who have prioritised these relationships above establishing the right expectations. In tough schools I have encountered teachers with “good relationships” who earned them by never confronting a student. I have sat observing lessons where the teacher had the most friendly and respectful conversations with even the most difficult students, but never said a word as the students subjected each other to abuse and harassment. Appeasement is a key strategy for surviving in a school where behaviour is based on relationships, rather than relationships being allowed to develop because of good behaviour. Because relationships are a two-way street, and students can choose whom they like, schools where good behaviour is conditional on relationships shift power to those students who want it. Those students can make it clear to teachers: “If you want an easy life, don’t get in my way”. At best this just means a lowering of academic standards, but often it means the departure of adult authority from the classroom. While this may be empowering for the ringleaders, it leaves most children unprotected from the mob, as staff fear the bullies among the children as much as their peers do.

Another aspect of schools where only good relationships will prompt good behaviour is the effect on new staff. Typically, it takes a long time to establish yourself. These schools are not a nice place to start teaching, and even experienced teachers will find themselves treated badly when they move schools. Students will have a perception of who they need to obey and who they don’t. Supply teachers will be driven out; new staff will frequently go under; and classes will boast of the teachers they reduced to tears. Worse still, students will learn to coordinate their disruption. New staff are the obvious target, but sometimes a particular part of the curriculum will become known as the one to disrupt. Sometimes staff will be targeted for their gender, sexuality or ethnicity. There are schools where good behaviour depends on good relationships, but you almost certainly won’t have that good relationship if you teach French, if you speak with an accent, or if you happen to follow the wrong religion.

Finally, let’s accept that teachers are all different. Some are more introverted than others. Some like football and crude jokes; others like opera and subtle wit. Not everybody likes small talk. The culture of having to win over students turns teachers into superficial people, more interested in playing to the crowd than imparting something profound. Halfway between a politician and a game show host, the teacher with the most winning personality may “succeed” despite poor subject knowledge and little skill at imparting it. They may even have the tricks of the demagogue: knowing how to manipulate individuals and how to lead a mob. Any teacher who is introverted, any teacher who is on the autistic spectrum, any teacher who cares more about their subject than being liked, is not welcome in the school where good behaviour depends on relationships. And it’s worse still for the misfits among the students. Expectations vary wildly between classrooms as boundaries shift according to relationships. There’s no chance to learn good habits and follow routines; every lesson becomes about navigating the social relationship between the teacher and the class. Instead of learning the useful skill of cooperating with people they don’t like, students are encouraged to work only where the class has tacitly decided the teacher is “fun” enough for them. You wouldn’t want to be an autistic child in a school where the only rule is, “Don’t get on the wrong side of the mob”. Ironically, SEND students are frequently used as an excuse to justify the fuzzy-boundaries, relationships-first approach to discipline. You only need to be a teacher for five minutes to see how often these are the kids who are failed most in such schools.

Behaviour is not all about relationships. Good relationships with your students are worth having whether they help behaviour or not. They are, however, no substitute for an orderly and secure environment where every teacher, and every student, can flourish.
