Early this month, it was pointed out to me that there was a planned release from the Department for Education that would be “Examining the education and children’s social care background of children who have been cautioned or sentenced for an offence”. I feared the worst, because almost everything written on children’s offending seems to find reason to blame schools. Usually, it’s claimed that schools are at fault for expelling or suspending pupils, and not acting as amateur prisons keeping the most dangerous young criminals in classrooms where they can’t scare the public and can only harm teachers and children.
I was wrong. What has actually been published is not the usual attack on schools. “Education, Children’s Social Care and Offending Descriptive Statistics” is a rather thorough analysis which is careful not to promote the usual narrative.
I have been writing about behaviour and exclusions for some time now, and there is a lot in the report that is relevant to those discussions. I think this report will be informing a number of future blogposts as well as this one. I will be looking at a number of points that I think will be of particular interest to teachers. Please be aware though, that I am selecting a few points of interest, not attempting to summarise the report.
The research is produced jointly by the Department for Education and the Ministry of Justice. It looks at the data for 3 categories of children selected from the 3 cohorts who were in year 11 in the years 2012-2015. It explains:
….three offending groups are identified in this publication: approximately 77,300 children who had been cautioned or sentenced for an offence, which is equivalent to 5% of the total pupil cohort; approximately 18,000 children who had been cautioned or sentenced for a serious violence offence (equivalent to 1.1% of the total pupil cohort), and approximately 12,300 children whose offending had been prolific (equivalent to 0.8% of the total pupil cohort).
This is not a sample, but as far as possible, an attempt to collect all the data from the pupils in those cohorts who fit the relevant criteria.
The first thing to notice is that, when you consider that the size of an average state secondary school is about 1,000 pupils, these children are not uncommon. The data on offending covers children aged 10-17, but most first offences occur between the ages of 14 and 16. Ignoring changes in crime rates since then, do we secondary teachers think of our pupils as a population where 1 in 20 will have been cautioned or sentenced before the age of 18? Do we realise there is likely to be 1 violent young criminal in every 90 children? We might teach that many children before lunch some days. Are we aware that we can expect there to be one prolific young criminal in every 125 pupils? That’s likely to give you an average of more than 1 in each year group. If you account for the fact that these pupils will not be evenly distributed, and that you can expect a lot more of them in a boys’ school or a school in a disadvantaged area, it’s actually quite frightening. It’s also worth considering that these are only the young criminals who were caught and cautioned or sentenced. Secondary school populations can include a lot of young people who have committed, or will soon commit, criminal offences, including both violent offenders and habitual criminals. This is a fact teachers should be reminded of in every CPD session on behaviour, and I intend to blog further thoughts about it at some point in the future.
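This back-of-the-envelope arithmetic can be checked directly. A minimal sketch, assuming (unrealistically, as noted above) that a hypothetical 1,000-pupil school mirrors the national cohort exactly:

```python
# Rough sketch: expected numbers of each offending group in a hypothetical
# 1,000-pupil secondary school that mirrors the national cohort exactly.
school_size = 1000
rates = {
    "cautioned or sentenced": 0.05,     # ~1 in 20
    "serious violence offence": 0.011,  # ~1 in 90
    "prolific offending": 0.008,        # ~1 in 125
}
for group, rate in rates.items():
    expected = school_size * rate
    per_year_group = expected / 5  # years 7 to 11
    print(f"{group}: ~{expected:.0f} per school, ~{per_year_group:.1f} per year group")
```

On those rounded rates, that is roughly 50, 11 and 8 pupils per school respectively.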
The second thing I noticed about the report was how thoroughly it made an effort to discourage readers from making assumptions about causation.
Care should be taken when interpreting this analysis as the findings do not imply a causal link between the educational or children’s social care characteristics and being cautioned or sentenced for an offence. Future work using these data will aim to build upon this analysis to better understand the relationships between the outcomes and characteristics in this publication.
The education variables included in this paper have generally been analysed independently of each other. It is important to note that there may be links between these key variables which have not been factored into the analysis, and other factors which could not be taken into account.
Children who had been cautioned or sentenced for a serious violence offence and children whose offending had been prolific represent a small, atypical group of young people; their results should not be assumed to be representative of all children who have been cautioned or sentenced for an offence or young people more generally.
It is rare for any writing in education to be this clear in stating that a risk factor does not indicate causation. It also makes it clear that just because the risk of an outcome is increased for one group of pupils it doesn’t mean that the outcome is very likely. For instance, it points out that while 76% of those cautioned or sentenced for a serious violent offence have been eligible for FSM (Free School Meals), only 2% of those who have been eligible for FSM have been cautioned or sentenced for a serious violent offence. Not only is FSM not demonstrably a “cause” of violence, it does not allow us to predict who will offend in that way. Unless people understand that distinction, they will be prone to thinking they have found the cause, or causes, of criminality.
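The FSM example is a textbook base-rate distinction, and it is easy to check numerically. A rough sketch using the report’s rounded figures (the implied ever-FSM share at the end is my own back-calculation, not a number from the report):

```python
# Sketch: why a 76% "backward" probability does not imply a high "forward" risk.
p_violent = 0.011            # serious violence offenders as a share of the cohort
p_fsm_given_violent = 0.76   # of those offenders, share ever eligible for FSM
p_violent_given_fsm = 0.02   # the report's figure the other way round

# Offenders who were ever FSM-eligible, as a share of the whole cohort:
p_violent_and_fsm = p_violent * p_fsm_given_violent
print(f"{p_violent_and_fsm:.4f}")  # 0.0084, i.e. under 1% of the cohort

# Implied ever-FSM share of the cohort (Bayes' rule rearranged):
implied_fsm_share = p_violent_and_fsm / p_violent_given_fsm
print(f"{implied_fsm_share:.2f}")  # 0.42, i.e. roughly 40% ever FSM-eligible
```

The two conditional probabilities differ so much simply because FSM eligibility is common and serious violent offending is rare.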
Another way in which this report is clearer than most is the way it spells out the difference between suspensions and permanent exclusions.
A suspension is where a pupil has been temporarily removed from the school, whilst a permanent exclusion is when a pupil is no longer allowed to attend a school.
Perhaps it is just that a low bar has been set previously, but it is a relief to see anyone making a clear distinction between the two and analysing them separately rather than talking about “exclusions” and leaving the reader to look for contextual clues as to which is being referred to. This enables the report to reach clear conclusions about the correlations between offending and suspensions and permanent exclusions.
There is a clear connection, but there is no inevitability about it. There is no school to prison pipeline here. The suspended may be more likely to be cautioned or sentenced, but they are not likely to be cautioned or sentenced. The permanently excluded are likely to be cautioned or sentenced (unsurprising when you consider that only the most extreme 1 in 1,000 pupils are permanently excluded each year), but the vast majority will not be cautioned or sentenced for a serious violent offence or for a prolific number of offences.
Another point that perhaps should inform discussion is that the offending figures for pupils in Alternative Provision are lower than those given above for permanently excluded pupils.
Looking only at the pupil cohort which had ever been registered at a state or non-state funded AP setting, 41% had ever been cautioned or sentenced for an offence. (This rises to 45% for those that were registered at state funded AP). The rates for the other offending groups are much lower: 14% of those at any AP setting had ever been cautioned or sentenced for a serious violence offence, and 15% of those whose offending had been prolific.
In some of the narratives about a “school to prison pipeline” AP is one of the villains, with claims that attending AP is a mechanism by which permanently excluded children are turned into criminals. These figures (almost a third lower than among permanently excluded pupils) should be unsurprising because not everyone in AP is there for behaviour, and where they are, AP is meant to provide specialist help suited to those pupils. However, it seems important to note that pupils in AP are not at greater risk of offending than permanently excluded pupils.
Another issue raised by the report is Special Educational Needs.
80% of those who had been cautioned or sentenced for an offence, and 87% of those cautioned or sentenced for a serious violence offence, had been recorded as ever having SEN. 95% of those whose offending had been prolific had been recorded as ever having SEN. 45% of the all-pupil population had been recorded as ever having SEN at some point up to the end of KS4.
As mentioned in a previous blogpost, FFT Datalab had looked at the scale of SEN labelling in a blogpost entitled More pupils have special educational needs than you might think. Looking at the cohort of students who were in year 11 in 2016/17, they found that “44% of the cohort had ever been classified as having SEN by the time they reached the end of Year 11”. Here we have a similar figure for the combined cohorts for three earlier years. This is a system where it is very easy to be labelled SEN, but also one where the risk of being labelled seems connected with the risk of offending. Again, this is something I hope to look at in a future blogpost.
An unsurprising part of the data is the effect of gender on offending.
Male pupils were over-represented amongst children who had been cautioned or sentenced for an offence, with children whose offending had been prolific containing the highest proportion, at 84%. This is marginally higher than children who have been cautioned or sentenced for a serious violence offence, which is also 84% to the nearest whole number. In comparison, 76% of all children who had been cautioned or sentenced for an offence and 51% of the pupil cohort was male
I think this raises two issues. Firstly, why does this risk factor for criminal behaviour often get ignored, while others (FSM, SEND, exclusions) are used not only as explanations for offending, but also as reasons not to hold offenders responsible? Secondly, there is increasing pressure to use the word “gender” to mean gender identity rather than biological sex. This data predates the push to use the word “gender” in that way, but it seems important to make sure that in future we know precisely which one we are referring to.
I’m not going to go into the key findings on ethnicity for two reasons. Firstly, there seems to be a lack of useful detail in the report. There may be some in the accompanying data, and I may look at that at a later date. Secondly, school demographics have changed a lot since these cohorts, so I’m not sure how useful the analysis would be to schools at the moment.
Finally, I should point out there is a lot more interesting data about the connection between offending and exclusions and suspensions. I think this will have to be revisited in a future blogpost.
Update 3/4/2022: It’s been pointed out to me that in the paragraph about how many young offenders there are, I mentioned the size of an average secondary school, but then talked about the frequency of offending using the whole cohort figures which would have included those in AP of one sort or another. I don’t think it substantially changes my argument, and it is impossible to modify the frequencies accurately. However, we can use the figures on page 26 showing how many of those cautioned have ever attended AP to get an upper bound for how much I could be out.
I asked:
…do we secondary teachers think of our pupils as a population where 1 in 20 will have been cautioned or sentenced before the age of 18?
This was based on 5% of the pupil cohort having been cautioned or sentenced. It remains correct if “population” refers to the cohort rather than the population of an average secondary. But we can consider how the figure would change if we only counted those not in some form of AP. According to the report, 26% of those who have been cautioned or sentenced attended AP at some point. If every one of those 26% had been in AP for their entire time at secondary school, that would reduce the figure from 1 in 20 to 1 in 29. According to page 41, at least 54% of those pupils who attended AP did so after their first offence, so it seems likely the 1 in 20 figure is a lot more accurate than 1 in 29, but the true figure is presumably somewhere in between.
I also asked:
Do we realise there is likely to be 1 violent young criminal in every 90 children? We might teach that many children before lunch some days.
Again, the first sentence remains true when considering the whole cohort. However, if we assume we are talking about pupils not in AP, then (again according to page 26) up to 37% of those who are cautioned or sentenced for serious violence have attended AP at some point. Again, it’s unlikely that anywhere near 37% will be in AP at any one time, but if they were, that would change the figure from 1 in 90 to 1 in 143. Although this is likely to be a drastic under-estimate of the true frequency (two fifths of those who are cautioned or sentenced for serious violence and attend AP only attended AP after their first offence), we can observe that 143 is not that many pupils, and many teachers regularly teach 143 children in a day.
Finally I said:
Are we aware that we can expect there to be one prolific young criminal in every 125 pupils? That’s likely to give you an average higher than 1 in each year group.
This is the figure that probably holds up least well. Prolific criminals are the group most likely to have attended AP. Page 27 says that 57% of them have attended AP. Again, it is unlikely that 57% would all be in AP at one time, but if that were the case, it would change the figure from 1 in 125 (actually that was with a bit too much rounding; it should have been 1 in 133) to 1 in 309. That would bring us nearer to 1 per key stage in an average secondary rather than 1 per year group.
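All three adjustments in this update follow the same pattern: if a fraction f of an offending group were out of mainstream schools in AP, the mainstream frequency moves from 1 in N to roughly 1 in N/(1 − f). A rough sketch using the rounded 1-in-N figures from the text (the report’s unrounded counts give slightly different answers, which is why the first result below comes out as 27 rather than the 29 quoted above):

```python
# Sketch: if a fraction ap_fraction of an offending group is assumed to be
# in Alternative Provision at any one time, a frequency of 1 in n among the
# whole cohort becomes roughly 1 in n / (1 - ap_fraction) in mainstream.
def adjusted_one_in_n(n: float, ap_fraction: float) -> float:
    return n / (1 - ap_fraction)

print(round(adjusted_one_in_n(20, 0.26)))   # 27 - cautioned or sentenced
print(round(adjusted_one_in_n(90, 0.37)))   # 143 - serious violence
print(round(adjusted_one_in_n(133, 0.57)))  # 309 - prolific offending
```

These are upper bounds on the adjustment, since they assume every AP attendee was in AP for their whole secondary career.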
As I said, I don’t think this affects any of my points in this, or my later post, but I thought I’d better acknowledge it. Thanks to Kat Stern for pointing it out.
Chasing Up Another Fake Statistic About Exclusions
March 30, 2022

The Guardian frequently prints false information about suspensions and exclusions from school. I’ve just found an example from earlier this month. In this article they claimed the following:
This struck me as unlikely, as there has been very recent research, from the Department for Education and the Ministry of Justice, on the link between exclusions and offending. That showed that of roughly 5,000 young people who received a custodial sentence before the age of 18, the majority had never been permanently excluded, although most had been suspended at some point.
This is broadly consistent with other sources. For instance, FFT Education Datalab found that fewer than 20% of 11,000 young people who had been in custody between the ages of 16 and 18 had been permanently excluded. I’m not aware of any credible source that gives a higher figure, although because suspensions were for many years called “Fixed Term Exclusions”, there are sources (e.g. this) that give a figure for “exclusions” that almost certainly includes suspensions and is, therefore, much higher.
So where did the Guardian’s figure come from? As with so much bad information in the media on this issue, it comes from a report by campaigning organisations: in this case, Agenda and the Alliance for Youth Justice. (The aptly named charity Agenda previously featured in another blog post I wrote about false claims about exclusions.) In their report they claim:
The source they give for this is a report on Education in Youth Custody from 2016, published by The Parliamentary Office of Science and Technology (POST). This report states that:
This gives two sources, which is a bit of a warning sign if one is using the statistics to make comparisons. The first is a 2015 article from the British Journal of Community Justice. Researchers distributed questionnaires to just 85 young people in YOIs, got 47 responses (not all of them complete), and found that:
The POST research appears to have considered a survey of 45 to be a reliable data point. Also, the writers of that research appear to have missed that it is 63% of 89% (about 56%), not 63%, who had been permanently excluded. The second source is this 2012 report from HM Inspectorate of Prisons, which included a survey of around a thousand inmates in YOIs. It included the following:
That “n=19” refers to the number of responses from young women to the question about exclusions. Only 25 young women were surveyed, so presumably 6 did not answer this question, and 5 said they hadn’t been excluded. The use of the phrase “at some point” indicates this refers to suspensions (then called “Fixed Term Exclusions”), as does the fact that elsewhere it states that:
So, it would appear that POST’s statistics involve multiple errors and rely on ludicrously tiny sample sizes from surveys not completed by all recipients, while ignoring a data point that clearly contradicted one of the claims being made. POST’s parliamentary website states:
Not very impressive. Also, it’s not impressive that their obviously false statistics were repeated by two further sources. All I can say is: Never believe any claim about exclusions that appears in the media or comes from a charity or campaign group. They are frequently incorrect.
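As a final check, the arithmetical slip I attributed to POST earlier (reporting 63% rather than 63% of 89%) comes down to multiplying conditional percentages, which takes one line to verify:

```python
# Sketch: 63% of a subgroup that is itself 89% of respondents is not 63%
# of all respondents, but roughly 56%.
p_subgroup = 0.89          # share of respondents in the relevant subgroup
p_within_subgroup = 0.63   # share of that subgroup permanently excluded
print(f"{p_subgroup * p_within_subgroup:.0%}")  # 56%
```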