
Statistical Data and the Education Debate Part 1: Effect and Cause
June 12, 2013

There is a lot of debate over what counts as evidence in education, and I have barely begun to read up on the topic, but there are a few errors in the use of statistics that I keep seeing made again and again in educational debate, and they need to be emphasised. So this is the first of a trio of posts about statistical issues that have come up when I’ve been discussing education.
To begin with, statistical evidence is open to interpretation, and one of the most common errors is to interpret a statistical relationship as showing cause and effect the wrong way round. It is so common that I have become a habitual promoter of this particular song explaining the mistake.
Where I see this error in the education debate is where somebody dismisses an obvious reaction (R) to a problem (P) by arguing “P happens where R happens, therefore R causes P”.
So we have:
- Schools which exclude a lot of pupils have more behaviour problems. Therefore exclusion causes poor behaviour.
- Where schools are really concerned with behaviour (perhaps shown by having extensive discipline policies) there is a lot of bad behaviour. Therefore, concern about behaviour and strict discipline policies causes bad behaviour.
- Teachers who shout a lot/are stressed/dislike their classes have badly behaved classes. Therefore teachers cause the bad behaviour.
- Schools which set their classes have lots of low ability students, therefore setting causes low ability.
- Countries with effective education systems don’t have a lot of systems for accountability, therefore unaccountable schools lead to educational success.
- Teachers who expect that students will behave, have students who will behave. Therefore, if you expect students to behave, they will.
I could go on. Set out like this, I think the error is obvious, but if the original claim is simply presented as “what the data shows”, the mistake in the interpretation is much easier to miss.
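The point can be made concrete with a toy simulation. This is a hypothetical sketch (the numbers and the model are invented for illustration): behaviour problems drive exclusions, not the other way round, yet the two still end up strongly correlated, and the correlation alone cannot tell you which way the causation runs.

```python
import random

random.seed(0)

# Invented model: each school's level of behaviour problems is random,
# and its exclusion rate is mostly a *response* to those problems.
schools = []
for _ in range(1000):
    problems = random.random()                            # underlying poor behaviour
    exclusions = 0.8 * problems + 0.2 * random.random()   # school reacts by excluding
    schools.append((problems, exclusions))

# Pearson correlation coefficient, computed by hand.
n = len(schools)
mean_p = sum(p for p, _ in schools) / n
mean_e = sum(e for _, e in schools) / n
cov = sum((p - mean_p) * (e - mean_e) for p, e in schools) / n
sd_p = (sum((p - mean_p) ** 2 for p, _ in schools) / n) ** 0.5
sd_e = (sum((e - mean_e) ** 2 for _, e in schools) / n) ** 0.5
r = cov / (sd_p * sd_e)

print(f"correlation between exclusions and behaviour problems: r = {r:.2f}")
# The correlation is strongly positive even though, in this simulation,
# exclusions are caused *by* poor behaviour rather than causing it.
```

Someone looking only at the output could announce that “schools which exclude more have worse behaviour”, and the data would back them up, while the conclusion “therefore exclusion causes poor behaviour” would be exactly backwards.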
Moving on from correlation, most mistakes seem to be based around probability and the role it plays in interpreting data. People who are unfamiliar with statistics have a habit of assuming that evidence either proves a point absolutely or indicates nothing at all. In fact all data, indeed all evidence, can only indicate that something is more or less likely. When we have no opportunity to defer judgement, even inconclusive evidence might be useful and should not be dismissed. Often people think probability can be left out of evidence-based decision making entirely. I will look at this in the next blogpost in this series.
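The idea that evidence shifts likelihoods rather than settling questions outright can be sketched with Bayes’ rule. The numbers below are entirely made up for illustration; the point is only the shape of the calculation.

```python
# Hypothetical numbers: how much should one study shift our belief
# that some intervention works?
prior = 0.5              # belief before seeing the study
p_data_if_works = 0.7    # chance of this result if the intervention works
p_data_if_not = 0.3      # chance of the same result if it doesn't

# Bayes' rule: posterior probability given the observed result.
posterior = (p_data_if_works * prior) / (
    p_data_if_works * prior + p_data_if_not * (1 - prior)
)
print(f"posterior = {posterior:.2f}")  # 0.70: more likely than before, not proven
```

The evidence here neither proves nor disproves anything; it moves the probability from 0.5 to 0.7, which may well be enough to act on when a decision cannot be deferred.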
That all makes complete and obvious sense. Well nearly all. Not sure I get this bit: “Schools which set their classes have lots of low ability students, therefore setting causes low ability”.
I’ve never heard it claimed that schools which set their classes have lots of low ability classes. I’m assuming you have heard this claimed, which is why you’ve included it. But it doesn’t even need the obvious idiocy of “setting causes low ability” to make no sense at all. Why would you particularly want setting if you had lots of low ability students?
You would particularly want setting if you had lots of low ability students because while having one or two students unable to access the work is widely tolerated, it can get tricky if it’s, say, a third of the class.
You may have noticed some grammar schools hold off on setting for a lot longer than comprehensives.
I hadn’t noticed that. I’ll look out for it.