Archive for June, 2013


Should Language Students Learn to Translate?

June 16, 2013

As you may be aware, as well as blogging here, I also run (with not inconsiderable help from others) another website – The Echo Chamber – which provides links to other education blogs. Although part of the ethos of the site is to publicise blogs from teachers whose opinions are not widely represented in education debate outside of the blogosphere (i.e. people like me) the criteria for inclusion in the site are fairly broad and I frequently share blogs that I do not necessarily agree with.

One interesting blogpost that I shared yesterday, although I didn’t particularly agree with the conclusion, was about translation in language teaching. It has always surprised me when reading about the school days of people who were educated in the first half of the 20th century that language teaching (both ancient and modern) often seemed to include translation of passages of English into another language. Frequently this was with the intention of preserving the style and genre of the original, so poems were to remain poetic even after translation. This always surprised me because the level of fluency required for such a task is far, far beyond anything I was ever taught at school, despite getting a grade “B” in my particular language qualification.

I should have realised that this is another example of dumbing down and that translation (in either direction), the most obvious test of mastery of the written form of a language, is now out of educational fashion. The post I shared describes the debate in these terms:

Here are reasons usually mentioned for not using translation:

  • It is radically different from the four skills which define language competence: listening, speaking, reading and writing
  • It takes up valuable time which could be used for the four skills and comprehensible input in the target language
  • It discourages students from thinking in the foreign language
  • It is a bad test of language skills
  • It produces interference from the mother tongue
  • It tends to be text-bound, focusing only on reading and writing
  • It only focuses on form and accuracy
  • It is too hard and boring for many learners
  • It encourages lazy teaching, with teachers being able to practice without fluency
  • It is really only appropriate for training translators

Of these, I would argue that the prime reason for limiting translation is that it takes away valuable time from communication in the target language. In saying this, I am assuming that learning takes place primarily by natural acquisition processes.

On the other hand, some theoreticians argue that translation has a valuable role to play. Some reasons they put forward are as follows:

  • Translation helps expand a learner’s vocabulary
  • It helps students understand how the language works
  • It consolidates structures which can then lead to greater comprehension and fluency
  • It takes advantage of students’ knowledge of their own language; why not profit from this advantage which very young children do not enjoy?
  • It is the most efficient way to improve grammatical accuracy
  • Many students enjoy it
  • It helps students to monitor their accuracy
  • When done orally it provides opportunities for listening and speaking practice

Needless to say, attempts have been made to provide evidence for and against translation. Some of these can be found by doing an online search. There is, for example, evidence that when parallel groups of students are taught with or without translation into the target language, those who practice translation show improved accuracy.

The following is a response to the post which I received from a reader of this blog who works in education, but isn’t a teacher, which I found mirrored a lot of my own thoughts. (Before anyone asks: no, it isn’t from Michael Gove.)

I read French Teacher’s blog with great interest and not a little surprise, as I am not a foreign language teacher and had not realised that translation was in general so frowned upon. However there were two points in his post that particularly scratched at the edges of my brain.

First, one of the reasons he says is often advanced against translation: “It is radically different from the four skills which define language competence; listening, speaking, reading and writing”

And secondly, in his summing up of these reasons: “Of these, I would argue that the prime reason for limiting translation is that it takes away valuable time from communication in the target language. In saying this, I am assuming that learning takes place primarily by natural acquisition processes.”

When I started to think about these in the context of everything I know about language acquisition and learning to read (thank you Diane McGuinness and many others) I became very uncomfortable. First of all, how can rendering a French text into English be seen as something different from listening or reading, or an English text into French as something different from speaking or writing? And try as I may, I cannot understand how learning to translate into a foreign language can be said to take time away from communication in the target language, especially in writing. Is this not where the understanding of differences in idiom and ways of structuring thought should be developed, especially as the child moves from literal beginner renderings to more sophisticated expression?

And then I went further. What if there is a parallel between teaching children to read and write in their own language and teaching them to understand and communicate in foreign languages? Think about it.

For years we trained teachers in the mistaken belief that children could learn to read and ‘make meaning’ without going through the apparently tedious business of learning to decode efficiently. Now we know that fluent readers do decode: they simply do it so efficiently that it has become an automatic process of which they are completely unaware. What many teachers thought was a distraction that got in the way of ‘making meaning’ was in fact the essential pathway to highly skilled reading.

In the same way, are we deluding ourselves in thinking that children can simply be trained to think directly in a foreign language? Perhaps there is a parallel with decoding and encoding in learning to read and spell: perhaps what skilled second language speakers actually do is to translate their thoughts from their first language (i.e. encode) so rapidly and efficiently that they don’t even know they are doing it. If this is in fact the case, then by attempting to limit or avoid translation we could be doing our utmost to prevent children from developing the automatic processes they most need. Could this be why we find it so hard to turn out truly fluent speakers of other languages?

I would love to hear more on this from those who know more about language teaching than I do.

(And by way of an aside: my 12 year old, whose school French is clearly being taught in the prevailing mode, has just discovered Duolingo and can hardly be dragged away from the structured translation practice it provides – her comments are along the lines of ‘why didn’t anyone tell me this was how it all worked’.)

I would also be interested to hear from any language teachers about this, particularly if you do spend class time on translating and, even more so, if you encourage students to translate from English into another language.


A Maths Teacher writes…

June 15, 2013

This comment appeared below the line on my reblog of Joe Kirby’s review of Daisy Christodoulou’s book “Seven Myths about Education”. It refers to that book and the analogy of educational methods as a “cargo cult”. I liked it so much I thought it worth giving you a chance to see it above the line.

This looks like a very interesting book and one which I’m sure will confirm all my prejudices concerning the pedagogical model now being pushed by OFSTED. I’ve been teaching for over twenty years, and have encountered some fairly incoherent and damaging ideas from ‘experts’, yet it’s only over the last couple of years that I find myself literally stunned by some of the words coming from the mouths of inspectors and ‘consultants’; to the point where in the last week alone I’ve had to ask them three times to repeat what they’ve said just to make sure I heard them properly. I simply can’t accept that rational human beings can believe in a non-conflicted manner that good teacher explanation hinders learning and progress if it strays past ‘the 5 minute limit’, which was a phrase that was thrown at me six times while being given feedback.

I’d been observed introducing vectors to a year 10 class of relatively able students. It’s not an easy school by any objective measure. I was given a 3. Apparently it would have been a 2 with outstanding elements, except that my introduction, all told, with modelling, questioning and mopping up a couple of misconceptions, lasted 8 minutes and 34 seconds! (Seriously.) This means I require improvement. Short of recording my introduction and playing it back at double speed, I fail to see what I could do. The consultant, who was a Maths specialist, told me how he’d have done it. His explanation lasted 25 minutes. In fact, he eventually conceded that he couldn’t actually have done it himself any faster, so suggested maybe I should have broken it up over two lessons, despite having commented that all the class had grasped the concepts and made good progress. When I pointed out that his idea would halve the rate of progress he sort of smiled apologetically and gave a little shrug.

This man was not unintelligent. I think the shrug was a tacit acknowledgement that he was giving me inconsistent and contradictory advice. It was by way of an apology, but, in the name of consistency, he had to come out with this bullshit. He’s helping implement our new teaching and learning strategy.

Now, other than the fact that all this stands in direct opposition to everything Wilshaw has said about no fixed teaching models and the acceptability of a didactic approach, it is the sheer lunacy that sticks in my craw. I could not believe what I was hearing. I nearly grabbed him and shook him just to see if he was actually real and that I was not temporarily delusional. It’s just not acceptable that I should be forced to suffer such blatant assaults to my intelligence. Wilshaw makes all the right noises, but he seems to be spending too much time composing sound bites and none at all in ensuring his message is reaching the ‘frontline’.

The book looks great, but I can’t see its message ever getting through. OFSTED is now precisely the problem in education. I’m not entirely sure the cargo cult analogy is apt. Certainly, it’s a cult now; a cult whose dogma and ideology is far from fixed. It shifts according to whims of fashion and the subjective interpretation of the local priesthood. But it seems that even when its catechisms demand the impossible, the self-defeating or the contradictory, it’s very much a case of extra Ecclesiam nulla salus.


Which ideas are damaging education?

June 15, 2013

Just in case you weren’t aware, there’s a book out that I would recommend highly.

Joe Kirby's blog


“Education must resolve the teacher-student contradiction, exchanging the role of depositor, prescriber, domesticator, for the role of student among students.”

Paulo Freire, Pedagogy of the Oppressed, 1968


“Education still hasn’t learned that poorly designed curricula generate poor performance in both teacher and students.”

Siegfried Engelmann, Academic Child Abuse, 1992



Confused cargo cult ideas are damaging education


In their early encounters with Westerners, Pacific islanders saw cargo being delivered to islands from the sky. What seemed to them to draw in the cargo were headphones, handsignals and landing strips. To attract deliveries of goods, they set up ‘cargo cults’, building crude imitation landing strips and mimicking the handsignals of the people they had observed operating them, using coconut shells as headphones. They were then puzzled when the goods failed to materialise.

Some time in the late 20th and turn of…



Blogs for the Week Ending 14th June 2013

June 14, 2013

Statistical Data and the Education Debate Part 2: Why we can reach conclusions from limited data.

June 13, 2013

I have brought this post forward as I have just seen a number of people react to this OFSTED report by making some of the errors described here.

As I said last time, people often think probability can be left out of evidence-based decision making entirely. The most common version of this is when we dismiss people’s descriptions of their experiences as unrepresentative or (perversely) anecdotal. Probability is at the heart of how we reason from evidence (particularly limited evidence) to more general conclusions. If we see something happen, then (unless we are mistaken) it is impossible that it actually never happens, less probable that it is rare, and more probable that it is common. The more often we see something happen, and the more people we know who also see it happen, the more unlikely it is to be rare and the more likely it is to be common.

The use of probability to go from a limited set of data to a more general claim is part of how opinion polling works. Although opinion polls don’t ask everyone in the appropriate population for their opinions, if they ask enough people and there is no reason to think those people are unrepresentative of the wider population, then the opinions they express to the pollsters are likely to be close to the opinions of the wider population. Opinion polls usually give a margin of error indicating just how close to the opinions of the entire population their numbers are likely to be, which is calculated from the number of people in the pollsters’ sample.
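The margin of error mentioned here has a standard formula, which can be sketched in a few lines (this is my illustration, not from the post; it assumes simple random sampling and quotes the usual worst case of a 50/50 split at 95% confidence):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of n people. The worst case is
    p = 0.5, which is what pollsters usually quote."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of about 1,000 people gives the familiar "plus or
# minus 3%":
print(round(margin_of_error(1000) * 100, 1))  # → 3.1
```

Note the margin shrinks with the square root of the sample size, which is why quadrupling a sample only halves the margin of error.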

Now, our reasoning is affected if our tendency to see things happen, or the pollsters’ way of finding people to poll, is not random: the results are likely to be less accurate. But while some sort of bias in how the sample was selected might affect the probabilities involved, it remains a matter of probability, and bias can only change the probability; it doesn’t mean we know nothing at all. Biased sampling makes polling less reliable, but not necessarily so unreliable that it tells us nothing. The same goes for small samples. While asking fewer than a thousand people might make it far less likely that an opinion poll represents the opinions of the whole population to the nearest 3%, it might still tell us within 10% or 20%, and if it is claimed that “nobody” or “hardly anybody” or “only a minority” of people think something then that might be enough to settle the matter.

Once the role of probability in interpreting data is understood, we need to be careful about how easily opinions or experience are dismissed. There are those who reject any survey evidence outright for being only a tiny proportion of those who could have been asked. This is a big mistake. Because polling is based on probability then, given random sampling, the size of the total population is not usually a major factor in the accuracy of a poll: three thousand people is a very good sample whether the population is 5 million or 2 billion.
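The claim that population size barely matters can be checked directly by applying the finite population correction to the standard margin-of-error formula (again my own illustration, not the post’s): the correction term is so close to 1 for any large population that the answer is essentially the same for 5 million people as for 2 billion.

```python
import math

def moe_fpc(n, N, p=0.5, z=1.96):
    """95% margin of error for a sample of n from a population of N,
    with the finite population correction applied."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

n = 3000
for N in (5_000_000, 2_000_000_000):
    print(N, round(moe_fpc(n, N) * 100, 2))  # both print ≈ 1.79
```

The two margins agree to within a hundredth of a percentage point, which is the arithmetic behind the point in the paragraph above.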

A more common error is to assume that a small number of data points must tell us nothing, which brings us back to how easily the genuine experiences of real people are dismissed as “anecdotal” or “unrepresentative” because they are not based on information gathered from a large random sample. Apparently, seeing something frequently (like poor behaviour or bad management in schools) is no reason to think it is commonplace.

I think the best example I can give of the usefulness of even a small sample is to imagine testing a coin to see if it is biased towards landing on a particular side when thrown. The population of possible coin throws is, in principle, infinite, and if the coin were kept for the purpose of coin throwing the actual number of throws could be enormous. Yet if you were testing it for bias and it landed on heads every single time, how many throws would it take to convince you it was biased? A million throws would not prove it for certain; there is a tiny probability that an unbiased coin could land on heads a million times. But you could be sure beyond reasonable doubt long before that. You wouldn’t even need the scale of an opinion poll sample, say 1000 throws. The chance of getting heads every time when throwing an unbiased coin 10 times is 1 in 1024. The chance of getting heads every time when throwing an unbiased coin 5 times is 1 in 32, which, statistically speaking, makes 5 heads out of 5 throws a reliable indicator that a coin is biased.

All of this hinges on the strength of the result. It would take a lot more throws to determine whether the coin was biased if there were a minority of tails among the heads; the more consistent a result is, the less likely it is to occur by chance. On the other hand, if the claim to be disproved was not that the coin was unbiased, but that it was biased towards tails to some stated degree, it would take even fewer throws to show this to be implausible.

It also hinges on there being nothing biased about which throws are recorded. If the person writing down the throws is more likely to notice when the coin lands on heads than when it lands on tails, it might take more throws to get reliable evidence. However, if we know the probability of missing a tails then that can be factored into the calculations too. That sort of bias, if understood, does not ruin the experiment.
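The coin arithmetic above is simple enough to check directly (exact fractions rather than decimals, so the 1 in 1024 and 1 in 32 figures come out exactly):

```python
from fractions import Fraction

def p_all_heads(throws, p_heads=Fraction(1, 2)):
    """Probability that a coin with the given per-throw heads
    probability lands heads on every one of the given throws."""
    return p_heads ** throws

print(p_all_heads(10))  # → 1/1024
print(p_all_heads(5))   # → 1/32

# Against the weaker claim that the coin is biased *towards tails*
# (say 70% tails), five heads in a row is even more damning:
print(float(p_all_heads(5, Fraction(3, 10))))  # ≈ 0.00243
```

The last line illustrates the point about disproving a stated degree of bias: the more specific the claim being tested, the fewer throws are needed to make it implausible.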

Now let’s imagine a teacher chooses to teach at 5 different schools in their career and sees that behaviour is really bad in all 5. We can be reasonably confident, by the same maths as above (assuming no bias we haven’t accounted for creeps into the calculation), that behaviour is really bad in at least half of the schools that this teacher could have chosen to work at. Depending on the way the schools were selected, and the opportunities the teacher had, this could also tell us about a lot more schools, possibly a whole sector or all schools. On this basis it is simply not unreasonable for teachers to conclude things about the whole system from just a handful of experiences, if those experiences are likely to be representative. Slightly different results (say one good school), or the possibility of non-random choices of school, might make the result less reliable, but going to more schools, or listening to other teachers, will increase the reliability again. And if the claim is that schools with really bad behaviour are rare (rather than just 50% or fewer) then the reliability of that teacher’s experience as evidence against the claim goes up (or, equivalently, the number of schools needed to indicate the claim is unlikely goes down).
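The five-schools argument is the coin calculation with schools in place of throws; a quick sketch (my illustration, on the simplifying assumption that the schools were chosen at random) shows why the “bad behaviour is rare” claim fares worst of all:

```python
def chance_all_bad(schools_seen, true_fraction_bad):
    """Probability of finding bad behaviour in every school visited,
    if schools were chosen at random and the given fraction of all
    schools truly have bad behaviour."""
    return true_fraction_bad ** schools_seen

# If really-bad schools were half of all schools, 5 out of 5 bad is
# already a 1-in-32 event; if they were rare, it is near-impossible:
for fraction in (0.5, 0.25, 0.1):
    print(fraction, chance_all_bad(5, fraction))
```

So the rarer bad schools are claimed to be, the harder it is to square that claim with a teacher who has seen bad behaviour everywhere they have worked.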

Now the reason I focused on this is that one of the most common responses from the various forms of denialists who infest the education debate is to dismiss personal experiences. If the claim were that personal experiences told us about all schools, perhaps even most schools, then there would be a problem. However, if I merely claim that the sort of thing I have seen is common then, if I am not deluded, I am wholly justified, speaking from my own experiences, in claiming that the sort of things I describe in my blog are common in our secondary schools.


Statistical Data and the Education Debate Part 1: Effect and Cause

June 12, 2013


There is a lot of debate over what counts as evidence in education and I have barely begun to read up on the topic, but there are a few errors that I keep seeing made again and again in the use of statistics in educational debate that need to be emphasised. So this is the first of a trio of posts about statistical issues that have come up when I’ve been discussing education.

To begin with, statistical evidence is open to interpretation, and one of the most common errors is to interpret a statistical relationship as showing cause and effect the wrong way round. It is so common that I have become a habitual promoter of this particular song explaining the mistake.

Where I see this error in the education debate is where somebody dismisses an obvious reaction (R) to a problem (P) by saying “P happens where R happens, therefore R causes P”. 

So we have: 

  • Schools which exclude a lot of pupils have more behaviour problems. Therefore exclusion causes poor behaviour.

  • Where schools are really concerned with behaviour (perhaps shown by having extensive discipline policies) there is a lot of bad behaviour. Therefore, concern about behaviour and strict discipline policies causes bad behaviour.

  • Teachers who shout a lot/are stressed/dislike their classes have badly behaved classes. Therefore teachers cause the bad behaviour.

  • Schools which set their classes have lots of low ability students, therefore setting causes low ability.

  • Countries with effective education systems don’t have a lot of systems for accountability, therefore unaccountable schools lead to educational success.
  • Teachers who expect that students will behave, have students who will behave. Therefore, if you expect students to behave, they will. 

I could go on. Set out like this I think the error is obvious, but if the original claim is simply presented as “what the data shows” it can distract from the error being made in the interpretation.

Moving on from correlation though, most mistakes seem to be based around probability and the role it plays in interpreting data. People who are unfamiliar with statistics have a habit of assuming that evidence either proves a point absolutely or indicates nothing at all. In fact all data, indeed all evidence, can only indicate that something is more or less likely. When we have no opportunity to defer judgement, even evidence that is inconclusive might be useful and should not be dismissed. Often people think probability can be left out of evidence-based decision making entirely. I will look at this in the next blogpost in this series.



A Brief Comment on OFSTED and Teacher Talk

June 7, 2013

One of the recurring themes of my looks at OFSTED has been their blanket hostility to teachers actually talking. OFSTED guidance for PE states that inadequate teaching involves “too much teacher talk” (from here). My first big trawl through the OFSTED reports found ten different reports complaining that teachers talked too much. Even in an outstanding school teaching can still be criticised because “In a few lessons, teachers talked too much” (from here). This is from an organisation, you may remember, which has no actual guidance as to how much talking is appropriate and, according to their leader (who thinks a “didactic” teacher can be outstanding), doesn’t require a particular type of teaching.

When I have raised this before, people have suggested that inspectors may simply have observed teachers talking too much in those particular schools, but this still assumes that it is the role of OFSTED to judge quantity rather than quality in teacher talk. While there are many times when less talk (of the wrong kind) might create greater learning, there are equally likely to be times when more talk (of the right kind) might improve it. Yet such instances seem completely absent from the reports. Talking on the part of teachers is a mistake to be avoided, not a skill to be nurtured, in the eyes of OFSTED.

But just in case you think that this interpretation of OFSTED’s requirements is idiosyncratic on my part, it was pointed out to me on Twitter this week that one consultancy company has noticed it too and is making money out of it.


More details can be found here.

According to the promotional material, the course will help teachers:

Prepare for Ofsted’s new minimum teacher talk expectations

Clarify what must be done to succeed under Ofsted’s revised framework

This apparently involves “talking less and meaning more” and so those being trained will be taught to:

Build talk-less teaching into all lessons

Apply talk-less systems to develop pupils’ learning

Hard to blame the company for taking this opportunity. Clearly this is what schools expect OFSTED to be looking for.

Beyond the usual criticisms which I’d make of OFSTED’s dishonesty and dogmatism, there is an important point to be made here about what is being lost as a result of the assault on teacher talk. Harry Webb (the pseudonymous writer of the Webs of Substance blog) wrote an excellent defence of teachers talking earlier this week which I would recommend reading if you haven’t already. In particular he identifies explanation as crucial to teaching:

I recently found myself in a discussion with a consultant who wished to replace my use of the word “explain” with the word “tell”… I would … allow that the definition of “telling” could be expanded to include the idea of explanation; that there is no clear demarcation between the two. However, I do not believe that this was the consultant’s intent. By trying to reduce all exposition to merely “telling” I believe that he was trying to perform the same trick as those who argue that to teach knowledge is to teach rote lists of disconnected facts. By diminishing and trivialising the concept, it provides a vacuum in which alternative conceptions may flourish, such as those that have failed with tragic regularity throughout the twentieth century; such as those that this consultant was promoting.

Teachers must continually seek to improve the quality of what they say but teachers must always talk and talk a lot. It is absurd to suggest otherwise and I despair that our profession seem so at ease with such absurdities.

Explaining is about as fundamental to teaching as it gets. In many ways explaining is teaching. Not only is it wrong for OFSTED’s unofficial agenda to attack explanation by attacking teacher talk; even the official material, which claims no such bias, misses any hope of identifying good teaching by failing to identify explanation as something to be judged. The OFSTED criteria allow inspectors to condemn a teacher who cannot mark books, ask questions or assess progress effectively, but they don’t suggest inspectors judge whether a teacher can actually explain their subject.
