Yesterday I had the good fortune of attending the Impact Conference 2015, which gathered a truly stellar cast of speakers together in London. Trying to measure the effect of interventions when it comes to teaching and learning is a fascinating area, and one that I have come to via Professor John Hattie’s Visible Learning books, especially The Science of How We Learn. As a keen scientist (okay, biologist) I am also interested in the idea that by observing something you can’t help but change it; certainly the Hawthorne Effect is a useful reminder that many interventions will work simply by being a source of increased attention to detail and a raised level of effort. So how do we sift the useful from the useless, and how can we prevent the proliferation of educational homeopathy? These were just two of the questions I approached the conference with. Below is an outline of what I took from two of the excellent speakers on show, Professor Rob Coe and the aforementioned Professor John Hattie. Sadly, once again I cannot rein in my inefficient verbosity (some might call it verbal diarrhoea), so I will not be writing up the fantastic Sam Freedman and Philippa Cordingley, or the hilarious genius that is Dr Ben Goldacre, who also spoke at the conference.
Professor John Hattie (talk I)

I took a great deal from both of Professor Hattie’s talks, in particular the challenge to the assumption that just because something works it is good and therefore shouldn’t be changed. Reassuringly, there is very little you can do to decrease attainment (although, interestingly, labelling students has an effect size of -0.61), and it was heartening to hear him say that in the UK success is all around us; it is impossible for everyone to be in PISA’s top 5. Certainly one of the main takeaways from this talk was a re-emphasis on seeing learning through the eyes of the learner and using criteria from the learner’s point of view to measure impact. This was nicely summarised as “teachers who learn to be learners and students who learn to be teachers”. Another key point I eagerly agreed with was that the job of a teacher is not to help students realise their expectations but to help them “exceed what they think they will do”. During the first session Professor Hattie was also at pains to point out that he did not say that “teachers should not be researchers” in a recent interview. Instead he urged us to be “evaluators”, a not-so-subtle change in semantics. I actually fully agree with this. As teachers we are not trained in research, so it seems a task for which we are not optimally suited. One might argue that there should be more emphasis on research in ITT and other training courses, but my own opinion is that this would detract from learning the skills needed for actually teaching. That is not to say, however, that research should not be revisited at some point in a teaching career.
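As an aside for the statistically curious: effect sizes like the -0.61 above are usually a standardised mean difference (Cohen’s d). Here is a minimal sketch in Python of how one is computed; the scores below are entirely invented for illustration and are not from Hattie’s data.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups.

    Divides the difference in means by the pooled standard deviation;
    a negative value means the treatment group scored lower than the control.
    """
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented example: a "labelled" group scoring below an "unlabelled" group
labelled = [52, 48, 55, 50, 47]
unlabelled = [60, 58, 62, 57, 61]
print(round(cohens_d(labelled, unlabelled), 2))  # negative: labelling associated with lower scores
```

A rule of thumb often quoted alongside Hattie’s work is that anything above roughly d = 0.4 is a worthwhile intervention, which is why a negative value such as -0.61 is so striking.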
Professor Rob Coe

As joint author of What Makes Great Teaching? Professor Coe did not disappoint during his witty and insightful talk. He made it clear that it isn’t enough just to know the effect sizes for certain interventions; as teachers seeking to improve education, we should always evaluate what we do. I liked his point that using motivation to enhance attainment is putting the cart before the horse: convincing a student that your lesson is a “game they can win” raises attainment, which in turn has the knock-on effect of increasing interest and motivation. Professor Coe confirmed my belief that we do not allow enough time to elapse for an answer after asking a question, but extended this to waiting another 3-5 seconds after an answer to elicit even better responses. This is something I aim to do immediately, both with my classes and colleagues! The final point I’d like to pick up on from this part of the conference is the question “do we know a good lesson when we see one?” There is no doubt in my mind that grading lessons is an absurd practice, but it is wonderful to hear evidence to back this up:
- When two teachers observe the same lesson and one grades it “Inadequate”, the probability that the other will agree is just 10%. Even with thorough training in how to observe a lesson, the probability increases to just 40%.
- When an observer judges a lesson “Outstanding” the probability that the pupils are really making sustained, outstanding progress is just 5%.
Moral of this story? Do not grade lessons.
Professor John Hattie (talk II)

Opening on the slightly controversial theme of “neurotrash”, Professor Hattie argued that this area of educational research is interesting but does not actually get us anywhere; so often you can simply replace the term “brain” with “learner”. Turning his sights next to the cause célèbre known as “twenty-first century skills”, he suggested that by themselves they are irrelevant as they are devoid of content. In fact it is only where they come with content that you start to establish a transition from surface to deep learning. Indeed he argued that critical thinking and problem solving quite simply should not be taught outside of subjects. Another fantastic quote, this time on the importance of learning from failure, was “the second time it’s a mistake, the first time is a learning opportunity”.
I was introduced to a concept that I had not heard of before: James Nottingham’s Learning Pit, see here. This is another instant takeaway that I will look to explore and use to guide my classroom work. Again the words spoken by Professor Hattie were particularly pertinent: “feedback feeds on error”, and we must not stigmatise failure but instead encourage thorough reflection on, and analysis of, what went wrong and why. Overall I found yesterday’s conference to be a fantastic INSET/CPD opportunity and would recommend the January event to anyone interested in ways to measure and evaluate impact. The discussions with other delegates alone will be worth the cost of the ticket!
Header image taken from Wikimedia Commons.