Where were you at 10pm on election day, a year ago this week? I was in the office and the TV screen with the exit poll was a long way away, and suddenly seemed even further away as the words “hung parliament” drifted across the open-plan desks.
Once again, most of the opinion polls had got it wrong: 2015, the 2016 referendum and now this. Because I take the view that opinion polls are the worst way of finding out what people think, apart from all the others, I had got it wrong too. I assumed that Theresa May was heading for an increased majority, even as I tried to keep an open mind about the possibility of a different outcome.
The important thing about mistakes is to learn from them, so what have we learned in the year since that shock? This week I took part in two attempts to learn the lessons. One was a conference organised by Professor Patrick Sturgis of the University of Southampton, to consider a report on opinion polling by a committee of the House of Lords.
Prof Sturgis carried out the inquiry into why the opinion polls got it wrong in the previous election. In 2015 they suggested a hung parliament and we got a Conservative majority – the opposite of 2017. There hasn’t been the same postmortem this time, partly because the reasons for the error are more obvious, so the House of Lords report has been the focus for debate instead.
The Lords inquiry, chaired by David Lipsey, allowed politicians to let off steam. The report expresses their frustration at the way the polls dominated election coverage. This meant, for example, that, as Jeremy Corbyn was given no chance, his policies were not subjected to the same level of scrutiny as Theresa May’s.
The report also served a purpose in coming out against a ban on opinion polls before elections. This would be the wrong way to respond to the frustration. As Joe Twyman, the former YouGov director who has set up a new outfit called Deltapoll, told the conference, “I would love it – absolutely love it. Pollsters like me would be rich.”
If polls were banned then the elite would have the information, while the poor would have rumours. It would be impossible to prevent polls being carried out, so a ban on publication would mean hedge funds would move the markets on secret information. That was why France, which used to ban polls in the fortnight before elections, cut its ban to 24 hours.
Lord Lipsey’s report wisely concluded that education and openness are the way forward. It is hard, because polls will always influence journalists. We ought to try to reflect uncertainty better, but it goes against human nature. We know we shouldn’t pay more attention to an outlier than to a boring poll that shows no change, but people are interested in drama and the horse race.
This is not a conspiracy by the mainstream media, because the same thing happens on social media. Last week Remainers were more likely to share a YouGov poll showing a seven-point lead for “wrong to leave the EU” over “right” than they were to share this week’s poll showing them neck and neck.
But we journalists need to try harder to prevent assumptions about the horse race from dictating coverage. In 2017, for instance, Corbyn should have been asked tougher questions about how he would have dealt with the Scottish National Party in a truly hung parliament.
We also need to try to understand better how opinion polls work. So I took part in a second attempt to learn the lessons of 2017 this week – by reading Anthony Wells’s excellent summary of why the polls got it wrong.
In short, the polls got it wrong in 2017 because they got it wrong in 2015. The pollsters over-corrected and got it wrong in the opposite direction. The main cause of the error in 2015, Prof Sturgis found, was that samples had become unrepresentative. The pollsters tried to fix the problem, and, after the 2017 election, we found that they had succeeded. If they had left it at that, they would have got the result roughly right.
Before the election, however, they could not be sure, so most pollsters made further adjustments to try to predict turnout better – that wasn’t a big problem in 2015, but it was something that threw the polls out in the 2016 referendum.
But you can never know until after the votes have been counted. This is particularly true of innovations. The great success of the 2017 election was YouGov’s model, a constituency-level projection based on a large sample. It suggested a hung parliament – as indeed YouGov’s conventional poll would have done if it hadn’t been adjusted.
When it came to polling day, YouGov’s bosses, including Joe Twyman, didn’t believe their new model and chose to be judged on their (adjusted) normal poll. And got it wrong, as everyone else did except Survation.
Herein lies the great danger for next time. At the next election, everyone will want a model like YouGov’s. But we simply don’t know if it will perform as well a second time. Lord Ashcroft last year had a model built on similar principles to YouGov’s and it showed a Tory majority of 60.
In the end, pollsters and journalists will always be fighting the previous election while politics moves on. We ought to be striving constantly to get it right, but in the end we should expect elections to surprise us. There would be something terribly wrong if we could predict perfectly how people will behave.