Much has been written about the inaccuracy of the polls during the campaign. Pollsters’ focus was on the difficulty of predicting the ‘challenger parties’, but in fact most polls accurately estimated the SNP, UKIP, Green and Liberal Democrat share of the vote. The problem was the two main parties: some got close to the Conservative share (certainly within the margin of error), but all overestimated the Labour share.
It is important to remember that the Exit Poll was very close to the actual result – it uses a very different methodology from other polls and is arguably the only one designed to be a true prediction of the election outcome. In part this is because, in traditional campaign polls, the intricacies of the UK’s political system make it very difficult to translate a projected share of votes into the number of seats a party would win with those votes.
But clearly the polls were not as accurate as in past elections. So what happened? Why was the Conservative share under-estimated and the Labour share over-estimated? What does accurate polling rely on?
1. A robust and representative sample
As a pollster, the number one question you get asked is “how can I trust your results… I’ve never been polled”.
George Gallup famously said that in order to appreciate the taste of a soup, you don’t need to eat the entire cauldron and scrape the bottom – it is sufficient to mix the soup well and eat one spoonful. The most important part of this is the mix, and it may be that, for a number of possible reasons, the sampling approach was systematically at fault or the results were not weighted appropriately.
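To make the weighting point concrete, here is a minimal sketch of post-stratification weighting – the standard way pollsters correct a skewed sample. All the numbers and group names are invented for illustration; real weighting schemes use many interlocking demographics.

```python
# Hypothetical sample of (demographic group, stated vote) pairs.
# "young" respondents are under-represented relative to the population.
sample = ([("young", "A")] * 30 + [("young", "B")] * 10 +
          [("old", "A")] * 20 + [("old", "B")] * 40)

# Assumed known population shares for each group (illustrative).
population_share = {"young": 0.5, "old": 0.5}

# Shares of each group actually present in the sample.
n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Weight each respondent so group totals match the population.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

def weighted_share(party):
    total = sum(weights[g] for g, _ in sample)
    return sum(weights[g] for g, v in sample if v == party) / total

# Unweighted, party A polls 50%; after reweighting the under-sampled
# "young" group, its estimate rises to about 54.2%.
print(round(weighted_share("A"), 3))
```

If the weighting targets themselves are wrong – or if the people willing to answer a poll differ from non-responders *within* each demographic cell – the correction cannot fix the mix, which is one candidate explanation for a systematic miss.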
2. Asking questions in the right way
A well-documented hypothesis for the inaccuracy of the election polls is that there were a significant number of people who intended to vote Conservative, but for a range of reasons chose to not disclose this to pollsters. This is thought to be the primary driver of the 1992 polling calamity (when the polls were even further out of line with the result than this time). The 2010 general election suggested that, rather than shy Conservatives, there was in fact a shy incumbent factor, with most polls underestimating the Labour share of the vote.
3. Accounting for late swing
The theory goes that the polls were right, inasmuch as they accurately reflected the voting intention of a representative sample at the time, but that en masse ~100,000 people changed their minds as they walked into the polling station.
Some commentators, including Keiran Pedley of the Polling Matters podcast, suggest that the writing was on the wall before the results due to the Conservatives’ lead on the economy, and Miliband’s consistent trailing of Cameron in terms of who people thought would make the best Prime Minister (including among a significant number of Labour “intenders”).
Other theories include a ‘lazy Labour’ phenomenon, in which Labour supporters did not turn out as expected. Many pollsters significantly overestimated turnout, which would have favoured Labour.
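A small sketch shows why turnout assumptions matter so much: with two parties level in stated intention, the projected shares are driven entirely by the turnout probability the pollster assigns to each party’s supporters. The figures below are invented for illustration, not estimates of what actually happened.

```python
def projected_share(intenders, turnout):
    """intenders: {party: count of stated supporters};
    turnout: {party: assumed probability each supporter actually votes}."""
    votes = {p: intenders[p] * turnout[p] for p in intenders}
    total = sum(votes.values())
    return {p: votes[p] / total for p in votes}

intenders = {"Lab": 500, "Con": 500}      # level pegging in the raw poll
assumed = {"Lab": 0.85, "Con": 0.85}      # pollster's turnout model
actual  = {"Lab": 0.70, "Con": 0.85}      # 'lazy Labour': supporters stay home

print(projected_share(intenders, assumed)["Lab"])           # 0.5
print(round(projected_share(intenders, actual)["Lab"], 3))  # 0.452
```

A 15-point gap in turnout between the two parties’ supporters is enough to turn a dead heat into a near five-point lead, without a single respondent misreporting their intention.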
4. Interpreting the polls
The final, and potentially most alarming, explanation comes from the economist John Maynard Keynes: “It is better for reputation to fail conventionally than to succeed unconventionally.”
After the results were announced, one polling company (Survation) revealed that it had conducted a poll with the “right” result but had not published it for fear that it was an outlier. This herding, or publication bias, is hugely important and in no way implies a conspiracy. There are lots of reasons to be sceptical about outliers – as Ben Page of Ipsos MORI is fond of saying, “if it’s surprising, it’s probably wrong” – but each pollster acting rationally, tinkering with its sampling and weighting schemes to remain “in line”, certainly has the potential for consequences like last Thursday’s.
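A toy simulation makes the herding mechanism explicit. The numbers here are entirely hypothetical: each pollster’s raw estimate is noisy around the truth plus a shared methodological bias, and before publishing, each shrinks its figure halfway towards the industry average. The published polls then agree much more with one another, but the shared bias is untouched.

```python
import random

random.seed(1)
truth = 37.0        # hypothetical true vote share (%)
shared_bias = -3.0  # common methodological error hitting every pollster

# Ten pollsters' raw estimates: noisy around (truth + shared_bias).
raw = [truth + shared_bias + random.gauss(0, 2) for _ in range(10)]

# Herding: shrink each published figure halfway to the pack average.
avg = sum(raw) / len(raw)
published = [0.5 * r + 0.5 * avg for r in raw]

spread = lambda xs: max(xs) - min(xs)
# The spread between pollsters halves, while the average - and its
# three-point bias - is exactly unchanged.
print(round(spread(raw), 2), round(spread(published), 2))
```

The consensus looks reassuringly tight, yet is no more accurate, which is why a run of agreeing polls is weaker evidence than it appears.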
This debate will rage on.