Why the Opinion Polls were Wrong (and an inquiry won’t help)

It’s official: the opinion polls conducted in the run-up to the UK general election were wrong – even the polling companies have said so.

The good news is that there is going to be an inquiry into the pollsters’ failures. The bad news is that it will be carried out by the British Polling Council – an organisation that clearly has a vested interest in perpetuating the use of opinion polls. This is much like asking the Astrological Association of Great Britain (and yes, there is one) to conduct an inquiry into why your horoscope has no useful insight into your future. If your starting assumption is that there is a relationship between astronomical phenomena and events in the human world, you aren’t going to be looking for answers with complete objectivity. Similarly, if you believe that people can reliably predict their future actions and will report them accurately in market research interviews, you are guaranteed to miss the reasons that the polls got it “so wrong”.

In 1992, after another general election polling failure, the industry engaged in a “very comprehensive” investigation into the methodologies used. What have we learned in the intervening period that will make this review any better? Or is it just going to be a case of introducing a fiddle factor that would have made the last results right, and that will persist until the next time the polls are wrong?

To understand the polling failure you need to understand the nature of the human mind and, in particular, the extent to which our unconscious mind drives our behaviour. Findings from experiments conducted by behavioural psychologists and behavioural economists have demonstrated time and again that we don’t think in the way we think we do.

One key issue for pollsters is working out who will actually vote. Lots of people think they will, but in reality not everyone does; only two-thirds of those eligible to vote did so at the last election. Looking back over the last few decades, turnout has varied between 79% in 1974 and 59% in 2001. Whether or not someone will actually vote is therefore a significant potential source of error.
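To make that concrete, here is a minimal sketch in Python of how the same raw responses produce different headline figures depending on the turnout assumption applied. The party names, likelihood scores and cut-offs are all invented purely for illustration; no real polling method is being reproduced here.

```python
# Hypothetical illustration: how turnout assumptions move a poll's headline
# figure. All names and numbers below are invented for the example.

raw_sample = [
    # (stated preference, self-reported likelihood of voting, 0-10 scale)
    ("Party A", 10), ("Party A", 9), ("Party A", 8),
    ("Party B", 10), ("Party B", 6), ("Party B", 4),
    ("Party B", 3),  ("Party A", 7), ("Party B", 5),
    ("Party A", 10),
]

def headline_share(sample, min_likelihood):
    """Share of Party A among respondents deemed 'likely voters'."""
    likely = [pref for pref, lik in sample if lik >= min_likelihood]
    return 100 * sum(p == "Party A" for p in likely) / len(likely)

# Counting everyone vs. applying a likely-voter cut-off gives different
# answers from identical raw responses.
for cutoff in (0, 5, 9):
    print(f"cut-off {cutoff}: Party A on {headline_share(raw_sample, cutoff):.0f}%")
```

With these made-up numbers the headline moves from 50% to 75% purely by changing who is assumed to turn out – and, crucially, the self-reported likelihood scores themselves suffer from exactly the prediction problem described above.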

Another problem is the issue of what feels OK to say in a survey. When opposition politicians and some of the mainstream media heavily associate one party with self-interest, people may adapt what they say when asked about their voting intentions so as not to risk feeling selfish; it may only be an online survey, but our poll response feels witnessed in a way that our actual vote does not. One of the problems with the so-called ‘Shy Tory’ theory that has been used to explain this phenomenon is that the people who peddle it are often convinced that their simplified political dichotomy is fundamentally true. To quote Peter Kellner, president of YouGov, in his recent article “We Got it Wrong. Why?” …

“This year, as in 1992, the Tories have a weak image. They are widely thought to be out of touch and for the rich. But, at the margin, there may be some people who both have a poor view of the party but nevertheless think it will run the economy better than Labour. They are “shy Tories” not because they are unwilling to admit their choice of party to a stranger but because they really would like to support someone else but, faced with a ballot paper in the privacy of the polling booth, simply can’t.”

How, you might well ask, does Peter Kellner have such powerful insight into people’s minds that he knows many have voted for a party they don’t want to support? Because of other answers to questions in surveys and focus groups! Of course, this research is the 100% reliable type – not like the opinion polls that are the subject of an industry-run inquiry (although quite what makes them different at a psychological level is anyone’s guess).

We should also consider the nebulous nature of voting behaviour – again, a reflection of the unconscious mind’s role in what we do. A recent article in Research Digest detailed studies that have identified the influence of factors such as candidate appearance, personality, the choice of polling station venue, the weather, non-political bad news (like shark attacks) and voter emotions on people’s behaviour at the ballot box. Some of these factors are so time-sensitive that they couldn’t be appropriately represented in a pre-election poll; for the others, it is unclear whether they are unconsciously referenced in the same way in the innately rational question-and-answer situation of a survey as they are at the polling station.

On the day of the election, exit polls were conducted that got much closer to the actual result (although it should be noted that they too have been wrong in the past). I was watching television when the exit poll results were announced. A collection of pundits and polling experts debated which they should believe; one politician, Paddy Ashdown, said he would eat his hat if the exit poll turned out to be right rather than the pre-election polls. There is a really important point here for anyone who uses a tracking study or other continuous market research: the consistency of the pre-election polls’ results was taken as an indication of their accuracy. IT SHOULD NOT HAVE BEEN! The observed consistency is a reflection of the consistent way in which people answer questions in surveys, not the capturing of an underlying truth.
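The statistical point is worth making concrete. In the hedged sketch below (Python, with invented numbers), every poll shares the same systematic bias – standing in for whatever distortion the survey situation introduces. The polls agree closely with one another while all missing the true figure, so agreement between polls tells you about a shared method, not about the truth.

```python
import random

random.seed(1)

true_share = 0.37    # the 'real' level of support (invented for illustration)
shared_bias = -0.04  # a systematic error common to every poll, e.g. from how
                     # people answer survey questions (also invented)

def run_poll(n=1000):
    """One poll: random sampling noise on top of the same shared bias."""
    biased_share = true_share + shared_bias
    votes = sum(random.random() < biased_share for _ in range(n))
    return votes / n

polls = [run_poll() for _ in range(10)]
print("poll results:", [f"{p:.1%}" for p in polls])
print(f"polls agree within ~{(max(polls) - min(polls)):.1%} of each other,")
print(f"but all sit ~{abs(shared_bias):.0%} away from the true {true_share:.0%}.")
```

Ten such polls typically land within a couple of points of each other – a consistency that looks reassuring right up until the shared bias is revealed on election night.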

In terms of psychological validity, asking people what they have just done (i.e. the exit poll) is significantly more reliable than asking them what they will (or might) do in the future – even if that future is just a day or two away.

When I published Consumer.ology in 2010, I suggested a way of gauging the psychological validity of any piece of market research. Is the insight an analysis of behaviour? Are people in an appropriate frame of mind? Is the relevant environment present? Is the focus of the research covert? Does the timeframe in which the response is captured match the real-world one? Pre-election opinion polling rates really badly on these criteria, and we have to conclude that it has poor psychological validity. Exit polls do much better.

Regrettably, opinion polls will continue to be used because they create a story for journalists. The fact that this story is based on how people answer questions about voting intentions or a particular issue, rather than on what they will actually do, is a petty detail that few will consider when they believe they have a good story.

Unlike Peter Kellner, I would rather look at what people really do and take it as a reflection of their thinking than superimpose one flawed piece of analysis on another in an attempt to reconcile what people say they will do with what they actually go on to do. There is much of interest to be learned about how people think that can be used to understand what they might do in the future, but this is a very different proposition from claiming to be able to predict the future based on what people say they will do.

Political parties that want to understand the mood of the electorate really need to go back to the drawing board. There is a real risk that we will end up with policies influenced by an opinion-polling yardstick that is simply not fit for purpose (however much retrospective tweaking is done in an attempt to mask its inherent failings).

References:

The psychology of voting, digested – Research Digest: http://digest.bps.org.uk/2015/04/the-psychology-of-voting-digested.html

Election 2015 – BBC News: http://www.bbc.co.uk/news/election-2015-32650742

Graves, P., Consumer.ology (Nicholas Brealey Publishing, 2013)

Kellner, P., “We Got it Wrong. Why?” – YouGov: https://yougov.co.uk/news/2015/05/11/we-got-it-wrong-why/

 

Image by secretlondon123 (Flickr)

 
