In the next four months Britain will be inundated with opinion polls. As the Leave and Remain camps gear up for Britain’s first referendum on its relationship with Europe for four decades, the stakes are high.

But this time last year the nation also pored over an array of polls during the general election campaign, and yet those polls proved unreliable.

What should a cautious FT reader make of polling about the EU referendum? Here are five points to bear in mind …

 

1) Sample size.

How many people do I need to ask before I’ve got a reliably accurate poll? The basic answer is 1,000. But that answer comes with caveats.

A poll of 1,000 people might tell you that 40% of them support Britain’s continued membership of the EU. But if you want to know specifically what young men in the Midlands – or any other subset of your sample – think about Brexit, then you’re going to need a bigger pool of respondents.

For most pollsters, slicing the sample into ever-smaller sub-sets quickly demands an unmanageably large pool of respondents.
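To see how quickly the numbers deteriorate, here is a minimal back-of-the-envelope sketch in Python of the standard 95% margin-of-error formula. The 80-respondent subgroup is an illustrative assumption, and the formula assumes a simple random sample, which real quota-based polls are not:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.
    Uses the worst case p = 0.5, so it slightly overstates the margin
    for lopsided questions."""
    return z * math.sqrt(p * (1 - p) / n)

# Headline sample of 1,000 respondents
print(f"n=1,000: +/-{margin_of_error(1000):.1%}")  # +/-3.1%

# A hypothetical subgroup, e.g. ~80 young men in the Midlands
print(f"n=80:    +/-{margin_of_error(80):.1%}")    # +/-11.0%
```

Because the margin shrinks with the square root of the sample size, halving it means quadrupling the number of respondents, which is why drilling into sub-groups gets expensive fast.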

Instead they may choose to seek out specific sub-sets of people and canvass them more proactively. That has its own problems, of course – looking in the places where one would expect to find young men in the Midlands will tend to lead you to people who may not be representative of the wider group. “If you’re sampling in areas with relatively high concentration of people in that category then that’s self-selecting certain types of people,” says Ben Page of Ipsos Mori.

Once a question has been asked and respondents found, pollsters use a variety of methods to adjust their data and make it, they hope, more representative of the population they are targeting. Sometimes they’ll tweak their methodology, which can also affect the outcome.

For example, YouGov recently stopped weighting its respondents by newspaper readership, and added highest educational qualification and the amount of attention paid to politics as weighting criteria.
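To illustrate the basic mechanics of weighting, here is a minimal sketch of one-variable post-stratification with made-up figures. This is not YouGov’s actual scheme, which adjusts on several variables at once; the categories and shares below are assumptions for illustration:

```python
# Respondents in the raw sample, by highest educational qualification
# (illustrative figures, not a real poll)
sample = {"degree": 450, "no_degree": 550}
# Assumed shares of the target population
population = {"degree": 0.30, "no_degree": 0.70}

n = sum(sample.values())
# Weight = population share / sample share, per group
weights = {g: population[g] / (sample[g] / n) for g in sample}
print({g: round(w, 3) for g, w in weights.items()})
# {'degree': 0.667, 'no_degree': 1.273}

# Each respondent's answers then count in proportion to their group's
# weight: over-represented groups count for less, under-represented
# groups for more.
```

The same arithmetic, applied iteratively across several criteria at once (a procedure known as raking), is how pollsters nudge a skewed raw sample back towards the population they are trying to describe.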

 

2) Error margin.

As Big Data becomes ever more mainstream, spurious accuracy is a rapidly growing problem. Whenever you hear a statistic, the first question you should ask is: ‘What’s the error margin on that?’

Unfortunately the media will rarely tell you. “We have tried producing polls with a margin of error and the media just doesn’t want to show charts with that on,” says Ben Page.

The FT has checked the methodology of all national polls on the EU referendum carried out since the start of the year. Most have a sample size of between 1,000 and 2,000 responses, which means they have an error margin of +/-3 percentage points – in other words, if a poll finds that 40% of people say they want Britain to leave the EU, the true figure probably lies somewhere between 37% and 43%. Here’s what that looks like on Brexit polls since last summer, charted:

[Chart: EU referendum polls since summer 2015, with bars reflecting the +/-3pp margin of error. Source: WhatUKThinks.com and FT research]

“The fact that something is 1% up or 1% down is neither here nor there,” says Mike Smithson, editor of Politicalbetting.com.

In addition to error margins, it’s also important to bear in mind statistical confidence. The +/-3-point margin quoted for polls of 1,000 responses is calculated at a 95% confidence level – in other words, 19 in 20 polls will fall within the margin of error. But 1 in 20 won’t – and no, you can’t tell which one.
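That ‘19 in 20’ figure is easy to check with a quick simulation. Here is a minimal sketch, assuming a hypothetical population in which true support is 40%:

```python
import random

random.seed(1)
TRUE_P, N, TRIALS = 0.40, 1000, 10_000
# 95% margin of error for this sample size and true share
MOE = 1.96 * (TRUE_P * (1 - TRUE_P) / N) ** 0.5   # about 3 points

# Count how many simulated polls of 1,000 land outside the margin
misses = 0
for _ in range(TRIALS):
    share = sum(random.random() < TRUE_P for _ in range(N)) / N
    if abs(share - TRUE_P) > MOE:
        misses += 1

print(f"{misses / TRIALS:.1%} of polls missed")  # close to 5%, i.e. 1 in 20
```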

 

3) Other biases.

“People do treat polling figures as though they’re more accurate than they usually are, but focusing on the margin of error can mean you overlook all the other sources of error,” says Anthony Wells from YouGov.

These range from statistical complexities such as the design effect to practical issues – for example, the wording of the questions asked by different pollsters. Most polling companies now use the actual EU referendum wording, but some also poll on a slightly different wording on which they have built up a long data series. How a poll is worded can make a difference to its outcome.

It’s also important to understand what the wording means: Anthony Wells cites “asking the wrong question or interpreting it wrongly” as an example of another, very common error. And Ben Page suggests that, particularly when a vote is months away, questions focusing on economic security are more likely to predict the eventual outcome than the actual question: ‘How will you vote?’

“A referendum is normally a one-off so opinion is very volatile and most people haven’t thought about it this far off,” he says. “I’d pay far more attention to the questions about ‘what makes you feel better off or safer’ rather than Remain or Leave [the EU].”

 

4) Poll of polls.

If the accuracy of individual polls can be questioned, perhaps aggregating them might improve things? Well, yes and no.

Here’s the FT’s Brexit effort, using results from all the major polling companies:

[Chart: the FT’s Brexit poll of polls. Source: FT.com]

A ‘poll of polls’ does give a better idea of the range within which the polls are clustering – but looking at the average line rather than the range can be misleading, some pollsters say.

“In theory it could help remove individual outliers and that could be helpful,” says Ben Page. However, how a poll of polls is calculated can make all the difference.

“Almost everything gives a spurious sense of accuracy, but taking a crude average is probably the least bad way,” says Anthony Wells. But simply averaging the most recently published numbers from a variety of pollsters can introduce false volatility, meaning the average line “jumps up and down” simply because polls fall into or out of the calculation over time, he warns. To mitigate this, a poll of polls could instead take the most recent result from each pollster, he suggests. Alternatively, a poll aggregator could look at the spread between each poll’s ‘Leave’ and ‘Remain’ vote.
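Here is a minimal sketch of that ‘latest result from each pollster’ approach, with made-up numbers rather than real fieldwork:

```python
from datetime import date

# (pollster, fieldwork date, remain share, leave share) -- illustrative only
polls = [
    ("Pollster A", date(2016, 2, 1),  0.45, 0.40),
    ("Pollster A", date(2016, 2, 20), 0.44, 0.42),
    ("Pollster B", date(2016, 2, 10), 0.51, 0.39),
    ("Pollster C", date(2016, 2, 15), 0.43, 0.44),
]

# Keep only the most recent poll from each pollster: sort by date,
# so later results overwrite earlier ones in the dictionary
latest = {}
for name, when, remain, leave in sorted(polls, key=lambda p: p[1]):
    latest[name] = (remain, leave)

remain_avg = sum(r for r, _ in latest.values()) / len(latest)
leave_avg = sum(l for _, l in latest.values()) / len(latest)
print(f"Remain {remain_avg:.0%}, Leave {leave_avg:.0%}, "
      f"spread {remain_avg - leave_avg:+.0%}")
# Remain 46%, Leave 42%, spread +4%
```

Because each pollster contributes exactly one number, a new poll only moves the average by replacing that pollster’s previous result, rather than by changing which pollsters happen to be in the calculation.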

 

5) Research method.

On the EU referendum question, polls conducted by phone have recently produced a notably higher vote for ‘Remain’ than those carried out on the internet, an effect which Anthony Wells calls “very strange and intriguing”:

[Chart: EU referendum polls by method; red dots are phone polls, blue dots are internet polls. Source: WhatUKThinks.com and FT research]

The root of the problem seems to be that the people who pick up a landline phone and are willing to answer pollsters’ questions are very different from those who sign up to online polling sites. “Phone pollsters have a reputation for greater accuracy but have problems reaching people so the respondents they get are not particularly representative,” says Mike Smithson. “Whereas internet polls tend to have a magnifying effect on current events – for example the great Nick Clegg surge of 2010 was largely driven by internet polling.”

Added to this is the problem – at the root of last year’s election polling debacle – that polls tend to over-count politically enthusiastic voters. As part of the fall-out from that shock result, pollsters are busily searching out the disengaged and apathetic in a bid to rebalance their survey samples.

---

To sum up, polling is pretty difficult and even the best, most professional efforts by pollsters with decades of experience should be treated with caution. Here are three top tips when interpreting the EU referendum polls:

- On poll-of-polls trackers, “look at the spread rather than the average,” says Ben Page;

- When contemplating the possible outcome, “look at polling for the trends, not the absolute numbers”, says Mike Smithson;

- And always, always “remember that small differences between polls are probably not meaningful”, says Anthony Wells.

---

In case we haven’t emphasised it enough, here’s how not to do it: a self-selecting readers’ vote, of the kind that produced an involuntary reaction in many who saw it …

'An exclusive online poll of https://t.co/1YtJTwBsAF'
No. Go away. Voodoo poll.

— Britain Elects (@britainelects) February 4, 2016