TIM HARFORD: Why pollsters so often seem to get it wrong
Polls do generate information but cannot do away with the uncertainty
Irving Fisher, who a century ago was one of the world’s most famous economists, once declared: “The sagacious businessman is constantly forecasting.” Well, perhaps. But how sagacious is it to be constantly forecasting, when the forecasts seem so often to be wrong?
Significant amounts of money, not to mention incalculable reserves of intellectual and emotional energy, were invested in figuring out who was going to win last week’s US presidential election. The polls consistently suggested a huge win for the Democrats’ Joe Biden. That is not how things have panned out.
What did we know beforehand? That if the polls were wrong in the same way as in 2016, the election would end up with Donald Trump very close in Florida and Pennsylvania. A polling error fractionally bigger than in 2016 would put us exactly where we found ourselves: on the edge of our collective seats, if not losing our collective minds.
No one is really that surprised. Yes, Biden’s lead was larger and more stable than Hillary Clinton’s in 2016. Yes, pollsters had in principle corrected for their earlier mistakes. Yes, while polling errors could still be expected, it was as likely that Biden would overperform and grab Ohio and Texas as that he would underperform, failing to win Florida. Yes, yes, yes. But no one could quite believe the polls. And it seems we were right to doubt.
The state-level polls were off in much the same way and in much the same places as they were in 2016 and the 2018 midterms. Pollsters do not want to be wrong, and they particularly dislike being wrong in the same way twice in a row. So while polling errors are common, it is a surprise that lightning struck twice in the same place.
At this early stage one can only guess at what went wrong, but it is worth underlining the difficulty pollsters face. Consider the situation in Florida, where polls suggested Biden would win 51% of the vote and Trump 48%-49%. The actual result was the reverse.
But step back. The pre-election numbers suggest that in a typical poll of 500 respondents, 255 backed Biden and 243 Trump. Yet typical response rates are five in 100, often lower: Andrew Gelman, a statistician and prominent election modeller, says only about one in 100 people responds to opinion polls. So now picture 10,000 people: 255 who said they would vote for Biden, 243 for Trump, and more than 9,500 who never responded at all. How confident are we feeling now?
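One way to see how much room for doubt those numbers leave is to compute the textbook sampling margin of error, which assumes the people who answered are a random draw from the electorate, precisely the assumption that non-response undermines. A minimal Python sketch, using the illustrative figures above:

```python
import math

# Illustrative poll figures from the example above (not real data):
biden, trump = 255, 243
respondents = biden + trump     # 498 answers out of ~10,000 contacted

p = biden / respondents         # Biden's share among respondents
# Textbook 95% margin of error for a proportion, valid only if the
# 498 who answered are a random sample of the electorate:
moe = 1.96 * math.sqrt(p * (1 - p) / respondents)
print(f"Biden share: {p:.1%}, margin of error: \u00b1{moe:.1%}")
```

Even on this generous assumption, the margin is roughly plus or minus four points, wider than Biden’s apparent lead. And the formula says nothing at all about the 9,500-plus people who never answered; if they lean differently from those who did, the true error can be far larger.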
Worse, the people who do reply will be systematically different from those who do not: older and whiter, more likely to be women. Pollsters may try to correct for these factors to ensure the demographics of the poll match the demographics of the census. Perhaps Cuban-Americans in Florida are underrepresented in the poll by a factor of three. Fine: let’s say the Cuban-Americans who do reply count triple.
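The triple-counting idea is a crude form of the demographic weighting pollsters actually use. A toy Python sketch, with invented responses and an assumed weight of 3.0 for the underrepresented group, shows how the mechanics work:

```python
# Hypothetical responses as (candidate, demographic group) pairs.
# We assume Cuban-American voters are underrepresented by a factor
# of three in the sample, so their answers are given weight 3.0.
responses = [
    ("Trump", "cuban_american"),
    ("Biden", "other"),
    ("Biden", "other"),
    ("Trump", "other"),
    ("Biden", "other"),
]
weights = {"cuban_american": 3.0, "other": 1.0}

totals = {}
for candidate, group in responses:
    totals[candidate] = totals.get(candidate, 0.0) + weights[group]

weighted_n = sum(totals.values())
for candidate, w in sorted(totals.items()):
    print(f"{candidate}: {w / weighted_n:.0%}")
```

Unweighted, this toy sample splits 60–40 for Biden; after weighting it splits 57–43 for Trump. That is exactly the gamble: the adjustment only helps if the few Cuban-Americans who did respond resemble the many who did not.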
But does this help? What assurance do we have that the tiny minority who bother to respond are a good proxy for the vast majority who do not? One pollster told me: “Half the time, our adjustments make things better. Half the time they make things worse.”
Complicating matters still further is the question of turnout. Someone may tell the pollsters that they are planning to vote. But will they? This caused problems for forecasting the Brexit referendum in the UK. Older, less educated voters told pollsters they would show up in force to vote Leave. Prior elections suggested otherwise. Pollsters who placed more weight on history than on their own raw data were tripped up. Turnout in this US election has been unusually high, giving pollsters another headache.
We shouldn’t exaggerate the problem. Polls do generate information. Every state the Financial Times confidently predicted would vote for Biden voted for Biden. Every state the FT confidently predicted would vote for Trump voted for Trump. Those calls were not made by leaps of political intuition, but by looking at where the polls predicted a safe margin. Trump needed to win most of the marginal states to have a chance, and promptly bagged three of the four big ones: Florida, Ohio and Texas. That denied Biden the quick and decisive victory for which he might reasonably have hoped.
It’s not that the polls told us nothing. It’s that they could not tell us what we yearned to know. We want certainty, but we can’t always get what we want. In a close-run election where most people refuse to speak to pollsters, opinion polls cannot do away with the uncertainty.
Fisher was right to highlight the need to think about the future. We must, after all, weigh up our chances and make our decisions. But, as his contemporary John Maynard Keynes famously remarked, sometimes “we simply do not know”. And since Fisher was eventually ruined, while Keynes died a millionaire, a little agnosticism comes in very handy.
In any case, we must learn to live with uncertainty. Perhaps we should obsess less about the question “will it happen?” and devote more thought to what we would do if it did.
© Financial Times 2020