Polling Fundamentals

Why Polls Sometimes Miss Elections by a Mile

Late swings, shy voters, turnout models, sampling frames, and herding — a tour of the structural reasons polls have historically gotten elections wrong.

11 min read · published February 3, 2026 · by the Democracy Pulse editorial team

For most of the post-war period, a quiet professional consensus held that election polling was, broadly, a solved problem. Pollsters had their occasional embarrassments, but national polls in stable democracies tended to land within a few points of the eventual result, and in close races they correctly called the winner more often than not. That consensus has been under strain for a decade. The 2015 United Kingdom general election, the 2016 Brexit referendum and United States presidential election, the 2019 Australian federal election, and a string of more recent contests have all produced polling misses large enough to embarrass the industry. Understanding why is not just an academic exercise; it is essential for anyone who wants to read modern polling intelligently.

The two ways a poll can be wrong

It is helpful to draw a hard distinction between random error and systematic error. A random error is the kind the margin of error captures: pure sampling noise, the inevitable result of asking 1,000 people instead of 30 million. Random errors cancel out across many polls.
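
To put a number on that noise: under the usual 95% normal approximation, the margin of error for a simple random sample is roughly 1.96 × sqrt(p(1 − p)/n). A minimal sketch in Python:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        # 95% margin of error for a proportion p from a simple random sample of n
        return z * math.sqrt(p * (1 - p) / n)

    # A candidate on 50% in a 1,000-person sample:
    print(f"{100 * margin_of_error(0.50, 1000):.1f} points")  # ~3.1

Note that the size of the electorate barely enters the formula: 1,000 respondents carry roughly the same ±3 points whether the population is 3 million or 300 million.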

A systematic error is something else entirely: a bias in the polling process that pushes results in the same direction across multiple firms and multiple polls. When pundits talk about “the polls being wrong,” they are almost always talking about systematic error. Those errors do not cancel by averaging. They are why an entire industry can produce confidently wrong numbers right up to the moment the votes are counted.
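
A small simulation makes the point concrete (the support level and bias here are invented): average fifty polls that all share the same three-point frame bias, and the bias survives untouched.

    import random

    random.seed(0)
    TRUE_SUPPORT = 0.48
    SHARED_BIAS = 0.03   # every firm's frame or weighting leans the same way

    def one_poll(n: int = 1000) -> float:
        # n Bernoulli draws from the biased support level
        p = TRUE_SUPPORT + SHARED_BIAS
        return sum(random.random() < p for _ in range(n)) / n

    average = sum(one_poll() for _ in range(50)) / 50
    print(f"true support: {TRUE_SUPPORT:.3f}, 50-poll average: {average:.3f}")
    # The average lands near 0.51, not 0.48: averaging removes the noise
    # but leaves the shared bias fully intact.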

Late swing

Polls are snapshots. They measure opinion during their fieldwork window, which usually ends two or three days before publication. In a stable race, opinion barely moves in those final days and the gap between the last poll and the actual result is small. In a volatile race, opinion can move sharply.

Late swings are notoriously hard to detect because, by definition, the polls that would capture them have not yet been fielded. Even a small share of voters who make up or change their minds in the final 48 hours can be enough to flip a knife-edge contest. Several of the high-profile misses of the past decade had a late-swing component: the public moved, and the polling industry, having stopped fielding too early, missed the move. Some firms now run late or even election-eve polls specifically to catch this, but they are expensive and rare.
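
A purely illustrative back-of-the-envelope shows how little late movement is needed:

    # Invented knife-edge race: 48% vs 47% at close of fieldwork, 5% undecided.
    leader, trailer, undecided = 0.48, 0.47, 0.05
    # Undecideds break 70/30 for the trailing candidate in the last 48 hours:
    print(f"poll leader finishes on {leader + 0.30 * undecided:.1%}")    # 49.5%
    print(f"poll trailer finishes on {trailer + 0.70 * undecided:.1%}")  # 50.5%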

Shy voters

The “shy voter” effect describes systematic under-reporting of support for a particular candidate or party. Voters either lie to pollsters because they fear social judgement or refuse to participate at all, leaving the sample unrepresentative of their views. The phenomenon was famously cited after the 1992 UK election, when polls underestimated the Conservatives, and again in various forms in the 2015 UK and 2016 US contests.

The empirical evidence for shy voting is mixed. Studies after both the 2016 and 2020 US elections found little direct evidence that respondents were lying; the bigger effect appeared to be that certain types of voter were systematically harder to reach in the first place — not shy, just absent from the sample. The distinction matters because the fixes are different. Lying is almost impossible to correct for; absence can sometimes be corrected with better sampling and weighting.
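
A minimal post-stratification sketch, with invented numbers, shows both the correction and its limits: each group is reweighted from its sample share up to its known population share.

    # A hard-to-reach group is under-sampled; reweight each group by
    # population share / sample share. All numbers here are invented.
    population_share = {"easy_to_reach": 0.60, "hard_to_reach": 0.40}
    sample_share     = {"easy_to_reach": 0.75, "hard_to_reach": 0.25}
    support          = {"easy_to_reach": 0.45, "hard_to_reach": 0.60}

    raw = sum(sample_share[g] * support[g] for g in support)
    weighted = sum(population_share[g] * support[g] for g in support)
    print(f"raw: {raw:.3f}, weighted: {weighted:.3f}")  # 0.488 vs 0.510
    # The correction only works if the hard-to-reach people the poll did
    # find vote like the ones it missed. If they do not, no weight fixes it.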

Turnout modelling failures

Almost every poll is filtered through a model of who will actually vote. That model is built on past behaviour: who voted last time, in what proportions, by what demographic groups. When the next election produces an electorate that looks very different from the previous one, the model breaks.

This is one of the most common technical reasons polls miss. A candidate or campaign that mobilises previously disengaged voters — younger people, working-class non-voters, populist sympathisers outside the political mainstream — looks under-supported in the polls because the model assumes those voters will not show up. When they do, the result lands several points away from the averages. The reverse can happen too: an enthusiasm gap that depresses turnout among a party's core supporters can be invisible to pre-election polling and obvious on the night.
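
A toy sketch, with invented groups and turnout rates, shows how the same raw sample yields different toplines under different turnout assumptions:

    # group: (share of adults, support for candidate A, modelled turnout)
    groups = {
        "habitual_voters":   (0.50, 0.44, 0.85),
        "occasional_voters": (0.30, 0.50, 0.40),
        "prior_nonvoters":   (0.20, 0.65, 0.10),  # model assumes they stay home
    }

    def topline(turnout: dict) -> float:
        votes = sum(share * turnout[g] for g, (share, _, _) in groups.items())
        for_a = sum(share * turnout[g] * sup for g, (share, sup, _) in groups.items())
        return for_a / votes

    modelled = {g: t for g, (_, _, t) in groups.items()}
    actual = dict(modelled, prior_nonvoters=0.35)  # a campaign mobilises them

    print(f"poll topline: {topline(modelled):.1%}, result: {topline(actual):.1%}")
    # ~46.0% vs ~47.6%: a 1.6-point miss from one wrong turnout assumption.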

Sampling frame collapse

The technical underpinning of probability polling — the assumption that you have an enumerable population frame from which to draw a random sample — has been quietly eroding for years. Landlines have been disappearing. Mobile-only households are over-represented in certain demographics and under-represented in others. Online panels are self-selected by definition. Address-based sampling works better in some countries than others.

The result is that even the best-resourced firms are stitching together coverage from multiple modes, each with its own biases, and trusting their weighting to glue the seams together. When the weighting works, the published numbers are reasonable. When it does not, no amount of post-hoc apology can recover the missed signal.
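
The usual glue is raking, also known as iterative proportional fitting: respondent weights are adjusted one dimension at a time until the sample matches known population margins. A toy sketch with invented targets:

    import random

    random.seed(1)
    sample = [{"age": random.choice(["under_50", "over_50"]),
               "mode": random.choice(["phone", "online"]),
               "w": 1.0} for _ in range(1000)]

    targets = {"age":  {"under_50": 0.45, "over_50": 0.55},
               "mode": {"phone": 0.30, "online": 0.70}}

    for _ in range(20):  # a few passes over the dimensions usually converge
        for dim, margins in targets.items():
            totals = {cat: 0.0 for cat in margins}
            for r in sample:
                totals[r[dim]] += r["w"]
            grand = sum(totals.values())
            for r in sample:
                r["w"] *= margins[r[dim]] * grand / totals[r[dim]]

    total_w = sum(r["w"] for r in sample)
    phone_share = sum(r["w"] for r in sample if r["mode"] == "phone") / total_w
    print(f"weighted phone share: {phone_share:.3f}")  # ~0.300, matching the target
    # The seams only hold if the weighting variables capture everything that
    # differs between modes. Whatever they miss stays in the published number.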

Herding

Herding is the tendency of late-cycle polls to cluster around a consensus number rather than showing the spread their differing methodologies would naturally produce. It happens because publishing a poll that diverges sharply from the consensus is professionally risky: if you are right, you look brilliant; if you are wrong, you look incompetent. The safer play is to nudge the methodology, the weighting, or the assumptions until your number lines up with the herd.

Empirically, polls in the final two weeks of major elections cluster more tightly than statistics alone would predict — strong evidence that herding is a real phenomenon. The implication is unsettling: the apparent precision of a final-week polling consensus is partly an artefact of professional incentives, not a sign of accurate convergence on the truth. When the consensus is wrong, everybody is wrong together.
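
The underlying test is simple to sketch: compare the observed spread of final-week polls with the spread that sampling variation alone would predict (the polls below are invented):

    import math
    import statistics

    # Invented final-week polls: (share for one candidate, sample size).
    polls = [(0.51, 900), (0.51, 1100), (0.52, 1000), (0.51, 850), (0.52, 1200)]

    observed_sd = statistics.stdev(p for p, _ in polls)
    expected_sd = math.sqrt(statistics.mean(p * (1 - p) / n for p, n in polls))

    print(f"observed: {observed_sd:.4f}, sampling alone predicts: {expected_sd:.4f}")
    # ~0.0055 observed against ~0.0158 expected: polls clustered this tightly
    # are the classic statistical fingerprint of herding.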

Question wording and the framing of choice

In binary referendums and head-to-head presidential races, the question is usually clear. In multi-party parliamentary contests, it is messier. Should the polling question prompt all party names or just the major ones? Should it offer “none of the above”? Should it list parties in random order? Each choice moves the numbers, sometimes by enough to matter.

New parties and outsider candidates are particularly vulnerable to wording and prompting effects. A populist movement at 5% in a poll that prompts only the established parties might be at 9% in a poll that lists every contender. Both numbers are defensible; they will produce very different headlines.

Black swans and the limits of measurement

Some polling misses are not really polling failures at all. They are events. A late scandal, a televised debate that flips a narrative, a war or a market crash in the final week — these are genuine shocks to the system that no pre-shock poll could be expected to capture. Calling these “polling errors” is unfair to the discipline. The polls measured the public mood before the event; the event then changed the public mood. That is not the polls' fault.

It is worth distinguishing this category clearly because it is the one most frequently mistaken for the others. When a poll “misses” because public opinion genuinely moved after fieldwork closed, the right response is to admire the difficulty of the measurement, not to despair of it.

What this means for the way you read polls

None of this means polls are useless. The vast majority of polls in stable democracies still land within a few points of the actual result. The post-2015 polling errors, while genuinely embarrassing, were typically of the order of 3 to 5 percentage points — large enough to flip a close election, small enough that the polls were still in the right neighbourhood.

The lesson is not to discard polling but to read it with the humility it deserves. Treat the published margin of error as a floor, not a ceiling, on the real uncertainty. Pay attention to averages, not individual polls. Be especially sceptical in close races, in rapidly moving campaigns, in elections featuring large new parties or candidates with no track record, and in countries where the polling industry has had recent misses. Above all, remember that a polling average two weeks out is a good guide to the current state of the race, not a prediction of the result.