How Election Polls Actually Work

A plain-English walkthrough of how opinion polls are designed, sampled, weighted, and reported — and what each step means for the numbers you read.

9 min read · published January 15, 2026 · by the Democracy Pulse editorial team

Almost every news cycle in a democracy is now punctuated by polling numbers. A party is “up two”, a leader has “crashed below thirty”, an underdog is “closing the gap”. For readers who do not work in research, those numbers can feel like weather: just there, generated by some invisible machine. They are not. Every poll you read is the end product of a long chain of design decisions, statistical adjustments, and judgement calls. Understanding that chain is the difference between being informed by polling and being misled by it.

What a poll is actually trying to do

A political opinion poll is an attempt to estimate what an entire electorate thinks by talking to a small slice of it. The full electorate of a mid-sized democracy might be thirty or forty million people. A typical national poll talks to between 800 and 2,000 of them. The whole intellectual scaffolding of polling exists to answer one question honestly: under what assumptions can these few thousand answers stand in for the country?

This is not a rhetorical exercise. Pollsters are trying to do statistical inference: to take a measurement on a sample and produce a defensible estimate, with a stated uncertainty, of a population value they cannot observe directly. The tools they use are decades old. The execution is anything but standard.
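
To see what that means in practice, here is a minimal sketch in Python, using a synthetic electorate with invented numbers: the true level of support exists but is unobservable, and the poll's job is to recover it from a small random sample.

```python
import random

# A synthetic electorate in which true support for "Party A" is 46%,
# an invented figure. The pollster never sees this number directly;
# they only see the answers of a random sample of 1,200 people.
random.seed(1)
TRUE_SUPPORT = 0.46

sample = [random.random() < TRUE_SUPPORT for _ in range(1_200)]
estimate = sum(sample) / len(sample)

print(f"True support (unobservable): {TRUE_SUPPORT:.1%}")
print(f"Sample estimate:             {estimate:.1%}")
```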

Step one: defining the population

Before anybody picks up a phone or pushes a survey link, the firm has to decide who counts. This sounds trivial. It is not. Are tourists included? Citizens overseas? Sixteen- and seventeen-year-olds in countries where they can vote? People without a phone, or without an internet connection, or without a fixed address? Should the population be all adults, all citizens, all registered voters, or all likely voters?

The choice matters because each definition is a different country. “All adults” tends to lean younger and more progressive in most democracies; “likely voters” tends to lean older and more conservative because older voters turn out at higher rates. Two polls fielded on the same day, with the same questions, can disagree purely because they defined the population differently.
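
The effect is easy to demonstrate. In the sketch below (Python, with five invented interviews), the same responses produce a very different Party A share depending on whether the population is defined as all adults or as likely voters.

```python
# The same five invented interviews, two population definitions.
respondents = [
    {"party": "A", "age": 24, "voted_last_time": False},
    {"party": "A", "age": 31, "voted_last_time": True},
    {"party": "B", "age": 58, "voted_last_time": True},
    {"party": "B", "age": 67, "voted_last_time": True},
    {"party": "A", "age": 19, "voted_last_time": False},
]

def share(group, party):
    return sum(r["party"] == party for r in group) / len(group)

all_adults = respondents
likely_voters = [r for r in respondents if r["voted_last_time"]]

print(f"Party A among all adults:    {share(all_adults, 'A'):.0%}")    # 60%
print(f"Party A among likely voters: {share(likely_voters, 'A'):.0%}") # 33%
```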

Step two: drawing the sample

Once the population is defined, pollsters need a way to reach a representative slice of it. In the textbook world this is a simple random sample: every eligible person has the same known probability of being contacted. In the real world, this is almost impossible.

Random-digit dialling worked reasonably well when households shared landlines and answered them. Both of those assumptions collapsed in the 2000s. Mobile-only households, call screening, and outright hostility to unknown numbers pushed phone response rates from around thirty percent in the 1980s to often below five percent today. To compensate, firms have moved to mixed modes: phone plus SMS plus online panels, sometimes with door-to-door interviews in hard-to-reach regions.

Online panels are now the dominant mode in much of Europe, North America and Oceania. A panel is a pre-recruited group of respondents who agree to take occasional surveys, often in exchange for small rewards. They are convenient and cheap; they are also self-selecting, which is the original sin of internet polling. Reputable firms invest heavily in panel quality control, but a panel is never a true probability sample, and the gap has to be papered over with statistical modelling.

Step three: writing the questions

Wording matters more than most readers realise. Asking “If an election were held tomorrow, which party would you vote for?” produces different numbers from “Which party are you most likely to support at the next election?”, and both differ from “Which party best represents your views?”

Question order matters too. If respondents are first asked about an unpopular policy associated with the governing party, that party will often score lower on the subsequent vote-intention question than it would have had the vote-intention question come first. Good pollsters rotate question order between respondents to neutralise these effects. Less rigorous polls do not, and their headline numbers are correspondingly less reliable.
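
Rotation itself is mechanically simple. The sketch below shows one way it might be done, with placeholder question labels; seeding the shuffle on a respondent ID keeps each person's order reproducible while splitting the orderings roughly evenly across the sample.

```python
import random

# Placeholder question labels, not from any real survey instrument.
QUESTIONS = ["policy_approval", "vote_intention"]

def question_order(respondent_id: int) -> list[str]:
    """Randomise question order per respondent. Seeding on the
    respondent ID makes each order reproducible; across many IDs,
    roughly half see each ordering."""
    order = QUESTIONS.copy()
    random.Random(respondent_id).shuffle(order)
    return order

print(question_order(1))
print(question_order(2))
```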

The treatment of undecided voters is another quiet but consequential choice. Some firms report raw figures with a large "don't know" bucket. Others push undecided respondents with a follow-up question ("which way are you leaning?") and reallocate their answers. Still others apply a model to assign undecided voters to parties based on demographics. Each method gives a different answer.
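
A small worked example makes the point. The numbers below are invented, and the squeeze split is just one plausible outcome of a leaning question, but the headline gap moves visibly between treatments.

```python
# Invented raw responses from 1,000 interviews.
raw = {"A": 400, "B": 350, "undecided": 250}

def topline(counts):
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# Treatment 1: publish the raw figures, undecideds and all.
print(topline(raw))       # {'A': 40.0, 'B': 35.0, 'undecided': 25.0}

# Treatment 2: squeeze with "which way are you leaning?" and reallocate.
# The split of the 250 undecideds below is an invented illustration.
leaners = {"A": 150, "B": 60, "undecided": 40}
squeezed = {
    "A": raw["A"] + leaners["A"],
    "B": raw["B"] + leaners["B"],
    "undecided": leaners["undecided"],
}
print(topline(squeezed))  # {'A': 55.0, 'B': 41.0, 'undecided': 4.0}
```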

Step four: weighting

Almost no poll is published in raw form. Once the responses are in, they are weighted: respondents from under-represented groups are counted more heavily, respondents from over-represented groups less so. If a sample of 1,500 contains too few young men, each young man in the sample might count as 1.7 respondents; if it contains too many retired women, each might count as 0.8.
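
Mechanically, weighting just means computing toplines over weights rather than raw counts. A minimal sketch with invented data:

```python
# Each respondent carries a weight; toplines sum weights, not people.
respondents = [
    {"party": "A", "weight": 1.7},  # under-represented group, up-weighted
    {"party": "A", "weight": 1.7},
    {"party": "B", "weight": 0.8},  # over-represented group, down-weighted
    {"party": "B", "weight": 0.8},
    {"party": "B", "weight": 0.8},
]

def weighted_share(rows, party):
    total = sum(r["weight"] for r in rows)
    return sum(r["weight"] for r in rows if r["party"] == party) / total

print(f"Raw Party A share:      {2 / 5:.0%}")                             # 40%
print(f"Weighted Party A share: {weighted_share(respondents, 'A'):.0%}")  # 59%
```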

Weighting variables almost always include age, gender, region and education level. Increasingly, they include past vote — adjusting the sample so that the share of respondents who say they voted for each party last time matches the actual previous result. Past-vote weighting was, for example, one of the responses to high-profile polling misses in the United Kingdom in 2015 and the United States in 2016. It can sharpen accuracy, but it bakes in the assumption that people accurately remember and report how they voted, which is not always true.
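
When several targets have to hold at once (age and region and education and past vote), many firms use raking, also called iterative proportional fitting: cycle through the targets, scaling weights to match each in turn, until the sample margins settle. A sketch with invented fields and targets:

```python
def rake(rows, targets, iterations=20):
    """rows: dicts with a 'weight' key plus categorical fields.
    targets: {field: {category: desired_population_share}}."""
    for _ in range(iterations):
        for field, shares in targets.items():
            total = sum(r["weight"] for r in rows)
            for category, desired in shares.items():
                group = [r for r in rows if r[field] == category]
                current = sum(r["weight"] for r in group) / total
                for r in group:
                    r["weight"] *= desired / current
    return rows

rows = [
    {"past_vote": "A", "age_band": "18-34", "weight": 1.0},
    {"past_vote": "A", "age_band": "55+", "weight": 1.0},
    {"past_vote": "B", "age_band": "55+", "weight": 1.0},
    {"past_vote": "B", "age_band": "18-34", "weight": 1.0},
]
targets = {
    "past_vote": {"A": 0.45, "B": 0.55},  # match the actual last result
    "age_band": {"18-34": 0.30, "55+": 0.70},
}
rake(rows, targets)
```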

Step five: turnout modelling

For an election poll to be useful, it must estimate not just what people would do, but who will actually vote. In many countries turnout differs sharply by age, education, and prior voting history. Pollsters apply a turnout model: a set of rules or a statistical model that assigns each respondent a probability of voting and either weights the sample by it or filters out unlikely voters entirely.
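
Both variants are easy to express. The toy model below is purely illustrative (the coefficients and cutoff are invented), but it shows the mechanics of weighting by a vote probability versus filtering on one.

```python
def turnout_probability(r):
    """A toy turnout model with invented coefficients."""
    p = 0.50
    if r["voted_last_time"]:
        p += 0.30
    if r["age"] >= 65:
        p += 0.10
    return min(p, 0.95)

respondents = [
    {"party": "A", "age": 22, "voted_last_time": False},
    {"party": "B", "age": 70, "voted_last_time": True},
    {"party": "A", "age": 45, "voted_last_time": True},
]

# Option 1: weight every respondent by their probability of voting.
total = sum(turnout_probability(r) for r in respondents)
share_a = sum(turnout_probability(r) for r in respondents
              if r["party"] == "A") / total
print(f"Turnout-weighted Party A share: {share_a:.0%}")  # 59%

# Option 2: keep only "likely voters" above a cutoff and count them raw.
likely = [r for r in respondents if turnout_probability(r) >= 0.70]
```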

This is, in practice, where many of the largest polling errors are born. If your turnout model assumes the previous election's electorate, and the actual electorate turns out to be younger, more rural, or more disengaged-but-mobilised than that, your headline number can be off by several points before any other source of error is considered.

Step six: reporting and uncertainty

Finally, the pollster has to publish. By convention, a national poll comes with a margin of error, usually around ±3 percentage points for a sample of 1,000. That figure represents only the uncertainty due to random sampling. It does not include weighting error, turnout modelling error, mode effects, question wording effects, or respondent dishonesty. The true uncertainty around a single poll is almost always wider than the published margin suggests.
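
The conventional figure is easy to reproduce from the standard formula for the sampling error of a proportion, evaluated at the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a simple random sample of
    size n. p = 0.5 gives the widest margin, which is the convention
    for published figures."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"±{margin_of_error(1_000):.1%}")  # ±3.1%, the familiar "about 3 points"
```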

This is why no responsible analyst hangs a story on a single poll. A polling average — built from many polls, ideally from many different firms using different methods — washes out a great deal of firm-specific noise and is a far more reliable signal of where public opinion actually sits. We unpack that idea in our guide on poll aggregation.
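
A bare-bones average is almost trivial to compute. The polls below are invented, and weighting by sample size is only one simple choice (serious averages also weight by recency and firm track record), but the principle is visible:

```python
# Invented polls from three hypothetical firms.
polls = [
    {"firm": "Firm 1", "party_a": 0.44, "n": 1_000},
    {"firm": "Firm 2", "party_a": 0.41, "n": 1_500},
    {"firm": "Firm 3", "party_a": 0.46, "n": 800},
]

# Weight each poll by its sample size and average across firms.
total_n = sum(p["n"] for p in polls)
average = sum(p["party_a"] * p["n"] for p in polls) / total_n
print(f"Sample-size-weighted average: {average:.1%}")  # 43.1%
```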

What good polls share, and how to spot them

High-quality polls are transparent. They publish their sample size, the dates of fieldwork, the mode (phone, online, mixed), the weighting variables, the question wording, and ideally the raw toplines before any reallocation of undecided voters. They are members of professional bodies like the World Association for Public Opinion Research or the British Polling Council, which require adherence to disclosure standards.

Low-quality polls hide their methodology, refuse to disclose fieldwork dates, sample voluntarily through social media, or present single-question voodoo polls as if they were rigorous research. A polling number with no methodology section is a number with no provenance. Treat it accordingly.

The bottom line

A poll is not an oracle. It is a measurement, made with imperfect instruments, of a moving target. Done well, it produces information no other tool can give: a defensible estimate, with an honest uncertainty, of what an electorate is thinking right now. Done badly, it generates noise that confuses voters and crowds out better sources of information. The job of a thoughtful reader is to know the difference — and to insist on it.