Polling Fundamentals
Margin of Error, Explained Without the Maths
What the ±3% next to a poll actually represents, why it is wider than most people think, and the common mistakes journalists make when comparing two parties.
7 min read · published January 22, 2026 · by the Democracy Pulse editorial team
Open any newspaper's politics section the morning after a poll drops and you will find some version of the same sentence: “the poll has a margin of error of ±3 percentage points.” The phrase is recited so often that it has stopped meaning anything to most readers. It absolutely should mean something. The margin of error is the most misunderstood number on the page, and the misunderstanding is structural: most journalists report it incorrectly, and most readers therefore reason about polls as if it did not exist.
What the margin actually is
Imagine you have a vat with millions of marbles, half red and half blue, mixed thoroughly. You scoop out a thousand. You will not get exactly five hundred of each colour. You might get 482 red and 518 blue, or 511 red and 489 blue. If you do this experiment many times, the spread of results forms a predictable bell curve around the true 50/50 split.
The margin of error is a way of summarising that bell curve. For a sample of around 1,000 people and a typical 95 percent confidence level, the margin is roughly ±3 percentage points. It says: if I repeated this exact poll a hundred times, in ninety-five of those runs the result would land within three points of the population figure. In five of them, by pure chance, it would land further away.
That is a probabilistic statement, not a guarantee. It is also narrower than the real uncertainty around any actual poll, for reasons we will come to.
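The bell curve from the marble experiment is easy to see for yourself. This is a minimal simulation in plain Python, with nothing but the standard library; the function names are ours, invented for illustration, and the ±3.1-point threshold anticipates the formula in the next section.

```python
import random

random.seed(42)

def run_poll(n=1000, true_share=0.5):
    """Simulate one scoop of n marbles from a vat that is 50% red."""
    reds = sum(1 for _ in range(n) if random.random() < true_share)
    return reds / n

# Repeat the poll thousands of times and count how often pure chance
# keeps the result within 3.1 points of the true 50%.
results = [run_poll() for _ in range(5000)]
within_margin = sum(abs(r - 0.5) <= 0.031 for r in results) / len(results)
print(f"share of simulated polls within ±3.1 pts: {within_margin:.1%}")
```

Run it and the share lands very close to 95 percent, which is exactly what the "95 percent confidence level" claims.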
Where the ±3% number comes from
The arithmetic is simple. For a binary outcome and a simple random sample, the standard error of a sample proportion near 50% is approximately 0.5 divided by the square root of the sample size. Multiply by 1.96 for a 95% confidence level and you get the famous margin. For a sample of 1,000, that works out to roughly ±3.1 percentage points.
Two consequences of that formula are worth noting. First, the margin shrinks with the square root of the sample, not linearly: doubling the sample size from 1,000 to 2,000 only narrows the margin from about 3.1% to about 2.2%. Quadrupling it to 4,000 gets you to roughly 1.5%. Pollsters do not field samples of 10,000 because the marginal accuracy is not worth the cost. Second, the formula assumes a true random sample. No real-world poll is one. The published margin therefore understates the actual uncertainty.
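The formula above is short enough to write down directly. A sketch, assuming a simple random sample and a share near 50% (the worst case, which is why it is the conventional one):

```python
import math

def margin_of_error(n, z=1.96):
    """95% margin of error (as a fraction) for a ~50% share
    in a simple random sample of size n."""
    return z * 0.5 / math.sqrt(n)

# Square-root shrinkage: each doubling buys less than the one before.
for n in (1000, 2000, 4000):
    print(f"n={n}: ±{margin_of_error(n) * 100:.1f} points")
```

The square-root diminishing returns are visible immediately: going from 1,000 to 2,000 respondents buys almost a full point of precision; going from 2,000 to 4,000 buys barely half a point.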
The most common journalistic mistake
The standard story you read in the press goes something like this: “Party A leads Party B by two points, but the poll's margin of error is ±3, so the race is a statistical tie.” Nearly every clause in that sentence does something wrong, but the worst error is the comparison.
The ±3% margin applies to each party's vote share, not to the gap between two parties. The margin around the difference between two proportions is larger, by a factor that depends on the size of each share. As a rule of thumb, the margin around a lead is roughly 1.7 times the margin around a single share. So if each party's share has a margin of ±3, the lead between them has a margin closer to ±5. A two-point lead is well within the noise. A seven-point lead is comfortably outside it.
This matters in close races. A genuine three-point lead, sustained across many polls and many firms, is almost certainly real. A three-point lead in a single poll, taken in isolation, is barely better than a coin flip.
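The exact factor on the lead depends on the two shares, because in the same poll the shares are negatively correlated: a respondent counted for Party A cannot also be counted for Party B. Under the usual multinomial sampling model, the variance of the difference can be written out directly, and the factor lands between roughly 1.7 and 2 depending on where the shares sit. A sketch with illustrative numbers of our own choosing:

```python
import math

def lead_margin(p1, p2, n, z=1.96):
    """Margin of error on the lead (p1 - p2) when both shares come
    from the same poll. Multinomial model:
    var(p1 - p2) = (p1(1-p1) + p2(1-p2) + 2*p1*p2) / n."""
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var)

n = 1000
single = 1.96 * 0.5 / math.sqrt(n)   # ±3.1 points on one share
lead = lead_margin(0.42, 0.40, n)    # margin on a hypothetical 2-point lead
print(f"single share: ±{single * 100:.1f} pts")
print(f"lead:         ±{lead * 100:.1f} pts  (factor {lead / single:.2f})")
```

For shares of 42% and 40%, the margin on the lead comes out near ±5.6 points, which is why a two-point lead in a single poll tells you almost nothing.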
The hidden errors the margin does not capture
The published margin of error is purely the uncertainty due to random sampling. It deliberately ignores every other source of error in the polling process, including:
- Coverage error. If the population you can reach is systematically different from the population you want to measure — for example, online-only panels missing the digitally disconnected — your sample is biased before you even draw it.
- Non-response bias. The five percent of people who will pick up an unknown phone number are not a random slice of the population. They tend to be older, more political, more opinionated. If your weighting cannot fully correct for this, the poll tilts.
- Measurement error. Question wording and order, as discussed in our guide on how polls work, can shift results by several points without changing what the public actually thinks.
- Turnout modelling error. If the assumed electorate differs from the actual one, the published numbers can be off by several points even when the underlying preferences were measured perfectly.
- Late swing. A poll measures opinion at the moment fieldwork ended. If the public mind moves between then and election day, the poll was not wrong — it just measured a different country than the one that voted.
Take all of these together and the realistic total uncertainty around a single national poll is more like ±5 to ±7 percentage points, not the ±3 the methodology box claims. Election forecasters who try to model these additional errors empirically — by looking at how far polls have historically missed actual results — almost always produce wider intervals than the polls themselves report.
Why averages are narrower
If a single poll has a wide true uncertainty, an average of many polls is much narrower, but only the random-sampling part shrinks. Combining ten independent polls cuts the random sampling error by roughly the square root of ten, down to around 1 percentage point for typical sample sizes. This is most of the argument for paying attention to averages rather than individual polls.
The catch is that the systematic errors do not cancel out by averaging. If every firm is using the same flawed turnout model, or every firm is reaching the same biased pool of online respondents, averaging just gives you a more confident wrong number. This is the polling industry's recurring nightmare, and it has played out in real elections more than once.
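Both halves of that argument can be put in one simulation. In the hypothetical sketch below, every firm samples honestly but shares the same 2-point bias, say from a common turnout model; the names and numbers are ours, chosen only for illustration. Averaging shrinks the random noise on schedule, yet the average stays about 2 points from the truth.

```python
import math
import random

random.seed(0)

TRUE_SHARE = 0.50
SHARED_BIAS = 0.02   # every firm's model overstates the share by 2 points

def one_poll(n=1000):
    """Honest random sampling, but from a systematically tilted model."""
    hits = sum(1 for _ in range(n) if random.random() < TRUE_SHARE + SHARED_BIAS)
    return hits / n

polls = [one_poll() for _ in range(10)]
average = sum(polls) / len(polls)

# Averaging 10 polls shrinks the random margin by sqrt(10)...
print(f"random margin, single poll: ±{1.96 * 0.5 / math.sqrt(1000) * 100:.1f} pts")
print(f"random margin, 10-poll avg: ±{1.96 * 0.5 / math.sqrt(10 * 1000) * 100:.1f} pts")
# ...but the shared bias passes straight through to the average.
print(f"average of 10 polls: {average:.3f}  (true share: {TRUE_SHARE})")
```

The average is a tighter estimate of the wrong number: a more confident miss, which is exactly the recurring nightmare described above.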
How to read margins like a professional
The practical heuristics are simple:
- When comparing two parties, do not use the headline margin; the margin around the gap between them is nearly twice as wide.
- Treat any movement of less than the margin between two consecutive polls as noise unless it appears across multiple polls.
- Take the polling average more seriously than any individual poll, but remember that the average can be systematically off by several points if the whole industry is making the same mistake.
- Distrust polls that report a margin of error narrower than ±3 on a sample of 1,000 or fewer. The maths does not allow it; the number is being misreported.
The point of the margin of error is not to be a precise instrument. It is a humility check: a reminder, every time you read a poll, that you are looking at a measurement with noise in it, and that the story behind a single number is almost always smaller than the headline suggests.