...when they try to explain the meaning of any before-and-after opinion polls.

The vast majority of journalists, upon receipt of a new poll, will say something like, "Before the State of the Union speech, President X's approval rating was 50%. After the speech, a new poll shows he's up to 52%; that's an impressive/disappointing (depending on the spin they're selling) change." The better journalists have had it drummed into their heads that they must add "the margin of error for the poll is +/- 3%". But they mumble it so quickly it's clear they have forgotten what it really means, if indeed they ever really knew.

A scientific, nationwide opinion poll, such as the traditional approval rating poll conducted by Gallup, Zogby, or others, is an attempt to find out what the entire country thinks about a question by asking only a very small (usually less than 1/100 of 1%!), representative sample of people. The reported numbers (50% and later, 52%) reflect how the sample (not the entire population) actually answered the question.

A margin of error of +/- 3% indicates that if we gathered answers to the poll question from the entire country, the true approval rating would probably (95% of the time) be between 47% and 53% for the first poll, or between 49% and 55% for the second. A 2% change is smaller than the margin of error and is not statistically significant. The true approval rating could have been, say, 51% for both polls. The apparent 2% change might just be a side effect of who was selected for the samples, the fact that a different investigator asked the question, different individuals' feelings about the polling process, or any number of other random factors that had nothing to do with a genuine change of opinion. In fact, the true approval rating might actually have decreased from 53% to 49% following the speech!
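Here's a minimal sketch of where that +/- 3% figure comes from, assuming a simple random sample of about 1,000 respondents and the usual normal approximation for a proportion (the 1.96 multiplier is the standard one for 95% confidence; the sample size is an assumption for illustration, not something stated in the polls above):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation and assuming a simple random sample."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

n = 1000  # a typical national poll size (assumed for illustration)
for p_hat in (0.50, 0.52):
    moe = margin_of_error(p_hat, n)
    print(f"{p_hat:.0%} +/- {moe:.1%}  ->  roughly ({p_hat - moe:.1%}, {p_hat + moe:.1%})")
```

Run it and the two intervals overlap heavily, which is exactly why the 2% "change" tells you nothing by itself.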

The poll as constructed just isn't sensitive enough to reliably detect opinion swings of less than 6%. To get a more sensitive measure you'd have to increase the sample size. The statistical concept of margin of error is hard science and not at all controversial. I should know; I was a statistics tutor in college.
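To make that concrete, here's a rough sketch of how the required sample size grows as you demand a smaller margin of error (same normal approximation as above, worst case p = 0.5; the numbers are illustrative, not from the writeup):

```python
import math

def sample_size_for_margin(moe, p_hat=0.5, z=1.96):
    """Respondents needed for a given 95% margin of error (worst case p = 0.5)."""
    return math.ceil(z**2 * p_hat * (1 - p_hat) / moe**2)

for moe in (0.03, 0.02, 0.01):
    print(f"+/-{moe:.0%} needs about {sample_size_for_margin(moe):,} respondents")
```

Halving the margin of error roughly quadruples the number of people you have to call, which is why pollsters mostly live with +/- 3%.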

In other words, it is just plain wrong to conclude that a 2% increase in the approval rating means that people were more approving of the president after the speech!

Yet time after time, journalists from reputable outfits report such insignificant changes as front page news.

Grrr...Sigh.

e-hadj's writeup is very, very good, but he makes a mathematical mistake and ignores the ever-present tracking polls.
First of all, the minimum swing detectable is actually closer to 4.5%, because you don't just add up the standard deviations; you add up their squares and take the square root. Also, even with a 4.5% swing, you're only 95% confident that there's an actual swing--a difference that large would still show up about five percent of the time purely by chance, with no real change in opinion.
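A quick check of that arithmetic, under the same assumptions as the sketches above (two independent polls of roughly 1,000 people each; with margins of exactly 3% the combined figure comes out a bit over 4%, and a little higher still if each poll's real margin is slightly more than 3%):

```python
import math

def swing_threshold(moe1, moe2):
    """Margin of error on the *difference* of two independent poll results:
    combine the individual margins in quadrature, don't just add them."""
    return math.sqrt(moe1**2 + moe2**2)

print(f"{swing_threshold(0.03, 0.03):.1%}")  # about 4.2%, not 6%
```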

Now, onto the more interesting feature of this presentation: tracking polls.
In recent Presidential elections, companies like Gallup and Zogby set up tracking polls. Unlike regular polls, which are released once a week or even less often than that, tracking poll results are released every day. Unfortunately, due to cost and other issues, polling companies do not want to have to call more than 300 or so people per day, well below the 1000 or so required for the desired 3% margin of error. So what do they do? The three-day rolling average.

The three-day rolling average is simple: instead of just publishing last night's results (opinion polls are conducted in the evening, when people are most likely to be home, and released in the morning, snap polls being a major exception), polling companies take last night's results, the results of the night before, and the results of the night before that, and take the mean of the three values. A side effect of this is that if you subtract consecutive results from a tracking poll, what you really get is one third of the difference between the newest night and the night that just dropped out of the window, three nights earlier. This means that if you want to get an accurate picture of how big an opinion shift is, you should wait at least three days for the new results to sink in. It also means that a particularly good night for one candidate can linger for three days and create "phantom shifts" in the rolling average, followed by an unreasonably large bump in the opposite direction once the "good night" drops off the rolling average.
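A toy simulation of that effect, with nightly numbers invented purely for illustration (no real polling data here):

```python
# One unusually good night (56) in an otherwise flat series of nightly results.
nightly = [50, 50, 50, 56, 50, 50, 50, 50]

# Three-day rolling average: the mean of the last three nights.
rolling = [sum(nightly[i - 2 : i + 1]) / 3 for i in range(2, len(nightly))]

for night, value in enumerate(rolling, start=3):
    print(f"night {night}: reported {value:.1f}")

# The single good night lifts three consecutive reported numbers, then the
# average drops sharply when that night falls out of the window: a "shift"
# that never happened on any single evening. Consecutive reported numbers
# differ by (newest night - dropped night) / 3.
```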

All of this is why you should never look at just one poll, but instead at a variety of polls. Sites like fivethirtyeight.com are (or rather, were, and will be next election cycle) very useful for this purpose, since they average together a collection of polls from state and national sources.
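As a rough sketch of why averaging helps, assume the polls are independent and equally sized (real aggregators like fivethirtyeight.com weight by sample size, recency, and pollster quality, which this deliberately ignores):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Averaging k independent polls of n respondents each behaves roughly like
# one poll of k * n respondents, so the margin shrinks by about sqrt(k).
n, k = 1000, 5
print(f"one poll of {n}:         +/-{margin_of_error(0.5, n):.1%}")
print(f"average of {k} such polls: +/-{margin_of_error(0.5, k * n):.1%}")
```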

In conclusion: If somebody is on cable news talking about polls, and his name isn't Nate Silver, he's probably overstating his case.
