Evaluating campaign polls: A reader’s guide
WASHINGTON – Here’s a survey: Is campaign polling 1) an art, 2) a science or 3) an act of statistical sleight of hand? The answer can be all of the above. As election season nears, polls are ramping up. It can be hard to interpret their terminology, let alone know which numbers to trust.
“All polls are not created equal,” said Lee Miringoff, the director of the Marist Institute for Public Opinion, which conducts the McClatchy-Marist poll. “And that’s unfortunate, because it all looks very scientific.”
So what should you look for? Are registered voters the same as likely voters? Can you define “margin of error”? How about “house effect”?
A good poll comes with a disclosure of how the survey was conducted. Those details are the reader’s guide to whether the poll can be trusted.
“It’s the responsibility of the reader to be smart, to be educated, to make the effort,” said Paul Freedman, a political science associate professor at the University of Virginia.
With the help of top pollsters and analysts, here’s an easy guide for wading through the coming deluge of polls about this year’s elections.
How was the sample picked?
In a reliable poll, all possible respondents have a known chance of being chosen.
Pollsters such as Gallup and Marist use a dialing service that selects randomly from a list of every phone exchange in the country and appends random digits to generate phone numbers. When someone answers, a person working for the poll asks the questions.
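The random-digit-dialing approach described above can be sketched in a few lines of Python. This is a simplified illustration: the `EXCHANGES` list here is a made-up stand-in for the frame of working area-code/exchange pairs that real sampling vendors maintain.

```python
import random

# Hypothetical stand-in for a frame of working area-code/exchange pairs.
EXCHANGES = ["202-555", "212-555", "310-555"]

def random_phone_number(rng: random.Random) -> str:
    """Pick an exchange at random, then append four random digits,
    so every number in the frame has a known chance of selection."""
    exchange = rng.choice(EXCHANGES)
    last_four = f"{rng.randrange(10000):04d}"
    return f"{exchange}-{last_four}"

rng = random.Random(0)  # seeded only so the sketch is repeatable
print(random_phone_number(rng))
```

Because every number is generated rather than drawn from a directory, unlisted numbers have the same chance of being called as listed ones, which is the point of the method.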
One pitfall of the human touch: It’s tempting for callers to skip the people who say they’re busy or answer the phone angrily, Miringoff said. But if callers don’t persevere, “you end up with a sample of friendly respondents with a lot of time on their hands,” he said.
What about cellphones?
It’s illegal to let a computer automatically dial a cellphone, and human dialers are more expensive. So automated polls, which don’t use human questioners, miss cellphones. But 30 percent of Americans now use cellphones instead of land lines, so it’s important to include them, said Frank Newport, the editor in chief of the Gallup Poll.
Who’s a likely voter?
Campaign polls usually sample either registered or likely voters.
It’s more accurate to poll likely voters, because many registered voters – as many as 50 percent – won’t cast ballots.
Firms screen for likely voters in different ways, and they rarely disclose their methods, Freedman said. Common screens include asking whether the respondent plans to vote and whether he or she voted in the past, Miringoff said.
How were the questions asked?
A few words can make a huge difference. “Do you usually vote Democratic?” is more likely to generate “yes” responses than “Do you usually vote Democratic, or not?” When reporting results, Gallup includes question wording “down to the comma,” Newport said.
The order of questions is equally important, Miringoff said.
Respondents will give the president lower approval ratings if they’ve just answered a string of questions about the tanking economy than they will if the approval rating comes first.
How was the data weighted?
Even if a poll’s sample is chosen randomly, its demographics almost never match those of the population being surveyed. How do firms compensate if one group is over- or underrepresented?
Each firm has its own formula for adjusting – weighting – the numbers.
If a poll ends up with 400 men and 200 women, for example, the firm might multiply each woman’s response by 2 before calculating an average. The firm also might assume that half as many women as men plan to vote and leave the numbers alone.
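Using the article’s 400-men, 200-women example, the weighting adjustment can be shown with a toy calculation. The support percentages below are invented purely for illustration; they are not from any actual poll.

```python
# Toy poll: 400 men and 200 women, with made-up candidate-support rates.
men = {"count": 400, "support": 0.45}    # 45% of sampled men back the candidate
women = {"count": 200, "support": 0.60}  # 60% of sampled women back the candidate

# Unweighted: women are only a third of the sample.
unweighted = (men["count"] * men["support"]
              + women["count"] * women["support"]) / 600

# Weighted: count each woman's response twice, so the sexes balance 50/50.
weighted = (men["count"] * men["support"]
            + 2 * women["count"] * women["support"]) / 800

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# → unweighted: 50.0%, weighted: 52.5%
```

With these invented numbers, doubling each woman’s response moves the candidate’s overall support from 50.0 percent to 52.5 percent, which is why two firms with the same raw interviews can publish different toplines.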
Is that bias?
Differences in technique can contribute to a firm’s “house effect,” or tendency to produce results that are skewed in a certain political direction.
What’s the margin of error?
The margin of error can mean the difference between a candidate being ahead or behind. If 51 percent of poll respondents say they’ll vote for Mitt Romney and the margin of error is 3 percentage points, pollsters are saying they’re confident, typically at a 95 percent level, that Romney’s support in the full population falls somewhere between 48 and 54 percent.
The formula for calculating the margin of error is driven mainly by the number of people surveyed; once the population is large, its exact size barely matters. It’s especially important to understand this when looking at a sample that’s broken into categories such as age or race, Freedman said. Because each subset contains fewer people, its margin of error will be larger than the margin of error for the total sample.
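The textbook 95 percent margin-of-error formula, 1.96 times the square root of p(1 - p)/n, makes the subgroup point concrete. This is the standard approximation for a simple random sample, not any particular firm’s method, and it uses the worst case p = 0.5.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points,
    for a simple random sample of size n at proportion p."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"full sample of 1,000: +/- {margin_of_error(1000):.1f} points")
print(f"subgroup of 250:      +/- {margin_of_error(250):.1f} points")
# → full sample of 1,000: +/- 3.1 points
# → subgroup of 250:      +/- 6.2 points
```

A poll of 1,000 people carries a margin of about 3 points, but a 250-person subset of that same poll, say, voters under 30, carries a margin roughly twice as wide.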
Ultimately, top pollsters remind readers that surveys are “a snapshot in time.” The best way to view polls is as a tool for understanding what voters are thinking right now, Miringoff said. They show how people are reacting to a candidate, not how they’ll behave on Election Day.