This column reflects the opinion of the writer.

Shawn Vestal: Bad polls, taken seriously, added up to a red herring in the Senate race

Sen. Patty Murray addresses gathered supporters at the Democratic party on Tuesday, Nov. 8, 2022 in Bellevue, Wash.  (Jennifer Buchanan/The Seattle Times)

It became the main narrative of the Senate race: The upstart Tiffany Smiley was catching up to Patty Murray.

Nipping right at her heels! Stubbornly close! Red wave, baby!

In early October, a poll commissioned by KHQ-TV put Murray up by just 5 percentage points – practically a dead heat, given the 4.4% margin of error, and a shockingly slim lead in deep blue Washington.
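The "dead heat" framing rests on simple arithmetic: the margin of error applies to each candidate's share, so the gap between two candidates can be off by up to roughly twice the stated margin. A minimal sketch of that check (the function name and threshold logic here are illustrative, not from any pollster's methodology):

```python
# Illustrative only: a lead is within the "statistical dead heat" zone when
# it is smaller than twice the poll's margin of error, since each
# candidate's share can be off by up to the margin in either direction.
def lead_within_overlap(lead_pts: float, margin_pts: float) -> bool:
    return lead_pts <= 2 * margin_pts

print(lead_within_overlap(5.0, 4.4))   # a 5-point lead, ±4.4 margin
print(lead_within_overlap(14.0, 4.4))  # the eventual 14-point result
```

By this rough rule, the KHQ poll's 5-point lead could plausibly be read as a toss-up, while the actual 14-point result could not.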

That came on the heels of other polling that had shown the race tightening, but with Murray still ahead by 8% or 9%, and it kicked off a period of highly dubious polls that looked better and better for Smiley.

A Trafalgar Group poll put Murray up 49.4% to 48.2%. It was not the first Trafalgar poll to show such a dead heat – one of them made it into a George Will column.

Then, in early November, the Moore Information Group released a poll that actually showed Smiley ahead – a result amplified by Politico, which called the race "stubbornly close."

If this all felt very hard to swallow in a state with such a strong blue drift, it was nevertheless amplified by media coverage, often uncritically, and picked up with great enthusiasm by right-wing news sources doing their best to sell the red wave.

As one conservative website put it, "the five-term senator has been losing ground to her challenger throughout the season."

Many a news organization in Washington state – even those that were wise enough to avoid the dubious late polling – told a similar story of Murray steadily losing ground. But as we now see, that story was more than a little wrong.

As of late Tuesday afternoon, Murray had a 14-point lead and her share of the vote was 57% – compared to 52% in the primary. Even if a little red wavelet occurs in future tallies, it will not bring us anywhere near the result predicted by these polls and the "losing-ground" narrative.

Murray is outperforming even the predictions of the good polls – the website FiveThirtyEight, which aggregates polls and weights them for their quality, had her most recently at a 5% advantage, a figure heavily influenced by four late Republican-funded surveys – and Washington’s “partisan lean” of 12.4%.

It’s a particularly vivid reminder that we should all be skeptical consumers of polls and be especially wary of dramatic outliers and partisan wish-fulfillment surveys. Pay attention to sample sizes, poll methods, the context of other polling, and the pollster’s track record.

“I think you have to educate yourself a little about how you read the polls,” said Cornell Clayton, the director of the Thomas S. Foley Institute of Public Policy and Public Service at Washington State University.

And journalists must be vigilant about how – and often whether – to report on dubious polling, and, beyond that, about the accuracy of the "story" produced by it.

Clayton said people should pay more attention to reliable polling aggregators like FiveThirtyEight – and noted that while there were certainly bad polls in this election, the better polling, and the aggregates, tended to be pretty accurate overall.

Almost all the polls underestimated Democratic turnout to some degree, and typical midterm dynamics were jumbled up by some of the unique elements of this vote that eroded the normal advantages for the out-of-power party: Dobbs and concern about the Supreme Court fired up Democrats and women voters in particular, and the continued presence of Donald Trump on the national stage turned what would ordinarily be a simple referendum on Biden into a choice.

“This midterm was just historically unusual,” he said.

Polling errors are nothing new, but the current era of surveys is complicated by the fact that pollsters have to make decisions about how to weight different factors – because no method of trying to capture an electorate is as simple and single-channel as in the old days of land-line phones. This makes it harder, sometimes, to tell how a pollster came up with their result, and it makes it easier to stack the deck.

Couple that with highly partisan media outlets carrying water for the polls – and the absorption of that narrative into mainstream reporting – and the picture becomes highly susceptible to bad information. Clayton described one difference between the FiveThirtyEight aggregate model and the one used by Real Clear Politics, which is being pounded with criticism over its credulous forecasts of a red wave.

FiveThirtyEight aggregates all polls, and weights them for the quality of their methods. Its final average of a 5% lead for Murray was heavily influenced by polls that now look ridiculous in retrospect, and were highly partisan in nature. Still, you can look at the FiveThirtyEight aggregate and see how it evaluates each poll – and see the reasons to be dubious about some of them.
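The mechanics of quality weighting can be sketched in a few lines. The numbers and pollster categories below are invented for illustration and are not FiveThirtyEight's actual model or ratings; the point is only that down-weighting low-quality or partisan surveys pulls the average back toward the better polls:

```python
# Hypothetical quality-weighted poll average. Leads are in percentage
# points (positive = Murray ahead); weights are invented quality scores.
polls = [
    {"murray_lead": 9.0,  "quality_weight": 1.0},  # well-rated pollster
    {"murray_lead": 5.0,  "quality_weight": 0.8},
    {"murray_lead": 1.2,  "quality_weight": 0.3},  # partisan outlier, down-weighted
    {"murray_lead": -2.0, "quality_weight": 0.2},  # partisan outlier, down-weighted
]

total_weight = sum(p["quality_weight"] for p in polls)
weighted_lead = sum(p["murray_lead"] * p["quality_weight"] for p in polls) / total_weight
print(round(weighted_lead, 1))
```

Even heavily down-weighted, a cluster of late partisan polls still drags the average – which is consistent with the column's point that the final 5% aggregate understated Murray's actual margin.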

The RCP forecast Murray-Smiley as a toss-up, with an average Murray lead of 3%. That average was built on just the final three polls of the season – two pollsters with mediocre grades on FiveThirtyEight and one that is overtly partisan, American Greatness.

The lesson there – and for all of us, as we construct our pre-election narratives – is an oldie but a goodie: Garbage in, garbage out.
