There was a murmur rumbling through American politics over the past week. Press an ear to the wall of a newsroom or eavesdrop on campaign consultants grabbing a sandwich and you heard it: Who knows? Who knows what’s going to happen in the midterms? Could be a blowout, could be a wash, could be a split Congress, could be a landslide, could be anything.
There were voices of certainty, of course. Activists and party representatives were confident of their positions, as activists and partisans usually are. Some of them were convincing, some of them were not. But they were not all pulling in the same direction anyway, leaving the same quiet hum. Who knows?
Now we know — and we know that few people before Election Day can reasonably be credited with saying they knew what would happen. Speaking for myself, I can say that I did not: I expected Republicans to fare better than they appear to have fared. I’ve been asking myself why I had that expectation. And I’ve landed on a few contributing explanations.
Let’s begin where all of this uncertainty could have ended — with the polling.
This is a bold claim to make, one that invites endless cherry-picking and challenges. So let’s consider what the polls said.
FiveThirtyEight is generally considered to be the gold standard in poll aggregation, generating polling estimates that either simply average available polling or weight polls by pollster quality and fold in other factors, like fundraising. The former is the essence of its “lite” model; the latter, its “deluxe” version. So how did the estimates on Election Day compare with the current margins in the Senate?
They compared well. Notice below how, for any race, the orange circles (FiveThirtyEight’s estimates) and the purple circle (the current margin) are generally clustered together. There are places where there’s a gap, but those are usually in less-close contests where there tends to be less polling.
Across all of these contests, including those where the results weren’t very close, the “deluxe” model was closer to the actual margins than the simple average was. On average, it was about five points off; the median deviation was about four points. Polling is not good at predicting the winner in a close contest, but, even so, the “deluxe” model gave the advantage to the trailing candidate in only two Senate races: Georgia, which is headed to a runoff, and Pennsylvania.
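For readers curious how those error figures are calculated, the arithmetic is straightforward: take the gap between each race’s polling estimate and its final margin, then average (or take the median of) those gaps. The sketch below uses invented races and invented numbers, not FiveThirtyEight’s actual estimates or the real 2022 margins.

```python
# How mean and median polling error are computed.
# Races, estimates and results below are invented placeholders,
# not FiveThirtyEight's numbers or the actual 2022 margins.
from statistics import mean, median

# margin = Democratic share minus Republican share, in points
estimates = {"Race A": 4.0, "Race B": -8.0, "Race C": 1.5, "Race D": -12.0}
results   = {"Race A": 5.0, "Race B": -3.0, "Race C": -0.5, "Race D": -15.0}

# absolute deviation between the estimate and the final margin, per race
errors = [abs(estimates[r] - results[r]) for r in estimates]

print(f"mean error:   {mean(errors):.2f} points")
print(f"median error: {median(errors):.2f} points")
```

Note that the median is less sensitive than the mean to one badly missed race, which is why analysts often report both.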
It wasn’t just the Senate polling, of course. We don’t yet know how many votes Democrats and Republicans received across all of the House contests, but it seems likely that it will land somewhere near the generic-ballot polling average. The Washington Post and our partners at ABC News released a poll Sunday that gave the GOP a narrow advantage in the House — which appears to be what ended up manifesting.
How do accurate polls result in a surprising election result? Because the elections in 2016 and 2020 have instilled a near-expectation that the polls are going to be off — and presumably to the GOP’s advantage.
Again, they don’t appear to have been. Results aren’t final! Things can change! But it seems unlikely at this point that the results are going to wind up revealing a massive, structural problem with the polls — much to the relief of pollsters.
It’s important to recognize that I’m cheating a bit here. FiveThirtyEight is assiduous about how it uses and assesses polling. Individual polls were often wrong, occasionally laughably. This is one reason that averaging polls yields better results in the first place: It eliminates much of the peril involved in cherry-picking.
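The arithmetic behind that point is simple: individual polls of the same race scatter around the truth, and averaging cancels much of that scatter, leaving less room for cherry-picking the outlier you like. A minimal sketch, with all numbers invented for illustration:

```python
# Why averaging beats cherry-picking: individual polls of one race
# scatter widely, but their average lands closer to the result.
# The margin and poll numbers here are invented for illustration.
from statistics import mean

true_margin = 1.0                     # hypothetical final margin, in points
polls = [5.0, -3.0, 2.0, -1.0, 4.0]  # five hypothetical polls of the race

single_poll_errors = [abs(p - true_margin) for p in polls]
average_error = abs(mean(polls) - true_margin)

print(f"worst single poll missed by {max(single_poll_errors):.1f} points")
print(f"the average missed by {average_error:.1f} points")
```

A cherry-picker can always find the five-point outlier; the average, by construction, can’t stray that far unless the polls share a common bias.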
But there were also systematic efforts this year to adjust polls for an expected pro-Democratic bias.
Those of us who’ve been around for a while saw this precisely one decade ago. Then, polling showing President Barack Obama faring well was “unskewed” to better reflect the expected strength of Mitt Romney. That strength didn’t materialize, and “unskewing” was revealed to be a bad idea — at least for a while.
RealClearPolitics, which has traditionally offered a straightforward average of polls, introduced an adjusted average meant to correct for a perceived anti-GOP skew. Perhaps because of that expectation that polls were overestimating Democrats, Tom Bevan, a co-founder of the site, publicly challenged the Economist’s G. Elliott Morris’s assessment that the Senate was a true toss-up. Which, of course, it was.
Not all skepticism of the polls was born of the idea that they were necessarily advantaging Democrats unfairly. There was also the simple fact that this was a weird election cycle, one buffeted by unprecedented concerns about the stability of the democratic system, by the unusual involvement of a former president, by the Supreme Court’s decision overturning Roe v. Wade. Here it was hard to separate the rhetoric from the reality: Were people for whom abortion was a central motivating factor in their vote accurately reflecting a quiet surge in turnout that polling might not capture? Was polling, admittedly hobbled by difficulty in reaching voters, just going to miss different people this year? Were things like the abortion ballot initiative in Kansas somehow reflective of the overall electorate?
As these unanswerable questions swirled, confidence in the fallibility of polls led to a different sort of clamor.
Over the past month or two, Republicans and conservative commentators grew increasingly confident of their party’s chances in the federal races. They were generally more skeptical of polling than others, given 2016 and 2020, so it was easier for them to get a sense that things were going their way in a manner that wasn’t captured by those polls.
At the same time, polls were showing movement back to the GOP. During the late summer, the movement was consistently to the left. But that began to reverse in October (thanks in part to a spate of surveys from Republican-sympathetic pollsters), seemingly auguring Republican momentum for the midterms.
To some extent, Republicans and right-wing commentators got caught up in enthusiasm about an expected trouncing. Predicting Democratic doom is always good for attention on the right, and attention is the coin of the modern realm. Republicans were confident in private but effusive in public.
Democrats, meanwhile, were still shellshocked from 2016. (Don’t believe me? Sneak up behind a Democrat and whisper “Did you see what the needle did?” in their ear.) They, too, were ready to believe that things were going to go better for Republicans than the polls indicated. Yet it was often the case that the predictions of dominance were unmoored from any evidence and simply reflections of personal desire, the right’s driving energy or both.
In exit polling, two-thirds of respondents reported having made up their minds about how they would vote more than a month ago. But about 1 in 8 said they made up their minds within the past week. That question also doesn’t get at an important factor that’s hinted at above — when people decided to go vote at all.
Elections are often driven by partisans who vote all the time and always vote on the party line. But the results often come down to people who made up their minds late or chose to vote late — or chose not to vote. So we can’t rule out that factors that emerged late in the election had an effect. Did President Biden’s speech about preserving democracy coupled with the attack on Paul Pelosi spur more Democratic turnout? Did falling gas prices prompt people who were furious about the issue a few months ago to not bother to turn out on Election Day?
These things are hard to measure, which is one reason it’s unfair to assume that they had a definitive effect on the outcome. (The other reason it’s unfair is that it’s a cop-out for those of us who were surprised by the outcome. I wish I could just blame a last-minute shift!) But they are nonetheless real, and a real reason polls can deviate from actual election outcomes.
Speaking of cop-outs …
The Post runs a computer model to help pick out trends in the election as votes are being counted. When we first ran it on Tuesday night, though, we noticed it was underestimating how well Democrats were actually doing.
Why? Because some of the first results it was given for its analysis came from Florida, a state that went particularly badly for Democrats. Elsewhere, the picture was mixed: in New York, things went badly for the party; in Pennsylvania, they went well.
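The failure mode here is ordinary sampling bias: a running estimate seeded mostly with returns from one unrepresentative state will tilt toward that state until the rest of the country reports. A toy sketch, with states and margins invented rather than drawn from the actual 2022 count:

```python
# Toy illustration of early-returns bias: if the first votes counted
# come overwhelmingly from one atypical state, a naive vote-weighted
# estimate of the overall margin tilts toward that state.
# States, margins and vote totals below are invented.

# (state, final Democratic margin in points, votes counted so far)
early_returns = [("State X", -18.0, 900_000),  # reports fast, very red
                 ("State Y",   4.0,  50_000),  # barely reported yet
                 ("State Z",   6.0,  50_000)]

def running_margin(returns):
    """Vote-weighted average margin across whatever has reported."""
    total = sum(votes for _, _, votes in returns)
    return sum(margin * votes for _, margin, votes in returns) / total

print(f"early estimate: {running_margin(early_returns):+.1f} points")

# Once every state is fully counted (equal electorates, for simplicity),
# the same calculation lands somewhere very different.
full_count = [(s, m, 1_000_000) for s, m, _ in early_returns]
print(f"final margin:   {running_margin(full_count):+.1f} points")
```

The estimator itself is fine; the early sample feeding it is not, which is why a model seeded with Florida looked too rosy for Republicans.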
This was an election in which the head of the Democratic Party’s House election effort, Rep. Sean Patrick Maloney (D-N.Y.), and a right-wing firebrand, Rep. Lauren Boebert (R-Colo.), were at risk of losing their seats. (Maloney, in fact, did.) That’s not a pattern that would emerge from a consistent national movement to the left or the right.
History suggested that Democrats would lose the House by a wide margin, as I wrote Tuesday afternoon. Biden’s low approval rating combined with the past pattern of new presidents seeing their party take a hit in the first midterms suggested a lot of damage was coming. But then it didn’t. The extent of the likely (but not certain!) Republican majority in the House is still unclear, but it will probably end up being one of the better first-midterm-election results for a Democrat in the past 75 years.
Other past patterns similarly collapsed, like independents breaking against the president’s party. (Exit polls suggest they split about evenly between Democrats and Republicans.) It was just a weird, uneven year with weird, uneven results.
Yet all along, there were those polls showing what the likely result would be. The measure specifically designed to evaluate the level of support in the election did so with aplomb. Yet, because it is 2022, a year landing in a string of tumultuous, uncertain, expectation-breaking years, the simple message that the election was close often seemed hard to take at face value.
And lo: a result that was both in line with expectations — and a surprise.