So, how did the polls do in 2024? It’s complicated
By one important measure, they performed better than ever. By another, they had their worst year yet.
We understand it might not seem like the most important thing at the moment. But a big part of our mission here at Silver Bulletin is evaluating public opinion — and that mostly means polls. Even over the short run, shifting public opinion in their favor is one of Democrats’ best tools for resisting President Trump — the more unpopular he becomes, the more electoral consequences Republicans will face, starting with a series of special elections beginning on April 1 and then in next year’s midterms.
But just how reliable are the polls? Well, it depends on which polls you’re talking about. We’ve updated our pollster ratings with the results of the 2024 general election, and there are some big changes, starting with the top of the list. To accompany that, we’re giving you one of our periodic report cards, a tradition that dates back to the FiveThirtyEight days. And the verdict is decidedly mixed. There’s a glass-half-full view and a glass-half-empty one.
No matter what happened, 2024 was going to be a big year for the polls. In the previous two presidential elections — where, of course, Trump was also the Republican presidential candidate — polling error was larger than average, and the polls considerably underestimated him. Polls in the 2018 and 2022 midterms did much better on both fronts, but anxiety about another high-profile polling miss still ran high among both pollsters and pundits.
In the run-up to 2024, pollsters put in a lot of work to avoid underestimating Trump for the third time in a row. And the takes came in hard and fast. Were Republicans flooding the zone with polls that were too positive for Trump? Or were the polls still too bullish on Kamala Harris?
The short answer is that the polls were biased again — but not to the same extent that they overestimated Joe Biden in 2020. Nor, with no clear favorite heading into Election Day, was the outcome anywhere near as much of a surprise as in 2016 (unless perhaps you were consuming too much “hopium”). By some measures, in fact, 2024 was a considerably above-average year. The polls weren’t that far off the mark. The problem was that nearly all of them erred in the same direction, again underestimating Trump and other Republicans. So, let’s look at the various metrics that are part of our report card.
By one measure, the polls did great — but this isn’t the whole story
First up, we have arguably our most important chart: polling error. By error, we mean the difference between a poll’s margin and the actual margin of the election between the top two candidates. Error is a normal and expected part of the polling process; the average error for polls in our database since 1998, which covers polls in the final three weeks before general elections for president, governor, and Congress — and presidential primaries — is 5.9 points.[1]
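For readers who want the mechanics, here is a minimal sketch of the error calculation in Python, using invented margins rather than anything from our database:

```python
# A sketch of the error metric: the absolute difference between a poll's
# top-two margin and the actual margin. All numbers are hypothetical,
# not drawn from the Silver Bulletin database.

polls = [
    # (poll margin, actual margin) in points; positive = Democrat ahead
    (+1.0, -2.3),  # poll had the Democrat up 1; the Republican won by 2.3
    (-3.0, -1.5),
    (+0.5, +4.0),
]

errors = [abs(poll - actual) for poll, actual in polls]
avg_error = sum(errors) / len(errors)
print(f"Average absolute error: {avg_error:.1f} points")
```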
But some years are better than others, and in 2023-2024,[2] the average polling error was, at only 4.5 points, lower than in any cycle in our database. The 3.4-point error in the presidential general election is the second-lowest in our database, and the 4.1-point average polling error in Senate races this cycle is the lowest in our records. In fact, polls for every type of race in 2024 — except for the presidential primaries — had lower-than-average error.
These are seemingly impressive numbers — polling error was about 2 points lower on average this cycle than in the previous two presidential cycles. But it’s something of a Pyrrhic victory for pollsters, both because of systematic polling bias — averaging doesn’t help much when almost all the polls are off in the same direction — and because of some technical factors.
Part of what’s driving this stellar performance is the dearth of presidential primary polls. The 2024 Republican primary was short-lived and noncompetitive, and the Democratic primary was basically nonexistent.[3] But primary polls also tend to have the largest polling error — and it’s not particularly close[4] — so their relative absence this cycle helped pull down the average error.
Also, some of the strong performance by this metric reflects pollster herding. Pollsters — J. Ann Selzer aside — have become increasingly reluctant to publish outliers. Herding may yield stronger performance for individual polling firms but reduces the benefit of averaging or aggregation since the polls are all basically telling you the same thing.
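To see why herding blunts the value of aggregation, consider a toy simulation in which each new poll is shaded toward the running consensus. This is purely illustrative, with made-up parameters; it isn’t a model of any actual pollster’s behavior:

```python
import random

# Toy illustration of herding: if pollsters shade their results toward
# the consensus, averaging many polls buys little extra accuracy,
# because the polls stop being independent draws.

truth = 2.0    # true margin, in points
noise = 3.0    # each poll's own sampling/modeling error
trials = 20_000

def avg_error_of_poll_mean(herd: float) -> float:
    """Average error of a 10-poll mean when each poll is pulled
    `herd` fraction of the way toward the running consensus."""
    total = 0.0
    for _ in range(trials):
        polls = []
        for _ in range(10):
            raw = truth + random.gauss(0, noise)
            consensus = sum(polls) / len(polls) if polls else raw
            polls.append(raw * (1 - herd) + consensus * herd)
        total += abs(sum(polls) / len(polls) - truth)
    return total / trials

print(f"independent polls: {avg_error_of_poll_mean(0.0):.2f} points off")
print(f"herded polls:      {avg_error_of_poll_mean(0.8):.2f} points off")
```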
Still, on balance, the polls did really well by this metric, and we don’t want to take that away from them. Coupled with their strong performance in 2022, this at least does not suggest an industry in crisis. But it does mean that it’s more important than ever to account for the possibility of a systematic polling error. Probabilistic election models do exactly that, which is one advantage they have over simple polling averages — in fact, Trump sweeping all seven closely contested swing states was the most likely outcome in our model.[5]
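Here is a toy Monte Carlo sketch of that idea, showing how a shared error term makes a seven-state sweep far more likely than independent state-by-state randomness would suggest. The margins and error parameters are invented, and this is not how our actual model works under the hood:

```python
import random

# Toy Monte Carlo illustrating correlated polling error. A single shared
# "bias" draw shifts every state the same way, so one candidate sweeping
# all seven swing states happens far more often than independent coin
# flips would suggest. Margins here are illustrative, not model inputs.

state_margins = {  # hypothetical Dem-minus-Rep polling margins, in points
    "AZ": -2.0, "GA": -1.0, "MI": +1.0, "NV": 0.0,
    "NC": -1.5, "PA": +0.5, "WI": +1.0,
}

sweeps = 0
trials = 100_000
for _ in range(trials):
    bias = random.gauss(0, 2.5)  # shared error, hitting every state alike
    results = [m + bias + random.gauss(0, 2.0) for m in state_margins.values()]
    if all(r < 0 for r in results) or all(r > 0 for r in results):
        sweeps += 1

print(f"P(one side sweeps all seven): {sweeps / trials:.0%}")
```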
Now for the bad news…
Where things look worse is statistical bias: the extent to which polls miss in the same direction and consistently underestimate Democrats or Republicans. Historically, the direction of bias has been unpredictable cycle-to-cycle. For example, you’d see polls underestimate Democrats one year (say, 2012) and Republicans the next (2014 and 2016), with no clear year-to-year pattern.
But the polls have underestimated Republicans across the past three presidential elections. In 2016, the polls had a 3-point average bias toward Democrats. In 2020, bias was D+4.7 points. And this year, it was smaller but still substantial: D+2.9. That bias was also highly consistent across different types of races — with about 3 points of Democratic bias in presidential, Senate, and House polls, and 2 points of bias in gubernatorial polls. With less and less ticket-splitting in presidential years, that’s what you’d expect.
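The distinction between error and bias is easy to express in code; here is a sketch with made-up misses:

```python
# Sketch of the difference between error and bias, with invented numbers:
# error averages the magnitude of each miss, bias averages the signed miss.
# Positive values mean the poll overestimated the Democrat.

misses = [+3.1, +2.4, +3.3, -0.5, +2.9]  # poll margin minus actual margin

avg_error = sum(abs(m) for m in misses) / len(misses)
avg_bias = sum(misses) / len(misses)

print(f"average error: {avg_error:.1f} points")
print(f"average bias:  D+{avg_bias:.1f}")  # close to the error, because
                                           # nearly all misses point one way
```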
The polls underestimating Trump again is concerning, especially because of how much work pollsters put into avoiding this exact result. There’s certainly a lot of blue (Democratic bias) in recent years in that chart. It’s also not entirely clear what’s driving the problem. It could, for example, be a Trump-specific issue. Democratic polling bias was much lower in the 2022 midterms, and polls had a slight Republican bias in 2018, when Trump was off the ballot. If Trump is the issue, the polls in 2026 and 2028 might not suffer from repeated Democratic bias. But the pollsters we’ve spoken with have mixed feelings about this.
Finally, we can look at how often polls “call” elections correctly — simply meaning that the candidate ahead in the poll actually won. Now, we’ve never thought of this as a great way to assess how accurate the polls are. For instance, if the Selzer poll had shown Trump ahead of Harris by only 1 point in Iowa — Trump actually won by 13 — we’d argue that still counts as a bad result even though it “called” the winner, whereas if Harris had prevailed by 1 point (within the poll’s margin of error), it would have been a pretty good one despite the incorrect call. Still, this measure can be used as a sanity check.
Across all cycles since 1998, polls have predicted the correct winner 78 percent of the time. In 2024, call accuracy was only 70 percent — the lowest in our database. Most of this inaccuracy was driven by presidential general election polls, which called only 63 percent of races correctly — many national and swing state polls had Harris ahead — their worst performance since 1998. House polls also weren’t great this cycle, with only a 66 percent call accuracy — well below average for that race type. (Many of these are generic ballot polls; Republicans won the popular vote for the House while some polls showed Democrats leading.) In contrast, Senate polls had pretty average call accuracy this cycle, and gubernatorial and presidential primary polls did much better than average by this metric.
Now, this difference isn’t an indicator that the 2024 presidential primary polls were amazing while general election presidential polls were horribly flawed. Instead, it’s a function of the races being polled in each case. The Republican primaries were (to understate things) not exactly competitive. On the other hand, general election polls showed a neck-and-neck race right up to November 5. Furthermore, polling firms are becoming more efficient at deploying their (scarce) resources: in a highly polarized political environment, they have a pretty good idea of which states are competitive. There’s not much appetite for wasting money on an Alabama or Vermont poll when you could poll Georgia instead, for instance.
Why does this matter? Because poll-based calls get a lot more accurate when the margins get wider.
Historically, polls showing leads of less than 3 points call only 56 percent of races correctly — not much better than a coin flip. Once the margin is between 3 and 6 points, they get up to almost 70 percent accuracy. A 6-to-10-point margin? 86 percent accuracy. And for completely noncompetitive races, call accuracy quickly approaches 100 percent.
In other words, the high call accuracy in 2024 primary polls is completely expected, given Trump’s commanding lead throughout the race. On the other hand, the very close and often-tied general election polls between Trump and Harris simply weren’t able to consistently call the race correctly. When the polls are that close, their main utility is to tell you that the race is uncertain and that things could break either way.
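If you wanted to compute call accuracy by margin bucket yourself, the logic looks something like this (the polls here are invented to mirror the historical pattern, not pulled from our database):

```python
# Sketch of call accuracy bucketed by polled margin. A "correct call"
# just means the poll's leader actually won. Data are hypothetical.

polls = [
    # (polled margin, actual margin); same sign = correct call
    (+1.2, -0.8), (+2.5, +1.0), (-4.0, -6.0), (+8.0, +5.5), (+15.0, +12.0),
]

buckets = {"<3": [], "3-6": [], "6-10": [], ">=10": []}
for polled, actual in polls:
    lead = abs(polled)
    key = "<3" if lead < 3 else "3-6" if lead < 6 else "6-10" if lead < 10 else ">=10"
    buckets[key].append(polled * actual > 0)  # True if poll called the winner

for key, calls in buckets.items():
    if calls:
        print(f"lead of {key} points: {sum(calls) / len(calls):.0%} correct")
```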
So while we’re forgiving of pollsters for their poor score by this metric, we just can’t emphasize this point enough to journalists and other polling consumers: please don’t treat leads of less than 3 points as telling you very much at all. Polls simply aren’t precise enough — they never have been, and they never will be. For all intents and purposes, such races should be treated as toss-ups.
Which polling firms get the highest grades?
And that’s about it for our aggregate review of polling performance in 2024. But we can also look at how individual pollsters performed this year. You shouldn’t put too much stock in a pollster’s performance in a single cycle — our pollster ratings, which account for all of their data back to 1998, are better for judging overall quality — but 2024 shifted our ratings more than most cycles do.
This table shows simple average error, bias, and Advanced Plus-Minus — basically, how the pollster’s average error compares to other pollsters’ in the same election, with various other statistical adjustments that account for which races are harder and easier to poll — for pollsters who conducted at least five polls in the 2023-2024 cycle. Negative plus-minus scores are better here; they indicate that the pollster was more accurate than others once accounting for these factors.
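Stripped of those adjustments, the core plus-minus idea is simple: compare a pollster’s error to the average error of all pollsters who surveyed the same race. Here is a simplified sketch with hypothetical data — the real Advanced Plus-Minus adds further statistical adjustments that this toy version omits:

```python
from collections import defaultdict

# Simplified plus-minus: each pollster's error in a race is compared to
# the average error of all pollsters in that same race, then averaged
# across races. Negative = more accurate than peers. Data are invented.

rows = [
    # (pollster, race, absolute error in points)
    ("Firm A", "PA-Pres", 1.0), ("Firm B", "PA-Pres", 3.0),
    ("Firm A", "WI-Sen", 2.0), ("Firm B", "WI-Sen", 2.0),
]

errors_by_race = defaultdict(list)
for _, race, err in rows:
    errors_by_race[race].append(err)
race_avg = {race: sum(v) / len(v) for race, v in errors_by_race.items()}

scores = defaultdict(list)
for firm, race, err in rows:
    scores[firm].append(err - race_avg[race])  # relative to same-race peers

for firm, diffs in scores.items():
    print(f"{firm}: plus-minus {sum(diffs) / len(diffs):+.1f}")
```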
This year, many of the usual suspects like The New York Times/Siena College and Selzer are missing from the top of our list. Instead, based on simple average error, the top performers of 2024 were OnMessage Inc. (1.2 points of error), AtlasIntel (1.5 points), and Patriot Polling (1.5 points). These firms consistently had Trump leading nationally and in all or most of the battleground states in the weeks leading up to Election Day. And these excellent performances have shifted our pollster ratings: AtlasIntel’s rating has gone from an A to an A+ — and it is now the highest-rated pollster in our database — while OnMessage moved from a B/C to an A.
Mitchell Research, Echelon Insights, TIPP Insights, and The Washington Post also performed well based on polling error, all with 2 points of error on average. All of these firms also score well on Advanced Plus-Minus, but Public Policy Polling and Saint Anselm College top the charts by that measure, although with small sample sizes.
In terms of bias, almost every pollster on this list overestimated Democrats to some extent in 2024. OnMessage Inc. and AtlasIntel had the lowest bias out of all pollsters who conducted at least five polls, overestimating Democrats by only 0.1 points on average — about as good as it gets. Blueprint Polling, Public Policy Polling, Harris Insights & Analytics, and J.L. Partners were the only pollsters on the list who overestimated Republican candidates on average.
By far the pollster with the largest average error in the 2023-2024 cycle was Selzer, at 12.8 points. Ann Selzer’s firm also had the largest bias — D+14.8 to be precise. Much of this was driven by Selzer’s outlier poll in Iowa that showed Kamala Harris winning the state, but the firm also considerably overestimated Democrats’ performance in House races.[6]
Selzer, as you may know, is being sued by Trump, an action we think is deplorable on free speech grounds and one that fails to appreciate the inherent difficulties of polling. Despite her travails in 2024 — and the fact that our pollster ratings weight recent years more heavily — the firm still has a B+ rating, indicating above-average performance over the long run. Selzer has often proven her doubters wrong, although now she’s moving on to other ventures. But we can’t sugarcoat the fact that this time, her polls were an outlier in the wrong direction.
Plenty of other pollsters, like Siena College, SurveyUSA, and YouGov, had more normal-looking but still large biases of about 5 points toward Democratic candidates. With Trump beating his polls again, firms like YouGov with Democratic-leaning house effects performed worse, while those with Republican-leaning house effects did better.
The takeaway from these individual performance ratings is that while it’s important to consider historical track records when evaluating polls, this can very much be overdone, and it’s a bad idea to treat one poll as an oracular gold standard. Pollsters that have been very accurate in previous elections can have bad cycles. And relatively newer firms can end up outperforming everyone else.
If this still wasn’t enough detail for you, you can visit our pollster ratings page for even more information about each pollster and to download the data behind our ratings. Our pollster ratings and post-election poll evaluations will always remain completely free in order to facilitate a better understanding of public opinion. But we still very much appreciate your support in the form of free (or paid) subscriptions.
[1] To avoid giving prolific polling firms too much influence, polls in all of the charts in this article are weighted by one over the square root of the number of polls that their pollster conducted for that particular type of election in that particular cycle. A firm with nine qualifying polls, for example, gets a weight of 1/3 on each of them, so collectively they count as much as three polls from three different single-poll firms.
[2] We group odd-numbered years (e.g., 2023) together with even-numbered ones (2024) because there aren’t many qualifying polls in odd-numbered years — just off-cycle gubernatorial races and special elections.
[3] In fact, because there were so few polls for the Democratic primary — and basically none of them followed the various rules we apply to primary polls — there are no 2024 Democratic primary polls in our database anyway.
[4] Why is this? Basically, there are two reasons. First, presidential primaries usually feature multiple candidates. When voters have several choices they like, it doesn’t take much to have them switch preferences at the last minute. They may also vote tactically — gravitating toward candidates who seem likely to finish strong. Second, presidential primaries feature much lower turnout than general elections. So pollsters may mispredict not only which candidate voters will choose but also how many of them will vote in the first place.
[5] And Harris sweeping all seven states was the next-most likely outcome in our model.
[6] The Selzer polls for the House did not mention the candidates by name, instead asking voters whether they planned to vote for the Democratic or Republican House candidate in their district. Still, this is the same format that most generic ballot polls use (with rare exceptions, they don’t include candidate names), and we count those toward our accuracy ratings. These polls also had small sample sizes, but our Advanced Plus-Minus stat accounts for that. Overall, given how far off Selzer’s presidential poll was, it’s not surprising that her House polls missed the mark too.