Actually, sometimes polls underestimate Democrats
The polling industry doesn't get as much criticism in years like 2025 when surveys overestimate Republicans — but this can be a problem, too.

It feels like just yesterday we were wrapping up our coverage of Donald Trump’s reelection in 2024. But that was more than 400 days ago, and once again, we’re back to the beginning of a new election year. That means here at Silver Bulletin, we’re busy preparing our generic ballot polling average (keep an eye out in the near future!) and getting ready to fire up the election forecast (landing date closer to midyear).
We’ll have (a lot) more to say about the 2026 midterms in the coming months, but in the meantime, we’ve updated our pollster ratings with results from the elections that took place in 2025. We think these ratings are a valuable public resource (which is why we keep them in front of the paywall), but they’re also an important part of the polling averages and forecasts we’ll be using to cover the midterms. Traditionally, we accompany our ratings updates with a polling report card in even-numbered years, but as a special treat, we’re doing a mini report card for the 2025 polls too.
Heading into these off-year races, pollsters were coming off of a decidedly mixed presidential election. On the bright side, average polling error in 2024 was lower than in any cycle since 1998. But on the concerning side, the polls underestimated Donald Trump (and Republicans in general) for the third presidential election in a row.
Off-year elections are a comparatively low-key affair (which is why we typically group them together with midterm or presidential election years for these report cards). But with about 40 qualifying polls added to our database, 2025 is still our first chance to check in on the accuracy of a tool we use every day to measure Trump’s popularity.
The big-ticket elections last year from a pollster ratings standpoint were the gubernatorial races in New Jersey and Virginia. We also saw a smattering of special US House elections due to retirements, deaths, or members taking other jobs. The New York City mayoral race, which we covered extensively, also took place in 2025. But mayoral polls don’t factor into our pollster ratings. Admittedly, that gives a mulligan to a few pollsters who had a tough race, but that’s not a good enough reason to change our methodology.
So how did the polls do in those qualifying races? Not great. The average polling error (meaning the difference between a poll’s margin and the actual margin of the election between the top two candidates) was 7.1 points in 2025. Some error is always expected, but the average error across all polls in our database since 1998 is only 5.9 points — and that figure is inflated by surveys of presidential primaries, which are typically error-prone.1
True, this isn't a perfect comparison, because we'd usually group the 2025 polls with those from the 2026 midterms, which haven't happened yet. It's entirely possible that the 2026 polls will be more accurate and bring down the average error for this cycle.
But an apples-to-apples assessment isn't much better. Since 2001, the average polling error in Virginia gubernatorial elections has been 4.4 points; in 2025, it was 5.9 points. New Jersey was even worse: a 9-point average error, compared to only 3.7 points historically, making 2025 the worst year for New Jersey gubernatorial polls since the turn of the century.
On the bright side, herding looked to be less prevalent last year than it was in 2024. What’s more, 100 percent of 2025 polls in our database “called” their race correctly — meaning that the candidate ahead in the poll actually won. If that statistic sounds a bit silly, that’s because it is. Correct calls aren’t a great measure of accuracy, and in a year where neither the gubernatorial races nor the two House races that produced qualifying polls were particularly competitive, picking the right winner simply isn’t that impressive.2
More meaningful measures like statistical bias — meaning the extent to which polls consistently underestimate Democrats or Republicans — paint a comparatively poor picture. Directionally, bias has historically been unpredictable cycle-to-cycle: the average bias since 1998 is just D +1.1. But in 2025, the polls had an average 6.1-point bias toward Republicans. New Jersey was again an outlier, with a whopping R +9 bias, compared to the historical average of D +0.7.
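For readers who want the mechanics, here's a minimal sketch (not our production code; the margins below are invented) of how average error and average bias differ: error takes the absolute value of each miss, while bias keeps the sign, so misses in opposite directions can cancel out.

```python
# Minimal sketch of average error vs. average bias. Margins are
# Democrat minus Republican, in points; these numbers are invented.

def average_error(polls):
    """Mean absolute gap between poll margin and actual margin."""
    return sum(abs(poll - actual) for poll, actual in polls) / len(polls)

def average_bias(polls):
    """Mean signed gap: positive = pro-Democratic, negative = pro-Republican."""
    return sum(poll - actual for poll, actual in polls) / len(polls)

# (poll_margin, actual_margin) for two hypothetical races:
polls = [(-2.0, 5.0), (-4.0, 4.0)]  # both polls too favorable to the GOP

print(f"average error: {average_error(polls):.1f} points")  # 7.5
print(f"average bias:  {average_bias(polls):+.1f} points")  # -7.5, i.e. R +7.5
```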
But perhaps the direction of last year’s bias could be construed as somewhat of a silver lining for pollsters. It’s easy to imagine people being more upset with the 2025 polls if they once again overestimated Democrats. Instead, the polls were too bullish on Republicans, on average.3
That reversal isn’t too surprising if you’ve been tracking polling bias over the past few cycles. Almost exactly a year ago, I spoke to a bunch of pollsters about what went wrong (and right) in 2024. Even after underestimating Trump for a third time, there wasn’t much forward-looking concern about another Republican miss in 2026 because recently, Democratic polling bias has been mostly confined to presidential cycles. Here’s Natalie Jackson — a pollster and Vice President at GQR Insights — from that article:
“When Trump is on the ballot — 2016, 2020, 2024 — we underestimate[d] him to varying degrees. In 2018 and 2022, polls [did] a pretty decent job of telling us [what was] going to happen with the congressional races. The hypothesis is that there’s something unique to Trump being on the ballot, and that he draws out voters that are not going to come out in a midterm year, and possibly not for a different presidential candidate.”
The average polling bias in 2022 was only D +0.8, and in 2018 the polls were biased toward Republicans by 0.5 points on average. Yes, in an ideal world there'd be no bias, but those results are about as good as you can realistically get. The 2025 polls were nowhere near as unbiased, but at least we didn't see Democrats overestimated like they were in the presidential cycles of 2016 (D +3.0), 2020 (D +4.7), and 2024 (D +2.9).
Why has there been such a meaningful difference in the direction and magnitude of polling bias between midterm and presidential cycles? It potentially comes down to partisan nonresponse. Pollsters disagree about whether this is a Trump-specific effect, but polls have always had trouble reaching low-propensity voters — meaning those who might vote in a presidential race but won't turn out for the midterms.
Trump (and to a lesser extent, Republicans in general) are doing increasingly well with that group. So in presidential years, when low-propensity voters show up at the ballot box but not necessarily in surveys, the polls have underestimated Republicans. But in a midterm cycle when low-propensity voters dominate, bias has been less of a concern.
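Here's a toy simulation of that mechanism (every number below is invented for illustration, not an estimate): if low-propensity voters lean Republican and answer surveys at a lower rate, the very same poll can overshoot Democrats against a presidential electorate while landing close to a midterm electorate.

```python
# Toy model of differential nonresponse. All shares, vote preferences,
# response rates, and turnout rates below are invented for illustration.

groups = [
    # share of adults, Dem two-party share, poll response rate,
    # presidential turnout, midterm turnout
    dict(share=0.6, dem=0.53, respond=0.0120, pres=0.90, mid=0.80),  # high propensity
    dict(share=0.4, dem=0.42, respond=0.0022, pres=0.60, mid=0.15),  # low propensity
]

def dem_margin(rate):
    """Dem-minus-GOP margin (points) among people selected at the given rate."""
    total = sum(g["share"] * g[rate] for g in groups)
    dems = sum(g["share"] * g[rate] * g["dem"] for g in groups)
    return 100 * (2 * dems / total - 1)

print(f"poll:         D{dem_margin('respond'):+.1f}")  # ~D+3.6
print(f"presidential: D{dem_margin('pres'):+.1f}")     # ~D-0.8 (the GOP wins)
print(f"midterm:      D{dem_margin('mid'):+.1f}")      # ~D+3.6 (the poll nails it)
```

In this toy world, the poll carries a roughly 4-point Democratic bias against the presidential result but almost none against the midterm result, which is exactly the pattern Jackson describes.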
But even in a year where polling bias was a problem, things don’t look bad for every pollster in our database: some of them had an excellent 2025. You shouldn’t pay too much attention to how any pollster does in a single cycle, especially an off-year cycle where most firms released one or two qualifying polls. Still, it’s interesting to see how our ratings fluctuate year-to-year.
The table below shows simple average error, bias, and Advanced Plus-Minus — how the pollster’s average error compares to other pollsters’ in the same election, accounting for which races are harder to poll. Negative plus-minus scores indicate that a pollster did better than other firms that surveyed the same race.
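As a rough illustration of the plus-minus idea (the real Advanced Plus-Minus involves more adjustments than this, and the pollster names and errors below are hypothetical), you can score each poll against the other polls of the same race:

```python
# Simplified plus-minus: a poll's score is its error minus the average
# error of the *other* polls of the same race. Negative = beat the field.
# Pollster names and error figures are hypothetical.
from collections import defaultdict

polls = [  # (pollster, race, absolute error in points)
    ("Pollster A", "NJ-Gov", 9.0),
    ("Pollster B", "NJ-Gov", 6.0),
    ("Pollster A", "VA-Gov", 3.0),
    ("Pollster C", "VA-Gov", 7.0),
]

errors_by_race = defaultdict(list)
for _, race, err in polls:
    errors_by_race[race].append(err)

scores = defaultdict(list)
for pollster, race, err in polls:
    others = list(errors_by_race[race])
    others.remove(err)  # drop one copy of this poll's own error
    if others:          # a poll with no competition gets no score
        scores[pollster].append(err - sum(others) / len(others))

for pollster, diffs in sorted(scores.items()):
    print(f"{pollster}: {sum(diffs) / len(diffs):+.1f}")
# Pollster A: -0.5, Pollster B: -3.0, Pollster C: +4.0
```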
Topping our 2025 list based on simple average error are State Navigate (with two Virginia gubernatorial polls that missed by only 2.4 points on average) and YouGov (they polled Virginia and New Jersey and also missed by an average of 2.4 points). Again, though, that’s only two polls per firm.
In terms of bias, nearly every pollster overestimated Republicans last year. Trafalgar was the worst offender: they overestimated Republicans in Virginia and New Jersey by 12.6 points on average, bringing their pollster rating from a B+ down to a B. And some of the best pollsters in our database had lackluster years. AtlasIntel’s polls were off by an average of 9.9 points, and Beacon Research/Shaw & Co. Research missed by 7.4 points in New Jersey. As a result, their ratings were bumped down to an A and A-, respectively.
Why did so many pollsters struggle last year? One theory involves our old friend, weighting on recalled vote choice. In Virginia, State Navigate found that pollsters who weighted to a 2021 or 2024 electorate, which were redder years, were less accurate than pollsters who weighted their data to a modeled (bluer) 2025 electorate.4
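For the unfamiliar, here's roughly what weighting on recalled vote choice looks like in practice. This is a bare-bones sketch with invented shares; real pollsters weight on many variables at once, often via raking:

```python
# Sketch of recalled-vote weighting with invented numbers. Each
# respondent's weight is target share / sample share for their group,
# so the weighted sample matches the electorate the pollster assumes.

sample = {"Trump '24": 0.38, "Harris '24": 0.50, "didn't vote": 0.12}

# Two targets a pollster might choose: a redder 2024-style electorate
# or a bluer modeled-2025 electorate. The choice drives the result.
target_2024 = {"Trump '24": 0.47, "Harris '24": 0.45, "didn't vote": 0.08}
target_2025 = {"Trump '24": 0.42, "Harris '24": 0.48, "didn't vote": 0.10}

def weights(target):
    return {group: target[group] / sample[group] for group in sample}

print(weights(target_2024))  # Trump recallers weighted up ~1.24x
print(weights(target_2025))  # a milder correction, ~1.11x
```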
Generally, pollsters are good at making these sorts of decisions. And they've had an easier time of it in recent midterms than with Trump on the ballot. But the bearish case for polling accuracy is that this has all become a guessing game. If pollsters, worried after underestimating Trump three times, end up overcompensating, the substantial Republican bias from 2025 could continue into 2026.
1. Our database includes polls in the final three weeks before general elections for president, governor, and Congress and goes back to 1998.
2. Polls that show a race with a 6-10 point margin have 86 percent call accuracy, historically. That figure rises to 94 percent with a 10-15 point margin and 97 percent when the margin is 15-20 points.
4. However, not weighting on recalled vote choice was less effective than either option.



Great read!
As for the “overcompensating” theory: it's not easy to prove, but I wouldn't be too surprised if pollsters worry way more about missing in the direction of underestimating the GOP than the other way around.
For one thing, Dems, especially the ones who consume a lot of news, tend to be the neurotic ones, more paranoid about polls underestimating Trump than the reverse. They probably won't yell and scream if the polls underestimate Dems.
For another, Trump himself and Too Online MAGAs also scream when they're underestimated in the polls (they literally sued Ann Selzer), while they don't call polls fake when the GOP is overestimated.
So underestimating the GOP/Trump is almost a recipe for getting yelled at and heckled by both groups, while the reverse is not. And many polls, at the end of the day, are sponsored by media outlets, which have to cater to their audience's demands and want to minimize the risk of unwanted lawsuits.
(The case of Trafalgar is probably somewhat different, though. I don't think it's far-fetched to call them basically a GOP hopium pollster: their methodology is very opaque and they have a track record of wild misses in off-year elections. I'm not sure I'd go as far as saying they make up numbers to match the narrative they're pushing, but I think there's some smoke out there.)
Rating polls by difference from election results may make sense for some purposes, but it ignores the polls' claimed standard errors. If a poll predicts a Republican victory by six percentage points, with a standard error of two percentage points, a Democratic victory is very strong statistical evidence that either the poll's estimate or its standard error was wrong. But with a four percentage point standard error, the result is consistent with an accurate poll.
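To put numbers on that example (a quick sketch, assuming a normal error model centered on the poll's margin):

```python
# The comment's example in numbers: a poll shows the Republican up 6.
# Model the true Dem-minus-GOP margin as Normal(-6, se) and ask how
# surprising a Democratic win would be under each claimed standard error.
from statistics import NormalDist

def p_dem_win(se: float) -> float:
    return 1 - NormalDist(mu=-6.0, sigma=se).cdf(0.0)

print(f"se = 2: P(Dem win) = {p_dem_win(2.0):.4f}")  # ~0.0013: poll was likely wrong
print(f"se = 4: P(Dem win) = {p_dem_win(4.0):.4f}")  # ~0.0668: a plausible miss
```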
For most of us not involved in polling, the most important information in a poll is the implied probability of the election result. A useful metric of relative poll accuracy is how much money you would make or lose betting, using one poll's implied probabilities to set payouts and the other's to place bets. You can also compare polls to prediction market prices and expert judgments.
Unfortunately, since polls are taken over somewhat different time intervals you can't always find direct head-to-head comparisons. But that's also true of the metric used in the post.
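Here's one hypothetical way to implement that betting comparison (everything below, including the probabilities, is made up for illustration): let poll A set fair odds from its implied probabilities, let poll B bet a dollar on whichever side it thinks A has underpriced, and score the profit against the actual result.

```python
# Hypothetical betting comparison between two polls' implied
# probabilities. Poll A is the bookmaker, poll B the bettor; a fair $1
# bet at implied probability p pays 1/p if it wins. Numbers are invented.

def bet_profit(p_odds: float, p_bet: float, dem_won: bool) -> float:
    """Poll B's profit betting $1 against poll A's fair odds.
    p_odds, p_bet: each poll's implied probability of a Dem win."""
    bet_on_dem = p_bet > p_odds  # back the side B thinks A underprices
    price = p_odds if bet_on_dem else 1 - p_odds
    return (1 / price - 1) if bet_on_dem == dem_won else -1.0

# Poll A implies a 20% Dem win, poll B implies 40%:
print(bet_profit(0.20, 0.40, dem_won=True))   # +4.0: B's skepticism pays off
print(bet_profit(0.20, 0.40, dem_won=False))  # -1.0: B loses its stake
```

Averaged over many races, a bettor poll that consistently profits against a bookmaker poll has the better-calibrated probabilities.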