Silver Bulletin pollster ratings, 2025 update
All the numbers for every pollster, fully updated after 2024.
For our report card on how pollsters performed in 2024, please see here.
Do you want to know which American pollsters rank highest (and lowest) based on historical accuracy and transparency? Curious about how accurate a particular pollster might be in future elections? Well, you’ve come to the right place.
Welcome to the landing page for the Silver Bulletin pollster ratings. Below, you’ll find our most up-to-date data on how accurate and transparent each pollster in our database has been in past elections — and how accurate it might be in the future. Plus a ton of extra data on pollster quality.
These ratings apply the same methodology as the previous, Nate-era FiveThirtyEight pollster ratings.
You can find the data from our previous ratings in June 2024, which were in effect for our 2024 presidential election forecast, here.
We’ve since updated the ratings with polls and election results since June 2024, namely:
The 2024 presidential election
The 2024 congressional elections, and
The 2024 gubernatorial elections
As after any presidential election year, we've added a lot of new data — about 460 polls since the last update.1 So we’ve seen some shifts in the ratings. But the top-rated pollsters — AtlasIntel, Marquette University, The Washington Post2 — will still be familiar to longtime readers.
Since there haven’t been any methodological changes since our last update, let’s just go ahead and get to the numbers. The columns in the main pollster ratings table are as follows:
An overall grade based on a pollster’s Predictive Plus-Minus rating. Grades for pollsters with sparse data are rounded (e.g., to “A/B” rather than A-), and banned pollsters automatically receive a grade of F.3
The Predictive Plus-Minus rating itself, which is how accurate the model expects the pollster to be going forward based on a combination of its historical accuracy and its transparency/disclosure standards — pollsters get a bonus if they’re either a member of the AAPOR Transparency Initiative or share their data with the Roper Center archive. Negative plus-minus scores are good and imply that we expect the pollster to be more accurate than average going forward.
Mean-reverted bias — that is, a pollster's historical average statistical bias toward Democratic or Republican candidates, reverted to a mean of zero based on the number of polls in the database. This is calculated only for races in which exactly one Republican and one Democrat are the two leading candidates (so it doesn’t apply for presidential primaries, for instance). A score of "D +1.5", for example, means that the pollster has historically overrated the performance of Democratic candidates.
Finally, the number of polls included in the calculation. The database goes back to 1998, though polls from more recent years are weighted more heavily. In general, these ratings cover all polls conducted in the 21 days before presidential, congressional, and gubernatorial general elections, and before presidential primaries.
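To make the mechanics of those two rating columns concrete, here is a minimal Python sketch. The transparency bonus size, the shrinkage constant, and the function names are all assumptions for illustration, not the published methodology:

```python
def predictive_plus_minus(mean_reverted_apm: float, transparent: bool) -> float:
    """Historical accuracy plus a transparency bonus (negative = better).

    transparent: True if the firm belongs to the AAPOR Transparency
    Initiative or shares its data with the Roper Center archive.
    The -0.3 bonus is a made-up value for illustration only.
    """
    TRANSPARENCY_BONUS = -0.3
    return mean_reverted_apm + (TRANSPARENCY_BONUS if transparent else 0.0)


def mean_reverted_bias(raw_bias: float, n_polls: int, k: float = 30.0) -> float:
    """Shrink a pollster's raw average bias toward zero.

    raw_bias: average signed error, e.g. +1.5 for D +1.5.
    k: hypothetical shrinkage constant; the actual amount of mean
    reversion Silver Bulletin applies is not specified here.
    """
    return raw_bias * n_polls / (n_polls + k)
```

Under these assumed parameters, a firm with a raw D +3.0 bias over just 10 polls would shrink to roughly D +0.75, while the same bias over thousands of polls would barely move.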
But wait, there’s more. Predictive Plus-Minus might be what we use to assign letter grades, but it isn’t the only measure of pollster accuracy we generate:
Simple Average Error is the firm's average error, calculated as the difference between the polled result and the actual result for the margin separating the top two finishers in the race.
Simple Plus-Minus is a firm's Simple Average Error minus the expected error for the race the firm surveyed, which accounts for the type of election polled, the number of days until the election, and the poll's sample size.
Advanced Plus-Minus is (as advertised) a more advanced plus-minus score that also accounts for the performance of other polling firms surveying the same races, and which weights recent results more heavily.
Mean-Reverted Advanced Plus-Minus is the Advanced Plus-Minus score, reverted to a mean of zero. The amount of mean reversion is based on how many polls the firm has conducted, but with polls in previous years discounted.
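To make the first two measures concrete, here is a minimal sketch (not the production code) that computes Simple Average Error and Simple Plus-Minus from top-two margins; the expected-error values, which would really come from a model of election type, days to the election, and sample size, are just an assumed input here:

```python
from statistics import mean


def simple_average_error(poll_margins, actual_margins):
    """Average absolute gap between polled and actual top-two margins."""
    return mean(abs(p - a) for p, a in zip(poll_margins, actual_margins))


def simple_plus_minus(poll_margins, actual_margins, expected_errors):
    """Average error minus the per-race expected error (negative = better)."""
    errors = [abs(p - a) for p, a in zip(poll_margins, actual_margins)]
    return mean(e - x for e, x in zip(errors, expected_errors))
```

For example, polls showing margins of 5 and 2 points in races decided by 3 and 6 points miss by 2 and 4 points, for a Simple Average Error of 3.0; against an assumed expected error of 3.5 points per race, that works out to a Simple Plus-Minus of -0.5.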
Note that for this chart and the following one, we’re only listing survey firms with at least 20 polls in our database, since most of these measures aren’t mean-reverted and are subject to large sampling errors. You can find ratings for all firms in the Excel file below, but interpret them with extreme caution for pollsters that only survey elections irregularly.
We also calculate other data about how each pollster has performed in past elections, including how often it called the winner of the race correctly4 and how often it missed outside the margin of error, its house effect, and how much we penalize it for potential herding.
House effects show how a firm’s results compare against other polls. For example, if a pollster shows the Republican candidate leading by 3 points in a race that every other pollster had tied, that firm would have a Republican house effect. Note that house effects are distinct from statistical bias. If the Republican candidate actually won this hypothetical race by 5 points, our imaginary pollster would still have a 2-point Democratic bias even with their Republican house effect.
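The hypothetical race above can be written out numerically. The sign convention (positive = Republican lead) is chosen for this sketch only:

```python
# Signed top-two margins, positive = Republican lead.
poll = 3.0       # our pollster: R +3
field_avg = 0.0  # every other poll: tied
actual = 5.0     # actual result: R +5

house_effect = poll - field_avg  # +3.0 -> a 3-point Republican house effect
bias = poll - actual             # -2.0 -> a 2-point Democratic bias
```

The pollster leaned Republican relative to the field but still understated the Republican winner, which is exactly why house effects and bias can point in opposite directions.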
Our Average Distance from Polling Average (ADPA) score measures how far a firm's average poll differed from the average of other polls in the field at the time it was released. A low ADPA is potential evidence of herding. A herding penalty is triggered when a firm's ADPA is lower than the theoretical minimum based on the sampling error in its polls. This penalty is added to a firm's Advanced Plus-Minus score before calculating Predictive Plus-Minus.
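Here is a rough sketch of the idea in Python. The sampling-error floor below (the standard error of a two-way margin at 50/50) is a simplified stand-in for whatever theoretical minimum the actual model computes, not the real formula:

```python
import math
from statistics import mean


def adpa(poll_margins, field_averages):
    """Average absolute distance from the polling average at release time."""
    return mean(abs(p - f) for p, f in zip(poll_margins, field_averages))


def herding_penalty(adpa_value, sample_sizes):
    """Penalty when ADPA falls below a sampling-error floor.

    The floor is the average standard error of a two-way margin at
    50/50, in percentage points -- an illustrative simplification.
    """
    floor = mean(200.0 * math.sqrt(0.25 / n) for n in sample_sizes)
    return max(0.0, floor - adpa_value)
```

With 1,600-person samples, that floor is about 2.5 points: a firm whose polls sit only 1.0 point from the field on average would draw a penalty under this sketch, while one averaging 4.0 points away would not.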
Finally, here are two files containing even more data. The first file includes all of the data shown in the tables above, alternative versions of the plus-minus calculations, and details about the methodological transparency of each pollster.
And here is the raw data used to calculate the averages, including topline numbers from more than 12,300 polls. You are welcome to use this data free of charge for any purpose, but please attribute it to Silver Bulletin.
1. We’ve also added a very small handful of polls that were missing from previous cycles.
2. These were formerly listed as ABC News/Washington Post, but ABC News has since discontinued its polling partnership with WaPo. We consider Washington Post polls to be the successor to this partnership, and their ratings include prior ABC/Post data.
3. There’s also a third group of polls, such as those from ActiVote, that don’t meet our standards for being scientific surveys, either because they’re conducted by amateurs using cheap online survey platforms or because they rely on non-random convenience samples without adequate methods to correct for the bias this may introduce. These polls aren’t “banned” per se — they just never make it into our database in the first place because they don’t meet our standards, and therefore don’t receive any sort of rating.
4. Note that calling the winner correctly is generally a poor measure of pollster accuracy compared to polling error.