
It's frustrating because Rasmussen's strong and consistent house effects make it more useful, not less. Pollsters with consistent house effects help show where the likely range of outcomes lies. If you have one pollster that almost always overestimates R turnout and another that almost always overestimates D turnout, you get a better picture than from two pollsters that are individually more accurate but whose errors are random.
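To make that concrete, here's a minimal simulation sketch (the bias and noise numbers are invented purely for illustration, not real pollster figures): a pair of pollsters with offsetting, consistent house effects versus a pair that is individually unbiased but noisier.

```python
import numpy as np

rng = np.random.default_rng(0)
true_margin = 2.0      # hypothetical true D margin, in points
n_sims = 100_000

# Pair A: consistent, offsetting house effects, small random noise
pro_r = true_margin - 3.0 + rng.normal(0, 1.0, n_sims)   # leans R by ~3 points
pro_d = true_margin + 3.0 + rng.normal(0, 1.0, n_sims)   # leans D by ~3 points
avg_offsetting = (pro_r + pro_d) / 2

# Pair B: no house effect, but bigger random errors
neutral_1 = true_margin + rng.normal(0, 3.0, n_sims)
neutral_2 = true_margin + rng.normal(0, 3.0, n_sims)
avg_neutral = (neutral_1 + neutral_2) / 2

print("RMSE of offsetting-bias pair:", np.sqrt(np.mean((avg_offsetting - true_margin) ** 2)))
print("RMSE of unbiased-noisy pair: ", np.sqrt(np.mean((avg_neutral - true_margin) ** 2)))
```

The offsetting pair's average lands far closer to the truth, even though each of those pollsters is individually a bit less accurate than the neutral ones.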


This is a really good point. Imagine how bad the aggregate polling error would have been in 2016 or 2020 if it weren't for a few right-leaning pollsters. Yes, they were probably methodologically shoddy, but the whole point of aggregation is the hope that the failings of individual pollsters will average out to some degree. People will tell themselves they're seeking methodological perfection when they're actually just rationalizing leaning into their priors.


People are going to really miss Nate's approach to this problem. It's going to get very ugly if this is the kind of approach ABC is going to be taking. I can only hope the model (and maybe Fivey?) can make it out and be somewhere else in 2024.


This raises a big question: If Nate owns the model and Disney owns the 538 name, who owns Fivey?


I don’t want to see Fivey Fox degraded!


What a huge loss for everyone. I understand and accept that political punditry is and always will be more popular than math, but I always found it refreshing to have a resource that would just tell me about the world as it is, not how somebody thinks I want it to be. And frankly, I'm probably inclined to agree with the new staff on a lot of political matters, but what's the point of a model if it's influenced at all by what the modeler believes politically? The point of a poll isn't to tell me what I want to hear. I hope you and your models can find a new home where you won't be pressured to manipulate them based on punditry.


What I most appreciate about Nate is he's always had the integrity to resist that, and when/if he does screw up he owns up to it and does an autopsy on it. And his refusal to brush off uncertainty the way almost everyone else does is vital. The sincerity is refreshing. I feel like that's all far too rare in journalism.


Morris's "97% chance Biden wins" prediction in 2020 (https://projects.economist.com/us-2020-forecast/president) sums it up for me. That 97% sounds like the kind of figure you get from somebody who, well... My problem isn't that Morris is short on statistics credentials, but that I could have guessed as much just from that confidence level. The hot new meta-pollster can get away with overconfidence for a while, until their first big upset hits; after that, you get another overconfident meta-pollster with a short track record -- lather, rinse, repeat.

In 2016, that meta-pollster was Princeton Election Consortium, beloved of progressives for telling them what they wanted to hear -- that Clinton was way ahead. PEC forecast a 99% chance for Clinton to defeat Trump, so they're not the hot new thing anymore.
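If it helps to see why one upset is so punishing, here's a toy comparison (numbers invented, and this isn't anyone's actual scoring method): the log-loss penalty for a 97%-confident forecast versus a more hedged 85% one, in the cycles where the favorite wins and in the cycle where the upset hits.

```python
import math

def log_loss(p_win, won):
    """Penalty for one race: lower is better."""
    return -math.log(p_win if won else 1 - p_win)

# Forecaster A is 97% confident, forecaster B a more hedged 85%.
print("favorite wins:  A =", round(log_loss(0.97, True), 3),
      "  B =", round(log_loss(0.85, True), 3))
print("upset happens:  A =", round(log_loss(0.97, False), 3),
      "  B =", round(log_loss(0.85, False), 3))
```

A looks a little sharper every cycle the favorite wins (0.03 vs 0.16), then takes a hit nearly twice as big as B's when the upset finally arrives (3.51 vs 1.90).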


I have a front-cover sticker in my copy of "The Signal and the Noise", one of my favorite books and one I push on my new hires.

This is all so funny, but all so tragic. So many brands are torched. Needlessly torched. Not calling anybody an idiot, necessarily, but lots of decisions being made by average-valued folks.

IMHO the Nate Silver brand remains untarnished. Your mileage may vary.


I always consider Silver to be a shibboleth. Everyone whose opinion is worth something has huge respect for him. Anyone who slates him is proving they don't know enough about the subject.


Yeah, average-value is the problem. The most damage is always done by B- people who think they're stars.


I think this is an extremely uncharitable reading of what Morris is trying to do (not surprising given his history with Nate).

It seems fairly clear - and I'm honestly not sure why Nate has been so opposed to admitting this - that Rasmussen and Trafalgar and some others (that polling company run by high schoolers!) were trying to manipulate the 2022 polling averages through the polls they released. It's one thing to have a GOP house effect; it's another to try to shape coverage and public perception of the race via your polls (and then hope to be right in the end).

My read is that Morris’ questions come from a place of determining “are you trying to field real polls or not”. Nate disagrees with asking this question, and feels like you should just throw everything into the soup and weight it. That’s a reasonable position. But clearly the polling averages would have been more accurate if Nate hadn’t done that - so it’s a fair approach for Morris to take as well.

IMO Nate is well within his rights to point out the flaws with this, but he should at least attempt to acknowledge Morris' logic.


I guess you're new at this. Remember when 538 threw R2K in the trash bin and there were a bunch of lawsuits?

https://fivethirtyeight.com/features/research-2000-issues-cease-desist/


And the upshot of that was what, exactly?

I mean it’s a polling average, 538 (run by Morris or run by Silver) can make whatever choices it wants about what polls to include or not include - just as RCP does, for example.

(And for what it’s worth I’ve been following Nate since he was Poblano and am a long-time admirer, but think he’s gotten increasingly thin-skinned over time.)


I'm not excited to do a deep dive into internet questions that are a decade old, but my recollection is that R2K was annihilated and ceased to exist.

More to read: https://www.dailykos.com/stories/2010/6/29/880179/-Research-2000:-Problems-in-plain-sight


Sorry but I’m genuinely not sure where you’re going with this.

I thought you were saying that Nate didn’t want to poke the bear with Trafalgar given the annoyance of dealing with the R2K lawsuit. That’s something a reasonable actor might do.

But it would appear Nate was right about R2K! So why not exclude Trafalgar too?


FFS.

538 torched R2K because there were very obvious numerical concerns about their (obviously fabricated) data, not because the founders/originators/operators were in the wrong political party.

I am not aware of any similar analysis that defenestrates Trafalgar, even if they are obviously very, very right-leaning.


I have heard some legit criticism of Traf from people who've dug through the crosstabs and said, uh, yeah, this is fishy.

I'd extend the grading period (GEM has said there are changes coming to the grading system) so people who push these propaganda polls (and then herd during the grading period to get a good grade) wind up with lower grades. Traf being A-grade is just a joke. I would also try to improve the way house-effect adjustments are made (which GEM has also said he's doing).
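For what it's worth, one crude way to flag herding (just a sketch with made-up polls, not GEM's or 538's actual method) is to check whether the spread of late polls is implausibly tight relative to pure sampling error:

```python
import numpy as np

# Made-up final-week polls: reported D-minus-R margins (points) and sample sizes
margins = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.0])
sizes   = np.array([800, 650, 900, 700, 1000, 750])

# Sampling std of a margin near 50/50 is roughly 100 / sqrt(n), in points.
expected_sd = np.mean(100 / np.sqrt(sizes))
observed_sd = np.std(margins, ddof=1)

print(f"spread expected from sampling error alone: ~{expected_sd:.1f} pts")
print(f"spread actually observed across the polls: ~{observed_sd:.1f} pts")
# If the observed spread is far below what sampling error alone implies,
# the late polls are probably herding toward one another.
```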


Then how come their polls are impossibly slanted toward the GOP? They said Whitmer would lose Michigan, and she won by 11 points! They also missed PA Gov and CO Gov by 10+ points.

I don’t think they’re fabricating data per se, but they are clearly making weighting decisions to get to a certain outcome.


Erm, this angle isn’t at all apparent from your previous comments in the thread. There’s not really grounds for extreme exasperation here.


You've probably heard this plenty already, but I'm desperate for more podcasts with you and Galen. And Clare, while we're wishing! Good luck with the future.


I would like to read this complaint in the broader context of right-wing polling outfits explicitly attempting to manipulate poll aggregators and projections models in the 2022 midterm.

Without that context, this is a bizarrely narrow critique that is either (a) entirely too credulous about the materially different behavior we saw last summer/fall and/or (b) an unconvincing personal polemic inspired by your disgust at one of "something like four people whom [you] have blocked on Twitter" being handed the keys to your kingdom. As an aside, I would recommend spending less time on Twitter.


If this criticism were valid then we would have seen the polling models predicting a big Republican victory in 2022. In fact, the models were very accurate; it was the *pundits* who predicted a red wave. I agree that polling firms should be adjusted for house effects, weighted based on historical precision, and excluded if there is evidence of malfeasance, but 538 has been doing that for years. They just base it off of actual results instead of what some pundit says.


The polling models, including 538, did predict a Republican (Senate) victory in 2022!

And I think more notably, the models went from predicting a Democratic victory in August, September and early October to predicting a Republican victory by November - which IMO is directly attributable to shitty GOP pollsters flooding the zone in those final weeks and the models taking those polls at face value.


Of course they didn't get it perfect - you can't predict the future - but we're talking about 1-2 percentage points here. In other years the polls have been off by 5 or 6 points in the other direction and I don't hear the same criticisms. Also, I realize it was a colloquialism, but when you say "IMO [the shift] is directly attributable to [malfeasance]," the whole point of analyzing data in a methodical way is to take your own opinions out of the equation. Is it possible that that shift was due to malfeasance? Certainly. Is it also possible that the shift reflected a true shift in public opinion? Yes. Or it could have to do with time-based assumptions built into the model. My point isn't that you're wrong, it's that in order to be scientifically rigorous we have to be methodical about addressing our own biases, and I feel like that's not the case if we're only focusing on Republican-leaning pollsters, given that polls have often been biased in the other direction as well. I'm not saying there wasn't malfeasance, but I am saying that if you're only looking for malfeasance on one side of the political spectrum and then acting on that, that's data manipulation.


This is all very fair - but I think you’re missing the point that at least some pollsters made no real attempt to get the numbers right and were pushing the polls for ideological reasons. Nate already discounts internal campaign and partisan polls for these reasons - he just never tagged Trafalgar and some others as being partisan.

I feel like that would have solved this whole thing and made the models more accurate. But Nate has, to my reading, never acknowledged the possibility that he and his models were “played” by these pollsters whose only goal was to manipulate the polling averages.


“at least some pollsters made no real attempt to get the numbers right and were pushing the polls for ideological reasons”

And you never see it in your bubble, but the other side say exactly the same thing about left-leaning polls. Silver takes flak like this from fools on both sides, and that's one sign among many that he's getting the balance about right.


What organizations are intentionally putting out left leaning polls to flood the zone? NYT? Quinnipiac?

When a media org or a university sponsors a poll, they have a reputation for impartiality they're seeking to uphold. Rasmussen and Trafalgar actually get more business from being intentionally wrong.


I guess I would just be more convinced by this if the data showed that they were being "played," but that's just not what we observed. The model was extremely accurate in 2022--much more accurate than it had been in previous years. And Nate has always accounted for house effects as well as assigning pollster ratings that will decrease if a pollster's results miss the mark in a particular year, so I think that takes care of a lot of the issue. So if Rasmussen always overestimates Republicans but they overestimate them by roughly the same amount each time, then that's already accounted for. If they were found to be falsifying data, then sure, they should be ignored, but I haven't seen evidence of that, only insinuation. I'm not saying that Rasmussen is great or should be protected at all costs, my problem is with singling out specific pollsters due to a perceived bias in a particular direction rather than applying a methodical approach equally to all of their sources of data.
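Just to illustrate what "accounted for" means in practice (a toy sketch with invented numbers, not 538's actual adjustment): estimate the pollster's house effect as its average lean relative to other polls of the same races, then back it out of new releases.

```python
import numpy as np

# Invented history: this pollster's D-minus-R margin vs. the average of
# all other polls of the same races (in points).
pollster_margins = np.array([-2.0, 1.0, -3.5, 0.5, -1.0])
other_polls_avg  = np.array([ 1.0, 3.5, -0.5, 3.0,  2.0])

# House effect: how much this pollster leans relative to the field.
house_effect = np.mean(pollster_margins - other_polls_avg)   # about -2.8 (leans R)

# A new poll from the same pollster gets shifted back toward the field
# before it enters the average.
new_raw = -1.5
new_adjusted = new_raw - house_effect

print(f"estimated house effect: {house_effect:+.1f} pts")
print(f"raw new poll: {new_raw:+.1f}  ->  adjusted: {new_adjusted:+.1f}")
```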

Also, on a completely cynical political note that should have no bearing on this decision, but which I find perplexing: obviously I'd like the polls to be as accurate as possible, but as a Democrat, if the polls are going to be off, I'd MUCH rather they overestimate Republicans than Democrats. I'd rather have a happy surprise on election night than have to experience 2016 again. So from that perspective I also don't understand the outrage. I don't really know where I'm going with this, just a thought.


But was the model accurate in 2022? Again, it showed the GOP winning the Senate. And it reached that conclusion based on a large volume of late polls from questionable polling orgs (it’s not like NYT/Siena was way off).

You can debate whether that is due to being “played”, but it appears undeniable that sketchy GOP pollsters were trying to move the polling averages to change the narrative (see RCP as another example - while not a pollster, they clearly make pro-GOP partisan choices about what to include in their polling average).

Comment deleted

Log off and go touch some grass, son.

Comment deleted

Touch grass, come back, learn to use Ctrl+F to fact check yourself.


If someone makes a bid to take Nate's models private, we're all agreed we'll have to crowdfund to outbid them and keep them public, correct?


Why in the hell would a forecaster want less data? And if the point is ideological, then isn't it really better business (from a page-views perspective) to have your forecast be more favorable to your side and then be wrong? Maybe that's the better story, if you're selling a narrative?


I am Italian, I have read 538 since 2008, and I think what happened recently is a disgrace. I have learned so much from Nate Silver; it has helped me a lot even when analyzing Italian polls, which I do from time to time for professional reasons. I totally agree with Silver's post. My only concern is that, given the situation in the USA in 2020, and given that one of the presidential candidates was preemptively questioning the fairness of the election, you could imagine a partisan pollster knowingly contributing to the subsequent perception that the election was rigged. In other words, I don't think that polls or their analyses can change vote choices. But if one or more pollsters deliberately twist their results to make one candidate's win look assured, and many people believe it, then the claim that the election was stolen may sound, post factum, more persuasive. I don't know enough about the situation in the USA to say whether this was really the case. Moreover, I am well aware that this is a problem quite different from, and much more serious than, methodological nitpicking, and should not be masked as such.


I agree with Nate's take on the problems with this letter. Nonetheless, it's worth pondering whether, if you agree someone is publishing push polls (as Nate says he does here), it's actually "non-partisan" to promote and platform them.


The worst part of this is I used to love downloading the FiveThirtyEight polling data and working with it myself. Now it's going to be ruined by Morris putting his ideological thumb on the scale. Hope Nate gets an alternative up and running pronto.


I would suggest that Nate get a copy editor for future blog posts.


Maybe consider using some extension like LanguageTool or Grammarly for your Substack. This post could become an important document people link to. It's a bit weird for it to contain stuff like "Instead, it’s this: one"; "he have should stated it" and 50-word sentences.


Blogs are by definition casual and folksy.


Is there currently an unbiased source of polling averages? I used to use 538, but this is bad news.
