315 Comments

Ed Y.:

There's no world where Trump wins nationally and loses PA.

Using 2020 as the baseline, he lost PA by less than 0.5% while losing nationally by 4.

If he ties or wins nationally, he wins PA comfortably.

The pollsters are once again struggling to reach WWC voters. The Teamster survey should be a big flashing warning sign that Trump is going to dominate with WWC voters.

Replacing Biden, then compounding that with not selecting Shapiro, may have sealed the deal and given PA to Trump. It is highly improbable the WWC swing voters who chose Scranton Joe will go for a far left wing California progressive.
Vertical Stripes:

Did you ever adjust your forecast when Selzer’s poll had Kamala down by only 4 instead of 15 per the rumor you heard?

Tom Hitchner:

He attributed it to Selzer (the “gold standard” in his earlier comment) being pressured to release Dem-friendly numbers.

Econometrical:

Ed is the poster child for ignoring data that does not fit his narrative.

Vertical Stripes:

Which, as Nate makes clear, she doesn’t do. Hence the high rating.

Ed Y.:

Pot calling the kettle black...

SilverStar Car:

🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣

🤡
Thomas:

Selzer also had Trump and Biden tied in Iowa in the 2020 race, in a poll released September of that year. She eventually nailed the race in Iowa a few weeks before the election, predicting a Trump victory in the state by +7.

Vertical Stripes:

Polls are a snapshot in time. Your mention of one poll from September of 2020 and one from the end of the cycle doesn’t tell us much.

Thomas:

Exactly my point. So Selzer polling Iowa at +4 for Trump doesn’t tell us much either. Especially when you consider that at this same point in the cycle 4 years ago, the race in Iowa according to Selzer’s polls was closer.

Phebe:

People keep saying that! That polls are a snapshot in time! They're no snapshot if they are wrong, wrong, wrong. I hate that phrase, because it assumes there is any value in polling these small samples that also can't catch any Republicans for love or money. "Snapshot in time" assumes the poll is accurate, which I very much doubt many are.
Tom Hitchner:

I’m happy to share some examples of polls being accurate, including in predicting Republican wins, but first I have to ask…if you don’t believe polls have any value, why are you a paid subscriber to Nate Silver’s blog?

Phebe:

It is a good question. I guess it's the same reason I have a weather site on my favorites bar, though I am death on the general inaccuracy of weathermen, even worse than political polling. I don't believe what they say, but at least they are talking about a topic I'm very, very interested in!

The struggles polling has had in recent years -- and since its inception, in fact -- are also fascinating. I took my share of statistics courses, too, and have always been interested in the issues. The Silver Bulletin chews hard on these issues, and the posters are intelligent -- money well spent.

SilverStar Car:

That’s still a snapshot. It’s easy to misunderstand or misread the larger situation from a limited, momentary picture.

Further, you can’t even be sure it was wrong. How are you judging whether it’s wrong, and how large the error was?

Ed Y.:

Polls do not shift 7 points in 2 months unless something drastic happened.

Vertical Stripes:

If you have a problem with Selzer’s polling from 2020, then why were you putting so much weight on it when you thought Trump would be up by 15 in Iowa?
One Faceplant Short Of Wisdom:

If you read Nate’s article, he clearly states that you establish the rules for your average or model, and then you stick to them.

Ed Y.:

Yes, because we all know pollsters are above reproach and pressure lol. When Selzer comes out with a Trump +10, it will be to salvage her reputation.

Vertical Stripes:

Again, you were the one putting so much stock in Selzer’s poll when you thought it would show Trump at +15 in Iowa.

Here’s how you put it:

“He won by 8 in Iowa. Rumors are Seltzer's new poll has him up 15. If he's over-performing his 2020 by 7 in IA, all the other blue wall states will correspondingly move to Trump. Considering Joe won by 44,000 total votes across 3 swing states, that's more than enough for a Trump win.”

Matt:

‘All polls showing my opponents ahead are manipulated fakes but all polls showing my supporters ahead are real’ is a really uninformed take to have.

If you’re going to do this, why even bother being subscribed to a polling stats site? Just save your time and assume your side is going to win.

Ed Y.:

That's a strawman. It's a joke that Nate gives credibility to polls like Quinnipiac, which historically misses by 7+ and always on the side of Dems, or to local polls like MassInc. I'm cancelling my subscription after Nov 5. But I wanted to be here to watch the cope when Trump wins in a 300+ EV landslide.

Brian Normoyle:

Ridiculous. Selzer’s reputation does not require salvaging.
Tom Hitchner:

“Using 2020 as the baseline, he lost PA by less than 0.5% while losing nationally by 4.”

Did you see the part in the post about how you can’t always just say what happened last time will happen this time too?

Ed Y.:

Which is more likely? That they fixed the issues with 2020? Or that they are still having trouble reaching the right respondents? I'll go with Occam's Razor.

CJ in SF:

So we get to add Occam's Razor to the list of things you don't understand.

Mark:

If the problem were as simple as under-polling WWC voters, I'll bet they have already fixed it. It's easy: just weight WWC respondents more.

Tom Hitchner:

No one can object to you saying "this is more likely," but that's very different from being certain.

Phebe:

Occam's Razor sez they can't poll Republicans, and not just "WWC." We don't take polls because of all the nastiness and blame talked at us, and also because of the meanness of a lot of polls: the type that start out with normal questions and move quickly to dirty talk, slandering with obscenities the candidate the pollster is trying to defeat. I got one of those calls decades ago, and that was the last poll I ever took; I hung up on him, of course. I think the polls are wildly inaccurate and are only for making money for the pollsters and the aggregators. It doesn't matter that they are always wrong: polls are like weather forecasting. People want so much to know that they'll accept ANY guess, however wrong.
Tom Hitchner:

Historically polls have very good predictive value, including in the most recent election (2022).

Phebe:

I don't see how you can claim that polls have very good predictive value, considering the utterly unexpected Reagan landslide and the completely unexpected Trump triumph in 2016. When it counts, polls are wrong. It's bad sampling, and they can't fix that so far. There may be a way, but the deep suspicion one side has against the other in this fatally divided country has been impossible to overcome so far.

Another problem is the ever-growing hatred of datamining that is building now and resulting in so many class-action lawsuits: we have learned that polling is not "just" information gathering: they are trying to get money out of us. If I were so foolish as to take ANY of the quizzes on The Hill, for instance, I'd be hit by a gazillion appeals to send money to candidates in Idaho and Montana and many other unlikely places for a Marylander. My husband fell into this and now gets several email and snail mail appeals every day. We'd get non-stop phone calls too, but we turned all the phones off and only use them for outgoing calls. Polling has become confounded with commercial datamining and propaganda campaigns, and polling is also a generally leftwing spying activity that feels dangerous in a time when the right needs OpSec. Am I right? How many rightwing pollsters are there? Not many.

Tom Hitchner:

“When it counts, polls are wrong.”

I don’t understand why those two instances (one of which is forty years old) are the times when “it counts,” whereas the many elections that went right along with their polling don’t count. I never said polls were infallible, but they are quite accurate compared to other methods. Anyway, the Trump miss was a miss, but not an egregious one: Trump won the “blue wall” states very narrowly and still lost the popular vote.

I’m not familiar with the data-mining cases you’re talking about, but that seems separate from the accuracy question: if pollsters are doing something unethical, they should stop. You seem to be suggesting that these issues are part of why polls are not reaching enough right-wing voters, but that’s a sampling question that pollsters can adjust for. In 2022, for instance, polls did not undercount Republicans.

Arch:

It's almost like you didn't read the article you're commenting under.
melanie:

Is your last sentence tongue-in-cheek, referring to the fact that the right is trying to paint Harris as "far left wing" and "progressive"? Or do you just not know what her actual positions are? Her policies are indistinguishable from those of a neocon Republican, minus the fact that she likes reproductive rights and LGBTQ people. She is anything but progressive.

Alex C:

She was literally rated the most liberal senator just a few years ago...

CJ in SF:

https://heritageaction.com/scorecard/members/H001075/116

Lifetime rating of 4%, so less liberal than the 3% average among all Dems.

The Heritage Foundation (you know - the Project 2025 loons) spin-off actually looks at votes on issues they care about.

Of course you might prefer counting PR efforts like GovTrack does, because that helps your narrative.

Alex C:

Ew CJ, get out of here! Go troll somewhere else! People can look at your profile and comments and see all the bad-faith positions you've taken historically, you know that, right? For those unaware, CJ just comes into threads to pick fights in bad faith - don't engage with this troll!

CJ in SF:

You have a problem with facts?

GovTrack only considers sponsor and cosponsor actions.

You are welcome to believe that is all that matters.

In reality, cosponsoring is mostly for show.

Oh, and feel free to point out a "bad faith" position. You just disagree.
melanie:

Uhhh by whom?

Alex C:

GovTrack.

https://www.govtrack.us/congress/members/kamala_harris/412678/report-card/2019

They took the rating down (surprise surprise...) but you can put that URL into the Wayback Machine and still see the rating.

melanie:

Yeah, I just looked it up. There are plenty of refutations of the metric used to make that claim. WaPo uses a different metric to calculate that her record was 16th from the left. Add in the fact that her Senate career ended in 2019 and that she (like most politicians) has dramatically shifted her positions over time, and it's obvious how little ANY of those calculations matter. You can also point to her record as a DA, which was alternately EXTREMELY conservative and very liberal, depending on what was trendy at a given time.

Common sense also shows she's solidly moderate. If you watched the debate, you know that her ideology is approximately that of an early-2000s Republican, with a few culture-war liberal takes thrown in. Hawkish (promises the "most lethal" military), loves Israel, wants to do more fracking than ever before, in exact agreement with the Republicans on border control, etc. She's made it clear that she serves her billionaire donors and plans to be friendly to all sizes of corporations, not just small businesses. She's explicitly stated that she has no agenda to specifically uplift Black Americans. And so on.

Alex C:

Wanted to decriminalize illegal immigration, ban semi-autos, supported reparations in the form of monthly tax credits, pass the Green New Deal, and stop fracking.

You can say she's flip-flopped on these topics to try to appeal to moderates, but c'mon, let's not lie to ourselves with this "she is anything but progressive" propaganda. None of those positions were forced on her by Republicans; she was the one to come out and publicly support them.

K Tucker Andersen:

The only way to reconcile both her past positions and her declared current positions (which in my opinion, as well as Bernie’s, are simply a political two-step) is if she is an idiot, has no real values, or is a complete fraud.
CJ in SF:

They maintain their ratings by session, and there was no "2019" session. It may be that the 2019 data was just a midway analysis for the 2020 full analysis. "Each year or two we compile all of our statistics into a report card."

Try these:

https://www.govtrack.us/congress/members/kamala_harris/412678/report-card/2018

https://www.govtrack.us/congress/members/kamala_harris/412678/report-card/2020

Why did you add the "surprise surprise" comment? Are you actually citing a rating from a source you believe is cooking the books?

K Tucker Andersen:

Were you really not aware of the GovTrack rating? Or did you just choose to ignore it? It had been widely publicized.

melanie:

Not everyone pays attention to the exact same news as you, lol. It's definitely possible that I saw a headline and rolled my eyes and didn't bother to click because it was so obviously misleading.

Ed Y.:

Thank you for admitting that she pivoted to pretend she's not a progressive.

melanie:

What on earth makes you think she'd actually implement progressive policy in office? Like all other mainstream politicians, she's an empty suit who exists to serve special interests. She will be moderate in office because she works for the oligarchical ruling class: she will be hawkish because the defense contractors give her $$$; she will fund and arm the massacre of Palestinians because AIPAC and other Zio orgs give her $$$; her economic policy will be generous toward billionaires and corporations because billionaires and corporations give her $$$. And so on. You understand this, right? How could you not? Take a look at how things ACTUALLY work in this country. Harris will not implement progressive policy because Harris is working for the wealthy and powerful, and all they want is to maintain the status quo.

Ed Y.:

She'd likely be the worst of both worlds. But I agree with your very last part. The Uniparty wants to maintain the status quo.
Kenny Easwaran:

> There's no world where

I can stop you right there. You obviously aren't getting much out of anything Nate Silver is saying if you reach this level of ironclad certainty about anything like this.

I agree that it's fairly unlikely that Trump wins the popular vote and loses Pennsylvania at the same time. But I don't think this would be much weirder than Trump losing Iowa or winning Minnesota, both of which are clearly in the realm of possibility. 1% probabilities happen 1% of the time, not never.

Ed Y.:

It's not that each independently couldn't happen. It's that both happen simultaneously. PA always votes to the right of the nation. To suddenly think that will flip to such extremes is ludicrous.

Kenny Easwaran:

What do you mean “always”? Pennsylvania has only voted to the right of the nation three times since World War II (1948, 2016, and 2020).

It’s not correct to say that I think it will “flip in such extremes.” I’m saying that you don’t seem to understand how uncertainty works. I won’t rule out the possibility of a slight change that ends up with Trump winning the popular vote slightly and losing Pennsylvania slightly, though I think it is quite unlikely. I think it is nevertheless more likely than Trump winning Minnesota or losing Iowa, both of which are clearly possible, but clearly unlikely.

Ed Y.:

Gallup just released the national party ID number of R+3, pretty stunning. If the nation is R+3, and PA voted 4 points to the right of the nation, then the top-line best-case scenario is Trump +7 in PA. I think that's too optimistic; likely the state will be Trump +2. Either way, the Gallup party ID and today's Quinnipiac national poll are extremely bullish for Trump.

Ed Y.:

You are correct, my bad. "Always" as in this current new electoral landscape: 2016 and 2020. But a 4-point swing left for PA vs. the nation is highly unlikely. If the national popular vote is tied, Trump likely wins PA by 3 to 5.

Kenny Easwaran:

I agree that it’s most likely that Pennsylvania votes a few points more Republican than the national popular vote. But it would not be incredibly unlikely for this to change by 3 points in the next month and a half - in either direction.

Paul:

Here we go again. I'll say this, Ed: it is amusing how you try to throw in some numbers to give the impression you're actually interested in data and probabilities and not one-sided partisan bluster.

Ed Y.:

It's not bluster to suggest PA is right of the national numbers, and not the other way around.

Steven Schilinski:

Funny, the post mentions why using the past as a baseline is a bad idea and the surprises those assumptions caused in 2016 and 2022, yet you immediately use 2020 as a baseline with no justification and give no rebuttals to Nate’s points.
Josh:

I agree that I cannot see her doing well with WWC voters, but I can see her absolutely dominating white, educated suburbanites, and I still think she will ultimately haul in a typical overwhelming Democratic majority of black voters. I think Hispanic voters are where she will see problems, but those aren’t plentiful in the rust belt and upper Midwest.

All that said, if forced to bet, I would still wager Trump wins in a super close election. I can certainly see Harris winning the election by reestablishing the blue wall, however.

Ed Y.:

The issue is there aren't enough white educated voters in PA. The split is something like 60/40 or 65/35 WWC versus white educated.

Ib A.:

Wisconsin has the highest percentage of non-college-educated WWC voters of the three blue wall states and is also the most Democratic-leaning of them. Nothing you're saying makes any sense, and constantly changing the argument every time data proves you wrong makes you look like a clown.

Ed Y.:

I'm talking specifically about PA.

Phebe:

Your mistake here is supposing only uneducated white people will vote for Trump. Good luck with that.

Penny:

Also struggling to reach all the women who are going to vote in this election….

Phebe:

Big issue, I agree. I think Trump has handled the abortion problem the GOP has (which is a dramatic problem!) very well, and very traditionally. I learned forever ago, during the Reagan administration, how the GOP wants to handle this issue: the party throws bones to the activists early on, then shuts up, and then after they win the election they NEVER AGAIN MENTION ABORTION. Because it's a bargain they strike with Republican women, like me. You hang with us and vote for us (and do all the myriad other things Republican women take charge of), and we'll make sure not to harm women this way.

This bargain does work, and it's exactly what Trump is doing, though I must say it's all under unusual pressure this time, with Dobbs and the widespread misogyny. Abortion has lost a lot of momentum as an issue in the polling, I've been reading in the Times. If Trump wins, this will be a lot of the reason. There are more women voters than men and we vote more than men do. Why men so often ignore us as voters I will never understand.

Penny:

Abortion has not lost any momentum as an issue bringing Democratic and independent women to the polls - mark my words. The 2022 turnout is not going away. In red states women are dying, and we are tired of being told our lives matter less than men or fetuses.

Phebe:

Quite right. I think the GOP guys are slowly, painfully figuring that out.

Carra:

Women are dying in red states and not in blue states. What does that even mean?
Jarrod:

I've asked about this elsewhere, but I'm repeating it because it's relevant. What's the evidence that polls are missing WWC voters or, alternatively, that they're reaching WWC voters they previously missed?

ColonelBustard:

I find it completely possible that a single state is focused on so much that it generates a different result from other similar states. The overall trend in the rust belt could be R+2 since 2020, but offset specifically in PA because hundreds of millions of dollars are being spent there right now. I know Nate says regions tend to stick together, but I see a real possibility of something like this happening.

Ed Y.:

Possible but not likely. Hillary outspent Trump many times over in 2016, to no avail. Also factor in the earned media that Trump and Vance get from interviews and press conferences, which translates to Advertising Value Equivalent (AVE), a metric used in PR to provide a rough gauge.

Phebe:

Money doesn't vote. People vote. Trump gets that. The Dems, not so much.

Rex Meyer:

Watch the Pennsylvania Senate race: if the Democratic incumbent is not outright winning it, it will be a heavier lift for Harris.
Michael Howard:

Is Alaska going to be the new Shapiro?

Jay Arr Ess:

In the sense of "the thing that consistently gets the crotchety version of Nate off the rocking chair on his front porch, waving his cane"?

I sure hope so, but maybe that's because I think crotchety Nate is just oodles of fun.

Cian:

A welcome change if so.

Jim:

We don't hear enough about how Nate thinks she should have chosen Shapiro.

(ducks)

Ib A.:

Hope so, I'm tired of his genocide Josh-posting.

Jaxon Lee:

I LOVE this post, Nate (not that I think he reads the comments, but maybe Eli will). Some unsolicited advice if one of them does read this: Twitter (X) is a cesspool of outrage that does nothing to advance a substantive conversation and tends to skew one's perception of "people" toward the gutter. Nate would do well to limit his engagement on that platform. Otherwise, GREAT AND INFORMATIVE POST! It really does help how I think about pollsters and actual polls. Thank you.
Trevor:

You've mentioned a couple of times that you need to double the margin of error to get the margin of error for the difference, but does that actually follow from the formal derivations of confidence intervals for horserace polls? I know margin of error is already a bit fudged for polling, since there are boundary conditions (if a candidate polls at 1%, it doesn't make sense for there to be a ±3% MoE), and I wouldn't be surprised if just doubling the MoE to get the MoE of the difference is "close enough" when the candidates are about even, but I'm wondering how close it actually is.

An Impartial Spectator:

I was going to post a similar comment. In general, you can’t add together distributions like that.

Let’s say you are measuring wages. Bald people make $10,000/year, with a $2,500 margin of error at 95%. Hairy people make $15,000, with a $2,500 margin. What is the probability that Bald people actually make more than Hairy people? $12,500 is within both margins of error, so maybe you say they’re statistical ties (a terrible term).

However, there is much less than a 5% probability of this, because there’s only a 2.5% chance of Bald people making $12,500 or more, and an *independent* 2.5% chance of Hairy people making less than that. It isn’t quite 2.5%*2.5% (0.0625%), but it’s a pretty small number.

However, my guess is Nate’s relying on the errors in an essentially two-candidate race being perfectly correlated. If a poll underestimates Harris by 3%, it overestimates Trump by 3%. I.e., the assumption of independence of errors is false. This was less true when RFK was still in the race, but I think I see the logic. It’s a bit odd to report error ranges this way, though.
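The overlapping-intervals arithmetic in the wage example is easy to check numerically. A minimal sketch, assuming the two estimates really are independent and normally distributed, with the figures from the comment ($10,000 vs. $15,000, each with a ±$2,500 margin of error at 95%):

```python
from math import sqrt
from statistics import NormalDist

moe = 2_500
se = moe / 1.96            # standard error implied by a 95% margin of error
bald, hairy = 10_000, 15_000

# The difference of two independent normals is normal, with the variances added.
diff = NormalDist(mu=bald - hairy, sigma=sqrt(2) * se)

# Probability the bald mean actually exceeds the hairy mean,
# despite the overlapping 95% intervals ("statistical tie").
p = 1 - diff.cdf(0)
print(f"P(bald > hairy) = {p:.4%}")
```

The result lands around a quarter of a percent: far below 5%, but (as the comment predicts) not quite as small as 2.5% × 2.5%, because the two tail events can also combine in intermediate ways.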
kezme:

This is right: the errors are almost completely correlated when you have two candidates with 94+% of the vote between them.

(The margin of error is also at its maximum for statistics near 50%; the margin of error for an RFK at 5% is much narrower than for the candidates at 40+% in the same poll.)

Ragnarok1er:

Margin of error makes no sense as a concept and should be eliminated entirely. The only thing that exists is a probability distribution centered around the result. The margin of error is an arbitrary boundary with no real purpose. (Nate says so himself, btw: "And one in 20 polls will fall outside the margin of error.")

Tim Lawrence:

Distributions have quantifiable spreads. If you have a better idea for communicating that to the public than 95% confidence intervals, I’d like to hear it.

David Abbott:

I think the phrase standard error is both precise and understandable. It’s close enough to “the average distance of a result from the median.”

Kevin:

The problem is that when the margin of error is expressed as a %, the values for each candidate are correlated. The uncorrelated values are the number of Harris voters X and the number of Trump voters Y. The *percent* of Harris voters is therefore X/(X+Y), and of Trump voters is Y/(X+Y). These both depend on both values. If you want to know the error on the difference, you have to go through the math at https://en.m.wikipedia.org/wiki/Propagation_of_uncertainty for the function (X-Y)/(X+Y). If you compare the result to the same propagation for X/(X+Y), a factor of 2 comes out in the derivative. So this is exact (as far as anything in statistics is).
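In the pure two-candidate case the factor of 2 can even be seen without calculus: the margin (X−Y)/(X+Y) equals 2·X/(X+Y) − 1, a linear function of the share, so its spread is exactly doubled. A quick simulation sketch (the sample size of 1,000, the 50/50 race, and the 2,000 simulated polls are made-up parameters, not from the thread):

```python
import random

random.seed(0)
n, p_true, trials = 1_000, 0.50, 2_000

shares, margins = [], []
for _ in range(trials):
    harris = sum(random.random() < p_true for _ in range(n))  # one simulated poll
    share = harris / n                 # Harris's share, X/(X+Y)
    shares.append(share)
    margins.append(2 * share - 1)      # margin, (X-Y)/(X+Y)

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

ratio = sd(margins) / sd(shares)
print(f"spread of margin / spread of share = {ratio:.3f}")
```

With a third candidate taking a nontrivial share, the relationship is no longer exactly linear and the factor of 2 becomes an approximation, which is the boundary-condition caveat Trevor raised.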
Jeff Evans:

It feels like Nate treats the model errors as normally distributed and symmetrical. Shouldn't they be binomially distributed in the horserace case, and multinomially distributed when 3 or more candidates are modelled? This would account for what Trevor referred to as boundary conditions, since the errors, like the projected values, should be bounded on the interval [0,1] when back-transformed to the data space. When the projected probabilities for each candidate are close to 0.5 (50%), the errors will be nearly symmetrical, though they still won't be normally distributed.

Alex:

In your example, a margin of error will not be symmetric, so you may have something like -0.5 to +4. But for nearly symmetric distributions around 50%, it's a good shortcut.

Jack Motto:

I think people just grossly misunderstand how big margins of error can be. I don't know that there's an easy fix for this, especially because it takes a long time to explain and the bad arguments sound persuasive at face value.

At this stage, I'm of the opinion that the race is close, and unless something major happens there's really no need to look at any more polls. Might as well just wait for the results.

Jay Arr Ess:

Is there a way to explain them that's a little bit wrong but fast and gets the intuition across?

Like… I always explain standard deviations in terms of population heights. Suppose I go pull one guy at random and he's 6'4". Suppose I've also been told that most people are within about 6 inches of the average height. So I would conclude that the average height is somewhere between 5'10" and 6'10", most likely. But since I only found one dude, I shouldn't be very confident in that.

The actual average height for males is about 5'10". I get much more confident once I start pulling more dudes out: maybe I get another 6'4", but I'll start landing some 5'9"s, and eventually, if I average them out, I'll get closer to the true average — provided I'm pulling dudes at random and not just asking guys hanging out by the basketball court.

Maybe I'm too used to thinking about these things. Is that not pretty quick and easy to then analogize that the number of dudes I poll is the number of respondents, and where I find my guys is which groups I was able to get responses from?

Liam:

An analogy: as a test, you've flipped a possibly biased coin 1,000 times and recorded 510 heads. Now we're going to flip the coin 150 million times and give control of the government to whichever side gets more flips.

Do you feel confident, based on that first 1,000-flip test, that heads will win?
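Putting rough numbers on the coin analogy makes the point concrete. A sketch using the normal approximation to the binomial (the 510/1,000 figures are from the comment; everything else is standard textbook arithmetic):

```python
from math import sqrt
from statistics import NormalDist

heads, flips = 510, 1_000
p_hat = heads / flips
se = sqrt(p_hat * (1 - p_hat) / flips)   # standard error of the observed share

# Confidence that the coin genuinely favors heads, i.e. that the true
# heads probability exceeds 0.5 given a 51.0% observation.
p_heads_favored = NormalDist().cdf((p_hat - 0.5) / se)
print(f"observed 51.0%, se = {se:.4f}, P(coin favors heads) = {p_heads_favored:.1%}")
```

The answer comes out to roughly three chances in four: a lean toward heads, but nowhere near certainty, which is exactly the intuition the analogy is after.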
Jay Arr Ess:

I'm stealing this.

Jack Motto:

It's a great question, but honestly... I wouldn't bother.

The problem is, most people don't treat arguments as "arguments." They are more like flag posts, demonstrating the person's conservative or liberal credentials. Most people who make bad arguments know the argument is bad, but they make it because their side makes the argument and they are showing allegiance to their side.

I think once you contextualize this, it really does help, because you realize people aren't dumb or misinformed; they are just partisan.

Bayesian:

Effect size (e.g. Cohen's d for scalar univariates) is a concept that to me is pretty intuitive, but it somehow is not for a dishearteningly large number of people (i.e. not people actively lying with statistics for gain, but people just not grasping it).

Would that norm of reaction were nearly as well known.

Jay Arr Ess:

Maybe we should just make this our go-to party conversation opener.

"Does the term 'standard error' mean anything to you? No? Oh gosh, have I got a treat for YOU."

Switch it up a little bit by picking which example to use.

I'll probably stop getting invited to parties pretty quickly, but until then...
Tess Wallace's avatar

When you have to modify the results of a poll to account for its bias (house effect) you are no longer working with mathematics, you are guessing. When liberal media outlets, like WaPo produce a poll in 2020 showing Biden with a 17 point advantage in Wisconsin two days before the election, you are no longer a outlier pollster, you are a campaigner.

Address the “house effect bias” before printing any other mumbo jumbo.

Tim Lawrence's avatar

That’s not how statistical models work. Regressing historic election outcomes on polls from a given house gives a y-offset that accounts for bias, a slope that shows how change in poll numbers impacts the actual vote, and a correlation coefficient that reflects how much of the variability in the polls is signal vs noise. The statistician doesn’t even care if the polls are ever right at all as long as they are predictive. A polling house that’s wrong 100% of the time might be just as useful as one that’s right 100% of the time, because the direction of the relationship washes out in the model. Mathematically, the only concern is which pollsters produce useful results.
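A minimal sketch of that regression idea (plain Python, made-up historical numbers): regress actual margins on one house's poll margins, and the intercept absorbs the house's systematic bias while the slope captures how poll movement maps to vote movement.

```python
# Hypothetical history for one polling house: (poll margin, actual margin)
# per past race. All numbers invented for illustration.
history = [(5.0, 2.1), (1.0, -1.8), (3.0, 0.4), (7.0, 3.9), (-2.0, -5.2)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least squares by hand
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
        sum((x - mean_x) ** 2 for x, _ in history)
intercept = mean_y - slope * mean_x  # the house's bias (y-offset), ~ -3 points here

def predict(poll_margin):
    """De-biased prediction from a new poll by the same house."""
    return intercept + slope * poll_margin
```

Note the model never needs the raw polls to be "right": as long as they covary with outcomes, the fitted line turns a consistently wrong input into a useful prediction.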

Jay Arr Ess's avatar

Made me think of an xkcd:

https://xkcd.com/2270/

Tim Lawrence's avatar

LOL! There’s always an xkcd.

Bayesian's avatar

When there isn't an SMBC :)

When there are both, buy futures on mind altering substances?

Ib A.'s avatar

Basically Mad Money

Frak's avatar

Which is why Nate had to begrudgingly bring back in Rasmussen, as they are among the top class in real world accuracy over the last several elections.

Tim Lawrence's avatar

That’s a great example. If a simple bit of arithmetic makes a partisan house’s data an accurate predictor, it would be methodologically unsound not to use them.

Ib A.'s avatar

Wrong. They're a relatively precise pollster, but horribly inaccurate.

https://www.antarcticglaciers.org/wp-content/uploads/2013/11/precision_accuracy.png

Tim Lawrence's avatar

This precision vs accuracy distinction isn’t useful in statistical modeling. If the poll results covary enough with voting behavior, a model can be built that predicts voter behavior accurately. That’s the point of my previous comment about regression parameters.

Tess Wallace's avatar

Blah blah blah! You miss the point. It’s NOT statistics if the data contains BIAS!

Tim Lawrence's avatar

Why not? Because you don’t understand the math? That’s fine, but don’t make assertions about statistics if you don’t understand statistics.

Tess Wallace's avatar

I am an engineer. Math and statistics are science based. Bias has no place in data. If there is bias in data, the data is spoiled!

Monstera's avatar

If bias itself can be modeled, then doesn't it just become another piece of data? I'm not sure what the basis is for your idea that bias categorically spoils results. It really depends on whose bias you're referring to.

Tim Lawrence's avatar

Then explain what the y-intercept is for in linear regression, if not to model a bias. For that matter, explain how a PID controller works if it can’t handle an offset. Or why my Fender amp won’t function without a negative voltage bias. If you are an engineer you know very well offsets are used in everyday science.

Sharty's avatar

You would be shocked just how much many engineers know nothing about.

Source: am engineer

Tess Wallace's avatar

BIAS RUINS DATA. A drop of s**t in soup ruins the soup.

CJ in SF's avatar

You don't say what flavor of engineer you are, but estimating deterministic variation and adding bias to compensate is best practices in many fields.

This bias adjustment has many names ("compensation" is common in biology and EE instrumentation).

Tess Wallace's avatar

Bias (noun): prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.

Bias (statistics): a systematic distortion of a statistical result due to a factor not allowed for in its derivation.

My point is that the Bias (statistics) has been replaced by the bias (noun).

Ida's avatar

There is no such thing as unbiased data, I can tell you confidently as a scientist. The only thing that stands between good science and bad science is how well you can understand and account for those biases.

Tess Wallace's avatar

“I’m a scientist” is a silly statement and not very specific. The bias I am talking about is when a pollster has a desired outcome of the election. The liberal media wants Harris to win and they will skew the numbers for the purpose of affecting the election. THAT BEHAVIOR CAN NOT BE CORRECTED IN A MODEL. The data is bunk.

Ib A.'s avatar

So you should know what residual analysis is, right? If the residual plots or "error" have a pattern / are predictable, you have a bad regression that can be fixed, i.e., bias.

Cian's avatar

He does address house effects. His list of pollster ratings includes the bias of each pollster, based on their history of giving results that consistently lean either Republican or Democratic, and that data is fed into the model.

Tess Wallace's avatar

Bias entering anything renders it non-scientific. Bias means bias, which means worthless.

Kenny Easwaran's avatar

That's not what scientists mean by "bias". "Bias" doesn't mean worthless - it very specifically means something that is a reliable measurement tool if you apply a specific correction to it. A thermometer in Santa Monica is a biased estimator of the temperature in downtown LA. Most of the time, your thermometer in Santa Monica will measure a temperature that is several degrees lower than the actual temperature that is in downtown LA, because the ocean breezes keep Santa Monica cool even when downtown LA is heating up. But that doesn't mean that the thermometer in Santa Monica is worthless - if you see that Santa Monica thermometer hit 80 degrees, then you know it'll be hot in downtown, and if it hits 40 degrees, then you know it'll be cold downtown. You won't know precisely the temperature, but if you measure it often enough, you can figure out the magnitude of the bias and correct for it.

The more unfortunate part is that there's also some noise on top of the bias - it's not always precisely ten degrees warmer in downtown than it is in Santa Monica. But it's still not worthless, even though it's a noisy and biased estimator.
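That correction is easy to make concrete in code. A toy simulation (stdlib Python, all numbers invented): estimate the bias as the mean gap between paired readings, then add it to new Santa Monica measurements.

```python
import random

random.seed(0)
TRUE_BIAS = 10.0  # downtown runs ~10°F warmer (invented for illustration)

pairs = []
for _ in range(365):
    santa_monica = random.uniform(55, 75)
    downtown = santa_monica + TRUE_BIAS + random.gauss(0, 2)  # plus daily noise
    pairs.append((santa_monica, downtown))

# Estimated bias: average gap between the two locations over a year
est_bias = sum(dt - sm for sm, dt in pairs) / len(pairs)

def downtown_estimate(santa_monica_reading):
    """Corrected downtown estimate from the biased Santa Monica thermometer."""
    return santa_monica_reading + est_bias
```

With a year of paired readings the bias estimate lands very close to the true offset even though every individual day is noisy, which is exactly the "measure it often enough" point.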

Sssuperdave's avatar

This may be the best illustration of statistical bias I’ve read (and I work in statistics) - well done!

Tess Wallace's avatar

My point is that bias is NOT the statistical definition, it has become the common usage definition…meaning bias for or against something.

The pollsters have a favored outcome and they make sure their numbers show it.

A.J. Robb's avatar

Citation needed, please. WaPo, in the 90 presidential polls published, is on average only 1.1% more favorable to Dems than what the results end up being. 70% of their polls have called the race correctly, and 93% have called the race within the margin of error.

Please show us some actual evidence that WaPo is trying to wag the dog with their polls.

Kenny Easwaran's avatar

That’s not what Nate is saying. But as others point out, *even if* pollsters have a favored outcome, and lean on their polls to point to it, if we look at their track record and can see that this personal bias systematically shows up as a statistical bias of (say) 3%, then we can use their polls anyway.

If their personal bias leads them to highly noisy results that aren’t systematically off by roughly the same amount, then they are much less useful. Are you claiming that this is what is happening, that sometimes they’re off by 1 and sometimes by 15 and there’s too much noise to use their results even after some correction?

Tess Wallace's avatar

The data is bunk when the organization producing the results has a favored outcome. It can not be “corrected”. The polls are pushing the electorate vs reflecting the electorate. WaPo is the most egregious. It does not matter if this (or other sites) apply a correction. The electorate sees the Harris +12, +7, +6. This has a chilling effect on the electorate. It is no longer science or statistics, it’s election interference.

Tess Wallace's avatar

Scientists are not completing the polls…media are! Bias in this sense means exactly that…bias. The data collectors have a vested interest in the outcome.

Kenny Easwaran's avatar

Media aren't the ones conducting the polls - scientists are. It's not Dan Rather calling people up when CBS does a poll - it's scientists who specialize in measuring public opinion. Bias means bias - systematic error that can be measured. Scientists are fine with bias, as long as we try to measure it.

Phebe's avatar

Nobody cares who the data collectors want to be president. All we care about is whether the data they collected accurately predicts the winner. Suppose they ask which candidate's hair color respondents prefer? If that accurately predicts who wins the election, hey, great polling question. It doesn't have to make sense, it just has to work.

Sssuperdave's avatar

You’ve shown that you understand the difference between statistical bias and showing-favoritism bias, but they aren’t 100% unrelated.

If a certain media outlet is biased toward Harris (i.e. showing favoritism toward her), and they do it in a way that is even remotely consistent, perhaps because of their polling methods, then over time their polls will average a certain number of percentage points in her favor, i.e. statistical bias. If you correct for that bias by adjusting their result, you can get something meaningfully predictive. Not perfect by any stretch, but much more valuable than saying "this data is garbage, throw it out."

Now, there are pollsters out there that are truly garbage, where their methods create so much noise that there is no hope of measuring their statistical bias. I think Nate does a great job of identifying these and excluding them.

An Impartial Spectator's avatar

That’s not how that works. If I have weights I know are 10 and 20 pounds, and a scale that says they’re 15 and 25 pounds (biased by +5 pounds), I can weigh a third thing accurately using the biased scale by subtracting the 5-pound bias from the measurement.

Jay Arr Ess's avatar

So I have a totally unrelated thing I'm thinking about but that's related to this broader discussion.

I like birds. I think they're super fun. I'm trying to find a way to do a project relating to the state of birds out here. It's helpful for advocating for like.. habitat conservation if we can tell how birds are doing. They're susceptible to pollution, too, so if birds are dying off then it tells us something about the state of the place people are living.

Getting data on birds is hard. There isn't much funding for it, because understandably, most people giving out funds are focusing on things that affect humans, and birds fall low on the list of priorities. Not to mention less cute things like insects which are also really important to our ecosystem.

However, a lot of people like watching birds. Somehow Cornell came out with an app where you can track the birds you saw and where you saw them. And then Cornell and the Audubon Society got together, and now researchers can use the information that birders supply through a platform called eBird. https://ebird.org/home

It's super cool! Except that birders have a tendency to focus on the rare birds they see (https://x.com/RosemaryMosco/status/1316379590621892610). So the data they input is really difficult to interpret.

But there are some 200 million observations out there. That's so many birders! It goes back to 2002! That's a long time!

There should be SOME useful information in this data, right? The birders are biased because they'll jot down a yellow warbler on its migration route but not all the house sparrows they saw, but in the end, they did tell us SOMETHING.

Shouldn't a clever researcher be able to use this somehow? Not perfectly, of course.

But with a lot of humility and a heavy dose of creativity, we might be able to use this data to say something real about how birds have been managing over the past two decades. I think that's really cool.

Perhaps I'm on the optimistic side of things - I'm of the philosophy that, as long as there's SOME reflection of true responses, there are probably ways to recuperate some measure of information that's better than if we had no data at all.

Otherwise we're stuck with just vibes. Problem is, vibes have their own biases. And these are harder to quantify. I can research the data and tell you that people will be looking for rare birds, and they'll be going out when the weather's better, and they'll tend to go out when birds are migrating. I can account for this in my analyses. Not perfectly, of course. But I can at least tell you a bunch of the ways that this bird data is biased.

Almost all data that's about living creatures is going to be biased in some way. We just don't have the resources for perfect random sampling in every scenario.

So then we trade off the bias of vibes versus the bias that comes with our data. Empiricists like me are going to say that I think data speaks better for me than my own gut most of the time.

Cian's avatar

I don't really understand what you think is happening with these. Nate Silver is aware of bias; he does statistical analysis on how polls from each organisation compare with eventual results. If the polls are on average 2 points more Dem-leaning than results, but they are consistently close to 2 points off, then he's going to give them a good rating for consistency while applying a bias/house-effect adjustment in the model. If they're Dem-leaning but totally inconsistent, he'll apply the bias adjustment but give them a low rating so they aren't weighted much at all.

Are you under the impression that these polls are basically just rolling a die to make up a result and then publishing it? And if so, why bother reading a blog based around a model which relies on polling data as an input?

CJ in SF's avatar

Bias has different meanings. You seem hung up on the definition that includes personal opinions.

Transistors have bias applied to make them function as desired in a circuit.

Eric W's avatar

That’s simply not true. A poll that consistently predicted the Democratic candidate would do precisely ten points better than the actual final result, would be undeniably “biased” and also the most useful poll in history. You could just subtract ten every time and you’d know exactly what was going to happen!

That’s how you can have a high-quality poll with a strong partisan house effect.

Tess Wallace's avatar

Except…guess what gets reported in the same media you defend? THE FAKE NUMBERS! Search any poll news and the entire media reference the biased numbers!

Sssuperdave's avatar

Okay... you've finally said something I can agree with, that the wrong numbers get reported in the media when a particular poll is biased. However, that is completely irrelevant to whether Nate uses them in his model with appropriate adjustment. As Eric referenced, if the poll is consistently perfectly biased by 10 points, I absolutely would want Nate to use it in his model, of course with a 10 point adjustment, and that is exactly what Nate is doing.

The fact that the 10-point-wrong number is reported in the media sure stinks, but it doesn't mean Nate can't get any predictive value out of it.

Tess Wallace's avatar

It’s not just the numbers reported, it’s the language used (sometimes even from Nate). A poll is reported as: “Kamala is winning”, “Trump set for defeat”, “Harris is ahead”. All of these are wrong: no one is winning or losing until a vote is cast. This is where the polls push the electorate instead of reflecting the electorate.

Jim's avatar

"When you have to modify the results of a poll to account for its bias (house effect) you are no longer working with mathematics."

Just the opposite, measuring how predictive your inputs are, and how biased, is a simple and useful mathematical operation.

Tess Wallace's avatar

The inputs are not predictive because the source is tainted… the pollsters have an interest in how their numbers will affect the outcome of the election.

kezme's avatar

The pollsters have an interest in being accurate, because public opinion polling is just a loss-leader for their actual business that pays the bills, which is private polling on things like consumer sentiment. They're in it to demonstrate to their potential clients that they can produce a reliable result when they're polling on washing powder or hotel room preference.

The unfortunate result is that the incentives push them towards the herding behaviour that Nate mentions - so not favouring one candidate or the other, but not publishing numbers that stray too far from the common consensus in either direction.

Tim Lawrence's avatar

Except the polls are predictive. Poll-based election models generally predict vote counts in national elections to within a few percentage points. So what you’re saying is demonstrably false.

Tess Wallace's avatar

No they are not. Ask Hillary😂

Mike w's avatar

Fascinating. Not clear to me why you appear to respond to those less informed/hyper partisan critics as often as you do.

Jay Arr Ess's avatar

My guess is they're annoying and potentially damaging if they don't get slapped away?

I respond to mosquitos because they're whiny bloodsuckers and if I don't whack 'em down or go somewhere netted they'll keep coming after me and making me miserable and in some places they'll transmit malaria or West Nile. I don't pay much attention to spiders because spiders generally leave me alone, even if I do feel an admiration for their cold hunting prowess.

Morgan's avatar

As Mr. Silver well knows, choices in modeling are always iffy ("all models are wrong, but some are useful", as Box noted). Decades ago when Geoff Hinton visited my institute, we had a little argument about throwing out "bad" data vs incorporating it. I had some practical examples where it helped to toss bad stuff, and Geoff, like Mr. Silver, preferred to incorporate it, even if it is down-weighted. So it's hard to say what is best. November will tell ...

Cian's avatar

This isn't true though. There is data that Nate Silver considers bad enough to toss out, hence the existence of banned pollsters.

Dan Homerick's avatar

If "bad" means "whoops, something went wrong and this sample is fundamentally unlike the others" then sure, tossing it out can improve both the precision and accuracy of your results. That's most commonly what people mean when they say bad.

But I'd argue that very few "bad" polls are released. That is, if the pollster has a methodology that's reasonably rigorous, and they followed it, then the results aren't "bad" in the "toss it out" sense. They may not be very good (thus pollster ratings), but that's not a case where you should toss their "out there" values and keep the ones that match the herd.

I do expect some bad polls come out, where the root cause is a data record-keeping error, or an employee fabricating data, but good luck finding those without a look at their books.

David Abbott's avatar

Nate has put more work into evaluating pollster quality than anyone I know of. He uses a very savvy weighted average. The other thing is, his weightings rarely move the final win probabilities by more than a few points. They are much more useful and impactful in congressional races, which are often thinly polled.

Frak's avatar

You can't decide beforehand what data is giving a good result and what data is giving a bad result. Or you end up like Larry Sabato declaring Trump has a 1% chance. Throwing out outliers does work in some situations, like when you're actually doing science and making empirically valid measurements. But that ain't public opinion polling.

Kenny Easwaran's avatar

Public opinion polling is doing actual science and making empirically valid measurements. But they're noisy.

Throwing out outliers can make sense when you have a good theoretical understanding of the situation, that makes it not unlikely that a specific type of event led to a highly misleading measurement, such that it's more likely to be this sort of error than just several unexpected coin flips in a row coming up the same way. But it makes less sense when your underlying process is very noisy and could legitimately sometimes get measurements that are far from the average. If your traffic counter measured 0 cars on the 405 freeway through the Sepulveda Pass one day, it's vastly more likely that there was a road closure (which shouldn't factor into your traffic average) than that everyone independently decided to drive a different direction that day. But if you're trying to measure average income and you find someone with an income of $0, or an income of $1 billion, those are more likely real people who should be part of your average, because income is a very noisy parameter.
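The income case is easy to demonstrate (invented numbers): trimming the extreme value throws away real signal, unlike discarding a reading from a closed road.

```python
# 99 typical earners plus one genuine billionaire (all values invented)
incomes = [45_000] * 99 + [1_000_000_000]

mean_with_outlier = sum(incomes) / len(incomes)  # ~$10M, dominated by one person
trimmed = sorted(incomes)[:-1]                   # "clean up" by dropping the max
mean_trimmed = sum(trimmed) / len(trimmed)       # back to $45,000

# Whether that trim is an improvement depends entirely on whether the
# billionaire is a data error or a real member of the population.
```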

Joseph Zapata's avatar

This is a great article, and more evidence that Nate should stick to explaining data rather than give us his editorial opinions like “Trump won the debate if you turn the sound off because he’s taller”.

K. Natarajan's avatar

a priori - what a novel concept. 😂

(FTR: I have 2 degrees in Biostatistics and while I don’t do that for a living, I absolutely respect & appreciate your efforts).

Damian's avatar

“Many polls have shown signs of racial depolarization this cycle, meaning that the white vote is getting bluer, while racial and ethnic minorities are growing more Republican.” Nate Silver

Just had a realization that hit me quite hard.

Voting for Harris is a vote for status quo, to maintain the current state, for little change. It is therefore most attractive to those for whom the current state is quite nice. I.e. educated elites with good jobs. In other words, voting for Harris is a sign of privilege.

Voting for Trump is a vote for change, to shake things up, to fuck the man. It is therefore most attractive to those for whom the current state sucks. I.e. the poor, the diminished, and the pissed. In other words, voting for Trump is a sign of disenfranchisement.

If I’m right, 46% of Americans feel disenfranchised. Some, like the blue collar whites, have become disenfranchised more recently, and others, like minorities, have been disenfranchised for generations.

And if it’s already fucked up for you, why not vote for Trump? Somebody who is railing against the machine on your behalf. What have you got to lose?

In reality, a lot. Trump is no champion of the disenfranchised, he’s just a vocal Don Quixote but he has captured their votes because he is the only one talking to them.

farnor's avatar

Maybe if Trump hadn’t been president before you could talk about the disenfranchised. But he was. And there is a record.

Four years ago most were worse off than now. I can say this without even linking any statistics or data points, because four years ago we were locked up and we had a toilet paper shortage. A president suggested drinking bleach or shoving UV lights up your behind.

Harris probably reaches people who are done with old men in their late 70s in charge. And women who have seen men take away their rights. The rest is the usual fear mongering on the right and compromise proposals on the left.

Carra's avatar

The whole world was locked up during Covid, and the US was still better off. What people are comparing is the 3 years with Trump prior to Covid vs now, and most are not better off, apart from educated elites with white collar jobs.

farnor's avatar

If we play the whole world problem, the US had the fastest recovery, strongest economy, wage growth and lowest inflation. But we don’t, do we?

SilverStar Car's avatar

US has had a lot lower inflation than the rest of the world, our economy is in a better place.

You’re listening to cult propaganda that is crapping on the US to hold up Dear Leader as the only possible solution.

Same old con.

Lonnie Hanekamp's avatar

No, the US is not “the envy of the world”. You should stop listening to “cult propaganda”.

Inflation rate (August 2024)

United States 2.5

Spain 2.3

United Kingdom 2.2

Indonesia 2.12

Canada 2

South Korea 2

Germany 1.9

France 1.8

Saudi Arabia 1.6

Italy 1.1

Switzerland 1.1

China 0.6

farnor's avatar

Now do economic growth :-) you guys are funny. Love when you pull snapshot in time charts without context.

SilverStar Car's avatar

🪞🤡

Cherry picking 🐮 💩? 🤦‍♂️Studiously avoiding inflation over time, instead a single month? The other economic metrics?

Look, it’s tough out there, no doubt. Covid kicked the azz of the economy worldwide. In 2020 there was a helicopter drop of an extra $4T into the US economy, it’s a GD miracle that the repercussions of that, and Covid created inefficiencies (short & long term) are as low as they are.

But all the blame getting shifted to the guy that wasn’t in office at the time? And now to the person that was NEVER in charge.

Oh yeah, I’m the one ungrounded from reality. 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣

Lonnie Hanekamp's avatar

The countries listed were all G20 countries, except for Spain and Switzerland. You will find that all of the G20 countries track pretty similarly, so the August numbers track quite well with the average of 2022 or even the entire Biden term. For example, here is the 2022 average:

https://wisevoter.com/country-rankings/inflation-by-country/#google_vignette

If you want to average the entire Biden term to prove me wrong, go ahead. You won’t be able to. In terms of the G20, the US was “mid” in terms of inflation rate. You can laugh with as many emoticons as you want, but you will still be wrong.

Carra's avatar

For whom, though? White collar workers or blue collar folks? I am just agreeing with Damian as to why the working class is moving towards Trump. The first step is to listen, which is not happening.

SilverStar Car's avatar

It’s pre-Covid nostalgia, and Trump’s recession that had already hit manufacturing (mid-2019), being obscured, dwarfed by the Covid tidal wave.

It’s not actual $, that’s the excuse. Otherwise the huge new tax increase Trump’s promising (a LOT bigger than his last one) would sink him

Jessica Margolin's avatar

As a science-educated person, I concur, and this is why I have issues with "democratization of research" - many "researchers" do NOT do this.

It's practically a moral training that happens when you're studying science and engineering, and other people reject it because THEIR morality is to make sure everyone is happy.

----

"Make your own average. Seriously, it’s not that hard. But I do have one stipulation: you have to publicly specify the rules ahead of time. I think you’ll find that when you’re forced to be consistent, to set standards that aren’t governed by your ad hoc sense of the vibes or by your partisan preferences, you’ll have a lot more sympathy for the polling aggregators"

Brian's avatar

RealClearPolitics should not be on this list, since it has absolutely no rules for what polls it includes nor how long it leaves them in its average.

Just this week they covertly moved the Atlas Intel poll (Trump +3) onto the list after never including it before, then snuck it up their list of recent polls so it will be able to stay in the average longer, so they can drop polls from the average that are more recent but show a better picture for Harris.

Lonnie Hanekamp's avatar

You are mistaken. Atlas Intel has been there the whole time. Also, the position of the poll on the list has nothing to do with the calculation of the average. The actual polling for all polls on the list took place between 9/3/2024 and 9/20/2024, while Atlas Intel polled between 9/11/2024 and 9/12/2024, which is right in the middle of that range. It is also important to point out that Nate ranks Atlas Intel as an “A” rated pollster.

You may be surprised to know that the final RCP averages in 2020 beat 538 in accuracy in 6 out of 7 swing states. This doesn’t mean that Nate’s model is bad. It just means that the unadjusted R-leaning pollsters used in the average made up for pollsters missing low propensity Trump voters. This was an issue in both 2016 and 2020 and may happen again in 2024.

User's avatar

Comment deleted (Sep 24)
BS's avatar
Sep 24 (edited)

I would actually say their biggest problem, even bigger than weighting all pollsters equally, is that they don’t weight by recency at all. A poll from three weeks ago they will weight just as heavily as one from yesterday. This means that their averages are slow to respond to shifts in polling. And if there’s a shift in the polling immediately before an election that just happens to be in the opposite direction from a systemic polling error that has existed the entire time, RCP can appear to be more accurate by dumb luck, rather than by the quality of their methodology.
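Recency weighting of the kind described here is usually a simple exponential decay. A sketch (the half-life value is my assumption for illustration, not any aggregator's actual parameter):

```python
HALF_LIFE_DAYS = 14.0  # assumed: a poll loses half its weight every two weeks

def recency_weight(age_days: float) -> float:
    """Exponential-decay weight for a poll that is `age_days` old."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# A three-week-old poll counts for far less than yesterday's:
weights = {age: round(recency_weight(age), 2) for age in (0, 1, 7, 21)}
```

An average built with these weights reacts quickly to late shifts; an unweighted average like RCP's lags by construction.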

Lonnie Hanekamp's avatar

I agree, but the argument for dumb luck will be weakened if they beat Nate on swing state polling for a third presidential election in a row. Nate seems skeptical that the systemic polling errors we saw in 2016 and 2020 are related, so he thinks that any polling error could go in either direction in 2024. I am skeptical of this when we have the same candidate, Trump, in all three presidential elections and it is pretty clear that the pollsters have been having difficulty polling the white working class, low-propensity voters in Trump 75-25 districts (for example). Over-polling those types of voters in Trump 60-40 districts instead is going to continue to cause Trump to outperform his polls on Election Day.

BS's avatar

RCP does nothing in their methodology to account for any potential polling error, beyond simply including and excluding certain pollsters. Nate actually adjusts based on math using historical data. Logic dictates his methodology is more robust than theirs.

Lonnie Hanekamp's avatar

Yes, I understand this and agree with that. I am just saying that there is a good chance that RCP’s dumb luck presidential streak will continue for the reasons that I previously outlined. Also, Nate doesn’t exclude any of the pollsters that Chad mentioned.

Lonnie Hanekamp's avatar

First off, 2022 is irrelevant, since Trump was not running and the “missing Trump voters” issue does not apply. It doesn’t matter that the graphs took a sharp turn before the election. It doesn’t matter about their editorial bias or who owns them. It doesn’t matter that they included pollsters that Nate also includes, like Trafalgar, which Nate rates a B+ and you rate as junk. It also doesn’t matter that they include pollsters like Morning Consult that underestimate Republicans by 3 points. The RCP polling average follows a very simple, transparent model: simple math. As I said in my original post, they actually benefited from the plethora of right-leaning pollsters. In fact, their final averages beat 538 in 6 out of 7 swing states in both 2016 and 2020. They even called some states exactly right, like Biden +1.2 in Pennsylvania in 2020. We don’t know if pollsters have finally fixed the “missing Trump voters” issue in 2024. If they didn’t, RCP will have another good year.

Mike Curtis's avatar

Outlier question for you. Is it possible that the polls are significantly undercounting Gen Z? As I understand it, since 2016 approximately 40 million additional Gen Z voters have become eligible. Are twentysomething adults even included in any of these polls? I realize that a lot of them may not register to vote. But to throw a number out there, if over half of them register and actually vote, that's 20 million voters who may not be accounted for in current polling. And it's an easy bet that a majority of Gen Z voters who do turn out will vote Democratic.

Kenny Easwaran's avatar

It's possible that the polls are significantly under-counting Gen Z - but it's also possible that the polls are significantly over-counting Gen Z. Both types of errors have occurred with various generations in various past polls.

But yes, pollsters absolutely do include adults of all ages in their polls.

CJ in SF's avatar

You can look at the cross tabs for any high quality poll and see that of course Gen Z is included in the dataset.

Of course that doesn't mean the likely voter calculation is right, or that it is a fair random sample.

Also, the headline grabbing enthusiasm isn't universal.

https://circle.tufts.edu/latest-research/youth-voter-registration-major-challenge-2024-election

Cian's avatar

Why would you expect them to be under counting Gen Z at all though?

User's avatar

Comment deleted (Sep 23)
Tom Hitchner's avatar

Huh? No it wasn't. The left's theory of victory was that Clinton was ahead in almost all of the polling. It didn't rely on any hidden voters.

User's avatar

Comment deleted (Sep 24)
Tom Hitchner's avatar

Oh the LEFT left, yes.

Nate's avatar

It seems far-fetched that respected pollsters would publish results without including anyone in their 20s.

Myles M's avatar

I don’t think it’s a matter of whether Gen Z was polled; rather, it’s how pollsters weight that demographic (known) compared to its actual turnout (unknown).

Nate's avatar

That's true, but you could say that for all demographics. I guess there might be more uncertainty for Gen Z than other generations, although I'd think it would be similar to the same age cohort in other years (20-year-olds in 2020, 20-year-olds in 2016, etc.).
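The weighting question in this exchange can be made concrete: the same raw cohort margins produce different toplines depending on the turnout share assumed for each generation. A toy illustration with entirely made-up margins and shares:

```python
def weighted_topline(margins_by_cohort, turnout_shares):
    """Combine each cohort's Dem-minus-Rep margin, weighted by its
    assumed share of the electorate. Shares must sum to 1."""
    assert abs(sum(turnout_shares.values()) - 1.0) < 1e-9
    return sum(margins_by_cohort[c] * turnout_shares[c]
               for c in margins_by_cohort)

# Hypothetical cohort margins from the same raw responses:
margins = {"Gen Z": 20.0, "Millennial": 10.0, "Gen X": -2.0, "Boomer+": -8.0}

# Two different turnout assumptions (both hypothetical):
low_youth  = {"Gen Z": 0.10, "Millennial": 0.25, "Gen X": 0.30, "Boomer+": 0.35}
high_youth = {"Gen Z": 0.17, "Millennial": 0.26, "Gen X": 0.27, "Boomer+": 0.30}

print(round(weighted_topline(margins, low_youth), 2))   # smaller Dem margin
print(round(weighted_topline(margins, high_youth), 2))  # larger Dem margin
```

Nothing about the responses changes between the two runs; only the turnout model does, which is why the likely-voter screen, not the sampling, is usually where generational error creeps in.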

Jay Moore's avatar

I create sensor data fusion algorithms for the military, and I very much appreciate the Bayesian rigor of your model. The fact that some polls involve internal models that may incorporate input from your own is horrifying (from the standpoint of statistical correctness and professional standards, anyway). I spend 80% of my time designing data flows that make such abominations impossible; I don’t envy you the task of trying to eradicate this contagion from your results.

Laura's avatar

The Twitter folks who get mad at your model need to channel that anger into doing something about the poor polls. Being mad at polls while doing nothing won't help; get involved in the campaign and prove the polls wrong.
