Nate, I am once again asking you to stop basing your model's accuracy on prediction markets. Reminder that Polymarket had Beyonce at a 96% chance to play at the DNC! They don't know any more than any of us here do. It's silly to assume that they should always mirror your model.
I've never seen Nate use prediction markets as anything other than a gut check. "If there's a big difference between the model and prediction markets, that's interesting, what could be causing it?"
Even as a "gut check", it's not a good comparison between model A and model B when you know for a fact that model A is a significant input into model B. If Nate's model were private, a comparison to prediction markets would be much more informative.
It would be *more* informative, but that doesn’t mean it’s *zero* information. If there’s a big disagreement *despite* being a major input, that tells you *something*. What precisely? That’s harder to say.
My concern with Nate’s use of prediction markets is one of causality (or is it endogeneity - sort of out of my statistical expertise here). I would think that prediction markets are influenced a great deal by Nate’s model. Consequently, it doesn’t feel right to feed prediction markets back into the model.
I stand corrected and misread how he uses Polymarket. That said, there is some concern with using it as validation, as a few other commenters have pointed out. It would be best to see a completely independent source used to validate the model, but that may be difficult to find.
Honestly, I don't even think it's validation in Nate's view. He's a gambler and has a somewhat stream-of-consciousness style of observation. His references to prediction markets are just that - observations.
Even that is unwise. Prediction markets are data bro woo-woo. There’s no a priori reason to expect them to have any predictive value at all, and that’s before you take into account the strong likelihood of the model influencing them. They’re just conventional wisdom aggregators, and conventional wisdom is wrong all the time. Just because they can turn the same inchoate vibe of the race anyone could glean by watching enough CNN into a single number doesn’t make them actual data.
You are leaving variables on the table if you don't factor in prediction markets. It is true that prediction markets will be partially based on his model, but they will also have other interesting variables factored in that would be a mistake to not integrate in some way.
I came here to say something very similar. It’s bad practice to use prediction markets as a metric of the model’s validity, whether formally or informally, when it’s highly likely that the model is influencing the prediction markets. I’m sure Nate understands this rather simple fact, so I’m wondering whether his comment here lacked clarity.
So if prediction markets are influenced by Nate, and it thus makes NO sense for him to factor them into the model, how can we use ANY polls from the mainstream media (ABC, CNN, FOX, NBC...), who are in the business of influencing the electorate through their brands of journalism, which is all biased? Recently a study was done by the Media Research Center that showed that across the major networks for the past few months Trump received 92% negative coverage, and Biden/Harris had 85% positive coverage - all of which influences the same people they are polling for their opinions...get where I am going? The higher-level answer is to return to a day when networks had "balanced coverage" and did not engage in censorship by omission. The number of Democrats (and I am an Independent) that I speak to who are very smart but COMPLETELY OBLIVIOUS as to the facts of the day's news is ASTOUNDING...I often watch CNN and FOX side by side and it's remarkable what is NOT REPORTED by CNN...I digress, but if Nate eliminates prediction markets (which are gamblers and not political hacks), then he also needs to get rid of the mainstream media's polls (including FOX)...so we can get a clearer picture/snapshot - according to your premise. Does that make sense? I hope so...
Although I agree with your comments on media bias, as you say, it's a little bit of a digression. The distinction here is that media bias affects the way people actually vote, whereas Nate's model affects the way people *think* people will vote. Voters don't look at Nate's model and say "I'll vote for Candidate X because Nate's model says Candidate X will win," but the participants in prediction markets do look at Nate's model and say "I will bet that Candidate X will win because Nate's model says Candidate X will win." In other words, there is good reason to believe that Nate's model will strongly influence the prediction markets, even if the polls that go into the model are biased (although I think Nate's model does a better job than any other of eliminating biases as much as possible).
As others have said, any difference between Nate's model and the prediction markets could be informative, but I think that's only likely to happen around major events because the markets react much more quickly than Nate's model will ever be able to. Over a long timescale, I'd expect them to be highly correlated because Nate's model is widely regarded as the best out there, and you'd need to have a good reason to bet against it.
I like Nate's model...but I think media outlets' polls should be excluded, if for nothing else than the appearance of the temptation of bias by the media polls. You make good points, tho, but we are almost splitting hairs. I very much think that people look at, for example, the reporting of the RCP average (now often in the media) and, without other information, some make assumptions about where the voting group is breaking (maybe in a geography, for example) - the inference being "they must be doing something right" or "their message must be resonating"...better to get rid of all media polls...they conduct them in order to report on them to get ratings (FOX is a huge culprit here)...that, on its face, should be dispositive. IMHO.
it's highly likely, so if they were significantly divergent that would send a signal someone's done something wrong. they're not, so you can at least rule that out, even if there might be more subtle issues with the model
Because polymarket only takes moneylaunderingbux and blocks US users, and PredictIt puts absurd taxes and restrictions on their markets.
This is a silly thought-terminating cliché for people who think that the efficiency arguments that apply to giant liquid markets like the US equities market automatically apply to low-volume offshore crypto casinos.
It's quite trivial to buy into polymarket, actually, with a VPN or otherwise. And it's absolutely not low volume. The presidential election market has $774 million bet so far. There is a 0.1 cent bid/ask spread with insanely large liquidity. If you genuinely think it's mispriced you should absolutely pile tens of thousands of dollars into it.
The fact is that it's very difficult to beat the markets, and I think people intuitively know that, which is why nobody who complains about their accuracy ever actually puts their savings into it.
I wouldn’t convert my savings into unstablecoins and deposit them into an unregulated CFTC-banned cryptocurrency casino no matter how mispriced I thought the casino’s odds were, that’s correct. I would (and have) put my money into predictit until realizing that their winnings tax and betting limits make it not worth anyone’s time to arbitrage a mispriced contract unless it’s off by like 20%+.
I'm not really a fan of crypto either but this just feels like pretty massive cope. Polymarket is incorporated in New York and has a 9 figure valuation and isn't going to steal everyone's money anytime soon.
There's plenty of smart people out there, and only a few people who can consistently beat the markets. If one of them said that the markets were stupid, I would believe them, but people sitting on the sidelines saying so have as much value as some random dude on twitter saying that Silver's model is terrible compared to his own.
Because even if you think a data source is unreliable, it doesn't mean you know with any certainty what direction or how much it will be off by.
If someone makes a weather forecasting model that is just a monkey spinning a wheel, I know it's a bad model, but I'm not a meteorological expert, so I don't think I can necessarily feel confident that I can reliably beat it. We're ultimately both just guessing.
If someone says that a specific data source is bad, I feel that they should have a data source that is superior. If I said that election models are bad but failed to come up with any better form of forecasting, would that be a good criticism?
Wait wait, isn't there a very easy way to handle some of this? Event study!
Heuristically: if Nate wants a little gut check, compare the model output to Polymarket just barely before updates are published. To see the extent to which Polymarket is following Nate, see what the movement is just after the model is published.
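In case it's useful, here's a minimal sketch of that event study in Python. Everything in it is a placeholder - the data structures and the one-hour window are my assumptions, not anything Nate or Polymarket actually publishes - but it shows the before/after comparison being proposed:

```python
from datetime import timedelta

def event_study(model_updates, market_prices, window=timedelta(hours=1)):
    """Average post-publication movement of the market toward the model.

    model_updates: list of (timestamp, model_probability) tuples
    market_prices: time-sorted list of (timestamp, market_probability) tuples
    """
    def price_at(t):
        # Last observed market price at or before time t.
        past = [p for ts, p in market_prices if ts <= t]
        return past[-1] if past else None

    moves = []
    for ts, model_p in model_updates:
        before, after = price_at(ts), price_at(ts + window)
        if before is None or after is None:
            continue
        # Positive = the market ended up closer to the model's number
        # shortly after publication than it was just before.
        moves.append(abs(model_p - before) - abs(model_p - after))
    return sum(moves) / len(moves) if moves else float("nan")
```

A consistently positive average would suggest the market is chasing the model rather than the other way around.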
I think "crowdsourcing" ended up becoming the single most detrimental lingering effect of the "technocratic liberalism" rush that Nate mentions. It was absolutely worth a shot given what we, societally, knew at the time, but a decade later what it seems to have resulted in is an even more insidiously-entrenched need to pander to the lowest common denominator, because "recommended for you" algorithms basically drive everything, and those algorithms are still mostly just click-counting and keyword-sifting.
We've found that there is a sort of genre that develops through audience democracy, but unfortunately this genre sort of ends up dominating all other genres and resulting in an all-you-can-eat buffet where the lo mein sort of tastes like the General Tso's chicken. It's like if Bing Crosby still ostensibly played crooner music, but had a Beatles mop-top and peppered his lyrics with "groovy" a lot.
The problem with using a monetary incentive to marshal the wisdom of the crowds is that the financial incentive invites gamblers. And while there are professional gamblers who smartly gamble for a profit, that is not where most of the money for gambling is coming from.
I believe the idea of the betting markets is that if you combine the consensus of all of them - the 6 biggest - then the average consensus of over $1 billion wagered by hundreds of thousands of people (possibly millions by now) will be as good as, if not better than, most models.
And certainly better than something like "We put the odds of Harris (Trump) at 60%." that you might hear on Bloomberg or CNN.
Personally, I take the average of Nate, 538, Decision Desk, the Economist and the average of all the betting markets and put it into a daily spreadsheet.
Date | Nate Silver | 538 | Decision Desk | Economist | Betting Markets | Consensus Average
8/30/24 | 47.3% | 57.9% | 57.0% | 50.0% | 49.5% | 52.3%
8/31/24 | 44.6% | 56.5% | 56.0% | 50.0% | 49.3% | 51.3%
So I can say with 51.3% absolute statistical certainty that Harris is an ever so slight favorite to win.
With an edge roughly equal to 3:2 single deck blackjack.
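(For what it's worth, the consensus column is nothing fancier than a plain average - a two-minute sketch in Python using the 8/31/24 row:)

```python
# Plain average of the Harris win probabilities from the 8/31/24 row.
forecasts = {"Nate Silver": 44.6, "538": 56.5, "Decision Desk": 56.0,
             "Economist": 50.0, "Betting Markets": 49.3}
consensus = sum(forecasts.values()) / len(forecasts)
print(f"{consensus:.1f}%")  # -> 51.3%
```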
I understand what the idea of it is. The problem is that the idea is ruined by it being the consensus of a self-selected group weighted heavily towards compulsive gamblers. There is no reason to believe that consensus will be better than… anything.
The crappy spreadsheet you are describing, for instance, is very stupid and unlikely to be better than any of its sources. It might be better than the betting markets but you are even including the betting markets.
If you’re using this to play the betting markets, you are proving my point about how the consensus they come to doesn’t actually shine any light because it’s composed of ridiculous gambler logic.
Yeah I know he works for Polymarket but he consistently used to dismiss them as "the Scottish teens" on the 538 podcast, so it feels weird to now use them as a calibration, particularly when he is now connected to the point where his own model is likely a major influence on it. It's also kind of weird that he considers it a sort of validation given that the original appeal of 538 was that it was supposed to cut through the general "vibes" of election coverage and cut to the hard data, and betting market numbers are going to be very vibes-based.
"Polymarket, a venture-backed predictions market, has hired statistician and journalist Nate Silver as an adviser while it looks to build out more forecasts around news events."
Nate is an advisor, he doesn't work for them.
He's a freelancer who occasionally provides insights into their models.
What??? He was hired by Polymarket. Have you ever heard of someone being hired as an advisor to a huge company for free? What do you think employee means?
I think you made the right call being editor-in-chief.
Had you not done it, you would have always regretted not taking the shot for your website, and you eventually got to the percentage and the kind of work you felt better about.
And as EIC, you had Claire, Micah, Harry, Ben and Walt. That combo was incredible and all those folks may not have come if you weren’t EIC.
And if you hadn’t done it, transitioning to your current role would be tougher. You might have taken a traditional media deal thinking, “I want to run a large staff,” and then you’d be miserable for another five years instead of so free to do what you want now.
100% I worked for a company where the founder/Chairman of the Board didn't want to be the President/CEO, and whoever they brought in for that role had no pull with the employees -- they'd just go over their head to the founder. And it likely kept them from getting quality CEOs in the first place so long as the founder was in the corner office.
I became a devotee of 538 when it was “wrong” on Trump winning in 2016. I remember in the run up to Election Day fretting that Trump had a 30% chance of winning. That was a huge chance that the fool would be our next President. At the time the “conventional wisdom” was that Trump’s chances were minuscule. You were wrong but quite correct. Looking back what “errors” are you most proud of?
It’s always bizarre that people now knock Silver for not giving Trump enough credit for winning in 2016, when the conventional wisdom before election night was knocking Silver for being too bullish on Trump.
I’m sorry, but whenever people dismiss Silver by saying he blew 2016, it shows that person just isn’t serious.
Nate's thoughtful explanation of the mistakes he thinks he made at the beginning of the ABC/ESPN era was enlightening, and it bodes well for the new enterprise. Other people can do plenty of damage, and you don't have much control over their actions. But you can learn from your own mistakes and not make them again. Nate's recent post on his plans for Silver Bulletin suggests he has learned - grow slowly, small staff, narrow scope, focus on where he and the team can excel.
As a reader at the time I thought it was too hard to find high quality items worth my time among too much material of varying quality.
I signed up to Silver Bulletin the first day and became paid the first day available. So far NOT wasting my time. I'm happy paying for info and analysis I don't see in MSM.
> As a reader at the time I thought it was too hard to find high quality items worth my time among too much material of varying quality.
That was exactly my struggle. There was one specific moment in a podcast where one of the employees refused to answer a straightforward question from Galen about current issue polling because it didn't match her political preferences, and instead hit the audience with a rambling five minutes of poll unskewing. I remember thinking: "You know, I legit would pay for a service that filters out all of this bullshit."
I agree with much of your comment, and I think the content in this newsletter (and Nate’s book) has been even better than Disney-era 538 (which I did enjoy) because it’s more focused on topics that are a natural fit.
Nate’s self-awareness and humility are refreshing. Many, possibly most, people who went from cube dweller to 3 million Twitter followers and national semi-fame would think they were the Übermensch and lose their edge far faster.
Nate, I live in Arizona and will use this as an example. In Arizona, every poll that shows Harris winning consistently underpolls Republicans by roughly 6% and overpolls Democrats by 2%. When the polls are closer to the actual political makeup of our state, Trump always comes out on top.
In an article last week, John McLaughlin stated that in many of the polls he's analyzed, he is seeing an overpolling of Democrats by 5-10%. Do you agree, and how does your model adjust for this?
Don’t you think pollsters know this and adjust the results for the over/under polling? It’s pretty basic stuff. You’re never going to get respondent counts that exactly match demographics, so you always factor that in.
Obviously, the pollsters aren't adjusting for this. If they were, I wouldn't be seeing a poll on a Monday that says Harris is winning Arizona by 3 points, then the very next day a poll comes out and says Trump is winning by 3 points. Something is seriously wrong with our polling system.
As Nate explained, pollsters normally don't do this kind of adjustment. If you assume a constant R/D ratio, then the polls would barely move at all. What's the point then? Why not just predict the election based on voter registration?
All you can hope is that other, indirect adjustments -- such as by education -- are sufficient in correcting partisan response bias.
What do you mean by “the actual political makeup of our state”? If you want to use the polls to predict the result of the election, then what you must mean is “the actual ratio of self-reported republicans to democrats that turns out to vote in November”. But I don’t think you have any more information about that than the pollsters do, so on what grounds are you criticizing the results? Are you assuming that self-reported party ID of people who turn out in the election will match registered party ID of registered voters?
I am pretty sure there are criticisms that the makeup of some of these polls doesn't match up to 2020.
It's obviously possible that there is good reason to believe that voter composition in 2024 will not match previous elections, but getting a pollster to explain themselves is a losing proposition. They're black boxes, which I suppose is why tracking their historical accuracy is such a tempting proposition in the first place.
Why would one expect the make up of the electorate to be the same as 2020?
The makeup of the 2020 electorate was different than the makeup in 2016 which was different than the makeup in 2012, etc. 2020 was the highest turnout election since 1900, which means a lot of low propensity voters voted in that election. 29% of the people who voted in the 2020 election were voting in their first presidential election and 14% were voting for their first time in any election. These numbers were even higher in states like Arizona where they were 36% and 20% respectively. 2020 was a very unusual election from a turnout perspective.
You also have to factor in things like people turning 18 and people dying. In 2016 Gen Z + Millennials were 23% of the electorate and boomers were 51%. In 2020 those numbers were 31% and 44% respectively. That trend will only continue and the effects of covid mean the fall off for boomers+ will likely be even higher.
All that to say likely voter models are hard, but ones that expect the electorate to be exactly what it was in 2020 are likely to be wildly inaccurate.
Fwiw, this sounds extremely similar to the "poll unskewing" done in 2012 for Mitt Romney. You can look into what Nate thought of that but iirc, he was very dismissive of it as a practice
There are plenty of words written about this: people’s self-identified party affiliation, and even their recalled vote in the last election, differ from actual party registration and the actual vote shares of the candidates. And pollsters do adjust where they can confidently adjust, such as by party registration, which isn’t a subjective answer at the time of being asked.
I'd be very interested in reading a response to something like this. Though, this is the first time I've read about these numbers, so I have no idea if they're true (but they sound like a recipe for bias).
Nobody knows how the actual electorate will look. Doing it by party registration is terrible practice. Pollsters have a very strong incentive to be right; if they were legibly oversampling Democrats, they would stop doing that.
I can confirm that this is what is happening in Arizona; I don't know about the other states. I'll see if I can still find the McLaughlin article, and if I can I'll post the link.
In my opinion, Nate has a hidden solution to the bias problem in polls against Trump that he doesn't elaborate on here: the extensive and uncritical use of dubious pro-Trump pollsters like Rasmussen and Trafalgar.
To call the model's use of Rasmussen "uncritical," given Nate's extensive comments and data about how his model accounts for house effects, is ridiculous, and either bad-faith or ignorant.
I still think his model gets overwhelmed by the sheer volume of bad GOP-slanted pollsters in a way it can’t quite correct against. It definitely impacted the Senate forecast in 2022.
The plain aggregators, specifically RCP, had more accurate predictions in 2016 and 2020. Of note, RCP got 3 states wrong in 2016 and 2 states wrong in 2020; 538 got 5 wrong in 2016 and 2 in 2020. The 7 states combined that 538 got wrong were all states called D that went R. The 5 that RCP got wrong included one called R that went D and 4 the other way. Bottom line: 538 had a significantly larger Dem bias.
That RCP gives more credence to R-leaning pollsters like Rasmussen and Trafalgar pushes them to the R side. I am not claiming their methodology, or lack of one, is better, but in an environment where there is a built-in D bias (I attribute that to non-response bias, where Rs answer polls less), they will be attenuating the Dem bias.
Yeah, Nate's bias correction only works if the current bias matches whatever he used to calculate the correction. RCP doesn't correct biases, they just depend on polling errors canceling each other out. So if RCP is more accurate, it is just luck.
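To make that concrete, here's a toy version of a house-effect correction - my own illustration, to be clear, not Nate's actual methodology. The key point is that the correction is estimated from past errors, so it only helps if a pollster's past lean predicts its current lean:

```python
def house_effects(history):
    """Estimate each pollster's average lean from past races.

    history: list of (pollster, error) pairs, where error is the poll's
    Dem margin minus the actual result of that race.
    """
    by_house = {}
    for house, err in history:
        by_house.setdefault(house, []).append(err)
    return {house: sum(errs) / len(errs) for house, errs in by_house.items()}

def adjusted_average(current_polls, effects):
    """Subtract each pollster's estimated lean before averaging.

    current_polls: list of (pollster, dem_margin) pairs.
    """
    adjusted = [margin - effects.get(house, 0.0) for house, margin in current_polls]
    return sum(adjusted) / len(adjusted)
```

If the true biases shift between cycles - as they appear to have done from 2016 to 2018 to 2020 - the subtraction can make things worse, which is exactly the luck point above.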
Decision Desk has an average of every single poll for every race.
They have collected over 35,000 polls so far, including 150 national Harris vs Trump polls.
No correction, like a Super RCP.
They are showing Harris +3.8% in the popular vote.
MI: Harris +2.0%
WI: Harris +4.0%
PA: Harris +1.2%
NV: Harris +0.7%
AZ: Harris +0.1%
GA: Harris +0.1%
NC: Tied
Avg Swing State: Harris + 1.2%
Note that RCP is showing Harris +0.3% in swing states and Bloomberg/Morning Consult +2%
So Decision Desk Swing State average is exactly equal to the mid-range between them.
If the election were today and the unadjusted average of all polls in every swing state were accurate, then Harris wins 303 electoral votes with NC a toss-up (possibly 319).
I tend to think Nate's 44.6% estimate is significantly off but he admits that at the moment it's undercounting Harris's odds and she'll bounce back.
Decision Desk model, based on all polls, is showing 56% and given the average is Harris winning in 6/7 swing states and tied in one, I would say that seems reasonable.
Yes, in 2020 they both had two states wrong each. But where RCP had its two miscalls going either way - calling FL for the Ds and GA for the Rs - 538 called both of its miscalls for the Ds (FL and NC, where both went R). So though you might say the accuracy was the same, 538 had higher bias, as it did in 2016, where both got MI, WI, and PA wrong but 538 also had NC and FL blue.
I am not contesting the validity of whatever 538 does, just that the results ended up with Dem bias. Bias is the term that 538 uses, and it doesn't imply that they did something wrong, just that they missed the result in one direction. I might use the term over-prediction, but it uses more letters.
They’re not *bad*. They are equal precision to the others but have a slightly different bias. But it’s very hard to tell where the truth is when you have several sets of estimators, each with different and unknown bias.
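That last point is easy to demonstrate with a toy simulation (numbers purely illustrative): two pollster groups with identical noise but opposite one-point leans are nearly indistinguishable over a handful of races:

```python
import random

random.seed(42)
truth = 0.0                            # true Dem margin in each race
noise = lambda: random.gauss(0, 3.0)   # same precision for both groups

# Group A leans 1 point toward Ds, group B leans 1 point toward Rs.
group_a = [truth + 1.0 + noise() for _ in range(5)]
group_b = [truth - 1.0 + noise() for _ in range(5)]

# With n=5 races and 3-point noise, the sample means are so noisy that
# you often can't tell which group is actually centered on the truth.
print(sum(group_a) / 5, sum(group_b) / 5)
```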
This is funny. Compare the accuracy of Rasmussen and Trafalgar to the massive misses that the big name polls make on a regular basis. It’s not even close. If it were, Gillum would be Governor right now and NC would be blue according to the big name polls.
Polling is difficult, and sometimes big mistakes are made in good faith. Among some Republican pollsters, there is a feeling that there is simply no good faith.
"For years, Rasmussen’s results have been more favorable for Republican candidates and issues. During the Trump administration, though, the site’s public presence became more overtly partisan, with tracking polls sponsored by conservative authors and causes and a social media presence that embraced false claims that spread widely on the right. At times, Rasmussen’s polls actively promoted those debunked claims, including ones centered on voter fraud...
Last March, for example, Rasmussen released data purporting to show that Republican Senate candidate Kari Lake (R) had won her gubernatorial election in November 2022. The route it took to get to that determination was circuitous and, to put it mildly, atypical. On behalf of the group College Republicans United, Rasmussen asked Arizona voters who they voted for in Lake’s race and, after weighting the results to exit polls — which is unusual — declared that, contrary to the certified tally, Kari Lake had won her race by eight points.
An election of 2.5 million voters is a better indicator of an election outcome than a retrospective question offered to 1,000 Arizonans four months later from a Republican-leaning pollster that is adjusting its results to a metric, exit polls, that is itself weighted to the election results. But Rasmussen trumpeted this revisionist look at the race loudly — including on Stephen K. Bannon’s podcast — as did Trump allies." Washington Post
I think this is pretty damning evidence that Rasmussen has become an unreliable pollster.
They are peddling disinformation, and attempting to undermine confidence in American elections.
That's objective fact, not opinion.
Nate is probably OK adjusting for house effects, but I think there is zero question that Rasmussen is not operating in good faith.
They are a GOP polling organization (a party now run by Trump's daughter-in-law) that works for Trump, and that's why all the polls they have done this cycle have been tilted towards Trump.
The same could be said with the big news/university polls that are consistently wrong. Yet somehow they are never held to account. And somehow their misses always favor Dems.
Question for SBSQ #13: anything you can share on what Silver Bulletin's election night coverage may look like? The FiveThirtyEight live blogs were one of my favorite ways to follow election results, so I'm curious what your plans are this time around.
Silver isn't saying that there's no systematic bias against Trump, he's saying that it's not possible to know if there's any systematic bias against Trump. Pollsters guard their methodologies because they are trade secrets--whether or not they changed their procedures to correct for 2016/2020 and if so how simply isn't public knowledge.
Some questions aren't worth asking because the answer is not obtainable.
That said, here's some pure conjecture on my part: the shy voters in 2016 and 2020 were probably blue-collar whites. The shy voters in 2024 may not be--they could be low-propensity voters who are disproportionately young and non-white. It's conceivable that a hypothetical pollster could try to get more white voters without a college degree while missing the new Trump 2024 voters. Eric Levitz at New York threw this idea out there as a possibility.
It’s not about missing them though, it’s about whether the respondents are representative. So while it’s probably true that you will get fewer young non-white voters who agree to take a poll than their share of the voters, pollsters can weight for that. The problem is if the young non-white voters you do get answer in a different way from the young non-white voters you don’t get. That might have happened a bit in 2020, but Nate Cohn has run some experiments that seem to suggest the effect now is very small if any.
Yes, but what weight do they set? Young and non-white voters migrating to the GOP, much less Trump, doesn't really have a historical precedent in recent memory. What's your guess and what do you use to justify it?
Keep in mind that these same pollsters were caught badly off guard in 2016 and 2020. If they're facing something new, again, in 2024 that's not exactly confidence inducing.
But unless the young and non-white voters you mention are missed in the sample or lie when polled, their answers will be reflected in the resulting numbers.
No it isn’t. Again, if you know that young non-white voters should make up 12% of your sample, and they turn out to only be 8% of who answers, you can easily adjust. The problem only comes up if the 4% you didn’t get would have responded very differently than the 8% you did get.
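Concretely, the adjustment described here is ordinary demographic weighting. A sketch, with made-up shares:

```python
# If a group should be 12% of the electorate but is only 8% of your
# respondents, each of its respondents gets weight 0.12 / 0.08 = 1.5.
# That fixes the group's *share*; it cannot fix the problem that the
# 8% who answered may differ from the 4% who didn't.
def demographic_weights(sample_shares, target_shares):
    return {group: target_shares[group] / sample_shares[group]
            for group in sample_shares}

print(demographic_weights(
    sample_shares={"young_nonwhite": 0.08, "everyone_else": 0.92},
    target_shares={"young_nonwhite": 0.12, "everyone_else": 0.88},
))  # {'young_nonwhite': 1.5, 'everyone_else': 0.956...}
```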
The issue is that different groups vote at rates that are disproportionate compared to their representation in the general population. The young do not vote consistently, for example, while the older are more likely to.
Plus there are issues like Trump's low propensity voters. Will they show up or won't they? High turnout will probably benefit Trump in this scenario while low turnout helps Harris.
Actually, he cites a Nate Cohn article from the NYT Upshot that does show that Trump voters are less likely to answer polls. To quote from Cohn's article: "White registered Democrats were more than 20 percent likelier to respond to our surveys than white registered Republicans."
What I would like to see is more polls that weight by whom you voted for in the last election so that they reflect that vote; otherwise you are going to have Dem-leaning bias.
NYT/Siena’s polls actually do that. It worked extremely well in 2018 and 2022, less so in 2020. (I think that David Shor’s hypothesis - that people who socially distanced in response to COVID were more likely to answer polls because they were both available and bored - is pretty plausible.)
I think the problem with comparing the midterms with the presidential years is that they are two very different demographics, which is why I don't do that. Presidentials bring out more low-information, less educated voters, and people who only show up when Trump is there, just as Obama brought in a lot of inconsistent voters.
I was unaware that Siena controlled for prior vote, but their predictions in 2020 were actually a bit more Dem-biased than the average, which was itself Dem-biased. On the national popular vote, their last poll had Biden up 9 where the final aggregate was more like 7 and his actual margin was 4.5. I checked just one state, the critical PA: the RCP aggregate had it +1.2 Dem, which nailed the actual vote, but NYT/Siena had Biden +6.
Nate's older pollster ratings used to have the partisan bias listed very accessibly, but now I can't find that metric. My guess is they are more Dem-biased than the norm. He actually last ranked NYT/Siena first, but that looks at a lot of criteria. I personally am interested in the partisan bias more than anything.
Anything is possible; I just don't see that the electorate is that different from 2016 and 2020. We can agree that a big change happened from 2012 to 2016, where many college-educated Romney supporters moved to the Ds and non-college Obama supporters moved to Trump. If Haley were the R candidate then I would expect a big change, but Trump remains Trump, and I don't think Harris's demographics are that different, though perhaps she might pull out more Black voters as Obama did. We will see.
Nate, one thing I haven’t seen you talk about much is voter registration. Does the model take into account upticks in voter registration after “big” events, like Biden dropping out, or either of the conventions? I’m not sure exactly how you would want to measure the effect of voter registration, but it does seem logical to assume that things such as the uptick in registration after Harris became the nominee would have an effect on the polling and the exact makeup of the electorate on Election Day. If the model doesn’t take this into account, have you ever thought about including it, and how would you do so?
Is that the case, though, or would this be filtered out when pollsters weight for various demographic factors? For instance, off the top of my head, I believe that there's been a large increase in Hispanic voter registration and young voter registration since Harris became the nominee, so it would be a reasonable assumption that we might see an uptick in the Hispanic vote share and younger voters' vote share as compared to previous elections. If pollsters are using previous elections to extrapolate the vote share for these groups for 2024, then the increase in voter registration may not ever be reflected in the polls.
I saw an interview with a guy who collected statistics on voter registration and he said pollsters would eventually figure out how to include new voters. I suppose that means pollsters who aren't basing their models solely on past elections. Presumably a good pollster tries to figure out what changes each cycle. But I figure both cases must exist.
I think that PA has shown a major disparity in the number of R's versus D's registering recently. Might be useful information but I don't think incorporating it into the model is necessarily straightforward.
I think the attitude so many have - that Trump underpolling is a persistent systemic bias - is fed by how much of a rule breaker he has been. After all, if he can politically survive the Access Hollywood tape, being a convicted felon, having a mob of his supporters ransack the Capitol, and so on, then why not this too? But Nate is a data guy, so he doesn't think that way.
This would be a reasonable analysis if combined with one showing that n=2 patterns were predictive in this context. Nate's already provided analysis showing that such patterns are *not* predictive.
On the one hand, it is only n=2; on the other hand, it is 2 out of 2. And it's not exactly like flipping a coin twice and getting heads twice. The odds of getting heads twice in two flips is 1 in 4, 25%. And the odds of getting 3 in a row without a systemic issue is 1 in 8.
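(For anyone who wants to check the coin-flip arithmetic:)

```python
from fractions import Fraction

# Probability that a fair coin comes up heads on every one of n flips.
for n in (2, 3):
    print(n, Fraction(1, 2) ** n)  # 2 -> 1/4 (25%), 3 -> 1/8 (12.5%)
```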
It is far too small a sample size. All I am saying is that looking around at what is happening in the rest of the world makes me wonder about demographic change, which means that models going forward could easily have issues.
It's just that the most plausible explanation for that polling error to persist, rather than having been unique to each election, is that Trump-only voters (as opposed to those who are voting in midterms too) have particularly low social trust and therefore end up being impossible to poll. The way a Democrat would phrase this is a large percentage of Trump supporters are sociopaths, and if one looks at it that way, it's not much different from what the comment above is saying.
I just don't buy the theory that there is this hyper-specific subset of people who vote for Trump but don't vote for Republicans in midterms (despite very Trumpian candidates being on the ballot), who also happen to not respond to polls. The modern Republican party is already positioning itself as anti-establishment, and Trump is more or less synonymous with the party now. As Nate mentions in his post, there are legitimate explanations for the misses in 2016 and 2020, and more broadly it's really not unreasonable to argue that something happening twice does not make a pattern.
I'm not sure how much I buy the theory but I do think that no other candidates in the party are truly very "Trumpy". Trump's appeal isn't down to some kind of coherent political ideology or strategy that can be replicated, it's down to his unique charisma and personality. Even when other candidates try to emulate his positions or beliefs or his personality, it just never really works, at best it feels like off-brand Trump from Temu, and they also just never have the mainstream recognition that he does.
I can at least entertain the idea that there are some set of low-propensity voters out there who are into Trump but just don't care about his down-ballot wannabes.
We know that white collar professionals vote in midterms at a higher rate than blue collar workers. When this cohort used to vote R the R's always had a midterm turnout advantage.
The question now is whether, with the parties swapping bases, that still holds true for the D's.
Again, what makes Trump unique? As far as I can tell he is not: Milei, Le Pen, Wilders, Orban, Duterte, on and on and on. I sometimes describe them as "populists" but I think what they really are is outsiders--the "anti-establishment". That is why their supporters don't participate in polling. They do not necessarily lack trust in society, but rather the system.
It also suggests to me, since demographic movements typically have a lifespan of a few decades, that Trump will not be the last populist we see in the US.
Trump appears to be the only one with the cult of personality appeal though.
45% of Americans seem ready to embrace outright fascism as long as he's the one in charge.
Ron DeSantis tried to be "Trumpism without the Trump" and crashed and burned.
So did Vivek, promising to be "the most Trumpy candidate that's not Trump".
The primary race, when it was Trump vs. Nikki Haley, was 75% Trump.
Trump said "We might have to suspend parts of the constitution in order to save it." His people love him for it. They cheered.
Imagine JD Vance on the stump saying this same thing.
If DeSantis proposed ending elections, or Vance, it would be the end of their careers.
Trump is the only one with the black magic of demagoguery working for him.
Yes, it can work on both sides of the aisle. Huey Long ruled Louisiana with an iron fist and had a plan to become president in 1940.
But it's a rare person that can manage to be a successful demagogue.
Trump's secret appears to be saying things that no one else would dare to say.
When others try to say things like that (e.g., Vance), they somehow lack the charisma or celebrity to make it work.
If JD Vance said "Mexicans are rapists" it would have been a week long scandal.
When Trump said it, it was a scandal that got him a lot of free media and launched him on the road to the White House in 2016.
If JD Vance said "Migrants aren't human...Protestors are terrorists...Liberals are vermin." It would have ended his career.
When Trump said all of these things his crowds ate it up.
As far as I know, Marjorie Taylor Greene and Kari Lake haven't said anything like that; they haven't had the courage to try.
For whatever reason, Trump, a man who isn't at all charismatic in the traditional sense, not in any way a great orator, has the animal magnetism of a Grade A demagogue and might become the most powerful president in American history in a few months.
David Frum said something like "if moderates fail to control the border then extremists will". I think that can be extrapolated out to governance in general.
The establishment has failed, and that has opened the door to outsider candidates.
Well in this context we're talking about polls that were very accurate in 2018 and 2022, and not in 2016 and 2020, while Trump was head of the Republican party in all four elections. I guess I'd have to see a post with the data and analysis on how polls in other countries are or aren't able to account for these anti-establishment voters.
Trump was only running in 2016 and 2020. Plus Dave Wasserman, Matthew Yglesias, etc. have suggested that the D new base (highly educated white collar professionals) are more likely to turn out in "low salience" elections: midterms, special elections, etc. By contrast the R new base is more likely to only vote in presidential years.
The last EU elections stunned just about everybody because anti-establishment parties did much better than expected. See the snap elections called by Macron in France.
But the surprising result in France is easily explainable by fast-developing alliances and tactical voting that resulted. France has very weird runoff elections with three candidates, very subject to manipulation.
Le Pen only didn't win this most recent election because of a very quick last-minute alliance between France's centrist and left-leaning factions. They didn't overestimate anything.
Numerous outlets described the June elections as "stunning".
The theory is that Trump's voters don't pick up the phone when pollsters call because they don't trust polling. It's not about him, it's about the populations that support him and their distrust of the system.
I think it's just down to it happening in two very high-profile scenarios. A dispassionate statistician will see n=2 samples and say it's statistically negligible, but most people have a hard time even understanding that framing. On an emotional level, it's way simpler to see a pattern of "Trump always beats his polls" and use that as the heuristic.
Nate, how does early voting affect the model? Do polls take into account voters who have already voted? Does that have an effect on the likely voter modeling and the averages?
I asked this in the chat but maybe it’ll be more likely to be answered if I post in here. What’s the difference between internal and public polling? Should we assume that campaigns’ internal polling is more accurate than what a high quality pollster releases? If so, what would make it more trustworthy?
Also, internal polling has a different purpose. A campaign doesn't do polling two months before the election to predict the end result; they do it to identify problems in their campaign plan that need to be addressed. Such a poll may not even attempt to represent the entire electorate, just a portion that they want a better read on.
I think the answer boils down to it costs a lot more, so they should be able to take the time and effort to do things that public pollsters can't or don't. It's hard because most people don't answer polls, right? On the other hand there are a lot more public polls, so there's the wisdom of the crowds aspect. But presumably campaigns think internal polling adds value since they're willing to pay for it.
The main reason us as outsiders can't trust internal polling is that it's being selectively released - it'll only be "leaked" if the campaign wants it leaked. This introduces unavoidable bias into the results we see, even if the polling itself is completely above-board.
>Given that convention bounces are smaller than they used to be, maybe we could treat them as a rounding error in future years, like by saying that Harris’s numbers were likely to be a little inflated for a few weeks in the narrative explanations we provide, but not making any specific adjustment for the convention in the model itself.
I don't think that's the solution. I think the solution is probably a slightly more considered approach to how the model anticipates convention 'bounces'. While I'm sure Nate is simplifying and it's not actually as simple as just taking a point or two off Harris regardless of the polls, I think the better approach would be something along the lines of:
- Expect a pre-convention reversion if a rise follows a convention, so that the convention in the long-term doesn't have much effect on the topline figures
- If there is no convention bounce, that shouldn't impact the projected topline figure, but perhaps *should* affect the distribution of possible results. If a big event like a convention doesn't move the needle much, that implies perhaps a more top-light (as opposed to top-heavy) distribution of potential outcomes for the candidate
What appears to be happening here, and what I think is really hard to justify, is that the model is essentially assuming Harris will end up in a weaker position when the dust settles post-convention than she was in pre-convention. From what I've gathered from previous posts, it's not simply "correcting" for a bump: it assumes she "should" have got a larger bump, based on very weak and noisy historical evidence, and because her bump is less than what she "should" have got, the model takes off a fair bit more than the bump she actually got. In other words, if I have understood what Nate is saying properly, the model in its current state effectively assumes that a candidate is better off having no convention at all than getting a smaller bump than the model predicts. This is pretty tortured logic in my opinion, and I haven't read a good justification for it.
tl;dr: it's good to try to hedge against convention bumps, but it's bad to use an average from weak, noisy, and trending historical data to effectively 'punish' candidates for not getting a bounce that meets the artificial expectations set.
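To spell out the alternative I'm proposing: subtract only the bounce actually observed, and let it decay to zero over a couple of weeks, rather than subtracting a fixed historical-average bounce whether or not it shows up. A sketch with made-up parameters (the half-life and numbers are illustrative, not anything from Nate's model):

```python
def bounce_adjustment(observed_bounce, days_since_convention, half_life_days=7.0):
    """Decay the observed (not assumed) bounce toward zero over time."""
    return observed_bounce * 0.5 ** (days_since_convention / half_life_days)

def adjusted_average(polling_avg, observed_bounce, days_since_convention):
    return polling_avg - bounce_adjustment(observed_bounce, days_since_convention)

print(adjusted_average(48.0, 2.0, 0))   # right after the convention: 46.0
print(adjusted_average(48.0, 2.0, 14))  # two weeks later: 47.5
```

Under a scheme like this, a candidate who gets no bounce simply gets no adjustment, instead of being penalized for missing an expected one.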
The more important thing is that he explains his methodology at length. As a result, it's not a mystery why the model has Harris's chances going down, and also that if her recent improved numbers are a trend and not a convention bounce, then the model over time will reveal this.
Whether those of us critical about the convention bump adjustment are right or wrong, I do wish we'd be able to toggle it on or off accordingly like his 538 models used to do. I get that building a model with that level of user interactivity on Substack is probably difficult to do and probably not worth it for basically a two-person team to handle given the adjustment will be gone in a week or two anyway, but...I dunno. Something for the future I guess?
Given that 538 has fixed its model to be very similar to Nate's and they are showing the exact same national polling lead for Harris, I use 538 as a proxy for a Nowcast for Nate Silver's model.
If there were no convention adjustment, and if Nate's model weren't overweighting the Republican PA polls (which are all there is post-RFK), I imagine he'd be showing about 56.5% Harris.
Yes, and the average bounce since 1998 is 1%, yet Nate said that right after the convention the model was adjusting by -2.5%.
That might explain why he's now predicting Harris wins the popular vote by 1.5%, 2.3% below 538's model.
The 538 model has been fixed to put 80+% of its weight on polls, instead of the 66% on fundamentals it had with Biden.
Until the convention bounce ends and we get some new post RFK drop PA polls I would assume that Nate's model is the least reliable of all the major forecasting models.
Nate has admitted as much today, and said it doesn't make sense to try to come up with a fix to this rather unique but temporary situation.
Within 2 weeks the post convention adjustment will go away and we'll certainly have several more PA polls.
The justification is simply that at the time, you can't tell the difference between a candidate suffering a drop in their underlying support that is partially or completely masked by a normal convention bounce, and a candidate whose underlying support hasn't changed but who got little or no convention bounce.
Why is it more likely? If historically candidates get a couple-percent bounce that deflates back to the underlying support within a fortnight, that seems like a reasonable default assumption.
I think the low response rates of those with a lack of trust in institutions, who are also typically Trump's biggest fans, are more likely than not to bias polls against Trump in 2024, as they did in 2016 and 2020. The years 2018 and 2022 are not good counterexamples, because many of those people mostly care about Trump himself and not the GOP or conservative values in particular. We can't just assume that pollsters have found a way to fix this issue when only 1-4% of people contacted actually respond/complete surveys.
In principle that is true. But if it is Trump supporters who distrust institutions, then correcting for their under-representation is certainly possible. For example, maybe these distrustful folks are mostly non-college-educated males. Any decent pollster would be correcting for under-representation there, by just calling more.
I don't have access to Cook, but one thing clearly has changed in this election and almost nobody is focused on it.
It's renters vs homeowners. The former is getting hammered.
The BLS reported rents rising at least 0.4 percent every month for 33 straight months. That string recently broke, but for one month only. Those who rent (think young adults and Blacks) have absolutely been hammered compared to homeowners who refinanced in 2021 at ~3.0 percent, and some even less. This explains the huge pickup by Trump among young adults and Blacks.
The state of the economy is much more important for this set as well. Not a single pollster asks "Do you rent or own a home?" If that group (young voters and Blacks) returns to the Democratic fold as happened in 2020, Dems will win. If not, look for Trump to win. This is why recession/unemployment is very important.
The polls reflect the current state of the economy, but the polls do not reflect an economy that hasn't happened yet.
So is it rising unemployment/recession? That is my call. Strong enough to matter (and fast enough)? I don't know even if my view is correct.
Trump has turned off a lot of people with his belittling comments instead of focusing on the economy. But a recession and/or rising unemployment benefits Trump.
And unemployment has been rising among Blacks and young adults. I have charts by race and age group.
It's possible that younger voters and women are being undersampled and Harris is leading with them by a large amount.
After all, there has never been a presidential election after 40% of women suddenly lost the right to abortion.
And 8 states have abortion rights on the ballot, including NV, AZ, and FL (key Senate race).
Is it likely that women's turnout will be greater with a woman leading the ticket and abortion the #2 or #3 issue (in recent surveys)?
Given that Dobbs came after the 2020 election, I think it is 100% likely that yes, more women will vote than in 2020.
But how many more?
How would pollsters try to adjust their weightings of women vs. men to compensate?
I imagine they just don't do it.
Same with younger voters.
If they turn out in larger numbers than in 2020 (and I think that's a fair assumption given the new race we face), how would a pollster adjust the weighting of young voters?
In a recent poll (CBS, I think), 88% of voters said they were very or extremely likely to vote, and 92% said they were at least somewhat likely to vote.
There is no way in hell that turnout is close to 88% so you can't just ask people "are you likely to vote" and then adjust the weighting based on the results.
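One crude way to see what pollsters are up against: rescale the self-reported probabilities so they average out to a historically plausible turnout rate instead of taking them at face value. The numbers below are purely illustrative:

```python
# Self-reported vote likelihood wildly overstates turnout (88% "very
# likely" vs. ~60% actual). A crude calibration: rescale the stated
# probabilities so their mean matches an expected turnout rate.
def calibrated_probabilities(stated, expected_turnout):
    scale = expected_turnout / (sum(stated) / len(stated))
    return [min(1.0, p * scale) for p in stated]

stated = [0.95] * 88 + [0.50] * 4 + [0.10] * 8  # roughly the answers above
calibrated = calibrated_probabilities(stated, expected_turnout=0.60)
print(sum(calibrated) / len(calibrated))        # ~0.60
```

Real likely-voter models are far more involved than this, but the point stands: the raw answers have to be discounted somehow.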
I don't think prediction markets are inputs to his model. I think he uses them as a sounding board to gauge how his model is doing.
This is about as good as it gets. 🤷♂️
I have my own model that is the average of Nate Silver, 538, Decision Desk, Economist, and the Real Clear Politics Betting Market Average.
Right now it's 51.3%, down from 52.3% yesterday.
Nate Silver 44.6%
538 56.5%
Decision Desk 56%
Economist 50%
Betting Market Consensus: 49.3%
Average: 51.3%
So I can say with 51.3% absolute certainty that if the election were held today, Kamala Harris would win ;) LOL.
What an excellent steaming hot take. 👍🏼
💩
A lot of the interesting variables that they have factored in involve people being able to be stupid and also compulsive gamblers simultaneously.
I came here to say something very similar. It’s bad practice to use prediction markets as a metric of the model’s validity, whether formally or informally, when it’s highly likely that the model is influencing the prediction markets. I’m sure Nate understands this rather simple fact, so I’m wondering whether his comment here lacked clarity.
So if Predictive Models are influenced by Nate and thus makes NO sense for him to factor them into the model, how can we use ANY polls from the main stream media (ABC, CNN< FOX, NBC...), who are in the business of influencing the electorate through their brands of journalism, which is all biased. Recently a study was done by the Media Research Center that showed that across the major networks for the past few months Trump received 92% Negative Coverage, and Biden/Harris had 85% Positive Coverage - all of which influences the same people they are polling for their opinions...get where I am going? The higher-level answer is to return to a day when networks had "balanced coverage" and did not engage in censorship by omission. The number of Democrats (and I am an Independent) that I speak to who are very smart but COMPLETELY OBLIVIOUS as to the facts of the day's new is ASTOUNDING...I often watch CNN and FOX side by side and its remarkable what is NOT REPORTED by CNN...I digress, but if Nate eliminates Predictive Markets (who are gamblers and not political hacks), then he also needs to get rid of the main stream media's polls (including FOX)...so we can get a clearer picture/snapshot - according to your premise. Does make make sense? I hope so...
Although I agree with your comments on media bias, as you say, it's a little bit of a digression. The distinction here is that media bias affects the way people actually vote, whereas Nate's model affects the ways people *think* people will vote. Voters don't look at Nate's model and say "I'll vote for Candidate X because Nate's model says Candidate X will win," but the participants in prediction markets do look at Nate's model and say "I will bet that Candidate X will win because Nate's mode says Candidate X will win." In other words, there is good reason to believe that Nate's model will strongly influence the prediction markets, even if the polls that go into the model are biased (although I think Nate's model does a better job than any of eliminating biases as much as possible.)
As others have said, any difference between Nate's model and the prediction markets could be informative, but I think that's only likely to happen around major events because the markets react much more quickly than Nate's model will ever be able to. Over a long timescale, I'd expect them to be highly correlated because Nate's model is widely regarded as the best out there, and you'd need to have a good reason to bet against it.
I like Nate's model...but I think media outlet polls should be excluded, if for nothing else than the appearance of bias in media-sponsored polls. You make good points, though, and we are almost splitting hairs. I very much think that people look at, for example, the reporting of the RCP average (now often in the media) and, without other information, make assumptions about where the voting group is breaking (maybe in a geography, for example) - the inference being "they must be doing something right" or "their message must be resonating"...better to get rid of all media polls...they conduct them in order to report on them to get ratings (FOX is a huge culprit here)...that, on its face, should be dispositive. IMHO.
It's highly likely, so if they were significantly divergent, that would send a signal that someone's done something wrong. They're not, so you can at least rule that out, even if there might be more subtle issues with the model.
To be fair, Nate has said he works for Polymarket. I'm surprised he didn't include a disclaimer.
If prediction markets are wrong, you should be making money off of them. Why aren't you?
Because polymarket only takes moneylaunderingbux and blocks US users, and PredictIt puts absurd taxes and restrictions on their markets.
This is a silly thought-terminating cliché for people who think that the efficiency arguments that apply to giant liquid markets like US equities automatically apply to low-volume offshore crypto casinos.
It's quite trivial to buy into polymarket, actually, with a VPN or otherwise. And it's absolutely not low volume. The presidential election market has $774 million bet so far. There is a 0.1 cent bid/ask spread with insanely large liquidity. If you genuinely think it's mispriced you should absolutely pile tens of thousands of dollars into it.
The fact is that it's very difficult to beat the markets, and I think people intuitively know that, which is why nobody who complains about their accuracy ever actually puts their savings into it.
I wouldn’t convert my savings into unstablecoins and deposit them into an unregulated CFTC-banned cryptocurrency casino no matter how mispriced I thought the casino’s odds were, that’s correct. I would (and have) put my money into predictit until realizing that their winnings tax and betting limits make it not worth anyone’s time to arbitrage a mispriced contract unless it’s off by like 20%+.
I'm not really a fan of crypto either but this just feels like pretty massive cope. Polymarket is incorporated in New York and has a 9 figure valuation and isn't going to steal everyone's money anytime soon.
There's plenty of smart people out there, and only a few people who can consistently beat the markets. If one of them said that the markets were stupid, I would believe them, but people sitting on the sidelines saying so have as much value as some random dude on twitter saying that Silver's model is terrible compared to his own.
There are 6 major betting markets.
Betting Odds Data:

Market      | Trump | Harris
RCP Average | 49.3  | 49.3
Betfair     | 49    | 46
Bovada      | 51    | 51
Bwin        | 51    | 51
Polymarket  | 49    | 47
PredictIt   | 47    | 54
Smarkets    | 49    | 47
Hundreds of thousands of bettors betting over $1 billion.
Date    | Nate Silver | 538   | Decision Desk | Economist | Betting Markets Consensus | Average
8/30/24 | 47.3%       | 57.9% | 57.0%         | 50.0%     | 49.5%                     | 52.3%
8/31/24 | 44.6%       | 56.5% | 56.0%         | 50.0%     | 49.3%                     | 51.3%
Yesterday Nate and the betting market consensus were within 2.2% of each other.
Today the spread is 4.7%.
Because even if you think a data source is unreliable, that doesn't mean you know with any certainty in which direction it will be off, or by how much.
If someone makes a weather forecasting model that is just a monkey spinning a wheel, I know it's a bad model, but I'm not a meteorological expert, so I don't think I can necessarily feel confident that I can reliably beat it. We're ultimately both just guessing.
If someone says that a specific data source is bad, I feel that they should have a data source that is superior. If I said that election models are bad but failed to come up with any better form of forecasting, would that be a good criticism?
To take your argument even one step further, I totally would bet against the monkey, basing my bet on the newspaper's weather forecast.
Wait wait, isn't there a very easy way to handle some of this? Event study!
Heuristically: if Nate wants a little gut check, compare the model's output to Polymarket just before updates are published. To see the extent to which Polymarket is following Nate, look at how the market moves just after the model is published.
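Something like this sketch, say (this is just the idea, not Nate's or Polymarket's actual data pipeline; the series names, timestamps, and the two-hour window are all hypothetical):

```python
# Event-study sketch: does Polymarket move toward the model right after a
# model update goes out? `market` and `model_pubs` are hypothetical pandas
# Series; nothing here reflects any real data feed.
import pandas as pd

def model_following_score(market: pd.Series, model_pubs: pd.Series,
                          window: str = "2h") -> float:
    """market: Polymarket implied probability, indexed by timestamp.
    model_pubs: model win probability, indexed by publication timestamp.
    Returns the correlation between the pre-publication gap (model minus
    market) and the market's post-publication move."""
    rows = []
    for pub_time, model_p in model_pubs.items():
        history = market.loc[:pub_time]                      # trades before the update
        future = market.loc[pub_time + pd.Timedelta(window):]  # trades after the window
        if history.empty or future.empty:
            continue
        before, after = history.iloc[-1], future.iloc[0]
        rows.append({"gap": model_p - before, "move": after - before})
    df = pd.DataFrame(rows)
    # A strongly positive correlation would suggest the market chases the model.
    return df["gap"].corr(df["move"])
```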
b-but the wisdom of crowds...
I think "crowdsourcing" ended up becoming the single most detrimental lingering effect of the "technocratic liberalism" rush that Nate mentions. It was absolutely worth a shot given what we, societally, knew at the time, but a decade later what it seems to have resulted in is an even more insidiously-entrenched need to pander to the lowest common denominator, because "recommended for you" algorithms basically drive everything, and those algorithms are still mostly just click-counting and keyword-sifting.
We've found that there is a sort of genre that develops through audience democracy, but unfortunately this genre sort of ends up dominating all other genres and resulting in an all-you-can-eat buffet where the lo mein sort of tastes like the General Tso's chicken. It's like if Bing Crosby still ostensibly played crooner music, but had a Beatles mop-top and peppered his lyrics with "groovy" a lot.
The problem with using a monetary incentive to marshal the wisdom of the crowd is that the financial incentive invites gamblers. And while there are professional gamblers who smartly gamble for a profit, that is not where most of the money for gambling is coming from.
I believe the idea of the betting markets is that if you combine the consensus of all of them, the 6 biggest, then the average consensus of over $1 billion wagered by hundreds of thousands of people (possibly millions by now) will be as good as, if not better than, most models.
And certainly better than something like "We put the odds of Harris (Trump) at 60%." that you might hear on Bloomberg or CNN.
Personally, I take the average of Nate, 538, Decision Desk, the Economist and the average of all the betting markets and put it into a daily spreadsheet.
Date    | Nate Silver | 538   | Decision Desk | Economist | Betting Markets Consensus | Average
8/30/24 | 47.3%       | 57.9% | 57.0%         | 50.0%     | 49.5%                     | 52.3%
8/31/24 | 44.6%       | 56.5% | 56.0%         | 50.0%     | 49.3%                     | 51.3%
So I can say with 51.3% absolute statistical certainty that Harris is an ever so slight favorite to win.
With an edge roughly equal to 3:2 single deck blackjack.
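The spreadsheet math, for anyone who wants to reproduce it (a plain unweighted average of the five sources on the 8/31 row):

```python
# Reproducing the 8/31 consensus: a plain average of the five sources.
forecasts = {"Nate Silver": 44.6, "538": 56.5, "Decision Desk": 56.0,
             "Economist": 50.0, "Betting Markets Consensus": 49.3}
print(sum(forecasts.values()) / len(forecasts))   # 51.28 -> 51.3%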
I understand what the idea of it is. The problem is that the idea is ruined by it being the consensus of a self-selected group weighted heavily towards compulsive gamblers. There is no reason to believe that consensus will be better than… anything.
The crappy spreadsheet you are describing, for instance, is very stupid and unlikely to be better than any of its sources. It might be better than the betting markets but you are even including the betting markets.
If you’re using this to play the betting markets, you are proving my point about how the consensus they come to doesn’t actually shine any light because it’s composed of ridiculous gambler logic.
Yeah I know he works for Polymarket but he consistently used to dismiss them as "the Scottish teens" on the 538 podcast, so it feels weird to now use them as a calibration, particularly when he is now connected to the point where his own model is likely a major influence on it. It's also kind of weird that he considers it a sort of validation given that the original appeal of 538 was that it was supposed to cut through the general "vibes" of election coverage and cut to the hard data, and betting market numbers are going to be very vibes-based.
where does he say he does?
Couldn’t find Nate’s post about it but there’s this: https://www.axios.com/2024/07/16/nate-silver-polymarket
"Polymarket, a venture-backed predictions market, has hired statistician and journalist Nate Silver as an adviser while it looks to build out more forecasts around news events."
Nate is an advisor, he doesn't work for them.
He's a freelancer who occasionally provides insights into their models.
He's not an employee.
What??? He was hired by Polymarket. Have you ever heard of someone being hired as an advisor to a huge company for free? What do you think employee means?
A freelancer is paid. It is not an employee.
Right... so he is being paid by Polymarket and officially/publicly works with them. What point are you making?
I think you made the right call being editor-in-chief.
Had you not done it, you would have always regretted not taking the shot for your website, and you eventually got to the percentage and the kind of work you felt better about.
And as EIC, you had Claire, Micah, Harry, Ben and Walt. That combo was incredible and all those folks may not have come if you weren’t EIC.
And if you hadn’t done it, transitioning to your current role would be tougher. You might have taken a traditional media deal thinking, “I want to run a large staff,” and then you’d have been miserable for another five years instead of so free to do what you want now.
100%. I worked for a company where the founder/Chairman of the Board didn't want to be the President/CEO, and whoever they brought in for that role had no pull with the employees -- they'd just go over their head to the founder. And it likely kept them from getting quality CEOs in the first place so long as the founder was in the corner office.
I became a devotee of 538 when it was “wrong” on Trump winning in 2016. I remember in the run up to Election Day fretting that Trump had a 30% chance of winning. That was a huge chance that the fool would be our next President. At the time the “conventional wisdom” was that Trump’s chances were minuscule. You were wrong but quite correct. Looking back what “errors” are you most proud of?
Related post: https://goodreason.substack.com/p/nate-silvers-finest-hour-part-1-of
It’s always bizarre that people now knock Silver for not giving Trump enough credit for winning in 2016, when the conventional wisdom before election night was that he was giving Trump too much credit - that Silver was too bullish on Trump.
I’m sorry, but whenever someone dismisses Silver by saying he blew 2016, it shows that person just isn’t serious.
https://abcnews.go.com/Politics/nate-silver-predicts-close-2016-presidential-race/story?id=43329272
Nate's thoughtful explanation of the mistakes he thinks he made at the beginning of the ABC/ESPN era was enlightening, and it bodes well for the new enterprise. Other people can do plenty of damage, and you don't have much control over their actions. But you can learn from your own mistakes and not make them again. Nate's recent post on his plans for Silver Bulletin suggests he has learned - grow slowly, small staff, narrow scope, focus on where he and the team can excel.
As a reader at the time I thought it was too hard to find high quality items worth my time among too much material of varying quality.
I signed up to Silver Bulletin the first day and became a paid subscriber the first day that was available. So far NOT wasting my time. I'm happy paying for info and analysis I don't see in MSM.
> As a reader at the time I thought it was too hard to find high quality items worth my time among too much material of varying quality.
That was exactly my struggle. There was one specific moment in a podcast where one of the employees refused to answer a straightforward question from Galen about current issue polling because it didn't match her political preferences, and instead hit the audience with a rambling five minutes of poll unskewing. I remember thinking: "You know, I legit would pay for a service that filters out all of this bullshit."
I agree with much of your comment, and I think the content in this newsletter (and Nate’s book) has been even better than Disney-era 538 (which I did enjoy) because it’s more focused on topics that are a natural fit.
Nate’s self-awareness and humility are refreshing. Many, possibly most, people who went from cube dweller to 3 million Twitter followers and national semi-fame would think they were the Übermensch and lose their edge far faster.
Nate, I live in Arizona and will use this as an example. In Arizona, every poll that shows Harris winning consistently undersamples Republicans by roughly 6% and oversamples Democrats by 2%. When the polls are closer to the actual political makeup of our state, Trump always comes out on top.
In an article last week, John McLaughlin stated that in many of the polls he's analyzed, he is seeing an oversampling of Democrats by 5-10%. Do you agree, and how does your model adjust for this?
Don’t you think pollsters know this and adjust the results for the over/undersampling? It’s pretty basic stuff. You’re never going to get respondent counts that exactly match demographics, so you always factor that in.
Obviously, the pollsters aren't adjusting for this. If they were, I wouldn't be seeing a poll on a Monday that says Harris is winning Arizona by 3 points, then the very next day a poll comes out and says Trump is winning by 3 points. Something is seriously wrong with our polling system.
Those two polls prove nothing. If the race is actually tied, both are within the margin of error. This is how statistics work.
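For the arithmetic: a typical state poll of ~600 respondents (a hypothetical but common size) has a margin of error of about four points on each candidate's share, and double that on the margin between them:

```python
# Back-of-envelope 95% margin of error, assuming simple random sampling.
# Real polls have design effects that widen this further; n = 600 is a
# hypothetical sample size.
import math

n, p = 600, 0.50                          # sample size, worst-case share
moe = 1.96 * math.sqrt(p * (1 - p) / n)   # half-width of the 95% CI on one share
print(f"+/- {100 * moe:.1f} pts on one candidate's share")   # ~4.0 pts
# The margin (Harris minus Trump) carries roughly double that uncertainty,
# so a Harris +3 poll and a Trump +3 poll can both be consistent with a tie.
```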
As Nate explained, pollsters normally don't do this kind of adjustment. If you assume a constant R/D ratio, then the polls would barely move at all. What's the point then? Why not just predict the election based on voter registration?
All you can hope is that other, indirect adjustments -- such as by education -- are sufficient to correct partisan response bias.
His point is that some of the polls are 35% D and only 30% R, for example. Those respondent compositions are coming from the pollsters.
What do you mean by “the actual political makeup of our state”? If you want to use the polls to predict the result of the election, then what you must mean is “the actual ratio of self-reported republicans to democrats that turns out to vote in November”. But I don’t think you have any more information about that than the pollsters do, so on what grounds are you criticizing the results? Are you assuming that self-reported party ID of people who turn out in the election will match registered party ID of registered voters?
I am pretty sure there are criticisms that the makeup of some of these polls doesn't match up to 2020.
It's obviously possible that there is good reason to believe that voter composition in 2024 will not match previous elections, but getting a pollster to explain themselves is a losing proposition. They're black boxes, which is, I suppose, why tracking their historical accuracy is such a tempting proposition in the first place.
Why would one expect the make up of the electorate to be the same as 2020?
The makeup of the 2020 electorate was different than the makeup in 2016 which was different than the makeup in 2012, etc. 2020 was the highest turnout election since 1900, which means a lot of low propensity voters voted in that election. 29% of the people who voted in the 2020 election were voting in their first presidential election and 14% were voting for their first time in any election. These numbers were even higher in states like Arizona where they were 36% and 20% respectively. 2020 was a very unusual election from a turnout perspective.
You also have to factor in things like people turning 18 and people dying. In 2016 Gen Z + Millennials were 23% of the electorate and boomers were 51%. In 2020 those numbers were 31% and 44% respectively. That trend will only continue and the effects of covid mean the fall off for boomers+ will likely be even higher.
All that to say, likely voter models are hard, but ones that expect the electorate to be exactly what it was in 2020 are likely to be wildly inaccurate.
There has also been a fair amount of commentary about the claimed under-polling of conservative seniors. https://www.theamericanconservative.com/older-liberals-are-destroying-polling/
Is that raw response data that is then getting weighted? If their methodology is truly faulty, then Nate is probably adjusting somehow.
Fwiw, this sounds extremely similar to the "poll unskewing" done in 2012 for Mitt Romney. You can look into what Nate thought of that but iirc, he was very dismissive of it as a practice
There are plenty of words written about this: people’s self-identified party affiliation, or even their recalled vote in the last election, differ from actual party registration and the candidates’ actual vote share. And pollsters do adjust where they can confidently adjust, such as party registration, which isn’t a subjective answer at the time of being asked.
I'd be very interested in reading a response to something like this. Though, this is the first time I've read about these numbers, so I have no idea if they're true (but they sound like a recipe for bias).
Nobody knows how the actual electorate will look. Doing it by party reg is terrible practice. Pollsters have a very strong incentive to be right; if they were legibly oversampling Democrats, they would stop doing that.
I can confirm that this is what is happening in Arizona; I don't know about the other states. I'll see if I can still find the McLaughlin article, and if I can, I'll post the link.
In my opinion, Nate has a hidden solution to the bias problem in polls against Trump that he doesn't elaborate on here: the extensive and uncritical use of dubious pro-Trump pollsters like Rasmussen and Trafalgar.
to call the model's use of Rasmussen "uncritical" given Nate's extensive comments and data about how his model accounts for house effects is ridiculous, and either bad-faith or ignorant.
Rasmussen is rated B, not F.
I still think his model gets overwhelmed by the sheer volume of bad GOP-slanted pollsters in a way it can’t quite correct against. It definitely impacted the Senate forecast in 2022.
If they are known to be GOP slanted, then it won't be a problem because he will adjust for that bias. The plain aggregators would be impacted.
The plain aggregators, specifically RCP, had more accurate predictions in 2016 and 2020. Of note, RCP got 3 states wrong in 2016 and 2 states wrong in 2020; 538 got 5 wrong in 2016 and 2 in 2020. The 7 states combined that 538 got wrong were all states called D that went R. The 5 that RCP got wrong included one called R that went D and 4 the other way. Bottom line is that 538 had a significantly larger Dem bias.
That RCP gives more credence to R-leaning pollsters like Rasmussen and Trafalgar pushes them to the R side. I am not claiming their methodology, or lack thereof, is better, but in an environment where there is a built-in D bias (I attribute that to non-response bias, where Rs answer polls less), they will be attenuating the Dem bias.
Yeah, Nate's bias correction only works if the current bias matches whatever he used to calculate the correction. RCP doesn't correct biases, they just depend on polling errors canceling each other out. So if RCP is more accurate, it is just luck.
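For what it's worth, a house-effect correction in its simplest toy form looks something like this (this is not 538's or Nate's actual method, and the pollster names and margins below are made up):

```python
# Toy house-effect adjustment: estimate each pollster's average lean
# relative to the overall average, then subtract it from their polls.
from collections import defaultdict
from statistics import mean

# (pollster, Dem-minus-Rep margin) -- entirely hypothetical numbers
polls = [("PollsterA", 3.0), ("PollsterB", -1.0), ("PollsterA", 4.0),
         ("PollsterC", 1.0), ("PollsterB", 0.0), ("PollsterC", 2.0)]

overall = mean(m for _, m in polls)
by_house = defaultdict(list)
for house, margin in polls:
    by_house[house].append(margin)

house_effect = {h: mean(ms) - overall for h, ms in by_house.items()}
adjusted = [(h, m - house_effect[h]) for h, m in polls]
print(house_effect)   # e.g. PollsterA leans ~+2.0 relative to the field
```

The catch, as above, is that this only removes a pollster's *historical* lean; if the true bias this cycle is different, the correction can't see it.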
Decision Desk has an average of every single poll for every race.
They have collected over 35,000 polls so far, including 150 national Harris vs Trump polls.
No correction, like a Super RCP.
They are showing Harris +3.8% in the popular vote.
MI: Harris +2.0%
WI: Harris +4.0%
PA: Harris +1.2%
NV: Harris +0.7%
AZ: Harris +0.1%
GA: Harris +0.1%
NC: Tied
Avg Swing State: Harris + 1.2%
Note that RCP is showing Harris +0.3% in swing states and Bloomberg/Morning Consult +2%
So Decision Desk Swing State average is exactly equal to the mid-range between them.
If the election were today and the unadjusted average of all polls in every swing state were accurate, then Harris wins 303 electoral votes with NC a toss-up (possibly 319).
I tend to think Nate's 44.6% estimate is significantly off but he admits that at the moment it's undercounting Harris's odds and she'll bounce back.
Decision Desk model, based on all polls, is showing 56% and given the average is Harris winning in 6/7 swing states and tied in one, I would say that seems reasonable.
"Of note RCP got 3 states wrong in 2016 and 2 states wrong in 2020. 538 got 5 wrong in 2016 and 2 in 2020."
So in 2020 both RCP and 538 got 2 states wrong.
538 adjusts based on house effects and RCP doesn't.
Yes, in 2020 they both had two states wrong, but RCP's two miscalls went in either direction (calling FL for the Ds and GA for the Rs), while 538's miscalls both favored the Ds (FL and NC, which both went R). So though you might say the accuracy was the same, 538 had the higher bias, as it did in 2016, when both got MI, WI, and PA wrong but 538 also had NC and FL blue.
I am not saying that what 538 does is invalid, just that the results ended up with a Dem bias. Bias is the term 538 itself uses, and it doesn't imply they did something wrong, just that they missed in one direction. I might use the term over-prediction, but it uses more letters.
I guess he doesn’t adjust enough? It’s admittedly hard to do when it seems like 4 out of every 5 polls are from questionable pollsters.
They’re not *bad*. They have roughly equal precision to the others but a slightly different bias. But it’s very hard to tell where the truth is when you have several sets of estimators, each with a different and unknown bias.
This is funny. Compare the accuracy of Rasmussen and Trafalgar to the massive misses that the big name polls make on a regular basis. It’s not even close. If it were, Gillum would be Governor right now and NC would be blue according to the big name polls.
And yet polls like ABC/WaPo had massive misses, but they aren’t biased? They had Biden up 17 in Wisconsin. Final result: Biden +0.6.
Polling is difficult, and sometimes big mistakes are made in good faith. Among some Republican pollsters, there is a feeling that there is simply no good faith.
538 drops Rasmussen Reports from its analysis
https://www.washingtonpost.com/politics/2024/03/08/rasmussen-538-polling/
"For years, Rasmussen’s results have been more favorable for Republican candidates and issues. During the Trump administration, though, the site’s public presence became more overtly partisan, with tracking polls sponsored by conservative authors and causes and a social media presence that embraced false claims that spread widely on the right. At times, Rasmussen’s polls actively promoted those debunked claims, including ones centered on voter fraud...
Last March, for example, Rasmussen released data purporting to show that Republican Senate candidate Kari Lake (R) had won her gubernatorial election in November 2022. The route it took to get to that determination was circuitous and, to put it mildly, atypical. On behalf of the group College Republicans United, Rasmussen asked Arizona voters who they voted for in Lake’s race and, after weighting the results to exit polls — which is unusual — declared that, contrary to the certified tally, Kari Lake had won her race by eight points.
An election of 2.5 million voters is a better indicator of an election outcome than a retrospective question offered to 1,000 Arizonans four months later from a Republican-leaning pollster that is adjusting its results to a metric, exit polls, that is itself weighted to the election results. But Rasmussen trumpeted this revisionist look at the race loudly — including on Stephen K. Bannon’s podcast — as did Trump allies." Washington Post
I think this is pretty damning evidence that Rasmussen has become an unreliable pollster.
They are peddling disinformation, and attempting to undermine confidence in American elections.
That's objective fact, not opinion.
Nate is probably OK adjusting for house effects, but I think there is zero question that Rasmussen is not operating in good faith.
They are a GOP polling organization (a party now run by Trump's daughter-in-law) that works for Trump, and that's why all the polls they have done this cycle have been tilted towards Trump.
538 is trash now. Ask Nate.
The same could be said with the big news/university polls that are consistently wrong. Yet somehow they are never held to account. And somehow their misses always favor Dems.
That’s not really an intentional solution so much as laissez faire
Thanks Nate for taking my question
Question for SBSQ #13: anything you can share on what Silver Bulletin's election night coverage may look like? The FiveThirtyEight live blogs were one of my favorite ways to follow election results, so I'm curious what your plans are this time around.
Silver isn't saying that there's no systematic bias against Trump, he's saying that it's not possible to know if there's any systematic bias against Trump. Pollsters guard their methodologies because they are trade secrets--whether or not they changed their procedures to correct for 2016/2020 and if so how simply isn't public knowledge.
Some questions aren't worth asking because the answer is not obtainable.
That said, here's some pure conjecture on my part: the shy voters in 2016 and 2020 were probably blue-collar whites. The shy voters in 2024 may not be--they could be low-propensity voters who are disproportionately young and non-white. It's conceivable that a hypothetical pollster could try to get more white voters without a college degree while missing the new Trump 2024 voters. Eric Levitz at New York threw this idea out there as a possibility.
It’s not about missing them though, it’s about whether the respondents are representative. So while it’s probably true that you will get fewer young non-white voters who agree to take a poll than their share of the voters, pollsters can weight for that. The problem is if the young non-white voters you do get answer in a different way from the young non-white voters you don’t get. That might have happened a bit in 2020, but Nate Cohn has run some experiments that seem to suggest the effect now is very small if any.
"....pollsters can weight for that."
Yes, but what weight do they set? Young and non-white voters migrating to the GOP, much less Trump, doesn't really have a historical precedent in recent memory. What's your guess and what do you use to justify it?
Keep in mind that these same pollsters were caught badly off guard in 2016 and 2020. If they're facing something new, again, in 2024 that's not exactly confidence inducing.
But unless the young and non-white voters you mention are missed in the sample or lie when polled, their answers will be reflected in the resulting numbers.
If they are underrepresented in your sample then you need to boost their representation in the final reported results. That's the tricky bit.
No it isn’t. Again, if you know that young non-white voters should make up 12% of your sample, and they turn out to only be 8% of who answers, you can easily adjust. The problem only comes up if the 4% you didn’t get would have responded very differently than the 8% you did get.
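Concretely, with the 12%-vs-8% numbers above (the support levels in this sketch are made up):

```python
# Toy post-stratification: young non-white voters should be 12% of the
# electorate but are only 8% of respondents, so they get a 1.5x weight.
respondents = ([("young_nonwhite", 0.70)] * 8 +    # 8 respondents, 70% Harris
               [("other", 0.48)] * 92)             # 92 respondents, 48% Harris

targets = {"young_nonwhite": 0.12, "other": 0.88}
counts = {g: sum(1 for grp, _ in respondents if grp == g) for g in targets}
weights = {g: targets[g] / (counts[g] / len(respondents)) for g in targets}

weighted = (sum(weights[g] * v for g, v in respondents) /
            sum(weights[g] for g, _ in respondents))
unweighted = sum(v for _, v in respondents) / len(respondents)
print(f"unweighted {unweighted:.3f} -> weighted {weighted:.3f}")
# As the comment says: this only works if the 8% who answered resemble
# the 4% who didn't.
```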
The issue is that different groups vote at rates that are disproportionate to their representation in the general population. The young do not vote consistently, for example, while the older are more likely to.
Plus there are issues like Trump's low propensity voters. Will they show up or won't they? High turnout will probably benefit Trump in this scenario while low turnout helps Harris.
Actually, he cites a Nate Cohn article from NYT Upshot that does show that Trump voters are less likely to answer polls. To quote from Cohn's article: "White registered Democrats were more than 20 percent likelier to respond to our surveys than white registered Republicans."
What I would like to see is more polls that weight by whom you voted for in the last election, so that they reflect that vote; otherwise you are going to have a Dem-leaning bias.
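The size of the problem Cohn describes is easy to see with quick arithmetic (the 1.2x response-rate ratio is his figure; everything else here is a simplified hypothetical):

```python
# If white Democrats are 20% likelier to respond than white Republicans,
# a group that is truly 50/50 shows up noticeably skewed in the raw sample.
d_rate, r_rate = 1.2, 1.0          # relative response rates
true_d = true_r = 0.5              # true population shares
sample_d = true_d * d_rate / (true_d * d_rate + true_r * r_rate)
print(f"Dem share of respondents: {sample_d:.3f}")   # ~0.545, a ~9-pt margin skew
# Weighting by party registration or recalled vote can repair this -- if
# the responders within each party resemble the non-responders.
```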
NYT/Siena’s polls actually do that. It worked extremely well in 2018 and 2022, less so in 2020. (I think David Shor’s hypothesis, that people who socially distanced in response to COVID were more likely to answer polls because they were both available and bored, is pretty plausible.)
Dave Wasserman proposed that college educated white collar professionals are more likely to vote in midterm elections, and they are Democrats now.
I think the problem with comparing the midterms with the presidential years is that they are two very different demographics, which is why I don't do that. Presidentials bring out more low-information, less-educated voters, and people who only show up when Trump is there, just as Obama brought in a lot of inconsistent voters.
I was unaware whether Siena controlled for prior vote, but their predictions in 2020 were actually a bit more Dem-biased than the average, which was itself Dem-biased. On the national popular vote, their last poll had Biden up 9, where the final aggregate was more like 7 and his actual margin was 4.5. I checked just one state, the critical PA, and that was way off: the RCP aggregate had it Dem +1.2, which nailed the actual vote, but NYT/Siena had Biden +6.
Nate's older Pollster Ratings used to have the partisan bias listed very accessibly, but now I can't find that metric. My guess is they are more Dem-biased than the norm. He actually last ranked NYT/Siena first, but that looks at a lot of criteria. I personally am interested in the partisan bias more than anything.
The Silver Bulletin Pollster Ratings give NYT/Siena College a "Mean-reverted bias" of D+1.0, if that's what you're looking for.
Yes, kezme, where do you find that? I can't seem to access the bias ratings anymore.
They're on this post: https://www.natesilver.net/p/pollster-ratings-silver-bulletin
Check out that Eric Levitz piece. What if the shy respondents this time are disproportionately young and not white?
Anything is possible; I just don't see that the electorate is that different from 2016 and 2020. We can agree that a big change happened from 2012 to 2016, when many college-educated Romney supporters moved to the Ds and non-college Obama supporters moved to Trump. If Haley were the R candidate, then I would expect a big change, but Trump remains Trump, and I don't think Harris's demographics are that different, though perhaps she might pull out more Black voters as Obama did. We will see.
At the very least indications are that a bunch of Gen Z voters are now old enough to vote in this election and young males are skewing heavily Trump.
Another demographic some believe is under-polled is conservative seniors. https://www.theamericanconservative.com/older-liberals-are-destroying-polling/
Nate, one thing I haven’t seen you talk about much is voter registration. Does the model take into account upticks in voter registration after “big” events, like Biden dropping out, or either of the conventions? I’m not sure exactly how you would want to measure the effect of voter registration, but it does seem logical to assume that things such as the uptick in registration after Harris became the nominee would have an effect on the polling and the exact makeup of the electorate on Election Day. If the model doesn’t take this into account, have you ever thought about including it, and how would you do so?
I don't think so, because these new voters should eventually be represented in polls. But it will take a while.
Is that the case, though, or would this be filtered out when pollsters weight for various demographic factors? For instance, off the top of my head, I believe that there's been a large increase in Hispanic voter registration and young voter registration since Harris became the nominee, so it would be a reasonable assumption that we might see an uptick in the Hispanic vote share and younger voters' vote share as compared to previous elections. If pollsters are using previous elections to extrapolate the vote share for these groups for 2024, then the increase in voter registration may not ever be reflected in the polls.
I saw an interview with a guy who collected statistics on voter registration and he said pollsters would eventually figure out how to include new voters. I suppose that means pollsters who aren't basing their models solely on past elections. Presumably a good pollster tries to figure out what changes each cycle. But I figure both cases must exist.
I think that PA has shown a major disparity in the number of R's versus D's registering recently. Might be useful information but I don't think incorporating it into the model is necessarily straightforward.
I think the attitude so many have -- that Trump underpolling is a persistent systemic bias -- is fed by how much of a rule-breaker he has been. After all, if he can politically survive the Access Hollywood tape, being a convicted felon, having a mob of his supporters ransack the Capitol, and so on, then why not this too? But Nate is a data guy, so he doesn't think that way.
I think it's mostly driven by the actual electoral results versus the polls in 2016 and 2020.
This would be a reasonable analysis if combined with one showing that n=2 patterns were predictive in this context. Nate's already provided analysis showing that such patterns are *not* predictive.
On the one hand, it is only n=2; on the other hand, it is 2 out of 2. And it's not nothing: the odds of flipping a coin twice and getting heads twice are 1 in 4, 25%. And the odds of getting 3 in a row without a systemic issue are 1 in 8.
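The coin-flip math, for reference (treating each cycle's miss direction as an independent 50/50 draw, which is itself a strong simplifying assumption):

```python
# Probability of k same-direction polling misses in a row, if each
# cycle's miss direction were an independent coin flip.
for k in (2, 3):
    print(f"{k} in a row: {0.5 ** k:.3f}")   # 0.250, 0.125
```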
That’s statistically sound reasoning but it’s also wrong.
It is far too small a sample size. All I am saying is that looking around at what is happening in the rest of the world makes me wonder about demographic change, which means that models going forward could easily have issues.
It's just that the most plausible explanation for that polling error to persist, rather than having been unique to each election, is that Trump-only voters (as opposed to those who are voting in midterms too) have particularly low social trust and therefore end up being impossible to poll. The way a Democrat would phrase this is a large percentage of Trump supporters are sociopaths, and if one looks at it that way, it's not much different from what the comment above is saying.
I just don't buy the theory that there is this hyper-specific subset of people who vote for Trump but don't vote for Republicans in midterms (despite very Trumpian candidates being on the ballot), who also happen to not respond to polls. The modern Republican party is already positioning itself as anti-establishment, and Trump is more or less synonymous with the party now. As Nate mentions in his post, there are legitimate explanations for the misses in 2016 and 2020, and more broadly it's really not unreasonable to argue that something happening twice does not make a pattern.
I'm not sure how much I buy the theory but I do think that no other candidates in the party are truly very "Trumpy". Trump's appeal isn't down to some kind of coherent political ideology or strategy that can be replicated, it's down to his unique charisma and personality. Even when other candidates try to emulate his positions or beliefs or his personality, it just never really works, at best it feels like off-brand Trump from Temu, and they also just never have the mainstream recognition that he does.
I can at least entertain the idea that there are some set of low-propensity voters out there who are into Trump but just don't care about his down-ballot wannabes.
We know that white collar professionals vote in midterms at a higher rate than blue collar workers. When this cohort used to vote R the R's always had a midterm turnout advantage.
The question now is whether, with the parties swapping bases, that still holds true for the D's.
Again, what makes Trump unique? As far as I can tell he is not: Milei, Le Pen, Wilders, Orban, Duterte, on and on and on. I sometimes describe them as "populists" but I think what they really are is outsiders--the "anti-establishment". That is why their supporters don't participate in polling. They do not necessarily lack trust in society, but rather the system.
It also suggests to me that, since demographic movements typically have a lifespan of a few decades, Trump will not be the last populist we see in the US.
He is unique in the US, but not so much for those familiar with autocracy elsewhere. The US has had plenty of autocrats, just none elected president.
I mean that Trump is clearly comparable to Milei, Orban, etc.
Yes, he's a great fan of Orban, who proudly declares Hungary an "illiberal democracy."
Elections are held on time, it's just so rigged he always wins.
Kind of how Putin keeps winning with over 90% of the vote.
Russia is a democracy to this day...according to Putin.
Fascists is the word you are looking for. But you know that already and are just playing your fashy little word games.
Trump appears to be the only one with the cult of personality appeal though.
45% of Americans seem ready to embrace outright fascism as long as he's the one in charge.
Ron DeSantis tried to be "Trumpism without the Trump" and crashed and burned.
So did Vivek, promising to be "the most Trumpy candidate that's not Trump."
The primary race, when it was Trump vs. Nikki Haley, was 75% Trump.
Trump said "We might have to suspend parts of the constitution in order to save it." His people love him for it. They cheered.
Imagine JD Vance on the stump saying this same thing.
If DeSantis proposed ending elections, or Vance, it would be the end of their careers.
Trump is the only one with the black magic of demagoguery working for him.
Yes, it can work on both sides of the aisle. Huey Long ruled Louisiana with an iron fist and had a plan to become president in 1940.
But it's a rare person that can manage to be a successful demagogue.
Trump's secret appears to be saying things that no one else would dare to say.
When others try to say things like that (aka Vance) they somehow lack the Charisma or celebrity to make it work.
If JD Vance said "Mexicans are rapists" it would have been a week long scandal.
When Trump said it, it was a scandal that got him a lot of free media and launched him on the road to the White House in 2016.
If JD Vance said "Migrants aren't human...Protestors are terrorists...Liberals are vermin." It would have ended his career.
When Trump said all of these things his crowds ate it up.
As far as I know, Marjorie Taylor Greene and Kari Lake haven't said anything like that; they haven't had the courage to try.
For whatever reason, Trump, a man who isn't at all charismatic in the traditional sense, not in any way a great orator, has the animal magnetism of a Grade A demagogue and might become the most powerful president in American history in a few months.
David Frum said something like "if moderates fail to control the border then extremists will". I think that can be extrapolated out to governance in general.
The establishment has failed, and that has opened the door to outsider candidates.
Well in this context we're talking about polls that were very accurate in 2018 and 2022, and not in 2016 and 2020, while Trump was head of the Republican party in all four elections. I guess I'd have to see a post with the data and analysis on how polls in other countries are or aren't able to account for these anti-establishment voters.
Trump was only running in 2016 and 2020. Plus Dave Wasserman, Matthew Yglesias, etc. have suggested that the D new base (highly educated white collar professionals) are more likely to turn out in "low salience" elections: midterms, special elections, etc. By contrast the R new base is more likely to only vote in presidential years.
The last EU elections stunned just about everybody because anti-establishment parties did much better than expected. See the snap elections called by Macron in France.
But the surprising result in France is easily explainable by fast-developing alliances and tactical voting that resulted. France has very weird runoff elections with three candidates, very subject to manipulation.
Le Pen only didn't win this most recent election because of a very quick last-minute alliance between France's centrist and left-leaning factions. They didn't overestimate anything.
Numerous outlets described the June elections as "stunning".
The theory is that Trump's voters don't pick up the phone when pollsters call because they don't trust polling. It's not about him, it's about the populations that support him and their distrust of the system.
I think it's just down to it happening in two very high-profile scenarios. A dispassionate statistician will see n=2 samples and say it's statistically negligible, but most people have a hard time even understanding that framing. On an emotional level, it's way simpler to see a pattern of "Trump always beats his polls" and use that as the heuristic.
Nate... How does early voting affect the model? Do polls take into account voters who have already voted? Does that have an effect on the likely voter modeling and the averages?
The earliest early voting is September 16th in PA. And then other states like MN starting on Sept 20th.
Anyone planning on endorsing (Swift) would do best to do so on September 15th.
I think he has said in the past that it has little to no impact.
I asked this in the chat but maybe it’ll be more likely to be answered if I post in here. What’s the difference between internal and public polling? Should we assume that campaigns’ internal polling is more accurate than what a high quality pollster releases? If so, what would make it more trustworthy?
Also, internal polling has a different purpose. A campaign doesn't do polling two months before the election to predict the end result; they do it to identify problems in their campaign plan that need to be addressed. Such a poll may not even attempt to represent the entire electorate, just a portion that they want a better read on.
I think the answer boils down to this: it costs a lot more, so they should be able to take the time and effort to do things that public pollsters can't or don't. It's hard because most people don't answer polls, right? On the other hand, there are a lot more public polls, so there's the wisdom-of-the-crowds aspect. But presumably campaigns think internal polling adds value, since they're willing to pay for it.
Presumably, internal polling asks questions whose results are never made public, that public polling doesn’t ask. Things like message testing.
The main reason we as outsiders can't trust internal polling is that it's selectively released - it'll only be "leaked" if the campaign wants it leaked. This introduces unavoidable bias into the results we see, even if the polling itself is completely above-board.
>Given that convention bounces are smaller than they used to be, maybe we could treat them as a rounding error in future years, like by saying that Harris’s numbers were likely to be a little inflated for a few weeks in the narrative explanations we provide, but not making any specific adjustment for the convention in the model itself.
I don't think that's the solution. I think the solution is probably a slightly more considered approach to how the model anticipates convention 'bounces'. While I'm sure Nate is simplifying and it's not actually as simple as just taking a point or two off Harris regardless of the polls, I think the better approach would be something along the lines of:
- Expect a pre-convention reversion if a rise follows a convention, so that the convention in the long-term doesn't have much effect on the topline figures
- If there is no convention bounce, that shouldn't impact the projected topline figure, but perhaps *should* affect the distribution of possible results. If a big event like a convention doesn't move the needle much, that implies perhaps a more top-light (as opposed to top-heavy) distribution of potential outcomes for the candidate
What appears to be happening here, and what I think is really really hard to justify, is the model is essentially assuming that Harris will end up in a weaker position when the dust settles post-convention than she was at pre-convention. From what I've gathered from previous posts, it's not simply "correcting" for a bump, but it is assuming she "should" have got a larger bump, based on very weak and noisy historical evidence, and because her bump is less than what she "should" have got the model then takes off a fair bit more than the bump she actually got. In other words, if I have understood what Nate is saying properly, then the model in its current state will effectively assume that a candidate is better off having no convention at all if they get a bump but it's a lower bump than the model predicts. This is pretty tortured logic in my opinion and I haven't read a good justification for it.
tl;dr: it's good to try to hedge against convention bumps, but it's bad to use an average from weak, noisy, and trending historical data to effectively 'punish' candidates for not getting a bounce that meets the artificial expectations set.
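To make the mechanism being debated concrete, here's a rough sketch of the kind of decaying adjustment I mean (a guess at the general shape, not Nate's actual implementation; the 2.5-point prior and 5-day half-life are made up):

```python
# Sketch of a convention-bounce adjustment: subtract an expected bounce
# that decays to zero over roughly two weeks after the convention.
def bounce_adjustment(days_since_convention: float,
                      expected_bounce: float = 2.5,   # hypothetical prior, in points
                      half_life_days: float = 5.0) -> float:
    """Points subtracted from the candidate's polling average."""
    if days_since_convention < 0:
        return 0.0
    return expected_bounce * 0.5 ** (days_since_convention / half_life_days)

for d in (0, 5, 10, 15):
    print(d, round(bounce_adjustment(d), 2))   # 2.5, 1.25, 0.62, 0.31
# The objection above: subtracting the *expected* bounce when the observed
# bounce was smaller leaves the candidate below her pre-convention baseline.
```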
The more important thing is that he explains his methodology at length. As a result, it's not a mystery why the model has Harris's chances going down, and also that if her recent improved numbers are a trend and not a convention bounce, then the model over time will reveal this.
Whether those of us critical about the convention bump adjustment are right or wrong, I do wish we'd be able to toggle it on or off accordingly like his 538 models used to do. I get that building a model with that level of user interactivity on Substack is probably difficult to do and probably not worth it for basically a two-person team to handle given the adjustment will be gone in a week or two anyway, but...I dunno. Something for the future I guess?
Date    | Nate Silver | 538   | Decision Desk | Economist | Betting Markets Consensus | Average
8/30/24 | 47.3%       | 57.9% | 57.0%         | 50.0%     | 49.5%                     | 52.3%
8/31/24 | 44.6%       | 56.5% | 56.0%         | 50.0%     | 49.3%                     | 51.3%
Given that 538 has fixed its model to be very similar to Nate's and they are showing the exact same national polling lead for Harris, I use 538 as a proxy for a Nowcast for Nate Silver's model.
If there were no convention adjustment, and Nate's model didn't overweight the Republican PA polls (which it does because there's little else post-RFK drop), I imagine he'd be showing about 56.5% for Harris.
We'll find out in 2 weeks.
538 is also projecting based on fundamentals, and presumably also mean reversion and greater uncertainty, so it's not really a nowcast.
Yes, and the average bounce has been 1% since 1998, and Nate said that right after the convention the model was adjusting by -2.5%.
That might explain why he's now predicting Harris wins the popular vote by 1.5%, 2.3% below 538's model.
The 538 model has been fixed to weight polls at 80+%, instead of the 66% weight on fundamentals it had with Biden.
Until the convention bounce ends and we get some new post RFK drop PA polls I would assume that Nate's model is the least reliable of all the major forecasting models.
Nate has admitted as much today, and said it doesn't make sense to try to come up with a fix to this rather unique but temporary situation.
Within 2 weeks the post convention adjustment will go away and we'll certainly have several more PA polls.
This comment is Kamala-esque
If you're saying you don't understand what I'm saying, that's a you problem.
The kind of comment that only someone who appreciates Venn diagrams and considering things in context might make. Like most fans of this site.
The justification is simply that at the time, you can't tell the difference between a candidate suffering a drop in their underlying support that is partially or completely masked by a normal convention bounce, and a candidate whose underlying support hasn't changed but who got little or no convention bounce.
Which is why it's so stupid to assume the former when the latter is clearly far more likely.
Why is it more likely? If the historically candidates get a couple of percent bounce that deflates back to the underlying support within a fortnight, that seems like a reasonable default assumption.
Because there's no evidence that it happened. The polls were flat.
I think the low response rates of those with a lack of trust in institutions, who are also typically Trump's biggest fans, are more likely than not to bias polls against Trump in 2024, as they did in 2016 and 2020. The years 2018 and 2022 are not good counterexamples, because many of those people mostly care about Trump himself and not the GOP or conservative values in particular. We can't just assume that pollsters have found a way to fix this issue when only 1-4% of people contacted actually respond to or complete surveys.
It’s easy to correct for age bias. It’s harder to correct for a bias that doesn’t match a simple demographic question.
In principle that is true. But if it is Trump supporters who distrust institutions, then correcting for under-representation of them is certainly possible. For example, maybe these distrustful folks are mostly non-college-educated males. Any decent pollster would be correcting for under-representation there, by just calling more.
The simplest way would be to weight by prior election choice, and I believe a few polls might; I wonder if that includes Rasmussen and Trafalgar.
The polls started correcting for education after 2016 but we still got a worse Dem bias in 2020.
We know the percentages of people who voted for Biden and Trump, roughly 51% and 47%, and polls could easily be weighted to match.
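Mechanically it's simple (the actual 2020 shares below are real; the raw sample shares are made up):

```python
# Weight respondents so their reported 2020 vote matches the actual result.
actual = {"biden": 0.513, "trump": 0.469, "other": 0.018}   # 2020 national result
sample = {"biden": 0.560, "trump": 0.410, "other": 0.030}   # hypothetical raw sample

weights = {k: actual[k] / sample[k] for k in actual}
print(weights)   # Trump-2020 respondents get ~1.14x weight, Biden-2020 ~0.92x
# The standard objection: recalled vote is unreliable (people "remember"
# voting for the winner), which is why many pollsters refuse to weight on it.
```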
I don't have access to Cook, but one thing clearly has changed in this election and almost nobody is focused on it.
It's renters vs homeowners. The former is getting hammered.
The BLS reported rents rising at least 0.4 percent every month for 33 straight months. That string recently broke, but for one month only. Those who rent (think young adults and Blacks) have absolutely been hammered compared to homeowners who refinanced in 2021 at ~3.0 percent, and some even less. This explains the huge pickup by Trump among young adults and Blacks.
The state of the economy is much more important for this set as well. Not a single pollster asks "Do you rent or own a home?" If that group (young voters and Blacks) returns to the Democratic fold as happened in 2020, Dems will win. If not, look for Trump to win. This is why recession/unemployment is very important.
The polls reflect the current state of the economy, but the polls do not reflect an economy that hasn't happened yet.
So is it rising unemployment/recession? That is my call. Strong enough to matter (and fast enough)? I don't know, even if my view is correct.
Trump has turned off a lot of people with his belittling comments instead of focusing on the economy. But a recession and/or rising unemployment benefits Trump.
And unemployment has been rising among Blacks and young adults. I have charts by race and age group.
https://mishtalk.com/economics/the-unemployment-rate-bottomed-a-year-ago-whos-impacted-the-most/
BLS uses owners' equivalent rent, which is a flawed metric that lags by about one year.
Zillow has real-time rent data, and for 10 months rental inflation has been negative.
In fact, according to Charlie Bilello's most recent "This Week in Charts," wage-adjusted rent today is lower than in 2016.
But almost all polls weight for age but not for prior election choice.
It's possible that younger voters and women are being undersampled and Harris is leading with them by a large amount.
After all, there has never been a presidential election after 40% of women suddenly lost the right to abortion.
And with 8 states having abortion rights on the ballot, including NV, AZ, and FL (a key Senate race).
Is it likely that women's turnout will be greater with a woman leading the ticket and abortion the #2 or #3 issue (in recent surveys)?
Given that Dobbs came after the 2020 election, I think it is 100% likely that yes, more women will vote than in 2020.
But how many more?
How would pollsters try to adjust their weightings of women vs. men to compensate?
I imagine they just don't do it.
Same with younger voters.
If they turn out in larger numbers than in 2020 (and I think that's a fair assumption, given the new race we face), how would a pollster adjust the weighting of young voters?
In a recent poll (CBS I think) 88% of voters said they were very likely to vote or extremely likely and 92% said they were at least somewhat likely to vote.
There is no way in hell that turnout is close to 88% so you can't just ask people "are you likely to vote" and then adjust the weighting based on the results.
Or maybe you can. I'm not a pollster.
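Or, more precisely: one common-sense version of what pollsters could do (a sketch, not any specific pollster's likely-voter screen; the 66% turnout target and the score distribution are made up) is to treat the self-reports as relative scores rather than face-value probabilities:

```python
# Treat self-reported likelihood as a relative score, then rescale so the
# implied turnout matches an assumed target rather than the inflated 88%.
scores = [1.0] * 88 + [0.5] * 4 + [0.1] * 8   # "very", "somewhat", "unlikely"

target_turnout = 0.66                          # assumed real-world turnout share
scale = target_turnout / (sum(scores) / len(scores))
turnout_weights = [min(1.0, s * scale) for s in scores]
print(f"implied turnout: {sum(turnout_weights) / len(turnout_weights):.2f}")  # 0.66
```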