I have a guest essay up at the New York Times with a fun headline: “Here’s What My Gut Says About the Election. But Don’t Trust Anyone’s Gut, Even Mine.” I’m linking to it here in case people want to comment — as usual, comments on election-related stuff are limited to paid subscribers — and to provide some color and context.
(And if you’re coming over from the Times — well, where have you been all cycle? But welcome. You can find our latest election forecast here, which continues to show a very close race. Or if you need something to distract you from the election, I’d recommend my book, On the Edge, which you can learn more about here.)
Most of the column is about how Kamala Harris could beat her polls — or Donald Trump could beat his again. One thing that might be counterintuitive is that even a normal-sized polling error — polls are typically off by around 3 points in one direction or the other — could lead to one candidate sweeping all 7 key battleground states. In our simulations yesterday, which account for the possibility of a correlated polling error, the most common outcome was Trump winning all 7 swing states: this happened 24 percent of the time. And the next most common was a Harris sweep, which occurred in 15 percent of simulations:
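To see how a single correlated polling miss translates into frequent sweeps, here’s a toy Monte Carlo sketch. The margins and error sizes below are illustrative assumptions for demonstration, not the Silver Bulletin model’s actual inputs: each trial draws one shared error that hits every battleground state the same way, plus a smaller independent error per state.

```python
import random

# Toy sketch of correlated polling error across the 7 battlegrounds.
# Margins are illustrative stand-ins (positive = Trump lead, in points),
# NOT the actual polling averages used by any real model.
margins = {
    "Georgia": 1.5, "Arizona": 1.5,          # tilting Trump
    "Pennsylvania": 0.0, "Michigan": 0.0,    # toss-ups
    "Wisconsin": 0.0, "Nevada": 0.0,
    "North Carolina": 0.5,
}

def simulate(n=100_000, shared_sd=3.0, state_sd=1.5, seed=42):
    """Draw one shared (correlated) error per trial plus smaller
    state-level noise, and count how often each side sweeps all 7."""
    rng = random.Random(seed)
    trump_sweeps = harris_sweeps = 0
    for _ in range(n):
        shared = rng.gauss(0, shared_sd)  # the correlated component
        results = [m + shared + rng.gauss(0, state_sd) > 0
                   for m in margins.values()]
        trump_sweeps += all(results)          # Trump wins every state
        harris_sweeps += not any(results)     # Harris wins every state
    return trump_sweeps / n, harris_sweeps / n

t, h = simulate()
print(f"Trump sweeps all 7: {t:.0%}, Harris sweeps all 7: {h:.0%}")
```

Because the error is mostly shared, a miss of a few points tends to tip all seven states at once, so sweeps come up far more often than independent coin flips would suggest — and Trump’s toy margins in Georgia and Arizona make his sweep the likelier of the two.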
I’ve discussed before in the newsletter the possibility that the polls are again biased against Trump, and that’s also covered in the Times column. This case is more intuitive: after all, Trump beat his polls in 2016 and then beat them again by an even wider margin in 2020.¹ As the column points out, though, the reasons for this are sometimes misattributed: it’s probably not “shy Trump voters”² but rather nonresponse bias, with Democrats more likely to respond to political surveys.
But there’s not much evidence for the shy-voter theory — nor has there been any persistent tendency in elections worldwide for right-wing parties to outperform their polls. (Case in point: Marine Le Pen’s National Rally party underachieved its polls in this summer’s French legislative elections.) There’s even a certain snobbery to the theory. Many people are proud to admit their support for Mr. Trump, and if anything, there’s less stigma to voting for him than ever.
Instead, the likely problem is what pollsters call nonresponse bias. It’s not that Trump voters are lying to pollsters; it’s that in 2016 and 2020, pollsters weren’t reaching enough of them.
This is potentially a hard problem to overcome. But as Nate Cohn has pointed out in his excellent series of columns at the Times, pollsters are very aware of it and in many cases have been changing their methods in response. If polling firms were still applying the same techniques they did in 2016 and 2020, we’d probably be seeing a Harris lead in the Electoral College right now. Instead we have a toss-up, more or less.
However, the baseline assumption of the Silver Bulletin model is that while the polls could be wrong again — and in fact, they probably will be wrong to some degree — it’s extremely hard to predict the direction of the error.³ Empirically, there’s basically no correlation in polling error from one cycle to the next.
And pollsters could be overcompensating, whether because they’re worried about missing low on Trump again or because the 2020 polling error was primarily caused by COVID, with Democrats more likely to “socially distance” and having more time to respond to polls. There are prominent examples of this kind of overcorrection, such as the 2017 UK election, where pollsters put a heavy finger on the scale for the Tories but Labour beat its polls instead:
How might that happen? It could be because of something like what happened in Britain in 2017, related to the “shy Tories” theory. The election was expected to be a Tory sweep; instead, the Conservatives lost their majority. There was a lot of disagreement among pollsters, and some did nail the outcome. But others made the mistake of not trusting their data, making ad hoc adjustments after years of being worried about “shy Tories.”
Polls are increasingly like mini-models, with pollsters facing many decision points about how to translate nonrepresentative raw data into an accurate representation of the electorate. If pollsters are terrified of missing low on Mr. Trump again, they may consciously or unconsciously make assumptions that favor him.
So if the polls are often unreliable, should you trust your gut instead? Or look at the vibes — subjective perceptions about the race and how it’s covered in the media — which have shifted toward Trump more than the underlying data has?
The column argues absolutely not. One’s gut instinct can be quite useful in something like poker, when you’ve been able to calibrate it by playing out thousands of hands.⁴ But elections occur only once every four years, and most people’s guts just tell them that the same thing that happened last time will happen again. Or they repeat what they hear in the media. Or they may engage in some degree of emotional hedging: Democrats fear another Trump win, so they imagine it happening to protect themselves from disappointment.
For what it’s worth, my gut says Trump too — it’s hard for it not to when I’m vacuuming up so much media every day, and the media vibes have been Trumpy lately. I just don’t think there’s any value in my gut. Basically, you should stick to the models or other relatively objective indicators. It’s not like I really have any idea how an undecided voter in Latrobe, Pennsylvania, is thinking about the race anyway: their political preferences and news consumption habits are very different from mine.
There’s even a case for mild contrarianism: that you should shade a little in the opposite direction of whatever the conventional wisdom says. The conventional wisdom is very often wrong, more often than the polls are: it dismissed any chance of a Trump win in 2016 even though the polls showed a fairly close race. Shading against the conventional wisdom would also have served you well in 2022, when the media’s “red wave” narrative overshot the data.
The case goes something like this. The hivemind of the media sometimes takes on a life of its own, an echo chamber. There are psychological and sociological factors at play here, but mostly it’s just that nobody really knows anything, and the people who do know something aren’t saying. But it’s boring to just say “it’s a toss-up!” over and over again. So small shifts in “momentum” tend to be exaggerated. Media coverage of polls — how articles are headlined, or which polls generate more discussion — often swings around more than the underlying data as captured by models like ours.
But pollsters are also influenced by vibes, in various ways. Conscious or unconscious biases may cause them to tinker with their assumptions so as to match the media narrative or to herd toward other polls. They may find excuses not to publish “outliers”: a conspicuously large number of polls in this race show the swing states within 2 points in either direction and nobody except maybe NYT/Siena seems to have the guts to publish a Trump +5 or Harris +6, for instance.
So it may be that if pollsters put a blindfold on, completely trusted their data, and published all their numbers “as is” — knowing nothing about what other polls said or about media coverage of the race — it would be a hair more favorable to Harris than the numbers we’re seeing in the public record, which are influenced by the Trumpy vibes lately.
Maybe.
Or maybe not. This is a tricky one because it’s not a case where the vibes say Trump and the data says Harris. Rather, the vibes say Trump and the data says we just don’t know. I’m not trying to predict the direction of polling bias. But the point is that you should probably assume that a pro-Trump polling error is roughly as likely as a pro-Harris one.
1. Just not by quite enough to win, since Joe Biden had such a large lead.
2. Trump voters are not “shy” exactly, anyway. I’m thinking of a certain Trump-hat-wearing Phillies supporter at the NLDS game I attended at Citi Field a few weeks ago, who was loudly mocking both Harris supporters and Mets fans — until Francisco Lindor hit that grand slam.
3. The reason that a Trump sweep is more likely than a Harris sweep in our simulations is simply that he has clearer “leads” in Georgia and Arizona than Harris has in any state; the other 5 states are basically toss-ups, whereas you could describe Georgia and Arizona as tilting toward Trump. So it’s a slightly less heavy lift.
4. You should be empirical about when to trust your gut, in other words. People’s gut instinct about when to trust their gut is often wrong.
“If polling firms were still applying the same techniques they did in 2016 and 2020, we’d probably be seeing a Harris lead in the Electoral College right now. Instead we have a toss-up, more or less”
Why haven’t any outfits released a “here’s what our data would look like under our 2016/2020 methods” comparison? It would be very informative to this discussion, but maybe it would highlight the large amount of subjective modeling work at play in a way that pollsters would prefer not to talk about.
Title of essay: Nate Silver: Here’s What My Gut Says About the Election. But Don’t Trust Anyone’s Gut, Even Mine.
How it is being reported in other headlines: Nate Silver says his instinct is that Trump will win.
That was written as a joke, but after looking it up, it doesn’t go far enough.
https://duckduckgo.com/?q=nate+silver&iar=news&ia=news
"Trump is going to win election, says America's top pollster" is a real headline.