SBSQ #14: Election Day lightning round!
All your last-minute questions, answered. Actually, just 13 of them.
While election nights are incredibly busy, Election Day isn’t so bad, actually. There’s not a lot to do until results start coming in — and no, you probably shouldn’t pay too much attention to exit polls.
So there’s probably just enough time for the late October/early November edition of Silver Bulletin Subscriber Questions. You can leave questions for next time in the comments below or the Subscriber Chat. For now, I’d probably keep it to non-politics-related questions just because everything in the political landscape could look very different 24 hours from now. I’ll send a reminder once we get closer to the late November/early December edition.
And, obviously, this is going to have to be a lightning-round edition. I’m going to try to get through as many of these as I can, sticking to answers of a paragraph or two, and probably missing a typo or three or four.
In this edition:
Does Osborn have a real shot in Nebraska?
Are pollster ratings and aggregation sites responsible for herding?
What’s up with another Trump surge in prediction markets?
Will the ground game help Harris to beat her polls?
Do elections usually “break” toward a candidate at the end of the race?
Is fundraising data predictive for presidential races?
Does game theory say you should vote out of self-interest?
Should prediction markets be incorporated into the models?
What’s the point of a 50/50 forecast?
What explains the weird divergence between Senate and presidential polls?
How much did the Selzer poll affect the model?
Is early voting data predicting record turnout?
What happened to Silver Bulletin’s sports coverage?
Does Osborn have a real shot in Nebraska?
Petr asks:
I think I noticed another weird prediction from 538: https://projects.fivethirtyeight.com/2024-election-forecast/senate/nebraska/
Their priors are too strict, making Osborne's [sic] victory seem highly unlikely. But they should react to the fact that he is circumventing the partisan lean by running a third-party campaign, right? And when the polls are this close, predicting only a marginal chance of victory seems wrong. The Economist does not make this mistake.
What do you think about this? :)
Yeah, their Senate forecast is much more bearish on Democrats than others. I understand the reasons for skepticism of Dan Osborn — in case readers don’t know, he’s an independent running against Deb Fischer in Nebraska who hasn’t said which party he’ll caucus with, but Democrats stepped out of the way for him, and most forecasters are treating him as a de facto Democrat.
Fundamentals can go a long way in Senate races — our Congressional forecasts are much more of a blend between polls and fundamentals than our presidential forecasts. And you may remember a similar independent named Greg Orman in Kansas, who was polling competitively in 2014 but then faded badly down the stretch. But 538 deviates a lot from the other models in Nebraska, and their forecaster has a history of — just in my opinion as someone who’s built these models before, of course — overdoing it in the fundamentals direction. I’d certainly be buying some Osborn stock at the 5 percent odds 538 offers and some Jon Tester stock at 7 percent.
What I think they’re maybe missing is that when there’s a big conflict between the polls and the fundamentals — or generally between different groups of indicators — the uncertainty bands are wider. It wouldn’t shock me if Osborn loses by 14 points, but I also wouldn’t be entirely surprised by, say, a 2-point win, especially given Harris’s weirdly strong polling in the Prairie States.
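To make the intuition concrete, here’s a toy sketch of a precision-weighted blend where disagreement between the two indicators widens the combined interval. All the numbers and the specific widening rule are illustrative assumptions of mine, not anything from an actual forecast model:

```python
import math

def blend(polls_mean, polls_sd, fund_mean, fund_sd):
    """Precision-weighted average of a poll-based and a fundamentals-based
    estimate of the margin. When the two inputs conflict, the extra term
    widens the combined standard deviation. Purely illustrative."""
    w_p = 1 / polls_sd ** 2          # weight on polls (inverse variance)
    w_f = 1 / fund_sd ** 2           # weight on fundamentals
    mean = (w_p * polls_mean + w_f * fund_mean) / (w_p + w_f)
    base_sd = math.sqrt(1 / (w_p + w_f))
    # Widen the uncertainty band in proportion to the disagreement:
    disagreement = abs(polls_mean - fund_mean)
    sd = math.sqrt(base_sd ** 2 + (0.5 * disagreement) ** 2)
    return mean, sd

# Hypothetical inputs: polls say Osborn -2, fundamentals say Osborn -14.
mean, sd = blend(-2, 4, -14, 5)
```

With those made-up inputs, the blended mean lands around a 7-point loss, but the widened spread means both a narrow Osborn win and a 14-point blowout sit within roughly one standard deviation — which is the point: conflicting indicators should produce fatter tails, not just a midpoint.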
Are pollster ratings and aggregation sites responsible for herding?
JP Stroman asks:
Possible question for November: I heard a few comments that the proliferation of electoral forecast models (Silver Bulletin, FiveThirtyEight, Split Ticket, etc.) actually has the effect of encouraging more herding among pollsters, because it gives them even MORE incentive to fit into the consensus. Do you think this could be a possible side effect?
Yeah, I definitely worry about it. That’s one of the reasons we introduced a herding penalty into our pollster ratings some years back, and maybe we’ll need to increase its magnitude further. It’s also why I’ve written a lot about herding and why I defend “outlier” polls and counter-punch others who criticize them.
At the same time, there’s some degree of conflict between what’s best for the pollster and what’s best for my model. I basically just want raw data and independent opinions, which I can then de-noise through averaging and the other techniques the model uses. But that noisy data may make a pollster more likely to be “wrong”, even if it makes my model better. Basically, the hedging/aggregating/averaging/reversion-to-the-mean step is now happening further upstream, at least for some pollsters. Likely a topic we’ll be revisiting again after the election, especially if there’s another polling error.
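A toy simulation makes the trade-off visible: independent noisy polls average out to the truth, while polls shaded toward a (possibly wrong) consensus stay biased no matter how many you aggregate. Every number here — the true margin, the noise level, the consensus value, the herding weight — is a made-up assumption for illustration:

```python
import random

random.seed(42)

TRUE_MARGIN = 3.0  # hypothetical true margin of the race, in points

def independent_polls(n, noise=3.0):
    """Pollsters publish their raw, independent readings of the race."""
    return [random.gauss(TRUE_MARGIN, noise) for _ in range(n)]

def herded_polls(n, noise=3.0, consensus=1.0, weight=0.8):
    """Pollsters shade their raw numbers heavily toward a consensus
    (here, a wrong one) before publishing -- the hedging step happens
    upstream of any aggregator."""
    return [weight * consensus + (1 - weight) * random.gauss(TRUE_MARGIN, noise)
            for _ in range(n)]

def average(xs):
    return sum(xs) / len(xs)

ind = average(independent_polls(200))   # converges toward 3.0
herd = average(herded_polls(200))       # stuck near 0.8*1.0 + 0.2*3.0 = 1.4
```

Notice the perverse incentive: each herded poll sits closer to the consensus and so looks individually "safer," but the aggregate inherits the consensus error, and no amount of averaging downstream can recover the signal that was thrown away upstream.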
What’s up with another Trump surge in prediction markets?
Fresh from the Chat, Spencer asks:
Why have the betting markets reverted back to Trump >60%? (at least in Polymarket)