Silver Bulletin

SBSQ #26: Do prediction markets make polls useless?

Plus the 2026 Senate map, Joe Burrow versus Joe Flacco, and why you should do creative work until you die.

Nate Silver
Nov 09, 2025
A Polymarket electronic billboard in Times Square. Financial Post.

This is SBSQ #26, the Wade Boggs edition: named after his uniform number and not the 107 beers Boggs claimed that he once drank on a cross-country flight. (We probably won’t make it to SBSQ #107 if I drink that much beer myself.) As always, please leave any questions for edition #27 in the comments below. If you want to read even more from me on Tuesday’s elections, I’d encourage you to check out my chat from Friday at the New York Times with Kristen Soltis Anderson and Frank Bruni. Otherwise, we’ve got a super fun mix of questions this time, so let’s get right to it:

  • Why do you still code your own models?

  • Do prediction markets make polls useless?

  • After Tuesday, how broad could the Senate map be in 2026?

  • Are AI valuations an all-or-nothing gamble?

  • Is Joe Burrow an elite quarterback?

Why do you still code your own models?

Josh asks:

SBSQ Question: why do you continue to code your models yourself? It seems like a huge opportunity cost, and counter to how you’ve talked about the opportunity cost of your time: it’s very time-consuming, and it’s feasible to hire somebody to do it. I’m especially curious whether doing this work has intrinsic value that makes it worth doing. In my field I enjoy and am good at data analysis, but I do very little now that I have executive-level responsibilities, and doing it tends to inhibit the team from exercising its own data analysis capabilities.

Or, is this just something you like doing, feel you have to do because the risk of trade secrets walking out the door is too hard to mitigate, something else?

Thank you for the insightful question, Josh. I have basically two different answers for you.

One is more mechanical. These models are highly specialized and don’t necessarily lend themselves to some sort of production-line process. It has been hugely helpful to have people like Eli and Joseph and my former FiveThirtyEight colleagues working with me to help vet modeling strategies, gather data and make awesome tables and graphics.

Undoubtedly, I’m also a control freak. But the devil is very much in the details in these models, and so I think they benefit a lot from hands-on experience. There’s a little bit of a 10,000-hours thing going on: how many other people have spent 10,000+ hours building predictive models in sports and politics? I’m guessing at most 1,000 — probably mostly sports gamblers. The models also require an eye for precision, which is one reason that AI tools aren’t currently so helpful for building them. You could probably “vibe code” your way to a basic Elo-type model, but I expect it will be a long time before LLMs can reproduce the more advanced models that the Silver Bulletin team or other people in the space are publishing.
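For what it’s worth, a “basic Elo-type model” really is only a few lines. Here’s a minimal sketch in Python; the K factor and the 400-point scale are the conventional chess defaults, not anything taken from ELWAY or any other Silver Bulletin model:

```python
# A minimal, generic Elo-type model: the conventional chess formulation,
# not anything resembling the actual Silver Bulletin models.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that side A wins, under the Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 20.0):
    """Nudge both ratings toward the observed result, scaled by K."""
    surprise = (1.0 if a_won else 0.0) - expected_score(rating_a, rating_b)
    return rating_a + k * surprise, rating_b - k * surprise

# Example: a 1600-rated team upsets a 1700-rated team.
print(update_elo(1600, 1700, a_won=True))  # the underdog gains ~13 points
```

Run that update over a few seasons of results and you have ratings. The hard part, which the sketch leaves out entirely, is everything layered on top: home-field advantage, margin of victory, roster changes, priors, and so on.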

These models are more like series circuits than parallel circuits — think about those old-school Christmas tree lights where if one bulb goes out, the rest in the string also do. A single bad line of code can essentially contaminate your entire model because everything builds on the previous step. In fact, models like ELWAY are also recursive, since the rest of the season is simulated one week at a time and the results from each simulation feed back into the results for the following week, which can magnify the impact of any errors. Meanwhile, election models are highly sensitive to certain key assumptions about the correlation between different races. (There’s some good competition out there, but overall, the track record of other election models has been uneven.)
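To make the recursion concrete, here’s a toy version of that week-at-a-time structure. It reuses the same generic Elo machinery as the sketch above rather than anything from the actual ELWAY code, and the schedule format and K value are placeholders:

```python
import random

def expected_score(r_a: float, r_b: float) -> float:
    """Win probability for the first team under the Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def simulate_rest_of_season(ratings, schedule, k=20.0):
    """One simulated world: play the remaining schedule one week at a time,
    feeding each week's simulated results back into the ratings."""
    ratings = dict(ratings)               # fresh copy per simulation
    wins = {team: 0 for team in ratings}
    for week in schedule:                 # each week: a list of (home, away) pairs
        for home, away in week:
            p_home = expected_score(ratings[home], ratings[away])
            home_won = random.random() < p_home
            # The recursive step: simulated outcomes update the ratings used
            # for every later week, so an error here compounds downstream.
            delta = k * ((1.0 if home_won else 0.0) - p_home)
            ratings[home] += delta
            ratings[away] -= delta
            wins[home if home_won else away] += 1
    return wins

# Season odds come from running this thousands of times and tallying `wins`.
```

The feedback loop is the point: because each simulated week updates the ratings that every later week is simulated from, a single bad line inside that loop doesn’t stay contained, and it quietly distorts every downstream result.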

Don’t get me wrong: there are undoubtedly still bugs in every model that I’ve ever coded up. Some of the benefit of experience is in knowing how robust your model is to those inevitable small bugs, and in recognizing when an implausible-seeming result reflects ill-conceived or even outright buggy code — or perhaps more commonly, errors in the data you’re feeding into the program. Domain knowledge helps a lot: I really am a huge sports and elections geek.

But there’s something else too — so here’s answer #2. I wouldn’t quite describe writing these models as “fun”. There’s a lot of pulling your hair out at 3 in the morning trying to figure out precisely which bug is causing your code to fail, or, worse, projecting the New York Jets to win the Super Bowl. However, I find it creatively fulfilling to do immersive work that builds on itself over weeks or months — and it’s exciting to see it come together, particularly since you often won’t know if the model is producing sensible results until relatively late in the process.

From the standpoint of content economics, quick hits pay more dividends in the short run. If I’d spent the several hundred hours I put into ELWAY this summer writing paywalled hot takes instead, I’m almost certain we’d have a higher paid subscriber count today. That’s less clear in the long run, however, because models produce perpetual value in a way that hot takes do not.1

I also learn a lot when I’m writing these models. I learn about the underlying subject matter, whether it’s football or elections. I develop better coding skills: programming is generally a young person’s game, but I’m a much better programmer at age 47 than I was at 27. And there’s a lot of problem-solving, like finding the right mathematical function, or figuring out what piece of the puzzle you’re missing when your spidey-sense tells you something isn’t quite right.

In the content game, there’s always pressure to squeeze every last thought you have into some sort of monetizable commodity. To some extent, that’s what the Always Be Blogging philosophy is about. If you’re spending a lot of time thinking about something in the news, you probably ought to try to turn it into a newsletter.

But at the risk of being slightly rude, there are a lot of Substacks I read and podcasts I listen to that don’t seem to reflect much original thinking at all. It’s predictable regurgitation, often along partisan lines: posts where you could quite literally fill out the rest of the newsletter with an LLM once you saw the headline.

I guess I think of it as almost an existential thing. No matter how much of a big shot you get to be, you need time to do original research, reading, and reporting, as well as to expose yourself to new people, ideas, and life experiences. The periods of time when I haven’t done that — particularly in the early days of FiveThirtyEight @ Disney when I was spending a lot of time on management work — are the periods when I’ve done my worst work. So I’m almost phobic of not going back into the lab and embarking on some ambitious new projects from time to time. I’d consider it tantamount to retiring, and I hope I’m still decades away from that.

Do prediction markets make polls useless?

Francis Quinn asks:

Given the remarkable trajectory of prediction markets in the last few election cycles, and the public’s heightened awareness of how these markets work resulting from the explosion of trading volumes in sports event outcomes, will media make a sharp turn away from polling as a critical element of their political reporting?

What types of news business models and use cases for polling might arise if prediction markets replace polling as a primary tool of the media? Will polling firms take their IP into private, commercial realms?

Or am I just overstating the impact of the PM exchanges on political reporting?

I’m sure I’m overexposed to it because of the parts of the world I travel in, but it really is remarkable, Francis, how much mindshare prediction markets have gained over the past couple of years. Even New York City mayor-elect Zohran Mamdani recently mentioned Kalshi, for example.

Of course, I wear a few different hats here. So I have some complicated — indeed, conflicted — thoughts about this.

As an advisor to Polymarket, I’m obviously happy to see all the attention to prediction markets.

As a journalist who would like to see more statistical literacy, I also think it’s basically good when people are more exposed to probabilities and they become more normalized. Some journalists like to claim that they don’t make predictions, but journalism is chock-full of speculative statements about the future. There just isn’t a lot of accountability for them because they’re phrased in such vague, mealy-mouthed ways that pundits can spin away basically incorrect predictive statements later on.

And while there’s some notion that the public doesn’t “get” probabilities — i.e., that they treat an 80 percent chance as a 100 percent chance — I think that’s a significant oversimplification. People seem to understand what a 70 percent chance of rain is, or a 70 percent chance of the local NFL team winning its next home game, because they have a lot of exposure to probabilities in those contexts. Often, the failure point in communicating probabilistic information isn’t the audience, but journalists themselves. That’s one reason why I turn down most interview requests, other than some long-form podcasts. I’m afraid that things will be clipped out of context, so I’d rather that people come to Silver Bulletin or listen to Risky Business so they can hear it from me directly.

But of course, I also analyze polls and build probabilistic models myself. In that capacity, I strongly disagree with the notion that prediction markets can serve as a good substitute for polls.
