Silver Bulletin

Models & Forecasts

PRISM 2026 NBA draft rankings

Who should be the #1 pick? Our new NBA draft model says teams still overvalue potential and undervalue production.

Joseph George, Nate Silver, and Eli McKown-Dawson
Mar 28, 2026

🏀 Our latest NBA draft rankings

Updated March 28, 2026

I feel like we’ve been in a nice little rhythm at Silver Bulletin, rolling out more and more flagship features every month, from our Iran War polling tracker to our COOPER college basketball rankings. So here’s another big reveal: our new NBA draft model, PRISM! The text below is written by Joseph George and PRISM is Joseph’s baby, but there have been many late-night Slack threads between us discussing every detail of the model. Next on our agenda: our World Cup model, which we’re already making good progress on, and our midterms forecasts. We’ll also have at least two updated versions of PRISM before the NBA draft in June. P.S. Like our other chart-heavy pages, this post may be best viewed on the web rather than in email or on the app. – NS, 3/28/26

See also: Men’s COOPER ratings, March Madness projections

The NBA draft is the league’s most consequential event. We touched on this in our Future of the Franchise rankings, but most teams are in their respective positions because of their decisions in June. This has held true since the NBA’s inception — contrary to popular belief, only a handful of champions were built primarily through free agency or trades. Most title teams, like last year’s Oklahoma City Thunder, are born of smart drafting, supplemented by shrewd moves on the trade market rather than the other way around.

Teams are aware of this, and, of course, are willing to throw away entire seasons because of it. (With the league’s proposed anti-tanking solutions leaving something to be desired, we’re still working on our own NBA draft reform plan. Look for that soon.) In the years since Sam Hinkie’s “process”, tanking and its more polite cousins like “strategic rebuilding” have become a fixture of the league’s competitive landscape. Front offices will openly gut rosters, bench healthy veterans, and punt on entire seasons, all for the chance to move up a few slots.

You would think, then, that NBA teams would be remarkably good at making draft picks. If the draft is important enough to sacrifice entire seasons for, surely the evaluation process behind it would be rigorous. But every year, we see the same patterns of misses. There are a few reasons for this.

Yes, scouting is inherently noisy. Even with a sound approach, you’re trying to project outcomes years into the future from a small sample of games among a set of 18-year-old outliers. This isn’t to excuse bad process — there’s definitely plenty of that across the NBA. But even analytically sound prospects fail to translate for reasons that are hard to foresee.

NBA teams undervalue production and overvalue “potential”

Cognitive biases also distort where players end up getting drafted. Consensus evaluation tends to prize linear ordering — ranking players from “#1 option” down to “role player” — even when that hierarchy doesn’t map onto how basketball value actually works for most teams. The result is that, relative to more “reliable” players, teams systematically overweight creator archetypes and the sorts of players who can turn into what our friend Jeremias Engelmann calls “quagmires”. At each slot, teams should be selecting for the highest expected value, not just the highest ceiling. Team-specific needs complicate things further, but figuring out how to define expected value is most of the problem.
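The expected-value point can be made concrete with a toy calculation. The numbers below are entirely hypothetical, not PRISM's, but they show how a "ceiling" prospect can be the worse bet once you weight every outcome by its probability:

```python
# Hypothetical illustration (invented numbers, not PRISM's): comparing two
# prospects by expected value rather than by ceiling alone.
# Each outcome is (career value in wins, probability of that outcome).

def expected_value(outcomes):
    """Probability-weighted career value across possible outcomes."""
    return sum(value * prob for value, prob in outcomes)

# "Ceiling" creator archetype: small chance of stardom, high bust rate.
creator = [(60, 0.10), (20, 0.25), (2, 0.65)]

# "Reliable" archetype: lower peak, but much fatter middle outcomes.
role_player = [(35, 0.15), (18, 0.55), (5, 0.30)]

ev_creator = expected_value(creator)      # 6.0 + 5.0 + 1.3  = 12.3
ev_role = expected_value(role_player)     # 5.25 + 9.9 + 1.5 = 16.65

# The player with the higher ceiling (60 vs. 35) has the lower expected
# value — which is the quantity a draft pick should actually maximize.
```

The probabilities and win values are made up for the sake of the arithmetic; the point is only that ranking by best-case outcome and ranking by expected value can disagree.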

Current production is also chronically underrated. Simply put, the strongest predictor of future NBA success is current success. This shouldn’t surprise anyone, and yet every draft cycle, evaluators find new ways to talk themselves out of good players. Paolo Banchero was drafted ahead of Chet Holmgren in 2022 largely on perceived star upside, despite far less efficient college stats, but Holmgren has almost certainly been the better pro from an impact standpoint. So our new model, PRISM, focuses on identifying good, impactful players for their age without glaring holes in their statistical profiles.

Admittedly, athletic characteristics like vertical leap, agility, speed, and strength are hard to measure outside of combine testing, which only becomes available in late May. And to be fair, our PRISM projections still incorporate consensus scouting rankings — though we’ll show you our “pure stats/computer” version just for fun later on.

But do combine measurements tell us which of those traits are functional — that is, do they actually correlate with positive NBA outcomes? The best studies we’ve seen say not really. If you want an athletic guard because athletic guards get to the rim, a guard who already gets to the rim is a good bet, regardless of how he tests at the combine or how athletic he looks doing it.

When we remember the great teams in NBA history, it’d be hard not to rank the Golden State Warriors dynasty of 2015 through 2022 near the top. Most narratives assume they caught lightning in a bottle by drafting Stephen Curry in 2009 and Draymond Green in 2012. But what if I told you those picks were layups? The Warriors ignored aesthetic biases — Steph was considered too frail to translate, and Draymond didn’t fit a specific archetype — and simply selected the most productive players at their positions. Curry and Dray were two of the best college players two years before they were drafted; there are plausible arguments that they should have been top-five picks in 2007 and 2010, respectively. The cornerstones of a dynasty were not that difficult to identify years before anyone considered them future Hall of Famers.

Steph Curry at Davidson. Shamus/Getty Images.

“Analytical” scouting, which isn’t restricted to models, certainly has its place in NBA front offices today, but it’s not nearly as widespread as most fans would think. There’s a common assumption that front offices make decisions with ruthless efficiency and little bias, but plenty of teams still default to silhouette scouting — evaluating players based on physical traits and aesthetics rather than production, and projecting outcomes based on who a player looks like rather than what they’ve done. This might explain why Kon Knueppel got Joe Harris comparisons last season.

When a player consistently drives winning outcomes against high-level competition, the burden of proof reverses. The question shouldn’t be whether their game “looks” translatable, but whether there’s any structural reason it wouldn’t be. “Structural” might be doing a lot of work in that sentence, but we mean this should be grounded in real, current gaps in production — not just perceived translatability concerns. Luka Doncic shredded the EuroLeague before entering the NBA, winning its MVP in 2017–18, but fell to third behind Deandre Ayton (!!!) and Marvin Bagley III (?!?!?) because evaluators worried his lack of athleticism would neutralize his otherwise extraordinary skill set. It didn’t. Luka never became a great athlete; the traits that made him dominant in Europe simply remained dominant in the NBA. On the other hand, Jahlil Okafor was genuinely productive in 2014–15, but fell out of the league because of his low passing and defensive indicators — real holes in his profile, not just aesthetic concerns.

PRISM is built to operationalize these principles. It anchors on production — what players have actually done against real competition — and layers in the contextual factors that shape projection: shot creation volume and efficiency, defensive and offensive roles, positional size, age-adjusted performance curves, and more. So let’s take a look at our top 5 for this year’s class. Note that PRISM currently only rates players who have spent at least some time with a Division I NCAA program, but virtually all of the top prospects fall into that category this year.
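To make the "anchor on production, layer in context" idea concrete, here is a minimal sketch of how such a rating could be assembled. Every feature name, weight, and the age curve below is an illustrative assumption of ours — PRISM's actual inputs and coefficients are not public, and this is not its implementation:

```python
# Hypothetical sketch of a production-anchored prospect rating.
# Feature names, weights, and the age adjustment are assumptions for
# illustration only; they are NOT PRISM's actual inputs or coefficients.

FEATURE_WEIGHTS = {
    "box_impact_per_100": 1.0,   # overall production vs. competition
    "shot_creation_rate": 0.6,   # self-created shot volume
    "true_shooting_delta": 0.8,  # efficiency relative to league average
    "defensive_activity": 0.5,   # steal/block-style indicators
    "positional_size": 0.3,      # size relative to projected role
}

def age_adjustment(age, baseline=19.0, per_year=0.15):
    # Younger players get credit for producing ahead of the curve;
    # older players are discounted for doing it later.
    return 1.0 + per_year * (baseline - age)

def rate_prospect(features, age):
    """Weighted production score, scaled by an age curve."""
    raw = sum(FEATURE_WEIGHTS[k] * features.get(k, 0.0)
              for k in FEATURE_WEIGHTS)
    return raw * age_adjustment(age)

stat_line = {"box_impact_per_100": 5.0,
             "shot_creation_rate": 2.0,
             "true_shooting_delta": 3.0}

young = rate_prospect(stat_line, age=18.4)
older = rate_prospect(stat_line, age=22.1)
# Identical stat lines, but the 18-year-old rates meaningfully higher.
```

The design choice worth noting is that the age curve multiplies the production score rather than adding to it, so a big age bonus can't rescue a prospect who isn't producing in the first place.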

Intrigued? You can find a more complete methodological description of PRISM here. Spoiler alert: Kansas’s Darryn Peterson, still considered the potential #1 overall pick, is just a fraction of a point outside of our top five. But his ranking has consistently fallen over the course of the college season as we’ve been refining PRISM, and the Jayhawks’ washout from the NCAA tournament dropped him to #6. Some of that reflects the depth of the class: he’d be as high as #2 on PRISM’s board in other recent drafts. But the possibility that he could wind up as a “tweener” or even one of Engelmann’s “quagmires” has been increasing as his production hasn’t quite matched his scouting reports.

In typical Silver Bulletin fashion, we’ve got a lot of detail for you in the rest of this newsletter:

  • Prospect profiles for Cameron Boozer, AJ Dybantsa, Peterson, and most of the other top ~20 players in the upcoming 2026 class

  • Full PRISM rankings for the 2022, 2023, 2024, 2025 and 2026 classes

  • A comparison of PRISM rankings versus consensus rankings

  • Projected offensive and defensive roles

  • Draft strength scores: Is the 2026 draft as good as reputed?

  • Raw PRISM ratings based purely on stats, with no prior based on scouting rankings

  • Projected player development trajectories and volatility ratings

  • And a draft simulation factoring in young core, fit, NBA lifecycle, and more.

Here are PRISM’s full rankings. (Note that you’ll want to scroll through the pages to see beyond the top 15.)
