Introducing COOPER: Silver Bulletin's NCAA basketball rating system
The method behind the March Madness.
What’s new in COOPER
COOPER — named in honor of Naismith Award winner Cooper Flagg and two-time NCAA champion Cynthia Cooper — is Silver Bulletin’s new college basketball rating system.1 As part of a long Nate/Silver Bulletin tradition, it is a goofy backronym: Collegiate Outcomes with Opponents, Pace and Expert Ratings. The name hints at the system’s essential features:
The first “O” is for “outcomes”. COOPER is mostly based on margin of victory, but simply winning games matters, too.
The second “O” stands for “opponents”. The model adjusts for the strength of opponents through an Elo-like system, which is essential when rating college teams with their wide disparity in quality. Furthermore, games between more evenly-matched teams are weighted more heavily in the system.
The “P” is for “pace”. We now account for how high-scoring a team’s games tend to be and derive both offensive and defensive ratings for each school. Higher-scoring games tend to introduce more variability.
Finally, “ER” for “expert ratings” means we revise each team’s rating at the start of the season using human opinions in the form of the preseason AP and Coaches’ polls. A team’s initial rating to start the season is based on a combination of these polls, its year-end rating from the previous season, and the strength of its conference.
COOPER represents an evolution of the SBCB ratings that we used in 2025. The two systems share some of the same programming, but we’ve dug pretty deep under the hood, and there are some important changes:
The most noticeable one is that we now calculate separate offensive and defensive ratings for each school, which we call PPPG (projected points per game) and PPAG (projected points allowed per game). Essentially, these ratings represent how many points we’d expect a team to score against an average NCAA opponent. For instance, a team with a PPPG of 81 and a PPAG of 74 would be expected to win 81–74. By subtracting PPAG from PPPG, we can also derive a net rating for each team: in this case, it would be +7.
But net rating doesn’t tell the whole story because teams that play to higher scores (both scoring and allowing more points) tend to have higher variance. Instead, a team’s Elo rating accounts for this property and is its best representation of its likelihood of victory against an average opponent.2
Although it’s less visible, perhaps the most important change is that we’ve removed the constraint from SBCB that required a team to always gain in the ratings when it won and always lose points when it lost. For instance, even if Duke was a 35-point favorite against Cal St. Bakersfield and won the game by 1 point at the buzzer, our previous system would have viewed this as a positive for Duke. But that’s pretty clearly an unreasonable assumption from a Bayesian standpoint — you wouldn’t think more highly of Duke after this game — and this was causing a fair amount of information loss.3 So this is no longer the case in COOPER. Teams are awarded a bonus for winning games, regardless of the final margin.4 But they can now lose ground if they significantly underachieve COOPER’s expectations even after a win — or gain ground following a loss where the final score is impressive relative to the model’s expectations. (There’s a stylized sketch of this logic just after this list.)
COOPER tends to show a wider spread between the best and worst teams than SBCB did. This is partly because there’s now slightly more carryover in the ratings from season to season. Previously, our view was that the increasing likelihood of star players defecting to the NBA after just a season or two was diminishing the advantage for blue-blood teams. However, NIL is helping the most elite programs remain as dominant as ever. Also, as described above, COOPER tends to put more emphasis on margin of victory as compared with SBCB and doesn’t punish teams for runaway margins in the same way. This is a bit less “politically correct” in the sense that it potentially gives teams more credit for beating up on weaker opponents. But it increases the fidelity in distinguishing good teams from great teams. Predictive accuracy is what we’re after here.
However, each game now has what we call an “impact factor” that reflects how reliable a signal it provides. Basically, games that are projected to be lopsided5 matter less in COOPER than those that the model expects to be close. Conference games and especially NCAA tournament games also have higher impact ratings. Teams tend to compete at full effort in these games, and the results are more reliable than for early-season, non-conference matchups.
Although COOPER continues to account for travel distance — an East Coast team playing in California will often have a rough time — we’ve found that these effects have diminished over time as travel accommodations improve, and to a larger extent than SBCB was accounting for. However, we’ve retained one of my favorite SBCB features: customized home court advantage ratings for each school.
SBCB calculated both a “Bayesian” version of the ratings that adjusted each team’s rating every offseason based on preseason polls, and a “pure” version that applied purely objective data. We’re continuing to do this, but the “pure” version now receives less emphasis. The poll-adjusted version should be considered the official or flagship6 version of COOPER.
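To make the Duke example above concrete, here’s a stylized sketch of an update along these lines. It borrows numbers that appear later in this piece (the 6-point win bonus, the roughly 28 Elo points per point of scoring margin, and the k-factor of 42), but the way they’re combined is a simplification for intuition rather than COOPER’s production formula: it ignores impact factors and home court, and the logistic win-probability curve and the two team ratings are assumptions.

```python
# Stylized sketch of a COOPER-like post-game update (illustrative only).
ELO_PER_POINT = 28   # ~28 Elo points per point of scoring margin (see below)
WIN_BONUS = 6        # winning is worth ~6 points of margin (see below)
K_FACTOR = 42        # base k-factor (see below)

def win_probability(elo_diff):
    # Standard Elo logistic curve; COOPER's exact curve is an assumption here.
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def rating_change(elo_team, elo_opp, points_for, points_against):
    expected_margin = (elo_team - elo_opp) / ELO_PER_POINT
    actual_margin = points_for - points_against
    # The win bonus is netted against the prior win probability, so it's
    # nearly worthless for a huge favorite that was "supposed" to win anyway.
    won = 1.0 if actual_margin > 0 else 0.0
    bonus = WIN_BONUS * (won - win_probability(elo_team - elo_opp))
    return K_FACTOR * (actual_margin + bonus - expected_margin) / ELO_PER_POINT

# Duke as a ~35-point favorite (a ~980 Elo edge; both ratings hypothetical)
# wins by 1 at the buzzer:
print(rating_change(2100, 1120, 68, 67))  # negative: Duke loses ground
```

Even though Duke won, the result fell so far short of the projected spread that the stylized update docks its rating.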
More about how COOPER works
Just so I don’t get accused of self-plagiarism, note that some of this text is copied from the SBCB methodology page.
COOPER is a profoundly Bayesian model in the sense that ratings are adjusted on an iterative basis as new information becomes available: namely polls at the start of the season, and then game results throughout the season. We don’t go back and revise past COOPER ratings based on information that wasn't available at the time the game was played.
Rating changes after each game reflect a combination of:
The margin of victory or defeat as compared with COOPER’s prior expectations of the “point spread”. Unlike in previous versions of SBCB/Elo, there are no diminishing returns to higher scoring margins. In practice, both NBA and college basketball teams tend to lay off the gas pedal once they already have a big lead, so the differences in the final score can actually understate the intrinsic gap in team quality. A linear formula works perfectly well for predictive purposes.
But we also account for whether a team wins or loses. Winning a game by any margin is essentially equivalent to 6 points of scoring margin. However, this “bonus” is compared against COOPER’s prior for each team’s probability of victory. Therefore, it has little effect if a team was heavily favored because the expectation of a win was already baked into the system. Wins in closely contested matchups — or upsets in games with clear favorites — matter far more.
Home court advantage is factored in. In fact, we calculate a separate home court rating for each team, based on how much it underperforms or exceeds its COOPER projection in home games. Generally speaking, teams that are reputed to have larger home court advantages based on difficult playing conditions or more enthusiastic fan bases actually do. Teams that play at high altitudes often have especially large home court advantages in basketball, as in other sports. These home court ratings move very slowly, taking advantage of data from previous years. (They fully carry over from season to season.) However, having a larger home court advantage isn’t helpful in the NCAA tournament, which is entirely played at neutral sites. Teams like Purdue, whose home court is worth an additional ~2 points of victory margin compared with the NCAA D1 average, may be overrated by other systems that don’t account for this factor.
Travel distance also matters and, for the 2025-26 season, is equal to 2.88 × m^(1/3) Elo rating points, where m is the distance in miles from the visiting team’s campus. For home games, be sure to add the travel distance factor to the team-specific home court rating to calculate a team’s overall advantage (there’s a sketch of this below). But note this is a considerably smaller advantage from travel than SBCB had assumed. We found that SBCB had been overrating home teams in recent years in games where the opponents traveled a long way to play. On the other hand, the effect of travel distance was much larger up through the 1980s. This almost certainly reflects improving travel accommodations and sports medicine and the general professionalization of college sports.

COOPER ratings, like most other Elo systems applied to sports, partly carry over from season to season, with a discount factor applied that reverts the ratings toward the mean in between seasons. To be absurdly specific, a team’s rating is reverted by 28 percent toward the mean at the start of each new season. This is actually less mean-reversion than had been incorporated into SBCB. While it’s true that elite college talent tends to go pro sooner, top-tier programs like Duke or Kansas have plenty of other ways to perpetuate their success by investing more in their programs or through superior recruiting. While the introduction of NIL several years ago was “disruptive” to some degree — for instance, in boosting the basketball profile of the SEC — recent tournament and regular-season results suggest a recalibration toward a new steady state.
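Here’s what the travel adjustment above looks like in code. The home court rating in the example is a made-up number in the Purdue range described earlier (about 2 points of margin, or 2 × 28 = 56 Elo points), and the 2,700-mile trip is hypothetical:

```python
def travel_penalty_elo(miles):
    """Elo-point edge to the home side from the visitor's travel,
    per the 2025-26 formula: 2.88 * miles^(1/3)."""
    return 2.88 * miles ** (1.0 / 3.0)

# Hypothetical example: a visitor flying ~2,700 miles cross-country.
home_court_elo = 56  # team-specific home rating, ~2 points of margin * 28
total_home_edge = home_court_elo + travel_penalty_elo(2700)
print(round(total_home_edge))  # ~96 Elo points, or ~3.4 points on the scoreboard
```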
More precisely, teams in COOPER are reverted toward the mean of other teams in their conference — not toward the global average (with the exception of the few remaining independent teams). When a team changes conferences, the rating change is based on its new conference rather than its old one, as this can indicate where a program fits into the college basketball pecking order. Interconference play, especially in recent NCAA tournaments, is self-evidently important for this purpose. In essence, a team that exceeds expectations in the NCAA Tournament will then redistribute those gains toward the rest of its conference in COOPER’s off-season recalibration process. The default/Bayesian version of COOPER also accounts for preseason polls in its initial ratings to start the season, while the “pure” version does not.
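In code, the off-season step might look something like this minimal sketch. The team ratings and conference assignments are invented, and real COOPER presumably handles independents and conference switches with more care than this:

```python
# Sketch of COOPER's off-season reversion: each team moves 28 percent of the
# way toward its conference's average rating (its NEW conference, if it moved).
REVERSION = 0.28

def offseason_revert(ratings, conference_of):
    # Average end-of-season rating of each conference.
    totals, counts = {}, {}
    for team, elo in ratings.items():
        conf = conference_of[team]
        totals[conf] = totals.get(conf, 0.0) + elo
        counts[conf] = counts.get(conf, 0) + 1
    means = {conf: totals[conf] / counts[conf] for conf in totals}
    return {team: elo + REVERSION * (means[conference_of[team]] - elo)
            for team, elo in ratings.items()}

ratings = {"Duke": 2050, "Wake Forest": 1650, "Gonzaga": 1900, "Portland": 1450}
confs = {"Duke": "ACC", "Wake Forest": "ACC", "Gonzaga": "WCC", "Portland": "WCC"}
print(offseason_revert(ratings, confs))
# Duke reverts toward the ACC mean (1850), landing at 1994 -- not toward 1500.
```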
However, this introduces various complications because the polls only provide a truncated list: that is, only 25 teams are ranked.7 The process for imputing human ratings quite literally applies Bayes’ Theorem in the sense that it relies on a prior about how likely a team is to be ranked. For instance, a team with a 2000 Elo rating would typically expect to be ranked somewhere in the top 25 in the next preseason poll — so if it isn’t ranked, that provides a lot of information that its performance is expected to decline, usually because of a loss of key talent. However, a team that ended the previous season with a mediocre rating would rarely expect to be ranked, so this tells us little. Teams ranked #1 overall specifically receive special treatment to ensure that truly dominant clubs like the late-60s/early-70s UCLA Bruins are not punished.
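The logic is easier to see with toy numbers. Here’s a two-hypothesis version of that Bayes’ Theorem step; every probability below is made up for illustration, not a fitted COOPER value:

```python
# Two hypotheses for a team that finished last season at 2000 Elo:
# it kept its core, or it lost key talent over the off-season.
p_kept = 0.80                # prior: elite teams usually stay strong
p_ranked_if_kept = 0.95      # an intact 2000-Elo team is almost always ranked
p_ranked_if_lost = 0.30      # a gutted roster might still draw some votes

# Observation: the team is NOT in the preseason top 25.
p_unranked = (1 - p_ranked_if_kept) * p_kept + (1 - p_ranked_if_lost) * (1 - p_kept)
p_kept_given_unranked = (1 - p_ranked_if_kept) * p_kept / p_unranked
print(round(p_kept_given_unranked, 2))  # ~0.22: being unranked is strong evidence

# For a team that finished at 1600 Elo, being unranked is expected under
# either hypothesis, so the same observation barely moves the posterior.
```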
I haven’t yet mentioned what is perhaps the most important parameter in any Elo-derived system, which is called the k-factor. This governs how much the ratings update after each game. A higher k-factor implies more sensitivity to recent play but also more volatility. Statistically speaking, the goal is generally to minimize autocorrelation. That is, you want to avoid both a too-high k-factor, where ratings zig-zag around (i.e., teams usually decline after gaining and vice versa), and a too-low one, where a team with a recent ratings gain can predictably be expected to follow that up with further gains because the system is too slow to account for what soccer fans call a change in “form”. Specifically, we use a k-factor of 42; this number has no intrinsic meaning and is derived empirically. Generally speaking, COOPER ratings are more aggressive than other college basketball systems about accounting for recent play and tend to ride a winning hand while discounting ratings for teams that have been on a downward trajectory.8

However, the k-factor is up to 2x higher (so, it goes up to 84) for early-season games, diminishing until a team plays roughly the 15th game of its season. The intuition behind this is that early-season games reveal a lot of information as compared to COOPER’s crude preseason estimates. By the middle of the season, conversely, teams mostly are “what we thought they were” and each subsequent game tells us less.
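As a sketch, the early-season schedule might look like the following. The decline from 84 down to 42 over a team’s first 15 games comes from the description above; the linear shape of the decay is my assumption, since the exact curve isn’t specified here:

```python
BASE_K = 42

def k_factor(games_played):
    """Sketch of COOPER's early-season k schedule: double the base k-factor
    in a team's first game, declining to the base by roughly game 15.
    The linear decay is an assumption; COOPER's actual shape may differ."""
    if games_played >= 15:
        return BASE_K
    return BASE_K * (2.0 - games_played / 15.0)

print(k_factor(0), k_factor(7), k_factor(15))  # 84.0, ~64.4, 42
```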
New this year, each game receives an impact factor. Basically, games between teams that are closely matched in COOPER matter more. We’ve also found that what we call “high-stakes” games — namely, conference games and NCAA tournament games — tend to provide more signal, and these are also weighted more heavily. For NCAA tournament games specifically, there’s also a hard-coded boost to the impact rating. We’ve found that success early in the NCAA tournament — i.e. if a team blows out tough opponents in the first two rounds — tends to predict success for the rest of the tournament.
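Here’s a rough sketch of how an impact factor along these lines could be computed, consistent with the inverse relationship to the projected standard deviation described in the footnotes. Every constant below is an illustrative guess, not a fitted COOPER parameter:

```python
def projected_sd(elo_gap, high_stakes):
    # Baseline ~10-point standard deviation, widening for mismatches and
    # for low-stakes (non-conference, non-tournament) games. Guessed values.
    sd = 10.0 + 0.004 * abs(elo_gap)
    if not high_stakes:
        sd *= 1.15
    return sd

def impact_factor(elo_gap, high_stakes, ncaa_tournament=False):
    impact = 10.0 / projected_sd(elo_gap, high_stakes)  # inverse of projected SD
    if ncaa_tournament:
        impact *= 1.25  # hard-coded tournament boost (illustrative size)
    return impact

print(impact_factor(0, high_stakes=True, ncaa_tournament=True))  # ~1.25
print(impact_factor(600, high_stakes=False))                     # ~0.70
```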
For calculating margins of victory and net ratings, one point in a basketball game equals approximately 28 Elo points. Thus, a team with a 100-point Elo advantage, after accounting for home court and travel distance, would be roughly a 3 or 4-point favorite. However, newly for COOPER, this exchange rate varies slightly based on whether COOPER projects the game to be high-scoring or low-scoring.

Also new this year, COOPER calculates a rolling “pace” factor for each team. This is a slight misnomer because, in basketball analytics, “pace” generally refers to the number of possessions in a game. For COOPER, because possession-by-possession data is unavailable for all but recent seasons, it instead reflects the overall number of points scored by both schools in games involving the team. In addition, we calculate a Bayesian expectation of the NCAA average points per game: 2025-26 has been a particularly high-scoring season, for instance. This leaguewide rating is designed to adjust more aggressively at the start of each season; changes in rules or style are often evident relatively quickly.
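In code, the exchange rate works roughly like this. Letting the conversion scale with a pace ratio is a simplifying assumption on my part, since the pace adjustment is described here only qualitatively:

```python
ELO_PER_POINT = 28  # baseline exchange rate

def projected_spread(elo_diff, pace_ratio=1.0):
    """Convert an Elo edge (after home court and travel) into a point spread.
    `pace_ratio` is the projected game total relative to the NCAA average;
    scaling the exchange rate by it is an assumption, not COOPER's formula."""
    return elo_diff / ELO_PER_POINT * pace_ratio

print(projected_spread(100))             # ~3.6 points at an average pace
print(projected_spread(100, 150 / 140))  # a bit more in a high-scoring matchup
```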
The combination of a team’s net rating and its pace rating essentially allows us to back into an offensive and defensive rating for each school.9 Higher-scoring games tend to introduce more variability. Both empirically and theoretically, a marginal point matters more in an environment where points are scarce. For instance, an uptempo team that projects to beat an average opponent by a score of 90-80 will win less often than a downtempo team that projects to win 65-55. Conversely, teams with offense-oriented mindsets tend to pull off a few more upsets when they’re underdogs, such as by winning the 3-point shooting battle.
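The algebra for backing into the two ratings is simple enough to show directly. This minimal sketch mirrors the worked example in the footnotes, with PPPG and PPAG as defined earlier:

```python
def off_def_ratings(net_rating, pace):
    """Back into PPPG/PPAG from a team's net rating and its pace rating
    (projected combined points vs. an average opponent): the two scores
    must sum to `pace` and differ by `net_rating`."""
    pppg = (pace + net_rating) / 2.0  # projected points per game
    ppag = (pace - net_rating) / 2.0  # projected points allowed per game
    return pppg, ppag

print(off_def_ratings(10, 150))  # (80.0, 70.0), matching the footnote example
```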
Teams that are new to Division I begin with a rating of 1300 at the start of their first D1 season, adjusted slightly upward or downward based on the strength of their new conference. That is to say, they are usually considerably below average, since the average Elo rating is 1500. Preexisting D1 teams’ ratings are adjusted slightly upward such that the global average remains at 1500 when new teams join.

Our database contains many games between Division I and Division II teams, especially in recent seasons. However, rather than calculating a rating for individual D2 teams, we instead lump all D2 teams together into a single “divtwo” rating. Essentially, this makes them the equivalent of the Washington Generals, barnstorming around and usually getting obliterated.10 Overall, D2 teams are patsies, with D1 teams winning upwards of 99 percent of the time at home against D2 opponents in recent years.
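A minimal sketch of the bookkeeping when a new team joins, under the description above: the newcomer starts at 1300 (a conference-strength nudge is left as an illustrative parameter), and preexisting teams get bumped so the global mean stays at 1500. The toy league is invented:

```python
def add_new_team(ratings, name, conf_strength_elo=0.0):
    new_rating = 1300 + conf_strength_elo  # conference nudge: illustrative
    n_total = len(ratings) + 1
    # Bump preexisting teams so the global mean stays at 1500.
    shortfall = 1500 * n_total - (sum(ratings.values()) + new_rating)
    bump = shortfall / len(ratings)
    updated = {team: elo + bump for team, elo in ratings.items()}
    updated[name] = new_rating
    return updated

league = {f"Team {i}": 1500.0 for i in range(9)}  # toy league averaging 1500
league = add_new_team(league, "New Team")
print(league["New Team"])                               # 1300.0
print(round(sum(league.values()) / len(league), 6))     # 1500.0
```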
We’ll update this document if we catch any bugs or make any changes. Thank you for being a subscriber to Silver Bulletin and for your interest in COOPER!
1. As of launch on March 10, only the men’s version is ready, but we’re at work on revising women’s SBCB into women’s COOPER.

2. You should think of all of these ratings as part and parcel of COOPER, with different expressions of COOPER being more useful depending on your purposes. Elo is the best expression of a team’s probability of winning future games, while net rating reflects its projected margins of victory.

3. As a result, COOPER correctly predicts the winner in about 1 percent more games than SBCB did. If you’ve ever bet sports for a living, you’ll know that’s a big deal.

4. Essentially, COOPER tacks on 6 points to the final margin for the winner. So a 67-64 win would be treated as tantamount to 70-61 instead. The reason this is 6 points, as opposed to some other number, is just because that’s what produces the best predictions empirically.

5. More precisely, COOPER projects both a mean score and a standard deviation for each game (for instance: Duke wins by 7, give or take 10 points). The standard deviation is a function of the difference in Elo ratings before the game: it tends to rise as the quality difference increases, which may reflect the fact that what happens in the second half of blowout games doesn’t matter much. It’s also higher in what we call “low-stakes” games, meaning non-conference games outside of the NCAA tournament. The impact score for each game is inversely proportional to the projected standard deviation.

6. Or maybe I should say “Flaggship”? I’ll see myself out.

7. Although we also account for teams in the “also receiving votes” category of the preseason rankings.

8. SBCB used a k-factor of 38, so this is slightly higher under COOPER.

9. For instance, if a team is projected to win games by an average of 10 points based on its net rating, and its pace rating (the expected combined number of points between a team and its opponents) is 150, simple algebra implies that it will win 80-70 against an average opponent.

10. A separate “divtwohome” running tally is calculated for D2 teams that host home games, as opposed to playing on the road or at neutral sites — but this has become rare, as D1 teams generally don’t want to decline an opportunity to sell tickets.