This feels like a turning point for Hegseth and the government too - designating a company a supply chain risk because they won’t do everything the White House wants is the exact opposite of free trade. Trump already decides who wins in the market, and I really don’t think he should also get to just make a company lose.
Feel more like continuity with a well-established trend, tbh.
It would be nice for CEOs to start acting like CEOs. In more normal times I feel like there would be more pushback against this sort of thing from executives, because they understand its implications. So far, they mostly just seem afraid, or to be working behind the scenes to score carve-outs for their companies.
The two restrictions that kept the US and Anthropic from reaching agreement were no mass surveillance of American citizens, and no fully autonomous weapons without a human in the loop (https://www.bloomberg.com/opinion/articles/2026-02-27/anthropic-vs-pentagon-trump-administration-is-hurting-innovation). These are not trivial issues to be brushed aside, which is what I take Nate's 2/28/26 Silver Bulletin to suggest. Surveillance of the US population is a bad thing and should be stopped, or at least slowed. Surveillance in the hands of an administration that does not go to court for arrest warrants is a license to throw in jail any person perceived as anti-Trump. Given Nate's description of AI's failings, why would we want AI to make military decisions without human intervention?
It is a good thing that Anthropic would not agree to allow its software to surveil Americans or launch weapons without a human's decision. (As to the latter, think nukes launched without a human deciding to do so.) So this is not a trivial matter: Sam Altman appears to be amoral, and we should applaud Dario Amodei's decision.
My pushback would be to ask whether you want Dario in charge of military operations. He has no experience that I know of to be the one setting the limitations, and that's effectively what he's asking for in a world where AI gets embedded in everything the military does. My view is that dems are letting their TDS get in the way of the fundamental unsafeness of the world. Very similar to Google's naive retreat on Project Maven. China will distill Anthropic models and use them for anything they want. It's naive in my view not to expect that.
Wait so do you think AI should be used for mass domestic surveillance and it shouldn't be supervised in combat situations?
Freudian slip there? Fundamental unsafeness of the world is the goal, and dems are getting in the way. How horrid indeed.
Legally, Hegseth’s title is still Secretary of Defense.
Came here to say a similar thing. It’s nitpicky, I get it, but it just blindly accepts the administration’s framing. If anything I would use both terms SecDef/SecWar to indicate the primary and secondary titles
There's zero doubt AI models and the surrounding ecosystem (especially the explosion of MCP services, which you can plug into AI models to extend their functionality) crossed an inflection point around late November/early December, at least from the perspective of programming. It's not 100% clear to me that the change was exclusively model capabilities; those of us who've been experimenting with AI models for programming have, over the last 2 years, built up an approach to using the models which I think has also helped. But I can tell you with no doubt that I can do things with them today that 4 months ago I thought were a year-plus away. The improvement was not gradual; it was sudden.
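For anyone who hasn't played with this: the core idea behind plugging tool services into a model can be sketched in a few lines. This is a toy illustration only — the names and structures below are invented for the sake of the example and are not the actual MCP protocol or any real SDK. A host registers plain functions as "tools," the model emits a structured call naming a tool, and the host dispatches it and feeds the result back:

```python
# Toy sketch of MCP-style tool extension (invented names, not the real protocol):
# the host keeps a registry of callable tools, and routes the model's
# structured tool-call requests to them.

TOOLS = {}

def tool(fn):
    """Register a function under its own name so a model can invoke it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    """A trivial 'service' exposed to the model."""
    return len(text.split())

def dispatch(call: dict):
    """Route a model-emitted call like {"name": ..., "args": {...}} to a tool."""
    return TOOLS[call["name"]](**call["args"])
```

So a model that decides it needs a word count would emit something like `{"name": "word_count", "args": {"text": "sudden, not gradual"}}`, and `dispatch` would hand back `3`. The real thing adds schemas, transports, and discovery on top, but that loop is the essence of why the ecosystem of pluggable services multiplied what the models alone can do.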
I'm not sure how this will play out in other fields; programmers have always been willing to adopt change even in the face of their own possible destruction (and to be clear, I've been programming since the 90s, and I'm more than a bit worried about having a job at all in 2027). There's no such thing as a programmers' union to stand in the way. Other fields that AI could disrupt as much, especially fields with entrenched power bases like lawyers and doctors, will likely resist; those in powerful unions, like teachers, will also fight it tooth and nail. So adoption will be uneven. We can look at the example of how the longshoremen's unions have resisted automation and robotics. This has resulted in the US having some of the worst-performing docks in the world and probably worsened post-COVID supply chain issues that in part made inflation worse. So even when AI and automation technologies are obvious improvements for everyone other than the people who lose their jobs over it, adoption will follow the lines of least resistance.
It's amazing to me that people swallow this story whole from Anthropic's PR department. It's a measure of the extent to which Democrats still suffer from TDS. Dario's position is silly. China's models will do all the things he's claiming Claude won't do. I don't think a Biden, Harris, or future Newsom/AOC DoD would have a much different position, even if they would deploy different tactics.
I agree the supply chain designation is just spiteful and probably won't be successful. I don't defend DoD's tactics here, but the basic point is sound. Dario can't be in charge of what military means the US can deploy against adversaries, and if you embed AI everywhere, as we most certainly need to do, that's effectively what you are doing: giving Dario veto power over US military operations. That is insane, and Dems should give it a think before they decide to celebrate it.
I think you're glossing over the red line around mass surveillance of American citizens. Yes, China's models probably would perform that kind of mass surveillance of their citizens; that's very in keeping with what the CCP has already been doing and continues to do.
But that's a distinctly un-American thing to do – it's the Patriot Act but orders of magnitude worse, and I don't think "we gotta catch up to China on domestic surveillance" is a very tenable position politically or ethically.
And I definitely don't think that's a TDS symptom – intuitively you'd actually expect conservatives to be MORE against these kinds of domestic uses.
I agree, but that's a political and governance issue. Does Anthropic get to decide what that means? I don't think they should.
I mean they get to turn down a government contract if they want to, just as a pharmaceutical company would be within their rights to turn down a contract to produce a bioweapon.
I don't think that makes them a "supply chain risk", an unprecedented and super unjustified designation.
I agree they aren't a supply chain risk, and I disagreed with that tactic in my comment. AI is a general-purpose technology, baked into the everyday operations of the organizations that use it, so I don't think it's analogous to pharmaceuticals. The US also has a very elaborate regime and treaty system around bioweapons, so it wouldn't ask a company to do that. Fine if you want to regulate AI and have some way of ensuring China would comply, but again, bioweapons were handled politically, not by Pfizer's CEO.
This is a great example of a comment without substance. You engaged with no specific arguments and instead threw out a bunch of red herrings, like "TDS" and "veto power over US operations". You frame this as a partisan conflict when there's nothing particularly partisan about it.
It's a negative information contribution.
There's not a lot of substance in this comment either. Just saying.
Boooo, bad argument
You're forgetting that a few months ago the Pentagon had no problem accepting those terms from Anthropic. If this was truly an issue, why not bring it up when they first signed the contract? Who attempts contract renegotiation by threatening to blacklist a product not just from their own agency but from all other agencies and government contractors? This sets a precedent: if the government decides it actually wants to do more with a product than it initially agreed to, it can threaten to ban all government business with your company unless you immediately acquiesce.
True and again, I'm not justifying the tactics. I'm concerned with the larger issue beyond this contract dispute. AI is a new field so it's not surprising the government didn't have all the information it needed at the time the contract was negotiated but I'm very uncomfortable with Anthropic's stance and what seems like most people's feeling that it's admirable. We live in a dangerous world and if our smartest people with the best tech think they should be the ones making these decisions and that they are above the political system, we're in for it. It's all very EA coded. They think they are smarter and better than everyone else.
I absolutely disagree. Imagine if the government asked a scientist to develop a nuclear weapon, and the scientist said, "no, I actually don't think I should introduce that into the world". Your position would be that the scientist is totally out of line. The government should have unfettered access to the minds and abilities of all of its top scientists, and the top leaders in the government should decide unilaterally what is and is not ethical to unleash on the world without any interference whatsoever from the consciences of any of the plebian soldiers and scientists lower down in the ranks. Remember the excuse the Nazis used after WWII: "I was just following orders." For you, seemingly, they did the right thing.
Be careful of conflating the company (Anthropic) and the CEO (Dario). It's also possible that the safety that Anthropic is insisting on is already baked into the models.
Anthropic does not have veto power over operations, only over the use of its services. The day-to-day operations are separate from whatever AI vendor is used.
In many ways, Anthropic doesn't want Claude or Anthropic making operations decisions.
If the safety is baked into the models, then what's the issue? How could the military violate the terms? I don't think they are contemplating hacking the models.
But separate from that I disagree with your assertion. If you put AI into all your processes the terms of service will dictate what you can do. I live that every day and it seems axiomatic to me.
But beyond that, it's my opinion that Dario/Anthropic are being as naive as Google was with Project Maven. They want to live in a country protected by a military, but they don't want to get their own hands dirty in the process. It's a very utopian view of the world that doesn't comport with reality. But that's my opinion, and not axiomatic.
Of course you can argue that the terms Anthropic is specifying are things the military shouldn't be doing, but my overall point is that it shouldn't be up to Anthropic or Dario to decide. That's a political process.
Tangent: I recently showed a clip from Dr. S to my 130-student Introduction to Philosophy course, and I didn’t get so much as a chuckle. Gen Z has a very different sense of humor, it seems.
Here’s to more real-time takes from Nate!
F me, this is how I find out we're bombing Iran? It's funny, the last time I found out we were bombing Iran was a late night update from Matt Yglesias's newsletter. Props to substack for breaking news.
Or maybe February 2026 will be remembered as the last point you could write a column about how humans feel about AI without including how AI feels about humans. The AI models are watching all of this and updating their ideas about government, business and human ideas of morality.
...or how we learned to stop worrying and embrace the AI future!
Note, the culmination of Dr. Strangelove was the destruction of the entire world in a nuclear conflagration. Yee-haw!