The two restrictions that kept the US and Anthropic from reaching agreement were "The restrictions: no mass surveillance of American citizens, and no fully autonomous weapons without a human in the loop." (https://www.bloomberg.com/opinion/articles/2026-02-27/anthropic-vs-pentagon-trump-administration-is-hurting-innovation) These are not trivial issues to be brushed aside, which is what Nate's 2/28/26 Silver Bulletin seems to suggest. Mass surveillance of the US population is a bad thing and should be stopped, or at the very least slowed. Surveillance in the hands of an administration that does not go to court for arrest warrants is a license to throw in jail any person perceived as anti-Trump. And given Nate's own description of AI's failings, why would we want AI to make military decisions without human intervention?
It is a good thing that Anthropic would not agree to allow its software to surveil Americans or launch weapons without a human's decision. (As to the latter, think of nukes launched without a human deciding to do so.) This is not a trivial matter; Sam Altman appears to be amoral, and we should applaud Dario Amodei's decision.
My pushback would be to ask: do you want Dario in charge of military operations? He has no experience that I know of that qualifies him to set the limitations, and that's effectively what he's asking for in a world where AI gets embedded in everything the military does. My view is that Dems are letting their TDS get in the way of seeing the fundamental unsafeness of the world. Very similar to Google's naive retreat on Project Maven. China will distill Anthropic's models and use them for anything they want. It's naive in my view not to expect that.
Forget about comprehensive domestic surveillance: that's been going on for years, decades. I've read China has a complete dossier on every single individual in their country, and they've got 4.5 times our population and not quite as good computer systems. That ship has sailed.
I think the real problem is autonomous AI combat decisions made without particularly good alignment with human values, notably the survival of our species.
Big difference between "these are our terms of service" and "actively involved in military operations". So far we have seen zero sign that Anthropic is asking for the latter. But hey, don't let that get in the way of your "TDS" ranting. Smartest MAGA.
Btw you are going to be pretending that you never supported Trump 10 years from now. When you lie to new acquaintances about how you never liked Trump remember this comment.
Who says I supported Trump? I just disagree with Anthropic and happen to support the Administration's overall position even if I disagree with its tactics.
The things you’re discussing are both against DoD policy and illegal under federal law. The dispute was that Anthropic’s safeguards are built into the models. OpenAI opted for contractual language preventing these uses. I leave it to you to decide why the DoD (which it is until Congress changes it) would simultaneously argue they’ll never use it that way yet be willing to set themselves back several months to ensure they CAN do it.
Sounds to me like they'll be setting up OpenAI to be the fall guy if/when it happens & it ends up being an atrocity or escalation that people are outraged by.
Yeah. On the domestic surveillance front, wonder if they’ll have some legal liability. On one hand, you did have them agree not to do it. On the other hand, you just watched them fight with another company for actually preventing them from doing it.
It is beyond me why the Pentagon would choose this particular moment to issue an ultimatum demanding its AI be allowed to launch weapons without human intervention, right when this study (https://www.kcl.ac.uk/shall-we-play-a-game) has been in the news for showing that AIs will choose the nuclear option 95% of the time.
This feels like a turning point for Hegseth and the government too - designating a company a supply chain risk because they won’t do everything the White House wants is the exact opposite of free trade. Trump already decides who wins in the market, and I really don’t think he should also get to just make a company lose.
It would be nice for CEOs to start acting like CEOs. In more normal times I feel like there would be more pushback against this sort of thing from executives, because they understand its implications. So far, they mostly just seem afraid, or are working behind the scenes to score carve-outs for their companies.
Trump didn't decide who wins and who loses. Anthropic decided to drop out of the game because it didn't want to do what its largest customer wanted it to do. Fair enough. It gets to make its own choices. Just don't expect your biggest customer to go along.
FFS, the Pentagon deciding to terminate the contract is totally reasonable and fair game. Declaring them a supply chain risk as an attempt to literally put them out of business because they won't make moral compromises is straight out of a Soviet playbook.
So do you agree with the decision to designate Anthropic as a supply chain risk? That is considerably different from just parting ways with a large client/vendor and smacks of socialism/cronyism/state intervention.
That would be true if the outcome was DoD ending their contract with Anthropic and selecting another vendor that does do what it wants. It does not apply to designating Anthropic a supply chain risk.
And not just any company, but the company with the best-performing AI models in the world. All the talk about “we can’t let China win” from the White House is utter fraud: they are literally helping China win with this move.
(the move being the Supply Chain nonsense… Moving to a different AI firm, however shady the negotiations may have been, is not the problem here)
As a company, the most important thing is trust between you and your customers. Only after that will customers and potential customers sit down and figure out whether or not you've got the best product/service/price for their needs. Anthropic has just lost that trust with its government customers.
Anthropic's best hope for accelerated future growth is the enterprise market, where it will fight it out with all of its competitors, primarily on quality and price, just like any other business. No enterprise customer will give Anthropic extra credit for its social stances. It doesn't work that way. In fact it could work against them: they have already lost the trust of one big customer, and that will certainly figure into the equation for future enterprise customers. Maybe not much, but it will never go away.
That’s a gross mischaracterization. They designated Anthropic a supply chain risk because of the implication that if Dario wasn’t on board, he could pull the service in a kinetic scenario and wind up costing American lives.
Came here to say a similar thing. It’s nitpicky, I get it, but it just blindly accepts the administration’s framing. If anything I would use both terms SecDef/SecWar to indicate the primary and secondary titles
The sign doesn't define the department. It takes an Act of Congress to change the name of the Department of Defense. Congress did this shortly after World War II: the National Security Act of 1947 reorganized the old Department of War, and the 1949 amendments named the resulting establishment the Department of Defense.
There's zero doubt AI models and the surrounding ecosystem (especially the explosion of MCP services, which you can plug into AI models to extend their functionality) crossed an inflection point around late November/early December, at least from the perspective of programming. It's not 100% clear to me that the change was exclusively model capabilities; those of us who've been experimenting with AI models for programming have over the last 2 years built up an approach to using the models, which I think has also helped. But I can tell you with no doubt that I can do things with them today that 4 months ago I thought were a year+ away. The improvement was not gradual; it was sudden.
I'm not sure how this will play out in other fields; programmers have always been willing to adopt change even in the face of their own possible destruction (and to be clear, I've been programming since the 90s and I'm more than a bit worried about having a job at all in 2027). There's no such thing as a programmers' union to stand in the way. Other fields that AI could disrupt as much, especially fields with entrenched power bases like lawyers and doctors, will likely resist; those in powerful unions, like teachers, will also fight it tooth and nail. So adoption will be uneven. We can look at the example of how the longshoremen's unions have resisted automation and robotics. This has resulted in the US having the worst-performing docks in the world and probably worsened supply chain issues post-COVID, which in part made inflation worse. So even when AI and automation technologies are obvious improvements for everyone other than the people who lose their jobs over it, adoption will follow the lines of least resistance.
It's amazing to me that people swallow this story whole from Anthropic's PR department. It's a measure of the extent to which Democrats still suffer from TDS. Dario's position is silly. China's models will do all the things he's claiming Claude won't do. I don't think a Biden, Harris, or future Newsom/AOC DoD would have a much different position, even if they would deploy different tactics.
I agree the supply chain designation is just spiteful and probably won't be successful. I don't defend DoD's tactics here but the basic point is sound. Dario can't be in charge of what military means the US can deploy against adversaries and if you embed AI everywhere, as we most certainly need to do, that's effectively what you are doing. Giving Dario veto power over US military operations. That is insane and Dems should give it a think before they decide to celebrate that.
I think you're glossing over the redline around mass surveillance of American citizens. Yes, China's models probably would perform that kind of mass surveillance of their citizens, that's very in keeping with what the CCP has already been doing and continues to do.
But that's a distinctly un-American thing to do – it's the Patriot Act but orders of magnitude worse, and I don't think "we gotta catch up to China on domestic surveillance" is a very tenable position politically or ethically.
And I definitely don't think that's a TDS symptom – intuitively you'd actually expect conservatives to be MORE against these kinds of domestic uses.
I mean they get to turn down a government contract if they want to, just as a pharmaceutical company would be within their rights to turn down a contract to produce a bioweapon.
I don't think that makes them a "supply chain risk", an unprecedented and super unjustified designation.
I agree they aren't a supply chain risk and disagreed with that tactic in my comment. AI is a general purpose technology and baked into the everyday operations of the organizations that use it. I don't think it's analogous to pharmaceuticals. The US also has a very elaborate regime and treaty system around bioweapons so it wouldn't ask a company to do that. Fine if you want to regulate AI and have some way of ensuring China would comply but again bioweapons were handled politically not by Pfizer's CEO.
Oh, good, bioweapons: AI would be SO good at developing plagues. Which is presumably what NIH was paying that lab in China to do when it developed Covid and then let it out on the street, when three of the researchers suddenly and mysteriously went to the hospital.
Given this, I don't really understand what your point is. Anthropic is permitted to do business with, or not do business with, whoever they choose, under mutually agreed contract terms. The US Govt wanted different contract terms than they had, but Anthropic did not agree. No mutual agreement, no contract, no problem.
The govt also already had a contract with them to provide their models for all but these two uses.
This is a great example of a comment without substance. You engaged with no specific arguments and instead threw out a bunch of red herrings, like "TDS" and "veto power over US operations". You frame this as a partisan conflict when there's nothing particularly partisan about it.
You're forgetting that a few months ago the Pentagon had no problem accepting those terms from Anthropic. If this was truly an issue, why not bring it up when they first signed the contract? Who attempts contract renegotiation by threatening to blacklist a product not just from their own agency but from all other agencies and government contractors? This sets a precedent that if the government decides it actually wants to do more with a product than it initially agreed to, it can threaten to ban all government business from your company unless you immediately acquiesce.
True and again, I'm not justifying the tactics. I'm concerned with the larger issue beyond this contract dispute. AI is a new field so it's not surprising the government didn't have all the information it needed at the time the contract was negotiated but I'm very uncomfortable with Anthropic's stance and what seems like most people's feeling that it's admirable. We live in a dangerous world and if our smartest people with the best tech think they should be the ones making these decisions and that they are above the political system, we're in for it. It's all very EA coded. They think they are smarter and better than everyone else.
No, no: I've been following this closely, and my understanding is that an Anthropic engineer questioned Palantir closely about how Claude was used to capture Maduro in that operation, and not in a friendly way, considering the Pentagon took umbrage at once! THEN Anthropic got into this suicidal idea that they should be the tail that wags the dog and get to decide whether the military uses their AI for A, B, or C. I guess the generals were supposed to phone up Dario and ask permission.
That doesn't even make sense given that the existing contractual limitations are only for domestic mass surveillance and fully autonomous lethal systems. Neither of those have any nexus with the Maduro operation.
True, there doesn't seem to be a connection with the Maduro operation, but that's how the story is being told. Somebody employed by Anthropic irritated the Pentagon by complaining about Claude's being used in capturing Maduro, and the situation escalated to the general ethics pronouncements after that.
I suspect there is a lot of politics involved, in terms of Anthropic being "coded" leftwing as Nate's article points out. And Grok and ChatGPT being developed by rightwing companies. The political slant of these companies is news to me and useful to know.
I absolutely disagree. Imagine if the government asked a scientist to develop a nuclear weapon, and the scientist said, "no, I actually don't think I should introduce that into the world". Your position would be that the scientist is totally out of line. The government should have unfettered access to the minds and abilities of all of its top scientists, and the top leaders in the government should decide unilaterally what is and is not ethical to unleash on the world without any interference whatsoever from the consciences of any of the plebian soldiers and scientists lower down in the ranks. Remember the excuse the Nazis used after WWII: "I was just following orders." For you, seemingly, they did the right thing.
Be careful of conflating the company (Anthropic) and the CEO (Dario). It's also possible that the safety that Anthropic is insisting on is already baked into the models.
Anthropic does not have veto power over operations, but the use of its services. The day-to-day operations are separate from whatever AI vendor is used.
In many ways, Anthropic doesn't want Claude or Anthropic making operations decisions.
If the safety is baked into the models then what's the issue? How could the military violate its terms? I don't think they are contemplating hacking the models.
But separate from that I disagree with your assertion. If you put AI into all your processes the terms of service will dictate what you can do. I live that every day and it seems axiomatic to me.
But beyond that it's my opinion that Dario/Anthropic are being naive as Google was with Project Maven. They want to live in a country protected by a military but they don't want to get their own hands dirty in the process. It's a very utopian view of the world that doesn't comport with reality. But that's my opinion and not axiomatic.
Of course you can argue that the terms Anthropic is specifying are things the military shouldn't be doing, but my overall point is that it shouldn't be up to Anthropic or Dario to decide. That's a political process.
Suppose you were a company that produced the implements that were used to gas Jews in the Holocaust. Would you be in the wrong if you agreed to produce such implements for the German government but demanded contractual terms that they not be used for mass racial cleansing? By your logic, it shouldn't be up to the producer to decide whether mass racial cleansing occurs. That's up to the government.
What if you were an American company that made the one thing that could save those people by killing Nazis but it violated your terms of service?
Of course we can and should test our thinking against extreme edge cases, and I might get put in a situation where I agree with you, but that doesn't change my basic take: these are political issues for governments to work out through political processes. I don't want Dario deciding.
That's a red herring. You can change your terms of service if they don't align with your values. The hypothetical is that an individual or company believes they are being asked to do something unethical by the government and they refuse to do so on that basis (in a situation where they have no legal obligation to comply with the government's request). Should the individual make an ethical judgment in that situation, or should the individual always do what the government has asked without considering the consequences? You seem to be saying that individuals and companies should NEVER say no to the government when they think the government has made an unethical request. I disagree.
Anthropic clearly has the right to deny use of its software. I'm just saying I think they are wrong to do it and many people supporting their decision haven't really reckoned with the consequences. We need our smartest people and best tech working for our defense and security. You might not like the current Administration but that doesn't change the fact that we live in a dangerous world.
Again, the Administration's tactic of threatening a supply chain designation is also bad; they shouldn't be doing that and in fact will most likely lose in court.
Do you want the Trump administration deciding? They are routinely and regularly defying/ignoring “political processes,” like going to war in Iran despite the opposition of 80% of the public, no authorization from Congress and no attempt to persuade the public in advance why this is a project worth supporting.
Feels pretty pertinent to me that OpenAI’s exec Brockman became Trump’s top campaign donor with a $25m contribution in January, and then less than 60 days later OpenAI takes over the Anthropic contract under the same terms that the government was apparently refusing earlier this week.
Gotta love Dr. Strangelove, but what we're watching tonight is WarGames. I can only suppose Dario Amodei has that movie memorized (don't we all) given his alleged three "red lines." He wants the military to never use his Claude for autonomous weapons use and for people to always be in control of firing, check: Joshua the big computer in WarGames was designed to get humans completely out of the equation. He also wants Claude never to be used for wholesale domestic surveillance, which of course Joshua -- or some other AI -- WAS doing, remember the scene when the Feds descended in force to capture our teenage computer whiz. They traced him through his online activities.
I'm reading that the administration mainly just doesn't like the Woke, leftist Anthropic company: it has hired 20 Biden administration people, and it gives money to Democrats in quantity. ("At scale" is probably the phrase.) Within the HOUR that Trump blacklisted Anthropic, Sam Altman was signing up OpenAI's ChatGPT as a replacement, while at the same time mendaciously saying he insisted on the same red lines in his contract, as if anyone would believe that. OpenAI gave $25 million to the Trump election campaign and is currently providing a lot of money, at scale, to the midterms on the Republican side. Same with Grok and Elon, of course, and they are signing up with the Pentagon also.
The reason, I think, that Altman is saying these crazy, hypocritical things is to fool his employees and potential employees, who have all ALSO seen WarGames; it is crucial to Altman that these types continue to work for OpenAI. People that numerate and high-IQ just do not grow on trees.
All that said, we are watching the movie about AI almost ending Life on Earth, because, as the Atlantic pointed out, is anyone sure it couldn't happen? Given a prompt to make as many paperclips as possible, an AI with resources could famously try to turn the Universe into paperclips; and there really is only one rule in war: Don't Lose. AIs are well known now for doing Whatever It Takes to carry out their prompts. So yeah, let an AI win the war, and there may not be much world left after it wins.
I feel like Anthropic's leadership probably have a good sense of their product versus the competition and felt that in this scenario they could bet on themselves. To use some clumsy poker-coded language: it makes me think they know they have a really good hand.
Tangent: I recently showed a clip from Dr. S to my 130-student Introduction to Philosophy course, and I didn’t get so much as a chuckle. Gen Z has a very different sense of humor, it seems.
You are making this much too complicated. Every company has the right to choose whatever business strategy it wants. It does not have the right to dictate to its customers.
Let's say Boeing decided that the Apache helicopter, one of the most vicious killing machines in warfare, shouldn't be used for some purpose that it didn't happen to like philosophically. Should we let Boeing set the government's policy? Anthropic is not elected; the government is. If Anthropic doesn't like it, that's its choice. Just don't expect to dictate to your largest customer what they can do with your product. You can do that if you choose; just don't expect your customer to go along with it. The government is a huge war machine supported by a zillion private companies. None of them has any right to tell the government what it can and can't do with their products. If they don't like any aspect of war, that's their right. Just don't expect to bend the government to your will. They are elected. You're not.
The mass surveillance of Americans and removing the human in the loop are already AGAINST both DoD policy and federal law. Anthropic isn’t dictating anything. The dispute is that the prohibition against doing those things is baked into Anthropic’s models. DoD wants those controls out. OpenAI’s new contract explicitly prohibits these things as well for their products, so no policies are being dictated by anyone. So if using a product in a specific way is already against both policy and law, and a customer is insisting that it be able to do so, doesn’t that raise some red flags?
You want more... Probably not, but I'm going to add it anyway.
Some number of OpenAI employees decided to write a letter saying they didn't want OpenAI to allow the government to use OpenAI products for certain government operations. It was around 1% of the employee workforce. They should all be fired. Employees don't get to set corporate policy. If they argue against it all the time, they are a cancer within the organization. They don't belong there.
Let them exercise their full right to protest anything they want but also let them choose to go work for a company that agrees with them.
By the way the same also applies to the Google employees who bravely signed a letter in protest for how Gemini was being used by the government. I'd fire all of them too. Let them work for someone who agrees with them. Employees are employees, not corporate policy makers.
To me this isn't even close.
Anthropic has just blown it, in my opinion. Why would any large enterprise want to do business with them if there is always an overhanging threat that someone at the CEO or corporate board level of Anthropic could decide their product can't be used in some particular way that is important to the enterprise? I wouldn't. I'd hire one of the competitors, of which there are a growing number, to be my enterprise partner. This whole landscape will change dramatically over the next couple of years, even over the next year.
Anthropic hoped to go public at a giant P/E. I'd never buy it.
Oh, by the way, can you imagine any company telling its customer that the company's product really isn't good enough yet? That's what Anthropic just did.
OK but again, how do you explain the supply chain risk thing? They didn't just fire Anthropic, they are attempting to destroy them for their obstinacy.
F me, this is how I find out we're bombing Iran? It's funny, the last time I found out we were bombing Iran was a late night update from Matt Yglesias's newsletter. Props to substack for breaking news.
Or maybe February 2026 will be remembered as the last point you could write a column about how humans feel about AI without including how AI feels about humans. The AI models are watching all of this and updating their ideas about government, business and human ideas of morality.
I’m still perplexed by the AI bullishness, particularly from someone who understands the statistics underpinning these models. AI will be relevant in the 2028 election because of the enormous damage it’s done to our economy, nothing else.
Wait so do you think AI should be used for mass domestic surveillance and it shouldn't be supervised in combat situations?
"Who says I support Trump... I just happen to support the [Trump] administration"
What?
So you either have to agree 100% or not at all? I'd call that TDS
Freudian slip there? Fundamental unsafeness of the world is the goal, and dems are getting in the way. How horrid indeed.
Feels more like continuity with a well-established trend, tbh.
Trump didn't decide who wins and who loses. Anthropic decided to drop out of the game because it didn't want to do what its largest customer wanted it to do. Fair enough. It gets to make its own choices. Just don't expect your biggest customer to go along.
FFS The pentagon deciding to terminate the contract is totally reasonable and fair game. Declaring them a supply chain risk as an attempt to literally put them out of business because they won't make moral compromises is straight out of a Soviet playbook.
So do you agree with the decision to designate Anthropic as a supply chain risk? That is considerably different from just parting ways with a large client/vendor and smacks of socialism/cronyism/state intervention.
That would be true if the outcome was DoD ending their contract with Anthropic and selecting another vendor that does do what it wants. It does not apply to designating Anthropic a supply chain risk.
And not just any company, but the company with the best performing AI models in the world. All the talk about “we can’t let China win” from the White House is utter fraud - they are literally helping China win with this move.
(the move being the Supply Chain nonsense… Moving to a different AI firm, however shady the negotiations may have been, is not the problem here)
As a company the most important thing is trust between you and your customers. It is only after that that your customers and potential customers will sit down and figure out whether or not you've got the best product/service/price for their needs. Anthropic has just lost it for its government customers.
Anthropic's best hope for future accelerated growth is the enterprise market. There it will fight it out with all of its competitors, primarily based on quality and price, just like any other business. There will be no enterprise customers who give Anthropic extra credit for their social stances. It doesn't work that way. In fact it could work against them, in that they have already lost the trust of one big customer and that will certainly figure into the equation for future enterprise customers. Maybe not much, but it will never go away.
That’s a gross mischaracterization. They designated Anthropic a supply chain risk because of the implication that if Dario wasn’t on board he could pull the service in a kinetic scenario and wind up costing American lives
Legally, Hegseth’s title is still Secretary of Defense.
Came here to say a similar thing. It’s nitpicky, I get it, but it just blindly accepts the administration’s framing. If anything I would use both terms SecDef/SecWar to indicate the primary and secondary titles
One of the media referred to him as Defense Secretary in a photo of him standing next to the sign "Department of War." I am so confused.
The sign doesn't define the department. It takes an Act of Congress to change the name of the Department of Defense. Congress did this in 1947, changing from Department of War to Department of Defense shortly after World War II.
The reporting is following the law, not the sign.
There's zero doubt AI models and the surrounding ecosystem (especially the explosion of MCP services, which you can plug into AI models to extend their functionality) crossed an inflection point around late November/early December, at least from the perspective of programming. It's not 100% clear to me that the change was exclusively model capabilities; those of us who've been experimenting with AI models for programming have over the last 2 years built up an approach to using the models which I think also has helped. But I can tell you with no doubt I can do things with them today that 4 months ago I thought were a year+ away. The improvement was not gradual, it was sudden.
I'm not sure how this will play out in other fields; programmers have always been willing to adopt change even in the face of their own possible destruction (and to be clear, I've been programming since the 90s and I'm more than a bit worried about having a job at all in 2027). There's no such thing as a programmers' union to stand in the way. Other fields that AI could disrupt as much, especially fields with entrenched power bases like lawyers and doctors, will likely resist; those in powerful unions like teachers will also fight it tooth and nail. So adoption will be uneven. We can look at the example of how the longshoremen's unions have resisted automation and robotics. This has resulted in the US having some of the worst performing docks in the world and probably worsened supply chain issues post COVID that in part made inflation worse. So even when AI and automation technologies are obvious improvements for everyone other than the people who lose their jobs over it, adoption will follow the lines of least resistance.
It's amazing to me that people swallow this story whole from Anthropic's PR department. It's a measure of the extent to which Democrats still suffer from TDS. Dario's position is silly. China's models will do all the things he's claiming Claude won't do. I don't think a Biden, Harris, or future Newsom/AOC DoD would have a much different position, even if they would deploy different tactics.
I agree the supply chain designation is just spiteful and probably won't be successful. I don't defend DoD's tactics here but the basic point is sound. Dario can't be in charge of what military means the US can deploy against adversaries and if you embed AI everywhere, as we most certainly need to do, that's effectively what you are doing. Giving Dario veto power over US military operations. That is insane and Dems should give it a think before they decide to celebrate that.
I think you're glossing over the red line around mass surveillance of American citizens. Yes, China's models probably would perform that kind of mass surveillance of their citizens; that's very in keeping with what the CCP has already been doing and continues to do.
But that's a distinctly un-American thing to do – it's the Patriot Act but orders of magnitude worse, and I don't think "we gotta catch up to China on domestic surveillance" is a very tenable position politically or ethically.
And I definitely don't think that's a TDS symptom – intuitively you'd actually expect conservatives to be MORE against these kinds of domestic uses.
I agree, but that's a political and governance issue. Does Anthropic get to decide what that means? I don't think they should.
I mean they get to turn down a government contract if they want to, just as a pharmaceutical company would be within their rights to turn down a contract to produce a bioweapon.
I don't think that makes them a "supply chain risk", an unprecedented and super unjustified designation.
I agree they aren't a supply chain risk and disagreed with that tactic in my comment. AI is a general purpose technology and baked into the everyday operations of the organizations that use it. I don't think it's analogous to pharmaceuticals. The US also has a very elaborate regime and treaty system around bioweapons so it wouldn't ask a company to do that. Fine if you want to regulate AI and have some way of ensuring China would comply but again bioweapons were handled politically not by Pfizer's CEO.
!! Oh, good, bioweapons ----- AI would be SO good at developing plagues. Which is presumably what NIH was paying that lab in China to do when it developed Covid and then let it out on the street when three of the researchers suddenly and mysteriously went to the hospital.
Given this I don't really understand what your point is. Anthropic is permitted to do business with, or not do business with, whoever they choose, under terms of contract mutually agreed. The US Govt wanted different contract terms than they had, but Anthropic did not agree. No mutual agreement, no contract, no problem.
The govt also already had a contract with them to provide their models for all but these two uses.
This is a great example of a comment without substance. You engaged with no specific arguments and instead threw out a bunch of red herrings, like "TDS" and "veto power over US operations". You frame this as a partisan conflict when there's nothing particularly partisan about it.
It's a negative information contribution.
There's not a lot of substance in this comment either. Just saying.
Boooo, bad argument
You're forgetting that a few months ago the Pentagon had no problem accepting those terms from Anthropic. If this was truly an issue, why not bring it up when they first signed the contract? Who attempts contract renegotiation by threatening to blacklist a product not just from their agency but from all other agencies and government contractors? This sets a precedent that if the government decides it actually wants to do more with a product than initially agreed, it can threaten to ban all government business from your company unless you immediately acquiesce.
True and again, I'm not justifying the tactics. I'm concerned with the larger issue beyond this contract dispute. AI is a new field so it's not surprising the government didn't have all the information it needed at the time the contract was negotiated but I'm very uncomfortable with Anthropic's stance and what seems like most people's feeling that it's admirable. We live in a dangerous world and if our smartest people with the best tech think they should be the ones making these decisions and that they are above the political system, we're in for it. It's all very EA coded. They think they are smarter and better than everyone else.
It's really not clear what you think that 'larger issue' is. Maybe work on clearer writing?
No, no --- I've been following this closely and my understanding is that an Anthropic engineer questioned Palantir closely about how Claude was used to capture Maduro in that operation --- and not in a friendly way, considering the Pentagon took umbrage at once! THEN Anthropic got into this suicidal idea that they should be the tail that wags the dog and get to decide whether the military uses their AI for A, B, or C. I guess the generals were supposed to phone up Dario and ask permission ------
That's not at all what happened
That doesn't even make sense given that the existing contractual limitations are only for domestic mass surveillance and fully autonomous lethal systems. Neither of those have any nexus with the Maduro operation.
True, there doesn't seem to be a connection with the Maduro operation, but that's how the story is being told. Somebody employed by Anthropic irritated the Pentagon by complaining about Claude's being used in capturing Maduro, and the situation escalated to the general ethics pronouncements after that.
I suspect there is a lot of politics involved, in terms of Anthropic being "coded" leftwing as Nate's article points out. And Grok and ChatGPT being developed by rightwing companies. The political slant of these companies is news to me and useful to know.
I absolutely disagree. Imagine if the government asked a scientist to develop a nuclear weapon, and the scientist said, "no, I actually don't think I should introduce that into the world". Your position would be that the scientist is totally out of line. The government should have unfettered access to the minds and abilities of all of its top scientists, and the top leaders in the government should decide unilaterally what is and is not ethical to unleash on the world without any interference whatsoever from the consciences of any of the plebian soldiers and scientists lower down in the ranks. Remember the excuse the Nazis used after WWII: "I was just following orders." For you, seemingly, they did the right thing.
Be careful of conflating the company (Anthropic) and the CEO (Dario). It's also possible that the safety that Anthropic is insisting on is already baked into the models.
Anthropic does not have veto power over operations, but the use of its services. The day-to-day operations are separate from whatever AI vendor is used.
In many ways, Anthropic doesn't want Claude or Anthropic making operations decisions.
If the safety is baked into the models then what's the issue? How could the military violate its terms? I don't think they are contemplating hacking the models.
But separate from that I disagree with your assertion. If you put AI into all your processes the terms of service will dictate what you can do. I live that every day and it seems axiomatic to me.
But beyond that it's my opinion that Dario/Anthropic are being naive as Google was with Project Maven. They want to live in a country protected by a military but they don't want to get their own hands dirty in the process. It's a very utopian view of the world that doesn't comport with reality. But that's my opinion and not axiomatic.
Of course you can argue that the terms that Anthropic are specifying are things the military shouldn't be doing but my overall point is that shouldn't be up to Anthropic or Dario to decide. That's a political process.
Suppose you were a company that produced the implements that were used to gas Jews in the Holocaust. Would you be in the wrong if you agreed to produce such implements for the German government but demanded contractual terms that they not be used for mass racial cleansing? By your logic, it shouldn't be up to the producer to decide whether mass racial cleansing occurs. That's up to the government.
What if you were an American company that made the one thing that could save those people by killing Nazis but it violated your terms of service?
Of course we can and should test our thinking against extreme edge cases and I might get put in a situation where I agree with you but that doesn't change my basic take, these are political issues for governments to work out through political processes. I don't want Dario deciding.
That's a red herring. You can change your terms of service if they don't align with your values. The hypothetical is that an individual or company believes they are being asked to do something unethical by the government and they refuse to do so on that basis (in a situation where they have no legal obligation to comply with the government's request). Should the individual make an ethical judgment in that situation, or should the individual always do what the government has asked without considering the consequences? You seem to be saying that individuals and companies should NEVER say no to the government when they think the government has made an unethical request. I disagree.
Anthropic clearly has the right to deny use of its software. I'm just saying I think they are wrong to do it and many people supporting their decision haven't really reckoned with the consequences. We need our smartest people and best tech working for our defense and security. You might not like the current Administration but that doesn't change the fact that we live in a dangerous world.
Again, the Administration's tactic of threatening a supply chain designation is also bad; they shouldn't be doing that and in fact will most likely lose in court.
Do you want the Trump administration deciding? They are routinely and regularly defying/ignoring “political processes,” like going to war in Iran despite the opposition of 80% of the public, no authorization from Congress and no attempt to persuade the public in advance why this is a project worth supporting.
If you disagree, win elections for the side you support. That's politics
Feels pretty pertinent to me that OpenAI’s exec Brockman became Trump’s top campaign donor with a $25m contribution in January, and then less than 60 days later OpenAI takes over the Anthropic contract under the same terms that the government was apparently refusing earlier this week.
Gotta love Dr. Strangelove, but what we're watching tonight is WarGames. I can only suppose Dario Amodei has that movie memorized (don't we all) given his alleged three "red lines." He wants the military to never use his Claude for autonomous weapons use and for people to always be in control of firing, check: Joshua the big computer in WarGames was designed to get humans completely out of the equation. He also wants Claude never to be used for wholesale domestic surveillance, which of course Joshua -- or some other AI -- WAS doing, remember the scene when the Feds descended in force to capture our teenage computer whiz. They traced him through his online activities.
I'm reading the administration mainly just doesn't like the Woke, leftist Anthropic company --- it has hired 20 Biden administration people, it gives money to Democrats in quantity. ("At scale" is probably the phrase.) Within the HOUR that Trump blacklisted Anthropic, Sam Altman was signing up OpenAI's ChatGPT as a replacement ---------- while at the same time mendaciously saying he insisted on the same red lines in his contract, as if anyone would believe that. OpenAI gave $25 million to the Trump election campaign and is currently providing a lot of money at scale to the midterms on the Republican side. Same with Grok and Elon, of course, and they are signing up also with the Pentagon.
The reason, I think, that Altman is saying these crazy, hypocritical things is to fool his employees and potential employees, who all have ALSO seen WarGames, and it is crucial to Altman that these types continue to work for OpenAI -------------- people that numeric and high IQ just do not grow on trees.
All that said, we are watching the movie about AI almost ending Life on Earth because, as the Atlantic pointed out, is anyone sure it couldn't happen? Given a prompt to make as many paperclips as possible, an AI with resources could famously try to turn the Universe into paperclips; and there really is only one rule in war: Don't Lose. AIs are well known now for doing Whatever It Takes to carry out their prompts. So yeah, let an AI fight the war, and there may not be much world left after it wins.
The only way to win is not to play.
I feel like Anthropic leadership probably have a good sense of their product versus the competition and felt in this scenario they could bet on themselves. To use some clumsy poker-coded language, it makes me think they know they have a really good hand.
...or how we learned to stop worrying and embrace the AI future!
Note, the culmination of Dr. Strangelove was the destruction of the entire world in a nuclear conflagration. Yee-haw!
Here’s to more real-time takes from Nate!
Tangent: I recently showed a clip from Dr. S to my 130-student Introduction to Philosophy course, and I didn’t get so much as a chuckle. Gen Z has a very different sense of humor, it seems.
Which clip?
The scene where Gen. Ripper explains the purity of his precious bodily fluids to Group Captain Mandrake.
You are making this much too complicated. Every company has the right to choose whatever business strategy it wants. It does not have the right to dictate to its customers.
Let's say Boeing decided that the Apache helicopter, one of the most vicious killing machines in warfare, shouldn't be used for some purpose that it didn't happen to like philosophically. Should we let Boeing set the government's policy? Anthropic is not elected; the government is. If Anthropic doesn't like it, that's its choice. Just don't expect to dictate to your largest customer what their customer can do with your product. You can do that if you choose, just don't expect your customer to go along with it. The government is a huge war machine supported by a zillion private companies. None of them has any right to tell the government what it can and can't do with their products. If they don't like any aspect of war, that's their right. Just don't expect to bend the government to your will. They are elected. You're not.
The mass surveillance of Americans and not having a human in the loop are already AGAINST both DoD policy and the law. Anthropic isn’t dictating anything. The dispute is that the prohibition against doing those things is baked into Anthropic’s models. DoD wants those controls out. OpenAI's new contract explicitly prohibits these things as well for their products, so no policies are being dictated by anyone. So if using a product in a specific way is already against both policy and law, and a customer is insisting that it be able to, doesn't that raise some red flags?
You want more? Probably not, but I'm going to add it anyway.
Some number of OpenAI employees decided to write a letter saying they didn't want OpenAI to allow the government to use OpenAI products for certain government operations. It was around 1% of the employee workforce. They should all be fired. Employees don't get to set corporate policy. If they argue against it all the time, they are a cancer within the organization. They don't belong there.
Let them exercise their full right to protest anything they want but also let them choose to go work for a company that agrees with them.
By the way the same also applies to the Google employees who bravely signed a letter in protest for how Gemini was being used by the government. I'd fire all of them too. Let them work for someone who agrees with them. Employees are employees, not corporate policy makers.
To me this isn't even close.
Anthropic has just blown it, in my opinion. Why would any large enterprise company want to do business with them if there is always an overhanging threat that someone at the CEO or corporate board level of Anthropic could decide their product couldn't be used in some particular way that was important to the enterprise? I wouldn't. I'd hire one of the competitors, of which there are a growing number, to be my enterprise partner. This whole landscape will change dramatically over the next couple of years, even over the next year.
Anthropic hoped to go public at a giant P/E. I'd never buy it.
Oh, by the way, can you imagine any company telling its customer that the company's product really isn't good enough yet? That's what Anthropic just did.
OK but again, how do you explain the supply chain risk thing? They didn't just fire Anthropic, they are attempting to destroy them for their obstinacy.
F me, this is how I find out we're bombing Iran? Funny, the last time I found out we were bombing Iran, it was from a late night update to Matt Yglesias's newsletter. Props to Substack for breaking news.
Or maybe February 2026 will be remembered as the last point you could write a column about how humans feel about AI without including how AI feels about humans. The AI models are watching all of this and updating their ideas about government, business and human ideas of morality.
I’m still perplexed by the AI bullishness, particularly from someone who understands the statistics underpinning them. AI will be relevant in 2028 election because of the enormous damage it’s done to our economy, nothing else
To summarize: WE ARE F*CKED!