Why is Claude AI fighting with the Pentagon? How about growing a pair?
- snitzoid
- 2 days ago
- 10 min read
General Jack D. Ripper from Kubrick's epic film Dr. Strangelove expresses perfectly my philosophy about weaponry. Are we going to let a bunch of left-wing pussies tell us how to defend our country?
I suppose the next piece of propaganda we're about to hear is to be wary of nuclear weapons. Better to be feared than loved. And Claude AI can either get on board or get used to being road kill...dagnabbit.
A ‘Fight About Vibes’ Drove the Pentagon’s Breakup with Anthropic
The AI giant’s CEO Dario Amodei and Defense Secretary Pete Hegseth have contrasting personalities and worldviews. They proved difficult to reconcile in a high-stakes showdown over the future of warfare.
Defense Secretary Pete Hegseth preaches a warrior ethos that is a contrast to the professorial approach from Anthropic CEO Dario Amodei, who has voiced warnings about AI's risks as well as its benefits. Associated Press, Reuters
By Amrith Ramkumar, Keach Hagey and Marcus Weisgerber, WSJ
March 2, 2026 9:00 pm ET
In his first face-to-face meeting with Defense Secretary Pete Hegseth, Anthropic Chief Executive Dario Amodei made his case about the risks of AI-controlled autonomous weapons.
Hegseth didn’t want to hear it, even from a CEO whose company developed AI tools that have become instrumental for the military.
“No CEO is going to tell our war fighters what they can and cannot do,” Hegseth said after cutting off Amodei midsentence in the meeting on Feb. 24, according to people familiar with the matter.
The rupture between the two men, with sharply contrasting personalities and worldviews, was never resolved. And now the Trump administration, which has championed the speedy rollout of AI as essential to economic growth and national security, finds itself at loggerheads with a homegrown giant of the industry.
“This is a fight about vibes and personalities masquerading as a policy dispute,” said Michael Horowitz, a former Defense Department official who worked on AI policy.
The dispute comes down to a “breakdown in trust between Anthropic and the Pentagon, where Anthropic doesn’t trust that the Pentagon knows enough to use their technology responsibly and the Pentagon doesn’t trust that Anthropic will be willing to work on important use cases that it needs,” he said.
Amodei, who over a year earlier had assured anxious employees that the company’s contract with the U.S. military was mostly about paperwork, has more recently framed the clash with the Pentagon as one with grave implications for the future of modern warfare and even society at large.
On Friday, President Trump directed all federal agencies to stop working with Anthropic and attacked the company’s executives for being “leftwing nutjobs.”
Later that day, after a deadline passed for Anthropic to agree to a deal about how its tools could be used, Hegseth designated the company a supply-chain risk. Rarely used against a U.S. company, the move—if it survives Anthropic’s expected legal challenge—could impair Anthropic’s ability to work with other government contractors including Lockheed Martin, Amazon and Microsoft. It potentially threatens the business relationships that have made it one of the world’s most valuable startups.
In an ironic twist, minutes before his post, Trump authorized strikes on Iran—attacks that were planned with the involvement of Anthropic’s Claude models, The Wall Street Journal reported.
Claude also played a role in the January military operation that captured Venezuelan President Nicolás Maduro and has been used for war gaming and mission planning, people familiar with the matter said.
Anthropic for years has been the most vocal AI company advocating for guardrails to ensure the technology is used safely. That stance at times has frustrated administration officials, who have embedded Anthropic's tools widely across the government even as they chafed at the company's insistence on controlling how those tools are used.
Earlier this year, Anthropic effectively banned the use of the word “pathogen” in model prompts on its unclassified systems used by many agencies, part of its safeguards against AI being used to create a bioweapon, people familiar with the matter said. The ban made it difficult for employees at the Centers for Disease Control and Prevention to use the AI tool, and it took weeks for workers to get permission to circumvent it.
Emil Michael, the undersecretary of defense for research and engineering, last week called Amodei a liar for mischaracterizing the Pentagon’s offer and accused him of trying to play God. One administration official said other tech CEOs such as Google’s Sundar Pichai or Amazon’s Andy Jassy wouldn’t tell the government how to use their technology and would have found a compromise. Another said government AI tools should be ideologically neutral.
By Monday, agencies including the Treasury Department and Department of Health and Human Services were telling employees that their AI tools would no longer work with Claude.
To critics, the moves are the latest example of the administration strongarming a private company through methods more common in state-run economies. “The Trump administration is taking the Chinese playbook and coercing a U.S. company,” said Navtej Dhillon, former deputy director of the National Economic Council during the Biden administration.
At the core of the conflict is a novel question: who should ultimately control how cutting-edge AI tools are deployed in conflict and society writ large?
Amodei and Hegseth approach the question differently. A bespectacled researcher who often twirls his curly hair, Amodei authors lengthy documents philosophizing about the importance of AI safety and is known for his deliberate approach to problem solving. He has been a vegetarian since childhood.
Hegseth is a former Fox News host with several tattoos tied to his Christian faith and military service. Videos of Hegseth lifting weights frequently circulate on social media and he played a role in President Trump’s decision to rename the Defense Department the Department of War.
As of Monday, the Pentagon hadn’t formally issued the designation against Anthropic, raising the possibility that a deal could be reached.
In recent days, as Anthropic’s clash with the Pentagon intensified, it lost its status as the only AI company approved for use in classified settings. Elon Musk’s xAI recently reached an agreement to be deployed in such settings, and late on Friday, OpenAI announced that it had as well.
The Anthropic fight was never personal and was always about the Defense Department wanting to use its AI tools for all lawful purposes, a Pentagon official said.
Professor Panda
Amodei co-founded Anthropic in 2021 after leaving OpenAI because he felt the company was prioritizing business goals over AI safety. He is known to some of his employees as “Professor Panda.” Amodei and Anthropic’s co-founders committed to donating 80% of their founding stock to charity—a stake now worth billions of dollars.
Amodei chose not to release an early version of Claude in the summer of 2022, fearing that it would start a dangerous technology race. OpenAI released ChatGPT a few weeks later, forcing Anthropic to play catch-up.
While Amodei cemented a reputation for his methodical approach to AI development, Michael and Hegseth became known for their cutthroat approach to business and war. Michael helped build Uber as chief business officer when the company was known for aggressively taking on competitors and regulators. He went on to work with dozens of startups and championed efforts to integrate technology into Pentagon operations.
Michael had a long relationship with OpenAI’s Sam Altman, helping him sell his first startup in 2012. They also worked in the same startup ecosystem while Altman was leading incubator Y Combinator from 2014 to 2019.
While OpenAI pulled ahead with consumers, Anthropic’s Claude tool developed a devout following among coders. It found success nabbing enterprise contracts and raised capital at a blistering clip. It was valued at $380 billion after its most recent fundraising round.
Big investments from Amazon proved particularly beneficial—and became an entryway into the Pentagon. In November 2024, during the final days of the Biden administration, Anthropic and data-mining firm Palantir announced a partnership with Amazon that would give U.S. intelligence and defense agencies access to Claude models.
The partnership gave Anthropic a fast track to be used in classified settings through Palantir’s systems and made Anthropic the first model developer available for the most sensitive Pentagon operations.
Some Anthropic employees questioned how the technology would be used. Were there accountability mechanisms? Might the tools be used in operations in which people were killed?
Amodei reassured staff that the work was more mundane than their questions suggested. In a late 2024 all-hands meeting, he likened it to helping the government finish paperwork and back-office functions more quickly.
Even as Anthropic gained momentum on many fronts, it was rankling administration officials at the start of Trump’s second term.
Amodei’s dire public warnings on the dangers of AI and criticism of companies sending advanced AI chips to China made him one of the few AI executives out of step with Trump. In late May, Amodei warned that AI could destroy about half of all entry-level white-collar jobs.
Trump’s AI czar David Sacks called Anthropic “committed leftists” on the podcast he co-hosts, citing its links to organizations that are Democratic donors. Anthropic had previously hired several Biden-era officials. Amodei called Trump a “feudal warlord” before the 2024 election.
Still, in July Anthropic announced a contract worth up to $200 million from the Pentagon. It also inked an agreement with the central procurement arm of the federal government to let other agencies use Claude.
Around the same time, Sacks and other officials worked on an executive order targeting “woke AI,” a move that was widely seen as going after Anthropic.
The company’s work with the military was seen by some industry players as one way it could rebuff claims about being woke, which the company has called baseless.
President Trump signing an AI initiative in the Oval Office in December. Associated Press
Anthropic touted its Pentagon work at a September event at Washington’s Union Station. But Amodei again criticized the administration for allowing chips to be exported to countries that could pose security threats. He said there are some government officials “who don’t seem to get it, who still think this is an economic race to diffuse our technology to different parts of the world, and not an attempt to build the most powerful technology the world has ever seen.”
Late last year, the Pentagon began discussing changing its contracts with AI companies so that they could use the technology in all lawful use cases. Anthropic’s hesitance to give the blanket approval and desire to maintain red lines banning uses for mass domestic surveillance and autonomous weapons frustrated some administration officials, people familiar with the conversations said.
OpenAI’s Altman sees opportunity
The clash between Anthropic and the Pentagon intensified in January, with the Journal and other media outlets reporting that the company’s contract could be canceled.
After the Venezuela raid, an Anthropic employee asked a Palantir counterpart how Claude was used. Defense Department officials found out and were upset, people familiar with the matter said. Anthropic has said it was a routine call between partners.
During a Jan. 12 speech at Musk’s SpaceX, Hegseth said Grok would join the Pentagon’s military AI platform. He made apparent jabs at Anthropic, saying, “We will not employ AI models that won’t allow you to fight wars.”
The Defense Department was negotiating but Anthropic continued to hold firm on its red lines. It wanted the bans explicit despite Pentagon assurances it wouldn’t conduct those operations or break the law.
Around the same time, media outlets reported that when Michael asked Amodei a hypothetical about whether the Pentagon could use Claude to take out missiles approaching the U.S., the Anthropic CEO said Defense Department officials should check with the company first. The response reportedly angered the Trump administration. Anthropic denied that Amodei said that.
Suspecting they were at an impasse, Pentagon officials accelerated discussions with Anthropic’s main rival. Michael reached out to Joe Larson, the head of government at OpenAI, to see if the company could begin the process of becoming certified to get deployed on classified systems, according to people familiar with the matter. Officials were already working to secure that status for Musk’s Grok.
As Anthropic’s relationship with the administration hit new lows, allies stepped in to broker a truce. Shyam Sankar, Palantir’s chief technology officer, suggested workarounds so Anthropic could accept the Pentagon’s terms while maintaining safeguards that were later accepted by rival OpenAI, people familiar with the matter said. Anthropic rejected the agreement.
At their Feb. 24 meeting at the Pentagon, Hegseth complimented Amodei on the quality of the company’s models while reiterating his threat to label Anthropic a supply-chain risk. He also lobbed an even bigger threat: invoking the Defense Production Act, a Cold War-era law that gives the government control of key industries, to force Anthropic to do its bidding. The defense secretary gave Amodei until 5:01 p.m. Friday to agree to the military’s right to use the technology in all lawful cases.
On Wednesday night, the Defense Department sent over new suggested contract language.
That day, OpenAI’s Altman reached out to Michael. Altman felt strongly that the risk of either triggering the Defense Production Act or designating Anthropic a supply-chain risk wasn’t good for the country, according to people familiar with their conversation.
OpenAI CEO Sam Altman, shown speaking at the AI Impact Summit 2026, saw an opportunity and agreed to conditions that allowed his company access to classified government systems. Rajat Gupta/EPA/Shutterstock
But he also saw an opportunity for OpenAI. The company proposed a contract that used language from existing law to keep the guardrails against mass domestic surveillance and autonomous weapons, while not asking the Pentagon to alter its usage policy for the company, according to people familiar with the matter. OpenAI’s contract also included other measures, like deploying researchers with security clearance to monitor how the systems would be used.
OpenAI has a different political profile than Anthropic: The company has praised Trump’s tech strategy and promised investments to build data centers for training AI models. OpenAI President Greg Brockman and his wife donated $25 million to a Trump-aligned super political-action committee last year.
Missed deadline
On Thursday, Amodei reiterated the company’s red lines. “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” a spokesman said.
Some in the Defense Department thought that the two sides were close to a deal before Amodei’s statement. Senators called for both sides to de-escalate the situation.
That day, Altman told staff that the company was working on a deal that might help solve the impasse.
As the Friday deadline approached, Trump said that he was directing federal agencies to cease working with Anthropic. But the parties were still negotiating.
At 5:01 p.m., Michael called Amodei, who didn’t answer. Michael then spoke with another top Anthropic executive and offered a deal that Anthropic felt would have left the door open to the collection or analysis of large amounts of data on U.S. residents, people familiar with the matter said.
Some inside Anthropic had thought an agreement was nearly done before that final proposal, which was rejected. Company officials had recently found out that they were in line to win a Pentagon contract to work on using AI to control drones but were out of the running because of the ongoing negotiations.
Michael has disputed the company’s characterization of the offer.
Moments later, Hegseth posted on social media that he was designating Anthropic a supply-chain risk.
What happens next isn’t clear, but the standoff appears to be boosting Anthropic’s popularity among consumers. By Sunday, Claude had topped ChatGPT to become the most downloaded app in Apple’s app store.