Donald Trump emphatically told all government agencies to stop using the ‘woke’ Anthropic AI technology and quickly set up a replacement deal with Sam Altman’s OpenAI.
Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline.
Anthropic is accused of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.
‘We don’t need it, we don’t want it, and will not do business with them again!’ Trump said on social media.
Hegseth also deemed the company a ‘supply chain risk,’ a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
In a statement issued Friday evening, Anthropic said it would challenge what it called an unprecedented and legally unsound action ‘never before publicly applied to an American company.’
Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons.
The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.
‘No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,’ the company said.
‘We will challenge any supply chain risk designation in court.’
The United States used a form of artificial intelligence during the military operation to capture Venezuelan President Nicolas Maduro.
The use of Anthropic’s artificial-intelligence tool Claude highlights how AI use is gaining traction in the Pentagon.
It is understood that Anthropic was the first AI model developer to be used in classified operations by the US Department of Defense following a $200 million contract with the company last year.
However, the developers have refused to comment on whether the software was used in any specific operation.
They have also said Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance.
Despite this, the mission to capture Maduro and his wife on January 3 involved the bombing of several sites across Venezuela’s capital, Caracas.
The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Hours after its competitor was punished, OpenAI CEO Sam Altman announced on Friday night that his company struck a deal with the Pentagon to supply its AI to classified military networks, potentially filling a gap created by Anthropic’s ouster.
But Altman said that the same red lines that were the sticking point in Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new partnership.
‘Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,’ Altman wrote.
He added that the Defense Department ‘agrees with these principles, reflects them in law and policy, and we put them into our agreement.’
Altman also said he hopes the Pentagon will ‘offer these same terms to all AI companies’ as a way to ‘de-escalate away from legal and governmental actions and toward reasonable agreements.’
Trump said Anthropic made a mistake trying to strong-arm the Pentagon.
He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
‘The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!’ he wrote in all caps.
Months of private talks exploded into public debate this week and hit a stalemate when Amodei said his company ‘cannot in good conscience accede’ to the demands.
Anthropic can afford to lose the contract but the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints contained contradictions.
Top Pentagon spokesman Sean Parnell said Anthropic’s unwillingness to go along with the military’s demands was ‘jeopardizing critical military operations and potentially putting our warfighters at risk.’
Hegseth said the Pentagon ‘must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.’
Trump’s social media post said the company ‘better get their act together, and be helpful’ during the phase-out period or there would be ‘major civil and criminal consequences to follow.’
However, Hegseth’s choice to designate Anthropic a supply chain risk uses an administrative tool that has been designed for companies owned by US adversaries to prevent them from selling products that are harmful to American interests.
Virginia Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, ‘combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.’
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals – OpenAI and Google – voiced support for Amodei’s stand in open letters and other forums.
The moves could benefit OpenAI’s ChatGPT as well as Elon Musk’s competing chatbot, Grok, which the Pentagon also plans to give access to classified military networks.
It could serve as a warning to Google, which has a still-evolving contract to supply its AI tools to the military.
Musk sided with Trump’s administration, saying on his social media platform X that ‘Anthropic hates Western Civilization.’
Altman took a different approach, expressing solidarity with Anthropic’s safeguards and opposing the government’s ‘threatening’ approach while also working to secure OpenAI’s deal with the Pentagon.
It marked the latest twist in OpenAI’s longtime and sometimes acrimonious rivalry with Anthropic, which was founded by a group of ex-OpenAI leaders in 2021.
Retired Air Force General Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that the government ‘painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.’
Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were ‘reasonable.’
He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also ‘not ready for prime time in national security settings,’ particularly not for fully autonomous weapons.
Anthropic is ‘not trying to play cute here,’ he wrote on LinkedIn. ‘You won’t find a system with wider & deeper reach across the military.’
