
The U.S. Treasury Department is ending all use of Anthropic products—including its Claude AI platform—after President Donald Trump ordered federal agencies to stop working with the company. Treasury Secretary Scott Bessent announced the decision in a post on X on Monday, framing the move as a national-security necessity and arguing that no private company should “dictate the terms” of U.S. national security.
The action is not isolated. Alongside Treasury, the Federal Housing Finance Agency (FHFA) said it is terminating Anthropic use, and FHFA Director William Pulte said the government-backed mortgage giants Fannie Mae and Freddie Mac are also ending use of Anthropic products. Taken together, the announcements show a coordinated, government-wide pullback from one of the most prominent U.S. AI labs.
The State Department, by contrast, is not moving away from AI, only away from Anthropic. It will switch the model powering its internal chatbot, StateChat, from Anthropic to OpenAI, and for now the tool will run on GPT-4.1. The shift illustrates how agencies are replacing Claude with competing models rather than simply shutting down AI assistants altogether.
The abrupt pivot stems from an escalating standoff between Anthropic and the Trump administration over AI “guardrails”—rules that restrict how models can be used, especially in sensitive military and surveillance contexts. Trump directed agencies on Friday to stop work with Anthropic, and the Pentagon indicated it would designate the startup a supply-chain risk, a step that would carry major practical consequences for government procurement and contractor use. Trump also said there would be a six-month phase-out for the Defense Department and other agencies that still rely on Anthropic tools.
The episode is striking because it amounts to an extraordinary public rebuke of a top-tier American AI company, one that had been viewed as part of the U.S. technological edge in AI systems relevant to national security. The move risks turning Anthropic into a "pariah" in government circles, a status Washington typically reserves for hostile foreign suppliers.
At the same time, the vacuum created by Anthropic’s removal is quickly being filled. OpenAI announced a deal late Friday to deploy technology inside the Defense Department’s classified network—suggesting a competitive reshuffling where the government’s need for advanced AI continues, but preferred vendors are changing rapidly.
Overall, the story captures a sharp turning point in the U.S. government’s relationship with commercial AI providers: agencies appear willing to walk away from a major model supplier if its safety policies collide with defense and security priorities, even if that means quickly migrating tools, workflows, and contracts to rivals.
