Anthropic’s Claude could assist intelligence work by analyzing sensitive documents, but the company strictly prohibits domestic surveillance, a stance that has drawn the attention and ire of the Trump administration.
According to a Semafor article, two senior White House officials described growing hostility from the administration over Anthropic’s rules on law-enforcement use of Claude. They said federal contractors working with agencies such as the FBI and the Secret Service have encountered roadblocks when attempting to deploy Claude for surveillance tasks.
The friction centers on Anthropic's usage policies, which the officials characterized as vague and potentially applied unevenly for political reasons. They worried the rules could be interpreted broadly, limiting cooperation with federal agencies that rely on AI assistance.
In some cases, Anthropic's Claude models are reportedly the only AI systems cleared for top-secret settings through Amazon Web Services' GovCloud. The company offers a dedicated national security service and has an agreement to provide its services to federal agencies for a nominal $1 fee; it also maintains engagements with the Department of Defense, though its policies bar the use of its models for weapons development.
Separately, OpenAI announced a deal to provide ChatGPT Enterprise access to more than 2 million federal executive-branch workers, following a blanket agreement from the General Services Administration that allows OpenAI, Google, and Anthropic to supply tools to federal employees. The arrangement underscores a broader push to integrate AI tools across the government, even as policy constraints shape how those tools are used in security contexts.