Anthropic Irks White House with Limits on AI Model Use

The Scoop

Anthropic, an AI startup, is facing tension with the Trump administration over its refusal to let federal law enforcement agencies use its AI models for certain purposes, notably surveillance of US citizens. Contractors working with agencies such as the FBI, the Secret Service, and Immigration and Customs Enforcement have had requests denied because Anthropic's usage policy prohibits "domestic surveillance." The policy does not define the term, leaving room for broad interpretation.

This stance has caused friction with the government, which expects cooperation from AI companies the administration has championed as critical to national competitiveness. Officials suspect Anthropic's policy may be shaped by political considerations. Other AI companies, such as OpenAI, also restrict surveillance but carve out clearer exceptions for authorized law enforcement use. Anthropic has declined to comment on these tensions.

Know More

Anthropic offers its "Claude" AI models through Amazon Web Services GovCloud, targeting national security customers. These models are among the few top-tier AI systems cleared for top-secret security applications. The company has a deal with the federal government to provide its services for a nominal $1 fee. Anthropic also collaborates with the US Department of Defense but prohibits the use of its models for weapons development.

The company previously opposed a Trump administration-backed bill that would have blocked state-level AI regulation, reflecting earlier clashes with officials. Its policies have also complicated the work of private contractors that serve law enforcement and national security clients.

Reed's View

Anthropic's approach raises a broader question: how much control should a software maker retain over its product once it is sold to a government body? Unlike typical software (e.g., Microsoft Office), which agencies can use however they see fit, AI providers like Anthropic restrict the scenarios in which their models may be used, limiting their utility. Government contracting has traditionally disfavored such selective usage restrictions.

Activism within tech companies has led some to avoid defense industry work altogether, but enforcing usage policies on AI products introduces new complexities. The conflict reflects a broader clash between the AI "safety" movement, aligned with startups like Anthropic that advocate caution, and the Republican administration's preference for rapid AI deployment. While Anthropic's models perform well, ongoing political friction could threaten its government business.

---

Additional Resources & Notes

Anthropic's Usage Policy: https://www.anthropic.com/legal/aup
OpenAI's Usage Policies: https://openai.com/policies/usage-policies/
Government deal announcement: https://www.gsa.gov/about-us/newsroom/news-releases/gsa-strikes-onegov-deal-with-anthropic-08122025
Anthropic and DoD collaboration: https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations

---

Author: Reed Albergotti, Tech Editor at Semafor
Updated: September 17, 2025, 7:21 AM EDT