AI startup Anthropic is gaining popularity even as it walks away from a major Pentagon contract after a dispute over how its technology should be used.

The San Francisco-based company, known for its Claude chatbot, recently clashed with the US Defense Department over safeguards on artificial intelligence, leading to the loss of a military contract.

Despite the setback, Anthropic’s business momentum is accelerating. The company’s annualized revenue run rate has jumped to about $19 billion, up from $14 billion just weeks earlier, while Claude recently became the most downloaded free app on Apple’s App Store.

Dispute Over AI Limits

The conflict centers on Anthropic’s refusal to remove certain safety restrictions from its AI systems.

CEO Dario Amodei has insisted the company will not allow its technology to be used for:

Domestic mass surveillance
Fully autonomous weapons

Pentagon officials, however, want fewer restrictions and argue that military use of AI should be governed only by US law, not by company rules built into software systems.

Some defense officials also became frustrated when Anthropic's models, citing safety policies, refused to participate in certain war-gaming scenarios.

Anthropic Still Supports National Security Work

Despite the dispute, Anthropic has actively worked with the US national security sector in recent years.

The company signed a $200 million Pentagon contract in 2025 and partnered with Palantir to make its AI tools available for government use, including systems capable of handling classified information.

Amodei has said he is not opposed to AI-powered weapons in principle, but believes current AI systems are not reliable enough for fully autonomous military roles.

Investors Seek a Resolution

The growing clash with the Pentagon has raised concerns among some investors.

Major backers, including executives connected to Amazon, have held discussions with Anthropic leadership and government officials to prevent the dispute from escalating further.

Investors worry that if the government labels Anthropic a “supply chain risk,” it could limit the company’s ability to sell its AI products to government contractors and large enterprise clients.

AI Safety vs. Military Power

The standoff highlights a broader debate across the technology sector about who should control the limits of artificial intelligence.

AI companies want to maintain safeguards to prevent misuse, while governments argue that strict limits could weaken national security capabilities.

For now, discussions between Anthropic and the Pentagon are continuing, though the outcome could shape how AI technologies are used in future military operations.