This is as cogent a chain of historical examination as Crabgrass has found online. The headline merges two short closing paragraphs, and the full story bears it out as an apt summary.
Where this goes
As of March 5, 2026, [Anthropic CEO] Dario Amodei has reportedly resumed talks with the Pentagon, telling CBS News: “We are trying to deescalate the situation and come to some agreement that works for us and works for them.” He added: “Disagreeing with the government is the most American thing in the world.”
But the ground has shifted. OpenAI already signed a deal with the Pentagon without Anthropic’s restrictions. xAI is moving onto classified networks. Defense tech companies are dropping Claude preemptively. The supply chain risk designation hangs over Anthropic. And sources say the government would seize the technology before it would let Anthropic pull the plug.
[Palantir's] Maven’s arc tells you everything. A $70 million pathfinder project. Seventy-five lines of Python code trained to spot 38 types of objects in drone footage. Google walked away. Palantir picked it up, codenamed it “Tron,” and spent six years turning it into a $1.3 billion AI war engine that processes classified intelligence from 179 sources, generates targeting packages at machine speed, and now sits at the center of a shooting war.
The Google employees were right. In 2018, those 3,100 workers who signed the letter against Project Maven were written off as naive idealists who didn’t understand national security. Eight years later, the program they warned about is selecting bombing targets in Iran using AI that processes intelligence faster than any human can review. They didn’t stop it. They just lost the chance to shape it.
In his 2017 announcement of Project Maven, Deputy Secretary Work wrote that the Pentagon needed to “move much faster” on AI to maintain advantages over adversaries. Eight years later, Maven is identifying targets in a war with Iran at speeds that outpace human cognition. By summer, if the Pentagon gets its way, every single target will originate from a machine.
The story, in short: statistically uncertain military targeting decisions are being made by the deployed AI, with less than 90 seconds of human review on average, and the system is only a partly reliable tool.
Reading the full item, and finding other discussion of the blackballing of Claude as a whim-and-fancy power play (the ongoing Trump/Hegseth paradigm), is easy research left to readers and their favorite search tools. This item frames the players and the situation so that better sense can be made of other web reporting.
One other report is worth special note: Politico questions the logical coherence of the Hegseth/Trump power play. As in, it does not hang together or make any sense.
