Artificial Intelligence Is Reshaping the US-Israeli Campaign Against Iran — and Exposing a Rift at the Heart of the Pentagon



AI tools are accelerating targeting, intelligence analysis and damage assessment to unprecedented speeds, but the technology's most prominent developer, Anthropic, finds itself at odds with the very government deploying its systems.


The US-Israeli military campaign against Iran is unfolding with a speed and precision that would have been impossible even a few years ago, and artificial intelligence is a central reason why. According to an investigation by The Wall Street Journal, AI tools are now embedded across every phase of the operation, from gathering and sifting intelligence to selecting targets, planning strike missions and rapidly assessing battlefield damage after each wave of attacks.

The Journal reports that Israeli intelligence services spent years preparing the groundwork for the opening strike that killed Supreme Leader Ali Khamenei, relying increasingly on AI to process a continuous flood of data harvested from hacked Tehran traffic cameras and from the intercepted communications of senior officials. The technology allowed a relatively small team of analysts to manage volumes of raw intelligence that would otherwise have required thousands of personnel. Human analysts, US officers have acknowledged, can examine at most four percent of the intelligence material typically collected; AI is beginning to close that gap dramatically.

Beyond intelligence, AI is compressing mission-planning timelines that once stretched over weeks into a matter of days. The technology can instantly recalculate the cascade of logistical consequences — aircraft type, weapons loadout, crew rostering, fuel consumption — that flow from even minor adjustments to a target list. In one illustrative example cited by the Journal, the US Army's 18th Airborne Corps, using software from data company Palantir Technologies, matched its own record as the military's most efficient targeting operation ever — but this time with just 20 personnel, compared with more than 2,000 required for an equivalent operation in Iraq.

The same reporting confirms that the first-day airstrike that killed dozens of children at a girls' elementary school in the southern Iranian town of Minab was most likely carried out by American forces, an incident the Journal presents as a sobering illustration of the limits of AI-assisted warfare. The technology accelerates decisions, but it cannot yet replace the human judgment required to avoid catastrophic error, and critics warn that growing dependence on AI outputs risks fostering a culture in which commanders defer to algorithmic recommendations without adequate scrutiny.

The most politically charged dimension concerns the role of Anthropic, the San Francisco-based AI safety company, and its large language model Claude. US officials have confirmed that Claude is actively being used in the Iran campaign, even though President Trump has ordered the federal government to stop using Anthropic's products and Defense Secretary Pete Hegseth is engaged in a public dispute with the company. The Pentagon has simultaneously contracted with Anthropic's rival OpenAI to deploy its models in classified settings.

The contradiction is a telling one. Anthropic was founded explicitly on the premise that advanced AI must be developed with safety and ethical constraints at its core, a philosophy that sits uneasily alongside its technology's deployment in a war whose civilian toll is already severe and whose legal basis is disputed. Anthropic has consistently maintained that Claude should not be used to facilitate lethal autonomous targeting or operations that bypass human oversight, and the company's published usage policies explicitly restrict applications that could contribute to mass casualties or undermine accountability in the use of force.

The revelation that Claude is nonetheless embedded in an active military campaign, against the stated wishes of the sitting US administration, raises profound questions about the gap between the principles of AI developers and the realities of wartime adoption. It also underscores a broader dilemma: once a technology is capable enough to be militarily decisive, its creators' ability to govern how it is used may be far more limited than they had assumed.

Graphic: Perplexity