Reports of AI-assisted targeting in the US-Israeli war on Iran renew scrutiny of Israel’s pioneering AI-driven killing systems.
Key Takeaways
- Experts say the scale and speed of strikes suggest AI may be generating targets in the US-Israeli war on Iran.
- The Washington Post reported the Pentagon used Palantir’s Maven Smart System with Anthropic’s Claude model to analyze intelligence and prioritize targets.
- Specialists warn that AI-driven targeting raises questions about “meaningful human control” and accountability when civilians are killed.
- Israel previously deployed AI targeting systems during the genocide in Gaza, including “Lavender,” “Gospel,” and “Where’s Daddy.”
- Researchers say Israel has long used occupied Palestine as a testing ground for surveillance and automated warfare technologies.
AI Warfare Expands in the War on Iran
Reports that artificial intelligence is being used to accelerate targeting in the US-Israeli aggression on Iran are raising growing concerns among technology experts and legal scholars, who warn that automation could be reshaping how lethal decisions are made on the battlefield.
The warnings come as thousands of strikes have reportedly been carried out across Iran since the start of the war. According to reporting by AFP cited in the Japan Times, the scale of operations and the rapid pace of target selection suggest that artificial intelligence systems may be playing a significant role in identifying potential strike locations.
Peter Asaro, an artificial intelligence and robotics expert at The New School in New York, said the tempo of operations points to automated tools being used to generate targets.
“You can rapidly produce long lists of targets much faster than humans can do it by automating that process,” Asaro told AFP.
However, he stressed that this speed raises fundamental legal and ethical questions about how targets are reviewed.
“The ethical and legal question is: To what degree are those humans actually reviewing the specific targets that have been listed, verifying their legality and their value militarily before authorizing?” Asaro said.
AI Targeting Systems
Separate reporting indicates that the United States military is already relying on advanced artificial intelligence platforms in its campaign involving Iran.
According to Anadolu Agency, citing a report by The Washington Post, the Pentagon used the Maven Smart System, an AI-powered platform developed by the American data analytics company Palantir Technologies, to identify and prioritize potential targets.
The system reportedly analyzes large volumes of classified intelligence gathered from satellites, surveillance platforms, and other sources. It then produces rapid assessments to help commanders select targets and determine operational priorities.
The report said the system has been enhanced with Claude, a generative AI model developed by the company Anthropic.
Sources familiar with the program told The Washington Post that the technology helped generate hundreds of potential targets and provided exact geographic coordinates, allowing commanders to move from weeks of preparation to near real-time operational decisions.
Reuters also reported that the Pentagon has become increasingly dependent on AI-enabled systems like Maven. The agency said Palantir was ordered to remove Anthropic’s technology after tensions emerged between the company and the Trump administration over the wartime use of artificial intelligence.
Despite the dispute, Reuters reported that the system remains embedded in US military planning while alternative technologies are being developed.
Control and Accountability
Experts say the growing use of AI in warfare raises urgent questions about accountability when mistakes occur.
Asaro warned that automated systems can generate large target lists very quickly, potentially reducing the time available for legal review.
“The desire (with) all those systems is to be able to make decisions and move faster than your enemy,” he said. “Are you actually still in control of what’s happening?”
Another concern is transparency. Because these systems rely on classified intelligence databases and proprietary algorithms, it can be difficult to determine how a target was selected or why a mistake occurred.
“There is no easy way of evaluating the output of these systems,” Asaro said.
This creates uncertainty when civilians are killed during military operations.
Asaro pointed to reports that a school in the Iranian city of Minab was struck during the first day of the war, killing more than 160 people according to Iranian authorities.
“If something does go wrong, then who’s responsible?” Asaro asked.
Gaza’s AI Precedent
The debate over AI-assisted warfare in Iran is shaped heavily by Israel’s earlier use of artificial intelligence targeting systems during the genocide in Gaza.
Investigations by journalists and researchers have documented several AI-enabled platforms used by the Israeli military to generate targets for airstrikes.
According to research published by the Institute for Palestine Studies and other analysts, systems known as “Lavender,” “Gospel,” and “Where’s Daddy” were used to analyze massive surveillance databases and recommend individuals or buildings for attack.
The systems process large volumes of data collected through Israel’s extensive surveillance infrastructure in occupied Palestine, including communications metadata, digital activity, and social network connections.
These AI tools then generate lists of suspected targets, which Israeli operators review before approving strikes.
Israeli intelligence sources told +972 Magazine that officers sometimes spent as little as 20 seconds reviewing a target generated by the system, primarily confirming the target’s identity.
Technology scholar Sophia Goodfriend explained that the systems do not operate as fully autonomous weapons but dramatically accelerate the targeting process.
“Israel is not relying on fully autonomous weapons in the current war on Gaza,” she wrote in +972 Magazine. Instead, intelligence units “use AI-powered targeting systems to rank civilians and civilian infrastructure according to their likelihood of being affiliated with militant organizations.”
This process, she said, “rapidly accelerates and expands the process by which the army chooses who to kill, generating more targets in one day than human personnel can produce in an entire year.”
Researchers say these systems rely on the massive surveillance architecture Israel has built across Gaza and the occupied West Bank.
‘Digital Occupation’
Palestinian media scholar Helga Tawil-Souri described this system as a form of “digital occupation,” in which Israel controls telecommunications infrastructure and collects large volumes of data on Palestinian populations.
Scholars say the use of Gaza as a testing ground for surveillance and targeting technologies has long been part of Israel’s military strategy.
Palestinian legal scholar Samera Esmeir described the enclave as a “laboratory” for military experimentation, writing that “the transformation of Gaza into a laboratory for colonial and imperial hegemony in the region is made in Israel.”
For critics, these developments provide the broader context for today’s concerns about AI in the war on Iran.
The technology now shaping target selection in the conflict did not emerge overnight. Analysts say it is the product of decades of Israeli investment in surveillance systems, data analysis, and automated targeting technologies.
Many of these tools were first developed and tested in occupied Palestine, particularly in Gaza and the West Bank, before appearing on a wider battlefield.
(The Palestine Chronicle)