Amid the ongoing horrors of the war across the Middle East, and on top of significant questions about its legality and intent, we’re seeing a dangerous step into a new kind of warfare. How significantly is human judgement being replaced by artificial intelligence (AI) in the decisions that are leading to the deaths of civilians?
“Shortening the kill chain” in Iran
According to reporting from the Washington Post, the US military is using AI to autonomously select airstrike targets inside Iran.[1] This isn’t a first for the region: Israel was reported to have done the same when selecting targets in Gaza.[2] Further afield, both Russia and Ukraine have reportedly been using AI-enhanced drones, including ones that can make the decision to kill independently of their human operators.[3]

In the case of the US strikes in Iran, these may not be fully autonomous weapons but, in military speak, they “shorten the kill chain”. This means that they reduce the time it takes for decisions to be made on airstrike targets, with the risk of also reducing the level of human scrutiny. And when the official White House communications are using phrases like “no pause, no hesitation” over clips of airstrikes, it’s fair to question just how carefully they are checking the AI system’s decisions.[4]
In some of the most harrowing scenes from this conflict so far, a girls’ school in the southern Iranian town of Minab was destroyed. The imagery that has since emerged shows children’s backpacks and schoolbooks strewn among the rubble. Iranian officials report that 150 students were killed in the attack. While the exact circumstances are still unknown, various experts who have seen the available evidence say it was most likely the US that mistakenly attacked the school during a series of airstrikes on a nearby Iranian naval base.[5] A national security adviser specialising in civilian harm told the New York Times that the strike was most likely down to “target misidentification”.[6]
The US military is unlikely to reveal whether AI was used in this particular case, so we may never know its true level of involvement. However, with reports that AI is being used in target selection, it becomes important to question exactly how the decision was made to target that site. If AI was used, the question then becomes: who is accountable for these deaths?
For AI systems to work, they need to be trained on data. In the case of these airstrikes, the data is likely drawn from previous human-controlled targeting decisions. But in war, the first real-world test for these AI systems is the airstrikes they’re already influencing, with potentially lethal consequences for civilians caught in the ever-increasing race to “shorten the kill chain”.
The battle of AI use in battle
Just before the US and Israeli attacks on Iran began, the US Department of Defense got into a high-profile dispute with one of its AI service providers. The company, Anthropic, said it would allow the US military to use its AI systems for everything except two ethical no-go areas: mass surveillance of the American public, and fully autonomous weapons.
In response, the Trump administration threatened Anthropic with measures under which it would not only lose its contracts with the US military, but any business using Anthropic’s AI systems would also be barred from US military contracts.
OpenAI, the company behind ChatGPT, was quick to pick up the pieces and made a deal with the Department that was seemingly free of any such safeguards.[7]
Palantir, a US company run by a close Trump ally, currently supplies military software and AI systems to the US, UK and other Nato partners. The UK Government has boasted of how it is integrating AI into certain parts of its military.[8]
The Future of Arms?

Is this the future of warfare? Some would argue that, in theory, an effective AI targeting system implemented with the correct safeguards could help to reduce civilian casualties by reducing human error, although there is no hard evidence that this would be the case. The US and other military forces are all secretive about exactly how AI is integrated into their warfighting capabilities, and without any international standards on its use, it’s impossible to know just how automatically targets are being selected and struck.
With these systems already being used in conflicts with potentially devastating effects, we desperately need world leaders to come together and decide on rules and red lines for the use of AI in warfare.
We’ve done this before for other military technologies, particularly where the international community judged a weapon to be too destructive or to carry too high a risk of harming innocent civilians. Treaties on cluster munitions, anti-personnel landmines and nuclear weapons have all been successful in limiting their use or stopping their spread.
JPIT has been working with an international coalition of civil society organisations and academics calling on world leaders to work towards this goal. In late 2025, 156 countries, including the United Kingdom, voted in favour of a UN resolution drawing attention to concerns about autonomous weapons. The United States, Israel and Russia all voted against this resolution. Join our call supporting those countries advocating for a treaty on autonomous weapons: sign the Stop Killer Robots petition today.
You can read more about JPIT’s work on the emergence of new, deadly warfighting technologies here or watch the video below.
We can also pray for the region and those being affected by the conflict. Here is a prayer for the Middle East from the Methodist Church in Great Britain:
God of love,
As we see and hear the news of the devastating violence wrought upon
and within the Middle East,
we hold your broken world before you in prayer,
We pray for lives lost, for people who mourn, and for those living in fear
We pray for people stranded and in need of safe routes and safe harbours.
We long for peace and pray that those who wage war might exchange weapons for words.
Help us to know how to pray, how to speak and how to act,
that we might remember our call to follow the Prince of Peace,
In whose name we pray,
Amen.
[embedded content]
[1] Anthropic’s AI tool Claude central to US campaign in Iran, amid a bitter feud. Copp, T., Dwoskin, E. and Duncan, I. 4 Mar 2026. Washington Post. Available at: https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/
[2] The Guardian. Available at: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
[3] Ukraine’s Killer AI Drones Are Back With A Vengeance. Hambling, D. 2 Jan 2026. Forbes. Available at: https://www.forbes.com/sites/davidhambling/2026/01/02/ukraines-killer-ai-drones-are-back-with-a-vengeance/
[4] The White House on X. 5 Mar 2026.
[5] US missile hit military base near Iran school, video analysis shows. Thomas, M. & Sardarizadeh, S. BBC News. Available at: https://www.bbc.co.uk/news/articles/cvg548lyjnyo
[6] Analysis Suggests School Was Hit Amid U.S. Strikes on Iranian Naval Base. Browne, M. & Boxerman, A. 5 Mar 2026. New York Times. Available at: https://www.nytimes.com/2026/03/05/world/middleeast/iran-school-us-strikes-naval-base.html
[7] OpenAI have since come under fire and have slightly rowed back on this deal, making it clear that they would not allow their systems to be used to surveil the American public.
[8] New strategic partnership to unlock billions and boost military AI and innovation. Ministry of Defence, United Kingdom. 18 Sept 2025. Available at: https://www.gov.uk/government/news/new-strategic-partnership-to-unlock-billions-and-boost-military-ai-and-innovation