The ‘Catastrophic’ Implications of Unregulated Artificial Intelligence Warfare

Photo by Fadi Wael Alwhidi

Today’s unprecedented warfare advancements include the integration of artificial intelligence (“AI”), which permits pre-programmed computers to complete specific tasks using generated algorithms.[1] Described as the “third revolution in warfare,”[2] AI weaponry can target individuals suspected of terrorist activity by collecting, analyzing, and combining data sources, such as drone footage or intelligence streams, through an AI image recognition system.[3] These AI systems suggest “who or what to attack and when.”[4] Yet the integration of AI into warfare remains divisive, with no customary international law ban and no specific regulations authorizing its use.[5] On one hand, superpower states argue that legal regulations would prevent them from “exploit[ing] any potential military advances,”[6] such as defending territory, deterring violence, or “win[ning] in conflict.”[7] In contrast, more than thirty countries, 165 nongovernmental organizations, industry experts, and legal scholars warn that the absence of legal regulations leaves states with minimal guidance on deploying AI weaponry.[8] The implications are catastrophic: high error rates with little to no repercussions,[9] along with biased and imprecise targeting where states “blindly allow [the misuse] to continue” despite being aware of it.[10] As analyzed below, the Israeli forces’ current use of AI warfare against Palestinians in Gaza confirms the critics’ predictions, demonstrating the urgent need for an international ban or regulations.

International Humanitarian Law (“IHL”) governs the conduct of armed conflicts, and its central rule is that “all parties must distinguish, at all times, between combatants and civilians.”[11] Parties must take precautions, exercising “constant care to spare civilians and civilian objects” during all military operations, and indiscriminate attacks are prohibited, especially in densely populated areas.[12] Parties cannot simply reason that “civilians are not the target of the attack”; they must demonstrate that they took “all feasible precautions to minimize harm to civilians and civilian objects.”[13] This protection extends to places of worship, historical monuments, agricultural areas, and water and food sources.[14]

The U.S., U.K., Israel, Australia, and Russia have argued that IHL is sufficient to regulate the development of autonomous technological weapons.[15] In a letter to the Convention on Certain Conventional Weapons, the U.S. government wrote that AI machines “deploy force more precisely and efficiently,” thereby reducing the “risks of harm to civilians and civilian objects.”[16] On this view, autonomous weapons already uphold IHL’s central rule.[17] The U.K. likewise claims that IHL “provides a robust, principle-based framework for the regulation” of AI systems.[18] Israel’s killing of over 20,000 Palestinians in two months, however, proves the opposite,[19] as its widespread use of AI warfare has resulted in several human rights violations.[20]

AI weaponry’s “notoriously flawed” algorithms have produced a track record of high error rates across “applications that require precision, accuracy, and safety.”[21] Relying on an AI system that generates targets imprecisely and with bias is nearly identical to indiscriminate bombing, as is the case in Gaza.[22] Most of the Israeli forces’ attacks are carried out by “Habsora,” an AI system that produces targets “at a rate that far exceeds what was previously possible, … essentially facilitat[ing] a ‘mass assassination factory.’”[23] Israeli forces are instructed to “kill as many Hamas operatives as possible,”[24] authorizing them to bomb wide swaths of land with AI machines.[25] This is done to “save time, instead of doing a little more work to get a more accurate pinpointing.”[26] Habsora recommends a targeted individual, a piece of weapons equipment, or a command post based on “cell phone messages, satellite imagery, drone footage, and even seismic sensors.”[27] The AI system informs soldiers how much collateral damage each strike will cause to each property, and human operators then decide “whether to pass it along to soldiers in the field” to act on the recommendation.[28] If children are in a home, for example, the Israeli forces consider the deaths of those children “intentional” so long as the AI system identifies one individual in the house as a suspected terrorist.[29]

The consequence is a devastating loss of human life: “over 300 families have lost ten or more family members in Israeli bombings” in less than two months. The Israeli military claims that AI helps it strike “as many as 250 targets a day,”[30] yet these targets include repeated attacks “against civilians, schools, and hospitals.”[31] Though Israeli forces assert they are “committed to international law,”[32] their experimental use of AI warfare has provided a technological justification for killing roughly 1% of Gaza’s pre-war population of approximately 2.3 million.[33] This directly refutes the proposition that AI weaponry need not be regulated beyond the existing IHL.

Without international legal regulation, there is no attribution of conduct to the Israeli forces or the Israeli government for engaging in human rights violations.[34] When AI machines target civilians, pursuing accountability is difficult because the state’s acts or omissions are not attached to any liability.[35] This raises the question of who should be liable when AI weaponry results in the injury or death of innocent civilians: the developer of the software system, the state government, or the commander who gave the green light to send airstrikes?[36] First, tracing AI systems back to “human actions and omissions” would help states assess whether there was a breach of international law.[37] Second, the Convention on Certain Conventional Weapons (“CCW”) aims to regulate the use of weapons, as it currently limits the use of “landmines, incendiary weapons, blinding laser weapons, and clearance of explosive remnants of war.”[38] The CCW must extend its coverage to AI weaponry, particularly limitations on its development, production, and use.[39] Finally, the International Criminal Court (“ICC”) must investigate any crimes against humanity, war crimes, or crimes of genocide committed through AI weaponry.[40] Here, the ICC should investigate how Israel killed 20,000 Palestinians with AI warfare; failing to do so could diminish the ICC’s credibility[41] and, worse, encourage other states, including Israel, to cause or continue causing mass destruction with the help of AI.[42] It is crucial that the ICC, the CCW, and IHL set a precedent restricting or banning AI warfare. In the meantime, the unrestricted use of AI weaponry leaves the door wide open for severe destruction and continued human rights violations.


[1] Bérénice Boutin, State Responsibility in Relation to Military Applications of Artificial Intelligence, 36 Leiden J. Int’l L. 133, 135 (2023).

[2] Amreen Gill, Ominous or Autonomous? The Case for Banning Autonomous Weapons Systems in Targeted Killings, 2 U. Ill. L. J. Tech. & Pol’y 455, 469 (2022).

[3] What you need to know about artificial intelligence in armed conflict, INTERNATIONAL COMMITTEE OF THE RED CROSS (Oct. 6, 2023), https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict.

[4] Id.

[5] Damien Gayle, UK, US and Russia among those opposing killer robot ban, THE GUARDIAN (Mar. 29, 2019), https://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai; Gill, supra note 2, at 469.

[6] Gayle, supra note 5.

[7] U.S. Dep’t. of Defense, Data, Analytics, and Artificial Intelligence Adoption Strategy, 18 (2023).

[8] U.S. Naval Institute Staff, Congressional Research Service Report 2015: Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems (2023); Charles P. Trumbull IV, Autonomous Weapons: How Existing Law Can Regulate Future Weapons, 34 Emory Int’l L. Rev. 533, 534 (2020).

[9] Gill, supra note 2, at 467-70.

[10] Boutin, supra note 1, at 147.

[11] Clive Baldwin, How Does International Humanitarian Law Apply in Israel and Gaza?, HUMAN RIGHTS WATCH (Oct. 27, 2023), https://www.hrw.org/news/2023/10/27/how-does-international-humanitarian-law-apply-israel-and-gaza.

[12] Advisory Service on IHL, What is International Humanitarian Law?, INTERNATIONAL COMMITTEE OF THE RED CROSS 6 (2022).

[13] Baldwin, supra note 11.

[14] Advisory Service on IHL, supra note 12.

[15] Gayle, supra note 5.

[16] U.S. Mission Geneva, U.S. Statement on Laws: Potential Military Applications of Advanced Technology, U.S. MISSION TO INTERNATIONAL ORGANIZATIONS IN GENEVA (Mar. 26, 2019), https://geneva.usmission.gov/2019/03/26/u-s-statement-on-laws-potential-military-applications-of-advanced-technology/.

[17] Gayle, supra note 5.

[18] Ministry of Defence, Defence Artificial Intelligence Strategy 53 (2022), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1082416/Defence_Artificial_Intelligence_Strategy.pdf.

[19] Emma Graham-Harrison and Julian Borger, Palestinian death toll in Gaza nears 20,000 with nearly 2 million people displaced, THE GUARDIAN (Dec. 19, 2023), https://www.theguardian.com/world/2023/dec/19/palestinian-casualties-in-gaza-near-20000-with-nearly-2m-people-displaced.

[20] Anwar Mhajne, Israel’s AI Revolution: From Innovation to Occupation, CARNEGIE ENDOWMENT FOR INTERNATIONAL PEACE (Nov. 2, 2023), https://carnegieendowment.org/sada/90892.

[21] Geoff Brumfiel, Israel is using an AI system to find targets in Gaza. Experts say it’s just the start, NPR (Dec. 14, 2023), https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st.

[22] Id.

[23] Yuval Abraham, ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza, +972 MAGAZINE (Nov. 30, 2023), https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/.

[24] Id.

[25] Harry Davies, Bethan McKernan, and Dan Sabbagh, ‘The Gospel’: how Israel uses AI to select bombing targets in Gaza, THE GUARDIAN (Dec. 1, 2023), https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets.

[26] Abraham, supra note 23.

[27] Brumfiel, supra note 21.

[28] Abraham, supra note 23.

[29] Id.

[30] Brumfiel, supra note 21.

[31] Jeremy Scahill, This Is Not a War Against Hamas, THE INTERCEPT (Dec. 11, 2023), https://theintercept.com/2023/12/11/israel-hamas-war-civilians-biden/.

[32] Abraham, supra note 23.

[33] With 20,000+ deaths, Israel wipes out about 1% of Gaza’s pre-war population, TRT WORLD (Dec. 22, 2023), https://www.trtworld.com/middle-east/with-20000-deaths-israel-wipes-out-about-1-of-gazas-pre-war-population-16356722.

[34] Boutin, supra note 1, at 150.

[35] Id.

[36] Brumfiel, supra note 21.

[37] Boutin, supra note 1, at 150.

[38] Gayle, supra note 5.

[39] Id.

[40] The International Criminal Court (ICC), GOV’T OF NETH., https://www.government.nl/topics/international-peace-and-security/international-legal-order/the-international-criminal-court-icc.

[41] Jonathan Kuttab, The International Criminal Court’s Failure to Hold Israel Accountable, ARAB CENTER WASHINGTON D.C. (Sept. 12, 2023), https://arabcenterdc.org/resource/the-international-criminal-courts-failure-to-hold-israel-accountable/.

[42] ‘I will not give up’ on push for Gaza humanitarian ceasefire: Guterres, UNITED NATIONS (Dec. 10, 2023), https://news.un.org/en/story/2023/12/1144622; Baldwin, supra note 11.