Delegating Life and Death: The Role of AI in Modern Warfare

On Friday, 11 October 2024, the University of Rome La Sapienza hosted a conference organized by the International Research Institute Disarmament Archive (IRIAD) and the Italian Peace and Disarmament Network, in collaboration with numerous non-governmental organizations. The event focused on the role of artificial intelligence in modern warfare and on the ethical implications of delegating life-and-death decisions to machines.

Machines are now at the center of the debate, particularly in relation to two ongoing conflicts in which the use of advanced technologies increasingly affects civilians. From semi-autonomous and autonomous drones in the Russia-Ukraine conflict to algorithms used to identify potential enemies in the Israel-Gaza conflict, a critical question arises: are we delegating human decisions to machines?

The Use of Algorithms in the Gaza Conflict

Meron Rapoport, an Israeli journalist with Local Call and +972 Magazine, discussed Israel's use of artificial intelligence in response to the Hamas attack of October 7, 2023. AI systems generated lists of targets, covering both military infrastructure, such as bases and training camps, and individuals, including Hamas and Palestinian Islamic Jihad militants, based on vast amounts of data collected by the state.

The initial list contained over 37,000 individuals, with an estimated error margin of around 10%; on a list of that size, this means roughly 3,700 people potentially misidentified. This margin, combined with the threshold of collateral damage deemed acceptable, produced a serious problem: errors were amplified because human oversight of the automatically generated lists was limited. The key question that emerged was not whether the use of AI was justified, but what the consequences of its deployment were.

One example of the algorithm's devastating impact was the attempt to identify potential Hamas militants on the streets, for instance at charging stations. This led to a dramatic number of deaths and injuries, as many civilians were mistakenly targeted. As Safwat Al Kahlout, an Al Jazeera journalist, pointed out, many victims were innocent people who were simply in the wrong place at the wrong time, highlighting the severe risks of relying on such technologies in conflict zones.

Public Opinion on AI Use in Warfare

Public opinion on the use of artificial intelligence in war emerged clearly at the conference through an analysis presented by Francesca Farruggia of IRIAD. Citing a 2021 IPSOS survey, she noted that 61% of respondents across 28 countries opposed the use of lethal autonomous weapons systems. The main concerns involved the cost of these weapons, the potential illegality of the actions taken, the risk of technical malfunctions, and doubts about accountability in the event of errors. The concern raised most strongly, however, was moral in nature: delegating life-or-death decisions about a human being to a machine raises profound ethical questions that must be addressed.

International Norms on Autonomous Weapons

While 2024 marks the 160th anniversary of the first Geneva Convention, the complex global emergencies we face today are testing the international legal achievements reached so far. One of the main open issues concerns the definition of lethal autonomous weapons. As Rosario Valastro, president of the Italian Red Cross, highlighted, an autonomous weapon does not need to be lethal to be considered illegal; what matters is its capacity to strike indiscriminately. The debate should therefore focus on indiscriminate effects rather than on lethality.

Establishing a legal framework requires effort not only from the UN but also from civil society organizations. Davide Del Monte, president of Info Nodes, emphasized that the rapid pace of technological progress is an obstacle to regulation. At the European level, important steps have been taken by the European Union, as Marco Carlizzi of Banca Etica recalled, citing the first regulation on the subject. However, as established by Article 2, this regulation does not apply to defense systems, even though it otherwise classifies AI applications by their level of risk.

Amnesty International, represented by Tina Marinari, also recognizes the urgency of a global treaty against autonomous weapons. As with the earlier success in banning cluster munitions, collective action against autonomous weapons, which lack empathy and can strike any target indiscriminately, is now more necessary than ever.

Legal and Moral Aspects of Responsibility

Even if killing can never be justified, international law nonetheless establishes key principles such as proportionality, non-discrimination, and the distinction between military and civilian targets. However, when examining responsibility for attacks, especially those involving artificial intelligence, two crucial issues arise: legality and morality. While legality can be addressed within a normative framework, morality is profoundly compromised by the use of autonomous machines that decide who lives and who dies. As Peter Asaro, a professor of history and sociology of science at The New School in the United States, pointed out, the risk is that these autonomous weapons could evolve into new forms of weapons of mass destruction, eroding international law because of the difficulty of assigning accountability.

Asaro underscored the urgent need for a multilateral treaty, distinguishing between two types of framework: prohibited systems, comprising weapons that are unpredictable and uncontrollable, and regulated systems, covering potentially harmful technologies designed primarily for security and defense rather than for killing humans.

by Camilla Levis