Autonomous weapons: Blessing or curse?

If autonomous weapons are not specifically regulated by any treaties of International Humanitarian Law, who is responsible for war crimes committed by these systems?

Sherwyn F F Correia | JUNE 25, 2019, 03:16 AM IST

Stephen Hawking once said, “The discovery of artificial intelligence is either the best or the worst thing to happen to humanity.”

Wars have long been infamous for their brutal casualties and deaths, but scientific advancement and modernisation have given rise to weapons that do not require human control. Unmanned weapons were used as early as 1849, when Austria launched pilotless balloon bombs against Venice. Autonomous weapon systems are weapons that do not require human intervention in their critical functions. A subset of these are Lethal Autonomous Weapons (LAWs).

LAWs are autonomous military robots that can independently search for and engage targets based on programmed constraints and descriptions.

The development of such weapons has several root causes: first, they offer an undeniable advantage in combat; second, they are cheaper to develop and manufacture and reduce human casualties and deaths; and third, there is a long-standing competition among developed countries with advanced defence research and development programmes.

In this backdrop, a few questions would arise. 

Who is supporting the development and use of these weapons?

The United States of America and Russia are among the biggest champions of such autonomous weapons, backed by Israel, South Korea and Australia. China, on the other hand, has taken a very mixed stand: it wants to ban the use of fully autonomous weapons, but not their development or production.

Who is against it?

The African Group, in a statement to the Group of Governmental Experts at Geneva on 9 April 2018, called for a legally binding instrument banning fully autonomous weapons. The Bolivarian Republic of Venezuela, in a working paper on behalf of the Non-Aligned Movement (NAM), the brain-child of former Prime Minister Jawaharlal Nehru and the then Yugoslav President Josip Broz Tito, likewise asked for a legally binding instrument regulating and prohibiting the use of fully autonomous weapons.

Though India is a founding member of NAM, our defence production policies are gradually tilting towards enlisting artificial intelligence in the development of weapons.

Are such autonomous weapons in compliance with International Humanitarian Law?

Autonomous weapons, as defined, are not specifically regulated by any treaty of international humanitarian law (IHL). Two of its rules must nonetheless be adhered to: the rule of distinction and the rule of proportionality.

The former requires parties to a conflict to distinguish between military objectives and civilian objects. The latter prohibits attacks expected to cause civilian harm excessive in relation to the anticipated military advantage; a mission must be aborted if it becomes apparent that the target is not a military objective.

IHL makes it very clear that only combatants can be held responsible for violations of the law; responsibility cannot be attributed to a machine, a computer programme or an autonomous weapon system.

Developments in AI mean that it is possible to imagine a system that produces ‘better’ outcomes than human decision-making. Such a weapon would not actually be making decisions the way we humans do, however strong our desire to anthropomorphise it might be. It has no moral agency.

Without such a moral agent it is not feasible to ensure compliance with international humanitarian law. 

So, the next question to be asked is: who is responsible for war crimes committed by autonomous weapons?

The difficulty of ascertaining who is responsible for war crimes committed with autonomous weapon systems has fuelled legal concerns and debate over their development. According to law researcher Rebecca Crootof, autonomous weapon systems will inevitably commit serious violations of international humanitarian law without any human being acting intentionally or recklessly.

In the absence of such wilful human action, no one can be held criminally liable. A person who intentionally programmed an autonomous weapon system to commit a grave violation of international humanitarian law could, however, be indicted for a war crime, as could one who recklessly deployed a system incapable of distinguishing between lawful and unlawful targets. A commander who ordered that an autonomous weapon system be used in an inappropriate fashion would likewise be directly responsible for its actions.

These are the easy cases. But what about the hard case, in which no individual acts intentionally or recklessly, yet an autonomous weapon system nonetheless takes action that constitutes a grave violation of international humanitarian law?

The need of the hour is for humans to retain control over the critical functions of such weapons so that their use remains in sync with international humanitarian law. One could draw an analogy from the Marvel film ‘Avengers: Age of Ultron’, in which Ultron, an AI system invented by Tony Stark, sets out to wipe out all of humanity; in the end, the Avengers have to fight their own brain-child. Are these warnings in disguise, or have we given birth to our own apocalypse?