The SkyShark, an autonomous drone built in the United Kingdom, is put on display at the Defence and Security Equipment International exhibition in London. Credit: John Keeble/Getty Images
Weapons capable of identifying and attacking targets automatically have been in use for more than 80 years. An early example is the Mark 24 Fido, a US anti-submarine torpedo equipped with microphones to home in on targets, which was first deployed against German U-boats in 1943.

Such ‘first-wave’ autonomous systems were designed to be used in narrowly defined scenarios and programmed to act in response to signals such as the radiofrequency emissions of specific targets. The past ten years have seen the development of more advanced systems that can use artificial intelligence to navigate, identify and destroy targets with little or no human intervention. This has led to growing calls from human-rights groups to ban or regulate the technologies.
Nehal Bhuta, a professor of international law at the University of Edinburgh, UK, has been investigating the legality and ethics of autonomous weapons for more than a decade. He was among the authors of a report on the responsible use of AI presented to the United Nations Security Council last month by Netherlands Prime Minister Dick Schoof.
Bhuta says that autonomous weapons, especially those that are AI-enabled, raise multiple ethical and legal concerns, including determining responsibility for system failures and potentially encouraging the intrusive collection of civilian data. He says there is still time for the international community to agree on principles and regulations to limit the risk, and warns that an arms race could ensue if it fails to do so.
Which legal frameworks and principles currently apply to autonomous weapons systems?
There is no specific legal framework that applies to the use of autonomy or AI in these systems. Under international humanitarian law, based on the Hague Conventions and the Geneva Conventions, which together set out international law on war and war crimes, weapons must be capable of being used in a manner that distinguishes between civilian and military targets. Attacks must not result in disproportionate harm to civilians, and combatants must take precautions to verify that they have the right target and to reduce the risk of civilian harm. These international laws apply to all weapons, including advanced autonomous systems such as the drones deployed by Ukraine in June, which used machine learning to select, identify and strike targets deep within Russia on the basis of preprogrammed instructions.

Professor Nehal Bhuta says it is important for the international community to agree on guidelines regarding the use of autonomous weapons. Credit: Edinburgh Law School
What are the risks associated with autonomous weapons?
Insufficient care in their development and deployment could compromise compliance with the principles of distinction and proportionality. Could the system generate too many false positives when identifying targets? Might an autonomous weapon calculate that large numbers of civilian deaths are an acceptable price to pay when targeting a suspected enemy soldier? We don’t really know yet, because the technology is immature, but these are vast risks. There is also a danger that if a system fails to accurately process incoming data in a rapidly changing environment, it could target the wrong forces or civilians.
To make these systems effective, you have to acquire masses of data, including biometric information, voice calls, e-mails and details of physical movements. That’s a concern if this is done without the consent of those involved. The more you want to do, the more data you need. This creates an incentive to collect data more intrusively.
Who is legally and ethically responsible when autonomous weapons kill?
I think it is likely that some sovereign states will in future deploy weapons that are capable of making decisions to kill. The question is whether countries wish to regulate such systems. Effective legal frameworks require ways of attributing responsibility for violations. With complex autonomous weapons systems, it can become difficult to identify the individuals responsible for failures and violations.
The operators of future systems might not be adequately trained in when to ignore a system’s recommendations. They might also develop automation bias, making them unwilling to question a technologically advanced machine. A system could be systematically biased in how it acquires targets — in which case, responsibility would lie somewhere between the developer and the military officials who authorize its use.
There is a risk that accountability becomes so diffuse that it’s hard to identify the individuals or groups of agents responsible for violations and failures. This is a common problem with complex modern technologies, and I think the answer lies in the adoption of regulatory frameworks for the development and use of autonomous weapons systems.
