
Autonomous Weapon Systems: Legality under International Humanitarian Law and Human Rights

Posted on: 16 May 2017

Our new publication Defending the Boundary analyses the constraints and requirements on the use of autonomous weapon systems (AWS), also called ‘killer robots’, under international humanitarian law (IHL) and international human rights law (IHRL). An accompanying Research Brief summarizes the key findings for policy makers and advocacy groups.

Drawing on case law dealing with other weapon technologies and autonomous systems, the study asks where and when AWS may be used, and what procedural legal requirements govern the planning, conduct and aftermath of AWS use. The use of a ‘sentry-AWS’ to control a boundary, secure a perimeter or deny access to an area, for example along an international border, forms the backdrop to the legal discussion.

A Timely Publication

‘This publication is particularly timely: the UN Convention on Certain Conventional Weapons (CCW) has mandated a Group of Governmental Experts to consider the issue, as views diverge on the circumstances in which it would be lawful to use an AWS and on whether additional law is required to ensure respect for the norms that safeguard humanity’, underlines Maya Brehm, the author of the report.

‘At the national level, several states are also looking at the legality of AWS, including Switzerland, where two parliamentary motions call for an international prohibition of fully autonomous weapons’, she adds.

What’s at Stake?

There is, as yet, no agreed definition of an AWS, but the basic idea is that, once activated, such a weapon system would, with the help of sensors and computationally intensive algorithms, detect, select and attack targets without further human intervention.

The use of such a weapon system can be expected to change how human beings exercise control over the use of force and its consequences. Human beings may no longer be able to predict who or what, specifically, will be targeted, or to explain why an AWS chose a particular target. This raises serious ethical, humanitarian, legal and security concerns.

According to leading researchers in the field of artificial intelligence and robotics, the deployment of AWS will be technically, if not legally, feasible within a matter of years.

The Need to Look Beyond IHL

Scholarly inquiry into the legality of AWS has mostly focused on compliance with IHL rules on the conduct of hostilities. Comparatively little attention has been given to the impact of AWS on human rights protection. This study aims to fill that gap by analysing the constraints and requirements that IHRL places on the use of force by means of an AWS, both in times of peace and during armed conflict, in relation to the conduct of hostilities and for law enforcement purposes.

‘IHL would never be the sole, and often it would not be the primary, legal frame of reference to assess the legality of AWS’, stresses Maya Brehm.

Limited Scope for the Use of an AWS During Armed Conflict

As IHL permits the ‘categorical targeting’ of persons based on their status or imputed membership in a group (e.g. combatants), there is scope for the lawful use of an AWS for the conduct of hostilities.

However, to ensure that targeting rules can be applied in a manner that effectively protects the victims of war, human agents must appropriately bound every attack in space and time, and must retain sufficient control over an AWS to recognize changing circumstances and adjust operations in a timely manner. ‘This calls for active and constant control over every individual attack’, the study concludes.

AWS are Difficult to Reconcile with IHRL

To preserve life, the use of potentially lethal force for law enforcement purposes can only be justified if it is absolutely necessary and strictly proportionate in the specific circumstances. Lethal force cannot be applied automatically or categorically. ‘Human beings must therefore be actively and even personally involved in every instance of force application. AWS are difficult to reconcile with IHRL due to the need to individuate the use of force’, says Maya Brehm.

Furthermore, basing the targeting of security measures on algorithms can be dehumanizing, objectifying and discriminatory. ‘Human involvement in such processes serves as an essential procedural safeguard to uphold human dignity and human rights’, according to Brehm.

To allow an independent assessment of the lawfulness of the use of force, ‘human agents must remain involved in targeting processes in a manner that enables them to explain the reasoning underlying algorithmic decisions in particular circumstances’, notes Maya Brehm.

A Legal Duty to Exercise Meaningful Human Control Can Help Ensure Compliance with IHL and IHRL

While legal norms already regulate and limit the use of AWS, controversies and uncertainties about the applicability and meaning of existing norms diminish their capacity to serve as guideposts. In addition, accommodating new practices within the existing legal framework carries the risk that existing rules are formally preserved but filled with a radically different meaning.

‘An explicit legal requirement to exercise meaningful human control in the use of weapons can help ensure compliance with the norms that safeguard humanity’, stresses Maya Brehm. ‘Action is urgently needed, and the CCW Group of Governmental Experts is well placed to formulate such a requirement.’