Lethal Autonomous Weapon Systems under International Humanitarian Law

In: Nordic Journal of International Law
Author: Kjølv Egeland, International Law and Policy Institute; Oxford University, UK

Abstract

Robots once belonged to the realm of fiction but are now becoming a practical issue for the disarmament community. While some believe that military robots could act more ethically than human soldiers on the battlefield, others counter that such a scenario is highly unlikely and that the technology in question should be banned. They argue that autonomous weapon systems will be unable to discriminate between soldiers and civilians, and that their use will lower the threshold for resorting to force. In this article, I take a bird's-eye view of the international humanitarian law (ihl) pertaining to autonomous weapon systems. My argument is twofold. First, it is indeed difficult to imagine how ihl could be implemented by algorithm: the rules of distinction, proportionality, and precautions all call for what are arguably unquantifiable decisions. Second, existing humanitarian law in many ways presupposes responsible human agency.
