Lethal Autonomous Weapon Systems under International Humanitarian Law

In: Nordic Journal of International Law
Kjølv Egeland, International Law and Policy Institute, Oxford University, UK


Robots formerly belonged to the realm of fiction, but they are now becoming a practical concern for the disarmament community. While some believe that military robots could act more ethically than human soldiers on the battlefield, others counter that such a scenario is highly unlikely and that the technology in question should be banned: autonomous weapon systems, they argue, will be unable to discriminate between soldiers and civilians, and their use will lower the threshold for resorting to force. In this article, I take a bird’s-eye view of the international humanitarian law (IHL) pertaining to autonomous weapon systems. My argument is twofold. First, I argue that it is indeed difficult to imagine how IHL could be implemented by algorithm: the rules of distinction, proportionality, and precautions all call for what are arguably unquantifiable judgements. Second, I argue that existing humanitarian law in many ways presupposes responsible human agency.
