Given the swift pace of technological development, it may be expected that the first truly autonomous weapons systems will become available in the near future. Once deployed, these weapons will use artificial intelligence to select and attack targets without further human intervention. Autonomous weapons systems raise the question of whether they could comply with international humanitarian law. The principle of proportionality is sometimes cited as an important obstacle to the use of autonomous weapons systems in accordance with the law. This article assesses whether the rule on proportionality in attacks would preclude the legal use of autonomous weapons. It analyses aspects of the proportionality rule that would militate against the use of autonomous weapons systems, as well as aspects that would appear to benefit the protection of the civilian population if such weapons systems were used. The article concludes that autonomous weapons are unable to make proportionality assessments at the operational or strategic level on their own, and that humans should therefore not be expected to be completely absent from the battlefield in the near future.
Singer, supra note 4, at 32, describing how US soldiers in Iraq would equip a MARCbot robot with a Claymore anti-personnel landmine and use it as a scout in close combat operations in urban areas. On the capabilities of the Samsung SGR-A1 sentry robot, see for example M. Prigg, ‘Who goes there? Samsung unveils robot sentry that can kill from two miles away’, http://www.dailymail.co.uk/sciencetech/article-2756847/Who-goes-Samsung-reveals-robot-sentry-set-eye-North-Korea.html (last accessed 26 March 2015).
Singer, supra note 4, at 74.
Marra and McNeil, supra note 14, at 47.
See Prigg, supra note 7.
Dinstein, supra note 36, at 5.
Sharkey, supra note 3, at 789.
Singer, supra note 4, at 76–77.
Kellenberger, supra note 19, at 812.
Watkin, supra note 56, at 22.
Watkin 2005, supra note 57, at 19; R. Geiss, ‘The Principle of Proportionality: Force Protection as a Military Advantage’, 45(1) Israel Law Review 91 (2012), at 77.
Geiss, supra note 65, at 77.
Backstrom and Henderson, supra note 17, at 493–494, quoting Kellenberger and Boothby with approval: “It would seem beyond current technology to be able to program a machine to make the complicated assessments required to determine whether or not a particular attack would be lawful if there is an expectation of collateral damage”.
Singer, supra note 4, at 396.
See however Boothby, supra note 42, at 178–179, who rejects, in the context of taking feasible precautions in attack, the suggestion that the “value of the soldier’s life is to be regarded as lower than that of the civilian”.
See the discussion in Geiss, supra note 65, at 73 and 74, referring to Fenrick, Solis and Oeter.
Geiss, supra note 65, at 79.
Geiss, supra note 65, at 88.
See for example Singer, supra note 4, at 409, who uses the example of a manned airplane that is detected by a radar and is permitted, in self-defence, to attack that radar with a missile, whereas dropping a nuclear bomb on the radar station in self-defence could have repercussions up to and including a worldwide nuclear war.
See Grut, supra note 75, at 11.
Backstrom and Henderson, supra note 17, at 494.
Singer, supra note 4, at 396.
Singer, supra note 4, at 408–410. Although the owner of a pet cannot entirely predict the animal’s actions, the owner remains liable for damage it causes to the life, limb or property of others.