
Ensuring Lawful Use of Autonomous Weapons

An Operational Perspective

In: Journal of International Humanitarian Legal Studies
Author:
Camilla G. Cooper Associate Professor, Norwegian Defence Command and Staff College, Oslo, Norway

Abstract

The use of autonomous weapons systems (‘aws’) is the source of extensive discussions within the international legal community and beyond. After years of discussing definitions, the discourse is slowly moving on to discuss aws in light of existing law of armed conflict (‘loac’) rules. This article aims to support these discussions by providing a military legal perspective. aws offers great potential benefits to both soldiers and civilians, and control mechanisms already in place for military operations may be employed to define when and how aws can lawfully be used. aws can reduce the exposure of soldiers to dull, dirty and dangerous environments and the risk of incidental civilian harm. To exploit these potentials and ensure legality, regulators need to understand how military forces employ and control the use of force to support their operations, and military planners and decision-makers need to understand the limits of and possibilities within loac.

1 Introduction*

With significant advances in military technology, such as the development of autonomous weapon systems (‘aws’), follows the question of how best to exploit the new possibilities it offers. Will it require changes to how military forces operate, or can it be integrated into current doctrines and practices? This may sound like an operational rather than a legal question, but in fact it is both. In order to maximise the potential offered by new technology, it is necessary to understand the new weapon systems well enough to be able to predict their outcomes with a sufficient degree of certainty. Only that way can the military forces employing the technology be confident that the weapons system will help ‘accomplish the mission effectively and efficiently’1 within the margins of applicable law. This includes avoiding unintended and undesirable incidents, as these will likely undermine operational effectiveness, for instance because they create risk for own forces, alienate those the operation is intended to protect, or directly counteract the operational plan. Furthermore, for most armed forces, it is a value in itself to protect the civilian population from unnecessary harm and to generally conduct operations in a lawful manner.

During armed conflict, the use of force is regulated by the law of armed conflict (‘loac’), and aws are no different in this regard. As will be further elaborated on in Part 2, autonomy is perceived in this article as a system’s ability to behave in a desired manner or achieve the goals previously imparted to it by its operator, without the need for further human interaction, and aws are weapon systems which involve autonomous technology in one or all of the elements of the decision-making cycle concerning the use of weapons.

There are currently no rules of international law specifically dealing with aws as it has not been possible to come to an agreement on what constitutes autonomy.2 As Jenks notes, ‘the international community cannot even agree about what they disagree about’.3 This means that, for the time being, States are left to do their own analysis of what lawful use of aws looks like, and although some State positions have been released, they are generally too vague to provide operational guidance to military commanders and their legal advisers.4 As there is already considerable State practice on the use of drones, the need for further guidance is particularly great for the use of autonomous ground systems. These aws will potentially operate in civilian populated areas and therefore interact with civilians. They will also face challenges like physical obstacles limiting their ability to obtain more favourable positions and the risk of being harmed or tampered with.

This article argues that the general rules of loac cover the need for regulating aws to a large extent. However, the element of autonomy does challenge the law in some areas, particularly the evaluative rules in loac such as necessity, feasibility, excessiveness, and good faith. These rules are important because the fog of war requires flexible rules that can be adapted to the situation, and they recognise that access to information will vary greatly.

One of the questions raised in relation to the use of aws is whether these systems will be able to make such judgemental decisions. However, this query is based on the misconception that aws replace human decision-making and must therefore be expected and assessed as if they are humans.5 This process of attributing human characteristics or behaviour to an object is known as anthropomorphism, and in the case of aws, it skews the understanding of the role of such systems. aws are merely a modern type of weapon system, and autonomy is a function in such systems. The existence and use of autonomy do not release the humans involved from their legal obligations to ensure that the use of force is lawful;6 rather, aws ‘are machines run by computers, and computers generally do what they are programmed to do.’7 As a result, the responsibility for the use of aws is distributed according to the traditional loac rules on individual and command responsibility. The use of aws nonetheless requires us to apply loac in new ways because it shifts the need to carry out the legal analysis from the execution phase to the planning, and even programming, phases. Furthermore, as aws may adapt, systems and procedures must be put in place to ensure their continued legality.

This article will analyse some of the practical challenges relating to distinction and precautions in attack in light of the loac as this law stands today. As will be shown, loac is constructed in a manner which leaves no gaps or lacunae, thereby ensuring that all new technology is somehow regulated. However, there is a clear need for guidance on how to use aws lawfully. In particular, it is important to ensure that the development and use of aws does not result in greater harm to civilians.

2 aws and Military Use

One of the challenges in discussing autonomous weapon systems is that there is no common definition or understanding of the characteristics of such systems. Autonomy may be found in various degrees and components, and discussions on the topic include both systems that exist now or are expected to exist in the near future, and systems which are now technologically impossible.

As mentioned above, the approach taken in this article is that autonomy is ‘the ability of a system to behave in a desired manner or achieve the goals previously imparted to it by its operator, without needing to receive the necessary instructions from outside itself on an ongoing basis’.8 This means that the description of a system as autonomous does not preclude the possibility of human oversight or override, should the need arise,9 or as the US definition stipulates: ‘This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.’10 Furthermore, it means that autonomy is not perceived as synonymous with artificial intelligence (‘ai’), although the two are closely related in many areas, and ai is a prerequisite for highly autonomous systems.

aws are weapon systems which employ such technology in one or all of the elements of the decision-making cycle concerning the use of weapons. In military literature, this cycle is commonly summarised in terms of the ‘ooda-loop’, a tool developed by usaf Col. John Boyd,11 and degrees of autonomy have been described in relation to the loop – whether the human is ‘in’, ‘on’ or ‘out of’ the loop.12 ooda is short for ‘observe, orient, decide, and act’, and the loop is intended to improve and speed up decision-making.13 According to Boyd, the recipe for a successful operation is to:

Observe-orient-decide-act more inconspicuously, more quickly, and with more irregularity as basis to keep or gain initiative as well as shape and shift main effort: to repeatedly and unexpectedly penetrate vulnerabilities and weaknesses exposed by that effort or other effort(s) that tie-up, divert, or drain-away adversary attention (and strength) elsewhere.14

This concept has since been further developed, and current military doctrines describe the targeting decision cycle as: ‘Find, Fix, Track, Target, Engage, Exploit, Assess, abbreviated to F2T2E2A’.15 The main difference between this and the ooda-loop is the requirement to track the target to ensure continued positive identification until the time of attack, and the emphasis on the need to exploit the effects of the attack and assess the positive and negative results of the attack. With regard to the use of aws, this longer targeting cycle illustrates the breadth of tasks that are involved in an attack and also the variety of tasks where autonomy may be employed to enhance the process. However, the simplicity of the ooda-loop makes it more suitable for understanding aws.

In manual systems, all four functions in the ooda-loop are carried out by people. According to McFarland, ‘the purpose of developing autonomous systems is to assign part or all of the loop to a machine in order to realize some operational advantage such as greater speed or endurance, lower cost or less risk to an operator’s life.’16 aws can, among other things: deal with large amounts of information in a short time; scan areas for preapproved targets such as buildings, vehicles or persons; and track the target for longer periods of time without being detected. They may also be able to remain in the area after an attack has been carried out, collecting information which is important for the Battle Damage Assessment (‘bda’). Furthermore, the aws can maintain readiness during prolonged operations, and small aws can carry out tactical tasks which would otherwise require a fully equipped unit with necessary logistical and medical support. This means that aws have the potential to compensate for many of the causes of human errors, such as stress, fear, sleep deprivation, prejudice and cognitive overload.17 Although it may be easier to accept human errors than machine errors leading to the death of innocent people, this is not reflected in law. As will be further explained, the requirement is, and should be, to choose the means and methods that entail the least risk of civilian harm.18
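To make the ‘in’, ‘on’ and ‘out of’ the loop distinction more concrete, the following minimal sketch, included purely for illustration, models how the four ooda functions might be divided between a human operator and a machine. All names and labels are hypothetical and are not drawn from any actual system or doctrine.

```python
# Illustrative sketch only: a hypothetical model of how the four OODA
# functions might be split between a human operator and a machine,
# mirroring the "in / on / out of the loop" distinction discussed above.
from dataclasses import dataclass
from enum import Enum


class Actor(Enum):
    HUMAN = "human"
    MACHINE = "machine"


@dataclass
class OodaAssignment:
    observe: Actor
    orient: Actor
    decide: Actor
    act: Actor

    def level_of_autonomy(self) -> str:
        """Rough, notional label: the more stages delegated, the higher the autonomy."""
        stages = (self.observe, self.orient, self.decide, self.act)
        if all(stage is Actor.MACHINE for stage in stages):
            return "highly autonomous (human out of the loop)"
        if self.decide is Actor.HUMAN:
            return "lower autonomy (human in the loop for the decision stage)"
        return "supervised autonomy (human on the loop)"


# Example: the system observes, orients and acts, but a human takes the decision.
assignment = OodaAssignment(Actor.MACHINE, Actor.MACHINE, Actor.HUMAN, Actor.MACHINE)
print(assignment.level_of_autonomy())
```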

As will be further elaborated on below, the technological improvements offered by aws will also benefit the civilian population through the enhanced ability to take precautions in attack. Nonetheless, the primary motivation for States to invest in new technology for military use appears to be the potential for improving the conditions for own forces. When drone production was scaled up in the late 1980s, the expression ‘dull, dirty and dangerous’, now known as the three Ds, was introduced to explain their purpose:

When used, uav s [unmanned aerial vehicles] should generally perform missions characterized by the three Ds: dull, dirty, and dangerous. Dull means long-endurance missions which, in the future, could continue for several days. Dirty means jobs such as detecting chemical agents and their intensity; certainly a good manned mission to avoid if possible. Dangerous missions for unmanned vehicles are numerous and growing. Two that come to mind, however, are reconnaissance deep behind enemy lines and suppression of enemy air defenses.19

The combination of unmanned platforms and autonomous control capabilities not only enables aws to operate in areas and situations which are considered too dangerous for humans, but also enables them to operate in the desired manner or achieve the intended purpose even if the situation changes or human interaction with the aws becomes difficult or undesirable.20 This may be the case, for example, if the communication link to the operator is lost or entails a security risk.

When determining the ability of aws to be used in compliance with loac, it is useful to distinguish between low and high levels of autonomy:

A highly autonomous system is one that can execute most or all of the ooda loops required to achieve some goal, using only high-level instructions from its operator as guidance in the decision stage of a loop, such that the ‘nearest’ human functions similarly to a commander. A system with a lower level of autonomy is only able to execute lower-level loops, or only certain parts of a loop, and so must work together with a human operator to achieve a high-level goal, casting that person as more of a collaborator.21

From a loac perspective, it is the highly autonomous systems and those systems where autonomy is used in the crucial stages such as making the determination that a person or object is a lawful target, or how or where to carry out the attack, that are of particular interest.

In order to understand how aws can be used in a lawful manner, it is also necessary to understand how force is used in military operations. Most use of force is controlled through the targeting process, known as ‘joint targeting’. Joint targeting is the process of selecting and prioritising targets and finding the appropriate way of dealing with them in order to achieve desired effects in accordance with the operational plan. According to nato doctrine, ‘it links the tactical actions to strategic end state via operational objectives by engagement of prioritized targets.’22 Targeting is either dynamic or deliberate.23 The level of detail that it is possible to plan ahead of the attack determines whether the process is deliberate or dynamic. Dynamic targeting is the use of force24 against targets which are within a preapproved target set, but where the location or exact identity of the target is not known in advance. Deliberate targeting, also known as pre-planned targeting, is the use of force against known targets in known locations, thereby enabling detailed planning in advance of executing the attack. In addition, force may be used in combat engagement situations, for example where a unit or camp is attacked, or about to be attacked, by opposing forces and force is used in response.25 Finally, sometimes force is used in order to secure freedom of movement, such as moving a vehicle which is blocking the road or destroying barbed wire to enable entry into an area. These are not attacks as such, but destruction which may be justified if it is based on imperative reasons of military necessity.26

To ensure that all use of force supports the operational plan and Commander’s objectives, the parameters for using force are controlled through various mechanisms such as rules of engagement, target lists and methods for collateral damage estimation. As will be explained, these control mechanisms will play an important role in ensuring the lawful use of aws.27

The scope for employing aws in a particular situation must be analysed on the basis of the complexity of the task, the complexity of the operational environment, and the need for human interaction. The possibility for ensuring lawful use of aws is greater the more information is known in advance or the simpler the task is, while at the same time, the need for autonomous technology is greater when there is less time or opportunity for humans to plan everything in detail and convey this to a weapons system. Further guidance on how to ensure legality in situations where humans do not have the ability to specify all details regarding an attack, and instead rely on the benefits of autonomous technology, is therefore crucial if we are to exploit the full potential of aws. This is the focus of the remainder of this article.

3 The Choice of Means and Methods

According to Additional Protocol i (‘ap i’) Article 35(1), ‘the right of the Parties to the conflict to choose methods or means of warfare is not unlimited’.28 All means and methods of warfare are regulated by loac, and the rules cover both their nature and use. The legality of the inherent characteristics of the means and methods of warfare are governed by both general and specific rules, commonly known as weapons law. The general rules are the prohibition on means and methods of warfare which, by their nature, are expected to cause unnecessary suffering or superfluous injury on combatants,29 and the prohibition on means and methods that cause indiscriminate harm to civilians.30 Furthermore, means and methods which are ‘intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment’31 are also prohibited. These general rules are complemented by treaties regulating or prohibiting the use of specific weapons which are unlawful per se or where practice has shown that there is a considerable risk of unlawful use. Examples include the prohibition on the use of anti-personnel mines, biological and chemical weapons, and certain types of cluster munitions.

Both the inherent legality and use of aws must be assessed in the light of these rules. The initial assessment of legality of new weapons, means or methods of warfare is referred to as the ‘Article 36 review’, named after the ap i requirement to review means and methods to ensure they comply with international law.32 It is intended to ensure that all means and methods that are made available to the military are lawful to use and that any necessary limitations to ensure legality are imposed. Admittedly, many States have some way to go to fully integrate this obligation into their acquisition systems, in particular the novel mechanisms required to test and evaluate weapons with autonomous functions.33 For instance, if the aws employs machine learning and is therefore expected to change how it operates, the Article 36 review must be repeated periodically to ensure continued lawful use,34 but it is not clear how often such reviews must be repeated. Furthermore, as Saunders and Copeland explain, ‘States must consider what technical standards an aws must meet in terms of trust, predictability, explainability, and reliability to comply with the law.’35 aws therefore pose new challenges to an already complicated process. However, an in-depth study of these challenges is beyond the scope of this article.36

In the choice of means of warfare for a particular target, the expected accuracy of the weapon is a central consideration. Because the aws may both select targets and attack them, the accuracy of an aws relates both to the ability to determine that a target is within the set it is programmed to strike, and the ability to strike the actual target. Both aspects must be taken into account when assessing the suitability of employing an aws for a given attack or operation.37 According to McFarland, ‘that requires a measure of accuracy which includes the overall behaviour of the weapon system from the time an operator directs it at a target or targets, not just the behaviour of the final weapon component’.38

Those calling for a ban on aws commonly refer to the Martens Clause in ap i. This is intended to ensure that there is no lacuna in which new technology can be used uncontrollably: ‘In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.’39 The clause is said to be important where new technologies challenge the existing rules, such as cyberoperations and aws.40

However, the Martens Clause should be used with caution as there is no general agreement on its meaning or how it should be applied in practice. It will therefore not be relied upon in this article. Although humanitarian considerations must be balanced with military necessity, this balance is already set out in existing rules. Furthermore, the public perception towards new technology is not uniform and may be based on fears relating to technologically unrealistic expectations influenced by films such as the Terminator,41 making ‘the dictates of public conscience’ an ambiguous basis for limiting the use of aws. Or in the words of Arkin: ‘Let us not stifle research in the area or accede to the fears that Hollywood and science fiction in general foist upon us.’42 Instead, it is important to clarify how existing rules can be used to govern the use of aws in a manner which maximises their potential for achieving the loac goal of protecting victims of war.

loac requires the parties to ‘take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects’.43 This means that aws cannot be used to conduct attacks if they are expected to cause more civilian harm than less technologically advanced weapons with direct human control.44 It also means that in circumstances where the aws is expected to perform better than humans, States may in fact be required to use them.45 The discussion on aws is to a large extent influenced by the perceptions that aws are incapable of complying with loac and that humans, unlike aws, can be expected to always comply with loac. As Trabucco and Heller point out, and as will be further elaborated on in this article, neither assumption is necessarily true.46

Finally, if the aws does not operate as expected, inter alia due to unexpected bias or technical errors, those involved in their use may not be liable for the error, but they would be required to cease using the system until the error has been corrected.47

4 The Ability of aws to Adhere to the Principle of Distinction

Regardless of the weapon used, the initial determination when using force in armed conflict is whether the intended target is a lawful object of attack. This follows from the principle of distinction, which according to the International Court of Justice (‘icj’) is one of the core principles of loac.48 The principle requires parties to a conflict to take constant care to spare the civilian population, civilians, and civilian objects in their conduct of military operations49 and ‘at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives’.50 The principle and its corresponding rules require that before carrying out an attack, the person or object of attack is ascertained as a lawful target in accordance with loac, otherwise the attack cannot be carried out.

The principle of distinction is enforced, amongst others, by the prohibition on indiscriminate weapons and attacks. Under weapons law rules, there is an obligation to abstain from using weapons which are not capable of being directed at a lawful target or whose effects cannot be limited to the lawful target.51 The ability to comply with the principle of distinction is central to the abovementioned Article 36 review. Weapons which are not inherently indiscriminate must nonetheless be used in a discriminate manner, meaning they may only be directed at lawful targets and not cause excessive harm to civilian persons, civilian objects or the civilian population (this is known as targeting law).52

Usually, the assessment of whether a weapon is inherently unlawful is undertaken at the development or acquisition stage as part of the Article 36 assessment, while the selection of targets is considered part of the targeting process or combat engagement. However, where the aws is programmed to select targets to attack, even if this is most likely from a given set of target categories, the ability to accurately select targets must be included in the Article 36 assessment,53 giving the Article 36 review a more central role than usual in ensuring loac compliance.54 If the aws is deemed indiscriminate, either in general or in certain areas, its use would be prohibited in such circumstances.55 For example, its use may be prohibited in civilian populated areas, but lawful in areas where the military objectives are clearly separated from civilians, as is the case for the use of incendiary weapons.56

In order to comply with the principle of distinction, the aws must therefore be programmed with the ability to recognise who and what is a lawful target. It is common practice to use preapproved target lists to control the categories of targets which military forces are permitted to attack in a particular conflict, and this system is useful and important for controlling the use of force by aws. Lawful targets are military objects, combatants, and civilians directly participating in hostilities, and the target lists help clarify what and who will fall into these categories in a concrete armed conflict where soldiers are operating, or aws are to be used. For aws, such programming would entail that anyone or anything not falling into the target categories will not be a permissible target of attack. By setting these parameters, it is, in other words, humans, not systems, that determine who and what is a potential lawful target, even if it is the aws which selects the actual target of attack.57 Those involved in their use must therefore verify, amongst others during the Battle Damage Assessment, that the systems only select lawful targets. This is particularly important if the aws is expected to have the ability to adapt through machine learning.
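By way of illustration only, a preapproved target list could operate as a hard gate in the system’s software: anything not matching a human-approved category is treated as a non-target by default, so that it is the human-defined list, not the system, which determines what may potentially be engaged. The following minimal sketch assumes hypothetical category names and a notional confidence threshold; it is not drawn from any actual weapon system.

```python
# Illustrative sketch, not an actual weapon-control implementation.
# Category names and thresholds are hypothetical assumptions.
APPROVED_TARGET_CATEGORIES = {
    "main_battle_tank",
    "self_propelled_artillery",
    "military_radar_site",
}


def is_permissible_target(detected_category: str, confidence: float,
                          min_confidence: float = 0.95) -> bool:
    """Return True only if the detection matches a preapproved category with
    at least the required level of certainty; everything else is treated as a
    non-target by default."""
    if detected_category not in APPROVED_TARGET_CATEGORIES:
        return False
    return confidence >= min_confidence


# A civilian vehicle is never a permissible target, regardless of confidence.
assert is_permissible_target("civilian_truck", 0.99) is False
# A tank detected with high confidence falls within the approved set.
assert is_permissible_target("main_battle_tank", 0.97) is True
```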

Next, the aws must be expected to be able to locate and identify targets in the operational theatre. The natural first step in a targeting cycle is therefore to gather information about the intended target and its surroundings. Some targets are easier to identify than others, such as military objects like military type weapons or vessels, vehicles, and aircraft, and there is therefore a substantial scope for using aws to attack such targets. Here the challenge will not be distinction but rather the legal considerations required to be made when carrying out the attack, such as the duty to avoid harm to civilians and to cancel attacks expected to cause excessive harm to civilians.

For other objects, their status as military objectives and hence lawful targets depends on their current location, whether their purpose or use is making an effective contribution to military action and whether the attack is expected to offer a definite military advantage.58 Both assessments require an understanding of the object’s role in the operation which may be complicated to program into an aws.59 For example, a civilian bridge may become a lawful target if it is expected to give an effective contribution to the opposing forces’ freedom of movement, but once the opposing forces have passed the bridge, the assessment of whether it meets the threshold of becoming a military objective must be carried out again. Similarly, a civilian building would become a lawful target if it is used as a temporary military headquarters but will regain its civilian protection once the troops leave that building or area.

The International Committee of the Red Cross (‘icrc’) has proposed that aws should never be used against these so-called dual-use objects, whereby civilian objects become militarised for a shorter or longer period of time.60 However, the need to impose time and space limitations on the attack of dual-use objects within the approved target sets is not unique to aws; soldiers will also need further guidance on these matters. Amongst others, the period in which a dual-use target may be considered a lawful target may be restricted. The need to impose limitations on the use of force in certain circumstances to ensure its lawfulness is, in other words, nothing new and does not mean that the intended means or method is inherently unlawful.

The use of force against persons raises similar but nonetheless different issues. The aws must be technically able to make the distinction between a combatant and a civilian, and between protected civilians and those who have lost their protection from attack on the basis of their direct participation in hostilities.61 The former division is the easier: combatants usually wear uniforms, and the protected persons within the military, such as medical and religious personnel, are expected to wear a visible protective emblem, commonly the red cross on a white background.62

A more challenging determination to make, for both soldiers and those who program or employ aws, is whether there is sufficient certainty that a person directly participates in hostilities.63 The concept of direct participation in hostilities has been subject to extensive discussion, inter alia sparked by interpretive guidance on the topic issued by the icrc in 2009.64 The disagreements concern both the question of how long the protection is lost, especially whether it can be lost for longer periods of time, and the types of acts which may amount to direct participation. The programming of the aws will therefore have to reflect the national positions on these questions, and if used in multinational operations, those involved must be aware of differences in national interpretations.

Direct participation in hostilities as a basis for targeting is particularly important in non-international armed conflicts where the State’s opponents are not combatants, and it is also important in determining who must be taken into account in a proportionality assessment. Due to the complexity of the determination, the possibility for the lower levels of command and soldiers to attack such persons has in many cases been limited to situations in which there is little doubt. Examples have included the use of military type force like employing a rocket-propelled grenade, production of improvised explosive devices (‘ied s’) or involvement in combat engagement. Similar limitations may be included either in the programming of the aws or in mission specific rules of engagement (‘roe’) setting out the scope for applying aws. Identifying typical cases of direct participation in hostilities must be done on the basis of the known tactics, techniques and procedures (‘ttp s’) of the armed group in question as well as the pattern of life of the civilian population in the area.

Being able to identify whether a person has become hors de combat is another challenge, both for soldiers and for those programming and using aws. In the heat of combat, where shooting is still ongoing, the troops involved will be entitled to return fire even if there may be injured personnel on the other side. The principle of proportionality does not apply to this category of persons, as it regulates the relationship between military advantage expected to be gained from an attack on the one hand and the anticipated harm to protected civilians and civilian objects, not persons who have become hors de combat, on the other.65 Instead, combatants are protected by the prohibition on unnecessary suffering and superfluous injury and the rule that persons who become hors de combat are no longer lawful targets.66 This is yet another example of subjective and context-dependent loac rules which create challenges for programming into an aws.67 It is therefore necessary to impose a limitation to ensure that the combatant is still a lawful target, such as requiring that the person is still firing his or her weapon. Furthermore, the munition included in aws intended to be used against persons must comply with the prohibition on unnecessary suffering and superfluous injury and not be among the types limited to objects, as is the case with explosive or inflammable projectiles.68 The need for such limitations must be determined in the Article 36 assessment of the aws and its munitions.

Although the challenges in complying with the principle of distinction are not unique to aws, in the case of aws, there will be a time delay between the assessment of the legality of a target category and the time of engagement of the target which must also be taken into account. This time delay will primarily affect the assessment of proportionality rather than distinction, but the status of a person or object as a lawful target could, in some cases, change in a relatively short time. The use of a dual-use object can change, or a combatant may become hors de combat. It is therefore not sufficient to assess whether the target is lawful at the time of programming the aws or the decision of employment. This potential for change must be taken into account when setting the period in which, and defining the parameters within which, the aws are permitted to operate. For example, dual-use facilities should only be on the list of approved targets for such time that it is reasonable to expect the use to still be ongoing.
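One way to operationalise such time limits, sketched below purely for illustration (the target name and duration are hypothetical assumptions), is to attach a validity window to each approved dual-use target so that engagement is only possible while the militarised use can still reasonably be expected to be ongoing; outside that window, a renewed human assessment would be required.

```python
# Illustrative sketch with hypothetical names: each approved dual-use target
# carries a validity window reflecting how long its military use can
# reasonably be expected to continue. Outside that window the object is
# treated as civilian again unless the assessment is renewed by humans.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ApprovedTarget:
    name: str
    approved_at: datetime
    valid_for: timedelta  # how long the militarised use is expected to last


def may_engage(target: ApprovedTarget, now: datetime) -> bool:
    """A target may only be engaged while its approval is still valid."""
    return target.approved_at <= now <= target.approved_at + target.valid_for


bridge = ApprovedTarget(
    name="river_crossing_bravo",
    approved_at=datetime(2024, 5, 1, 6, 0, tzinfo=timezone.utc),
    valid_for=timedelta(hours=6),  # notional duration of the expected crossing
)

# Within the window the bridge remains on the approved list ...
print(may_engage(bridge, datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)))   # True
# ... but once the expected use has passed, engagement requires a new assessment.
print(may_engage(bridge, datetime(2024, 5, 1, 14, 0, tzinfo=timezone.utc)))  # False
```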

Even though the principle of distinction is an absolute rule, and considered by the icj to be jus cogens, one of the realities of war is that absolute certainty in many, if not most, cases will be impossible to achieve. As Schmitt and Widmar explain: ‘doubt is a persistent and pervasive factor in combat’.69 This is reflected in the formulation of the rules in loac, especially in the requirements for how far the parties are required to go to adhere, amongst others, to the distinction rule. As mentioned above, the warring parties are required to take constant care to spare civilians. In relation to targeting, rather than focusing on the degree of doubt that may be acceptable when making decisions during the fog of war, loac rules specify what taking constant care to spare the civilian population entails. Those who plan or decide upon attacks are required to do ‘everything feasible’ to verify that a target is lawful before attacking and to take ‘all feasible precautions’ in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian harm.70 This applies to the use of both manned and unmanned systems, and irrespective of the technology involved.

The term ‘feasible’ is generally understood as meaning ‘that which is practicable or practically possible, taking into account all circumstances ruling at the time, including humanitarian and military considerations’.71 Relevant factors in this determination include risk to own forces or other security risks; the expected humanitarian benefits from the precaution; and resource considerations such as the availability of expensive weapons or ammunition, or alternative approaches or weapon systems.72 The decision to carry out the attack must be ‘reasonable’, meaning that a reasonable attacker in similar circumstances would, when faced with the information reasonably available at the time, make the same decision.73 Phrased differently, the decision must be based on ‘common sense and good faith’,74 and if this was the case, the attacker has acted lawfully, even if the conclusions later proved to be wrong. This combination of subjective honesty and objective reasonableness is commonly known as the ‘Rendulic rule’, which originated from the Hostage Case.75 It is reflected in military manuals, such as the U.S. dod manual: ‘In assessing whether the obligation to take feasible precautions has been satisfied after the fact, it will be important to assess the situation that the commander confronted at the time of the decision and not to rely on hindsight’.76 As mentioned above, in case of system malfunctioning, a distinction must be made between the initial use of aws revealing errors such as bias, and continued use without renewed testing. Only the latter will be within what is considered reasonable to expect commanders and staff to be aware of and therefore be held responsible for.

This means that those planning and deciding an attack must collect information about the target, and based on their assessment of the information reasonably available at the time, only order the attack to be carried out if they honestly believe that the target is lawful. In addition, the necessary precautions set out below concerning the execution of the attack must be taken, such as doing everything feasible to avoid or at least minimize harm to civilians and adhering to the limits of proportionality (see below). If the attack is to be carried out by an aws, those who decide and plan the attack will have a duty to ensure that the system can comply with these requirements in the context it is being used. An aws must be capable of sufficiently ascertaining the status of an object or person as a lawful target before engaging it, and its use must be limited to those circumstances where this can be achieved.77 As Trabucco and Heller explain, only allowing a weapon to autonomously target those categories ‘that are, without question, targetable’,78 as suggested by Sassoli, ‘might be [a] unsatisfying [fix] in terms of aws’s effectiveness, but it would essentially guarantee that aws will not violate the principle of distinction’.79

Furthermore, the need for human control must be assessed. As McFarland explains: ‘Wherever a particular aws is to be used in a particular attack, direct human involvement is needed to whatever extent the autonomous capabilities of the aws are inadequate to abide by all legal constraints applicable to that attack.’80 This requires an understanding of how the aws functions, including its abilities and limitations. Only then will it be possible to determine which level of human control is legally required.81

As a result, the use of aws requires armed forces to think differently in order to comply with the principle of distinction. First of all, the aws can only be used if the persons involved have sufficient confidence in the system to enable them to honestly believe that the system will be able to comply with loac.82 Provided this is the case, the operators and their commanders must take all feasible precautions to ensure that aws are provided with a set of lawful targets and that the target set remains legally valid for the time and space they decide to use it, and that the aws remains able to comply with its given parameters despite potential machine learning. As explained, due to the complexities of distinction, the scope for using aws against persons is smaller than for objects, and military objects are easier to program than dual-use ones. The decision to employ an aws to attack a given target without further need for human involvement must be taken on the basis of the aws’ ability to locate, identify and hit the target. If the aws is deemed sufficiently able to do so, the requirement to take feasible precautions with regard to distinction has been met. This applies even if the aws proves to malfunction – if the user honestly believed it would function lawfully, and this belief was reasonable, he or she cannot be held responsible for the malfunctioning. However, continued use of the aws would depend on a reassessment of its ability to operate lawfully.

5 Risk of Civilian Harm and Proportionality

The next step in the process of ensuring that an attack is carried out in a lawful manner is to assess the risk of harm to civilians and whether this may be reduced. In order to achieve this, the aws must be programmed to identify whether there are persons or objects in the vicinity of the target that are not within its approved target set (i.e. non-targets). aws which are not sufficiently able to do this can only be used in areas where civilians are not expected to be present.83

As explained above, the ability to collect and analyse large amounts of information in short time periods is one of the strengths of autonomous technology, as is the ability to operate in areas that are too dangerous or difficult for soldiers. Furthermore, they will not become so stressed or exhausted that they lose concentration and miss vital information or make human errors. Although discussions surrounding aws are commonly based on an implicit assumption of human superiority, as Trabucco and Heller explain, ‘the superiority of human judgment is difficult to reconcile with statistics that indicate civilian death and destruction are endemic to modern armed conflict. Nearly half of all civilian casualties during the war in Afghanistan, for example, were caused by human misidentification.’84 As a result, the use of such technology may enhance the ability of States to comply with loac and protect civilians.85 However, it is important to be aware of any limitations inherent in the information collection systems. For example, the aws may be encoded with a percentage level of certainty required before reporting on the existence of what it is programmed to look for. As a result, the fact that no persons are reported to be in the area does not preclude the possibility of people being there undetected.86

If the identified non-targets are within the area expected to be affected by the aws’ munition and civilian losses must therefore be anticipated, the next question is whether it is feasible to reduce the risk of harm.87 Certain persons and objects are entitled to enhanced protection, meaning that the threshold for permitting any harm to them is higher. This includes hospitals, medical personnel, cultural property, and objects essential to the survival of the civilian population. These may be programmed into a no-strike list, and the aws can be programmed to cancel any attack expected to harm anyone or anything on that list.
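A no-strike list lends itself to a simple, absolute rule, as the following illustrative sketch suggests. The names, coordinates and effect radius are notional assumptions, not real data: the point is only that if anything on the list falls within the expected effect area of the munition, the attack is cancelled outright rather than subjected to a proportionality calculation.

```python
# Illustrative sketch only; names, coordinates and radii are notional.
from math import dist

# Objects entitled to enhanced protection, entered on a no-strike list
# (positions given as hypothetical grid coordinates in metres).
NO_STRIKE_LIST = {
    "district_hospital": (1200.0, 450.0),
    "cultural_heritage_site": (980.0, 1310.0),
}


def attack_permitted(aimpoint: tuple[float, float], effect_radius_m: float) -> bool:
    """Cancel any attack whose expected effect area would reach an object on
    the no-strike list; otherwise the attack may proceed to the next legal
    check (for example, proportionality)."""
    return all(dist(aimpoint, location) > effect_radius_m
               for location in NO_STRIKE_LIST.values())


print(attack_permitted((1500.0, 500.0), effect_radius_m=100.0))  # True: hospital ~304 m away
print(attack_permitted((1250.0, 470.0), effect_radius_m=100.0))  # False: hospital ~54 m away
```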

For other types of non-targets, the anticipated civilian harm must be assessed in relation to the military advantage expected to be achieved. In what is known as the proportionality rule, loac prohibits attacks ‘which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated’.88 This proportionality assessment must be done on a case-by-case basis, and must be undertaken both during the planning or decision stage and again at the execution stage if the need arises.89

It is a particularly complex rule, requiring a balancing of anticipated harm to non-participating civilians, with the expected ‘direct and concrete military advantage’ gained by attacking the military objective. The assessments of the military advantage sought to be achieved and of how this is weighed against the civilian harm anticipated are both evaluative and subjective, requiring knowledge and understanding of the operation and the role the proposed attack will play in the larger plan. Such understanding and assessments are difficult, if not impossible, to program into a system, particularly when recalling the abovementioned requirement to apply ‘common sense and good faith’ in such determinations. This does not preclude the use of aws, but as explained in relation to Article 36 assessments, it will entail limitations on its use to ensure legality.90

Although some argue that future aws will become technically able to carry out the proportionality analysis,91 at the moment, the proportionality assessments must be made before using the aws, when defining its scope of operation. The responsibility for ensuring an aws does not carry out indiscriminate attacks remains with those who plan and decide upon attacks, regardless of the weapon used.92 If the decision is made to use aws to carry out attacks, those who plan and decide upon the attacks must honestly believe that the system is capable of accurately locating and identifying targets within the preapproved target list, identifying the presence of non-targets and functioning in a manner which complies with the proportionality principle.93 If they do not, they must either use a different weapon or increase the degree of human control to ensure lawful use.94 Because these beliefs are difficult to second-guess, when assessing whether it was lawful to use an aws in a given situation, the focus is in practice on whether the attacker had taken sufficient precautions before the attack in order to ensure it was lawful, including gaining as much information as practically possible.95 For the aws, this means that the decision to use the aws will be assessed on the basis of the procedures and parameters applied to ensure the aws had the right information and that its means were assessed as suitable to the task.

One way to exert human control over the proportionality assessment is to program the aws to employ the Collateral Damage Estimation (cde) methodology. cde is a method used to control the use of force in situations where incidental civilian harm is anticipated, amongst others by raising the target engagement authority to higher levels of command as the risk of civilian harm increases. It would, for example, be possible to program the aws to only carry out attacks where the nearest collateral concern is outside the effect radius of the aws’s weapon, with the result that a proportionality analysis is not required. The aws could also be programmed with more complex code which accepts very low risks or limited civilian harm expected to meet the proportionality requirement for the target set the aws is programmed to attack. In such cases, the proportionality analysis will be undertaken in advance by humans, thereby ensuring that the subjective criteria presented above are complied with. The question here is what is technically possible:

Civilian harm attributable to a proportionality calculation performed by an aws control system does not appear to differ legally from that attributable to a proportionality assessment done manually by a human combatant. The code which is run to perform that proportionality calculation is, after all, an expression of a human decision process.96
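As a purely illustrative example of the kind of human-defined rule described above (the decision labels, distances and radius are hypothetical assumptions), a simplified cde-style gate could permit engagement only where the nearest collateral concern lies outside the weapon’s effect radius, and otherwise hold the attack and refer it to the designated target engagement authority.

```python
# Illustrative sketch only, with hypothetical thresholds: a simplified gate
# inspired by collateral damage estimation methodology. If the nearest
# collateral concern lies outside the weapon's effect radius, the engagement
# may proceed without a proportionality analysis; otherwise the system holds
# and refers the decision to the designated target engagement authority.
from enum import Enum


class Decision(Enum):
    ENGAGE = "engage"
    REFER_TO_HUMAN = "refer to target engagement authority"


def cde_gate(distance_to_nearest_collateral_concern_m: float,
             weapon_effect_radius_m: float) -> Decision:
    """Pre-programmed, human-defined rule: only engage when no collateral
    concern is expected inside the weapon's effect radius."""
    if distance_to_nearest_collateral_concern_m > weapon_effect_radius_m:
        return Decision.ENGAGE
    return Decision.REFER_TO_HUMAN


print(cde_gate(distance_to_nearest_collateral_concern_m=400.0,
               weapon_effect_radius_m=150.0))   # Decision.ENGAGE
print(cde_gate(distance_to_nearest_collateral_concern_m=90.0,
               weapon_effect_radius_m=150.0))   # Decision.REFER_TO_HUMAN
```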

The extent to which proportionality can be assessed in advance will also depend on the type of operation. As explained above, when force is used in military operations in order to further the aims of the operation, rather than dealing with an attack (combat engagement) or securing freedom of movement, it is either done on the basis of a deliberate targeting cycle or in dynamic operations. The formality of these processes depends on the level at which they are executed and the type of target. Tactical levels may use an informal targeting process, but the general approach is nonetheless the same.97

The scope for using aws more proactively is greater where elements of the attack are planned ahead, leaving more time to assess the capability of the aws, the nature of the target and the pattern of life in the area. Combat engagement situations, for instance, are usually quite complex, but aws may still play an important role in defending military forces from incoming attacks, especially where the lawful targets may be identified on the basis of the threat and the military advantage in using force is clear.

Deliberate targeting entails pre-planning an attack in great detail and is suitable for immovable targets such as buildings or other infrastructure. In addition, the targets must be of a type which is expected to remain a lawful target until the attack is carried out, such as targets which are military by their nature or which are militarised by their future use. In such cases, many of the considerations required by loac may be taken in advance as part of the planning, including the status of the target, the risk of harm to civilians and the most appropriate way to attack it. These considerations are, to a large extent, dealt with through the cde process. Emergent issues, such as civilians driving past the target, will not be possible to take into account at the planning stage, but must be determined at the time of attack or as close in time as feasible. Deliberate attacks will generally be less legally complicated to carry out with an aws, leaving less room for error, and are even carried out today by missiles like the Naval Strike Missile. However, the scope for taking advantage of the benefits introduced by aws, such as its ability to collect and process large amounts of information in a short time, is also reduced. In fact, if the use of an aws is programmed in too great detail, it will no longer operate as an autonomous weapon, but rather as an automatic one.

In dynamic operations, the target categories will be preapproved, but the exact location of the target, whether persons or objects, will not be known at the planning stage. Those who carry out the attack will therefore bear greater responsibility for ensuring all feasible precautions are taken in order to protect civilians from the effects of the attack. This concerns both the formal requirements set out in the mission specific cde procedures and any emerging concerns that may arise at the time of attack. Here, the scope for taking advantage of autonomous technology is greater. The aws may be given the task of achieving a certain effect, but be left to determine how, when or where to do it. At the same time, the use of aws in dynamic targeting also entails a further need for human oversight and control because of the potential that several critical legal considerations remain after the aws is activated, and some of these require evaluative assessments which will be difficult to program into an aws. The person in charge of using the aws must therefore impose limits on the aws’s ability to operate in time and space so that the outcome of the aws’s process is expected to be lawful. It could, for instance, only be permitted to operate in areas where the probability of civilian presence is low. As explained by Schuller: ‘[I]f the sophisticated targeting computer onboard the aircraft only allows it to vector towards unpopulated areas in order to attack positively identified enemy tanks during an international armed conflict, our concerns over civilian casualties may be reduced.’98 Alternatively, the aws may be programmed to require human confirmation of the legality of the attack before it may continue, or it may be necessary to have continuous human oversight both of the system’s processes and the area it is operating in, thereby limiting its ability to operate autonomously.

The current scope for using highly autonomous weapons is, in other words, limited to situations where the target’s status as a military objective is relatively constant and the risk of civilian harm is low.99 In other situations, the autonomy should be limited to some of the elements of the ooda-loop, or the system should be required to seek approval before carrying out the attack, thereby ensuring closer human interaction and control. For instance, the aws may be used to locate and find the best way to attack targets in complex scenarios if those involved in the decision to employ it have already observed the situation and analysed the information in light of the context, including the opposing forces’ ttp s and the civilian pattern of life. The decision to use the aws will rest with the personnel involved, thereby ensuring the required legal considerations are taken into account. Finally, as the requirement in loac is to do everything feasible to ensure the protection of civilians, human oversight should be required in any situation where changes to the environment are expected, in order to ensure that the aws actually works as expected.

6 The Duty to Cancel or Suspend Attacks

Another area where the use of aws may raise new challenges is with regard to the requirement in loac to cancel or suspend an attack if it becomes apparent that the attack is likely to violate the principles of distinction or proportionality.100 It may be that the information which the targeting decision is based on proves to be wrong, or that the situation has changed and thereby changes the proportionality assessment. To ensure that the use of aws comply with this duty, the aws needs to be programmed to cancel or suspend its attack if something occurs that means that the initial requirements for attack are no longer fulfilled.101 Unlike traditional weapons, one of the benefits of aws is that they can be programmed to deal with a variety of situations, for which the necessary legal assessments have been made in advance. As a result, the duty to cancel will apply when the situation has changed in such a manner that would render the use of the system unlawful.

At the same time, it is important to keep in mind that the legality of carrying out attacks is context dependent, and the legal requirements include evaluative assessments which may be difficult to comply with through the programming of target sets and circumstances. This means that although aws may be expected to deal with smaller changes in the situation, it will be harder to program aws to deal with changes which affect the assessment of distinction or proportionality carried out before activating the aws. In such situations, the aws is likely to require human interaction and approval in order to continue the planned attack. Any technical limitations on the aws’ ability to cancel or suspend its attack, or on the possibility for human override, must be taken into account when the decision to employ the weapon is made. As explained above, aws can be operator-supervised, including the ability of operators to override the operation of the weapon system, and still be considered autonomous.102

As a result, in order to ensure compliance with the duty to cancel or suspend attacks if it becomes apparent that the attack is likely to violate the principles of distinction or proportionality, the use of aws should be subject to one of the following limitations: there must either be a possibility of human oversight and override, or the aws must only be used in situations where changes which would invalidate the legal assessment are not expected after the decision to employ the aws has been taken. This assessment can be based on the knowledge of the area or by limiting the time gap in which the aws can operate, thereby reducing the risk of significant changes to the situation.103
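A minimal sketch of how these two limitations might be expressed in software is given below. It assumes hypothetical field names and a notional authorisation window, and is intended only to illustrate the logic of re-checking the basis for an attack immediately before engagement; it is not a representation of any actual system.

```python
# Illustrative sketch only; all names are hypothetical. Immediately before
# engagement the system re-checks the conditions on which the human
# authorisation was based. If any condition no longer holds, or if the
# authorisation window has lapsed, the attack is suspended and returned to
# a human operator rather than carried out.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class EngagementAuthorisation:
    target_still_positively_identified: bool
    collateral_estimate_unchanged: bool
    authorised_at: datetime
    valid_for: timedelta


def proceed_with_attack(auth: EngagementAuthorisation, now: datetime) -> bool:
    """Engage only if the original basis for the attack remains valid and the
    human-set time window has not expired; otherwise suspend the attack."""
    window_open = now <= auth.authorised_at + auth.valid_for
    return (auth.target_still_positively_identified
            and auth.collateral_estimate_unchanged
            and window_open)


auth = EngagementAuthorisation(
    target_still_positively_identified=True,
    collateral_estimate_unchanged=False,  # e.g. civilians have entered the area
    authorised_at=datetime.now(timezone.utc),
    valid_for=timedelta(minutes=30),
)
print(proceed_with_attack(auth, datetime.now(timezone.utc)))  # False: attack is suspended
```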

7 Use of aws in the Land Domain

Although the law, to a large extent, applies in the same way in all domains, they all raise different practical challenges with regard to autonomy. The land domain is inherently complex due to the presence of civilians and the variations in the terrain and weather conditions in which the aws will be expected to operate. This will affect the parameters for using unmanned ground vehicles (‘ugv s’) with autonomous functions in a lawful manner, especially in civilian populated areas such as cities. The risk of causing incidental civilian harm is considerably higher than, for instance, in the air and maritime domains.104 Furthermore, mere movement may be enough for a ugv to cause damage to civilians. Accidental harm caused during transport is not considered an attack and must therefore be assessed on the basis of different rules. Destruction of property must be limited to that which is imperatively demanded by the necessities of war,105 but there is no such acceptance for knowingly causing harm to civilian persons.106 ugv s must therefore be programmed in a manner which is not expected to harm civilian persons, and should also avoid causing extensive harm to civilian objects and property.107

At the same time, the potential benefits offered by autonomous technology are also significant in the land domain as soldiers fighting armed conflicts are routinely exposed to dull, dirty and dangerous tasks. Robots are already in use for dangerous reconnaissance tasks, explosive ordinance disposal and transport of heavy equipment,108 and making such robots autonomous will make these processes more efficient. Investment in ugv and aws for the land domain will therefore enhance the protection of soldiers while at the same time offering the benefit of an advanced ability to protect civilians as set out above. However, the reduced risk to soldiers cannot come at the expense of civilians, at least not when a choice made with the intention of reducing risk to own forces also increases the risk of harm to civilians. As mentioned, military forces are obliged to take all feasible precautions to avoid or minimize civilian harm and this also applies to the choice of means and methods. A general practice of shifting the risk from own forces onto civilians would also violate the general duty to take constant care to spare the civilian population, civilians and civilian objects when conducting military operations.109

In other words, the potential complexity of land operations calls for further caution when relying on aws. One way to address this complexity is to limit the use of aws to simpler tasks, such as acting as a defensive shield to stop an ongoing attack, using its technology to identify the best position from which to carry out an attack that is then controlled by humans, or attacking objects in areas where civilians are not expected to be. In addition, the presence of non-participating civilians, and the possibility that they may, for instance, suddenly walk past a military objective, means that the situation may change quickly, potentially affecting the legal analysis of an attack. The above-mentioned challenges of lawful targets only fulfilling the requirements for a short period of time, such as dual-use targets and civilians directly participating in hostilities, also call for close control over the use of aws in the land domain.
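
A short sketch of such task limitation (Python; AutonomyMode, AUTHORISED_MODES and mode_permitted are hypothetical names chosen for illustration) shows how a commander's decision to confine the system to simpler tasks could be expressed as an explicit list of authorised operating modes, with anything outside the list defaulting to human control:

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    """Hypothetical operating modes, ordered roughly by the complexity of the task."""
    DEFENSIVE_INTERCEPT = auto()      # shield against an ongoing attack
    POSITION_RECOMMENDATION = auto()  # propose firing positions; humans carry out the attack
    ENGAGE_CLEARED_AREA = auto()      # attack objects only in areas where civilians are not expected

# Modes a commander might authorise for a given deployment; anything not listed
# here would default to human-controlled operation.
AUTHORISED_MODES = {AutonomyMode.DEFENSIVE_INTERCEPT, AutonomyMode.POSITION_RECOMMENDATION}

def mode_permitted(requested: AutonomyMode) -> bool:
    """Return True only if the requested autonomous mode has been authorised."""
    return requested in AUTHORISED_MODES
```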

Finally, autonomous ground systems are more exposed to unwanted physical interference than systems in other domains, where software hacking and ai interference techniques, such as spoofing the image classification algorithm, are the main potential causes of interference. Persons with hostile intentions may physically tamper with an aws and cause it to malfunction, or may take control of it and use it for purposes other than those it was designed for, potentially in ways which violate loac. As a result, when used on the ground, aws require more advanced anti-tampering measures than in other domains.
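
A minimal sketch of the failsafe logic this suggests (Python; TamperSensors and enter_safe_state are hypothetical names and the checks shown are illustrative assumptions): if physical tampering is detected, or if software or command-link integrity can no longer be verified, the system should disable its weapon functions and alert its operators rather than continue operating.

```python
from dataclasses import dataclass

@dataclass
class TamperSensors:
    """Illustrative integrity indicators; not a description of any real system."""
    enclosure_opened: bool       # physical casing breached
    firmware_hash_ok: bool       # software integrity check passed
    comms_authenticated: bool    # command link still cryptographically verified

def enter_safe_state(sensors: TamperSensors) -> bool:
    """Return True if weapon functions should be disabled and operators alerted."""
    tampered = sensors.enclosure_opened or not sensors.firmware_hash_ok
    hijack_risk = not sensors.comms_authenticated
    return tampered or hijack_risk
```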

8 Concluding Remarks

Boyd’s ooda-loop is commonly referred to as a way to understand the importance of the increased speed and efficiency autonomous technology has to offer in the context of military operations. Ironically, Boyd was sceptical of new technology. According to his research, ‘[e]volution of tactics did not keep pace with increased weapons lethality developed and produced by 19th century technology (…). [T]echnology was being used as a crude club that generated frightful and debilitating casualties on all sides’.110

Advances in technology do not automatically result in military change today either, especially where the technology offers new possibilities and therefore requires the military to operate differently in order to take full advantage of it. When introducing new technology and weapon systems, it is therefore important to have a plan for exploiting the technology in a productive manner, rather than merely causing more suffering. Just as important, however, is a plan for ensuring that the necessary changes are made to training and education, so that military personnel understand the new technology well enough to use it in a lawful manner.

There are several measures States can and should put in place to ensure lawful use of aws. First, they must ensure that Article 36 reviews are conducted in the best way possible, including making sure that those involved fully understand the legal, technical and operational aspects of using aws. Second, States must ensure that their armed forces have personnel who can train their troops in the operational possibilities and legal parameters of such weapons.111 Third, when the decision is made to deploy aws to a given operation, the personnel involved in the use of aws must have the requisite knowledge and understanding of the system and the circumstances of its intended use to ensure lawful use.112 This includes determining the exact form or degree of human interaction required to ensure legality. Due to the technical understanding required to appreciate the possibilities and limits of aws, one way of ensuring this would be to require the involvement of specialists with advanced knowledge of the system in any decision to employ it. Finally, procedures must be in place to ensure human involvement whenever or wherever the aws are not expected to be able to comply with the required legal assessments independently.

One of the characteristics of using aws is that those who plan and carry out attacks define the time and space within which the aws can attack a selection of targets from a pre-approved target set. This absence of human involvement in the final decision to attack has been met with scepticism in many parts of society, particularly where aws are used against humans.113 However, as Schmitt explains, even ‘a fully autonomous system is never completely human-free. Either the system designer or an operator would at least have to program the system to function pursuant to specified parameters.’114 Such human involvement is essential for carrying out the many evaluative and subjective assessments inherent in loac, and because humans ultimately remain responsible for the outcome of aws use. Despite suggestions that this human involvement must be ‘meaningful’,115 there is no such requirement in loac,116 and current weapon systems do not necessarily include this type of human control.117 Instead, the focus should be on ensuring the degree of involvement necessary to ensure compliance with loac rules in the respective circumstances of intended use. The United Kingdom, for instance, has expressed that lawful use of aws requires ‘context-appropriate human involvement’.118
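
The combination of a pre-approved target set and context-appropriate human involvement can be illustrated with the following hypothetical sketch (Python; TargetClass, APPROVED_TARGETS and engagement_decision are the author's illustrative names, not any State's doctrine): certain target classes, here anything classified as a person, always require positive human confirmation, while other classes may be engaged autonomously only if they fall within the pre-approved set.

```python
from enum import Enum, auto

class TargetClass(Enum):
    ARMOURED_VEHICLE = auto()
    ARTILLERY_PIECE = auto()
    RADAR_INSTALLATION = auto()
    PERSON = auto()

# Pre-approved target set defined by those who plan and authorise the attack (illustrative).
APPROVED_TARGETS = {TargetClass.ARMOURED_VEHICLE, TargetClass.ARTILLERY_PIECE}

# Classes that always require positive human confirmation before engagement (illustrative).
HUMAN_CONFIRMATION_REQUIRED = {TargetClass.PERSON}

def engagement_decision(target: TargetClass) -> str:
    """Return the action the system should take for a classified target."""
    if target in HUMAN_CONFIRMATION_REQUIRED:
        return "hold: request human confirmation"
    if target in APPROVED_TARGETS:
        return "engage: within pre-approved target set"
    return "abort: outside approved target set"
```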

A point sometimes missed by those concerned about aws is that aws are developed to improve processes otherwise carried out by humans. If an aws operates unpredictably or unlawfully, it is no longer in the State’s interest to invest in its development or use.119 A completely unpredictable aws can just as easily harm its own soldiers or the civilian population it was intended to protect, or simply undermine the military operation through uncontrolled behaviour. Furthermore, as pointed out above, the loac requirement to do everything feasible to choose means or methods of warfare which reduce the risk of incidental civilian harm means that aws cannot be used to conduct attacks if they are expected to cause more incidental harm than weapons under direct human control.120 The duty of commanders and operators is, therefore, to assess the risk involved in using the aws in the time and space intended, and to manage these risks by taking all feasible precautions. In order to make this assessment, they must have sufficient certainty about the targets the aws is expected to engage so that they can justify the decision to use an aws. This in turn requires that the aws is programmed in a way which makes it possible to predict its potential outcomes; otherwise the operator cannot ensure its lawful use. Just as soldiers require clear rules to operate within, aws must be programmed and controlled. Only when we have a firm understanding of how to control aws can we determine the degree of human control needed.121

As a final note, the fear of aws is in many cases a result of a lack of understanding of, and trust in, the system. However, it is not always the case that ‘the devil we know’ is better. The brutality of war puts a formidable and inhuman strain on the soldiers involved and may cause human error.122 aws therefore have the potential to improve loac compliance, at least eventually.123 In the words of Sassoli:

Only human beings can be inhuman and only human beings can deliberately choose not to comply with the rules they were instructed to follow. To me, it seems more reasonable to expect (and to ensure) a person who devises and constructs an autonomous weapon in a peaceful workplace to comply with ihl than a soldier on the battlefield or in a hostile environment. A robot cannot hate, cannot fear, cannot be hungry or tired and has no survival instinct.124

*

This article was written as part of a research project led by the Estonian Military Academy on the ethical, social and legal aspects of integrated modular unmanned ground systems (iMUGS). It is an expanded version of the author’s contributions to the report on International Legal Aspects of iMUGS, presented to iMUGS in Brussels in May 2022. The report was written together with Dr Cecilie Hellestveit.

1

nato, ‘Comprehensive Operations Planning Directive, Version 3.0’ (aco, 15 January 2021), 4–71.

2

Michael N. Schmitt, ‘Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics’ (2013) Harvard National Security Journal Features 1, <https://harvardnsj.org/wp-content/uploads/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf>, 8.

3

Chris Jenks, ‘False Rubicons, Moral Panic, and Conceptual Cul-De-Sacs: Critiquing & Re-framing the Call to Ban Lethal Autonomous Weapons’ (2016) 44 Pepperdine Law Review 1, 13.

4

See e.g. UK Ministry of Defence, ‘Ambitious, Safe, Responsible – Our approach to the delivery of ai-enabled capability in Defence’ (June 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1082991/20220614-Ambitious_Safe_and_Responsible.pdf>, appendix C: ‘Lethal Autonomous Weapon Systems (laws)’.

5

For an excellent discussion on the human-centric approach to understanding aws, see Masahiro Kurosaki, ‘Toward the Special Computer Law of Targeting’, in Claus Kress and Robert Lawless (eds), Necessity and Proportionality in International Peace and Security Law (Lieber Studies Volume 5, oup 2021).

6

See e.g. Marco Sassoli, ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to Be Clarified,’ (2014) 90 International Law Studies Series, US Naval War College 308, 323; Jeroen van den Boogard, ‘Proportionality and Autonomous Weapon Systems’ (2015) 6 Journal of International Humanitarian Legal Studies 247, 281; Sigrid Redse Johansen, ‘So Man Created Robot in His Own Image: The Anthropomorphism of Autonomous Weapon Systems and the Law of Armed Conflict’ (2018) 5(2) Oslo Law Review 89, 101.

7

Alan L. Schuller, ‘At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with international humanitarian law’ (2017) 8(2) Harvard National Security Journal 379, 391. See also Lauren Sanders and Damian Copeland, ‘Holding Autonomy to Account: Legal Standards for Autonomous Weapon Systems’ (Lieber Institute, 15 September 2021) <https://lieber.westpoint.edu/holding-autonomy-account-legal-standards-autonomous-weapon-systems/>.

8

Tim McFarland, Autonomous Weapon Systems and the Law of Armed Conflict: Compatibility with International Humanitarian Law (cup 2020), 35.

9

See also Schmitt (n 2) 4.

10

U.S. Department of Defense (‘dod’), ‘Autonomy in Weapon Systems: Directive 3000.09’ (25 January 2023) <https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf>, 21.

11

Harald Høiback, ‘Luftmakt – høyde, hastighet og rekkevidde’, in Harald Høiback and Palle Ydstebø (eds), Krigens vitenskap (Abstrakt forlag 2012), 280.

12

Charles P. Trumbull iv, ‘Autonomous Weapons: How Existing Law Can Regulate Future Weapons’ (2020) 34 Emory International Law Review 533, 539.

13

John R. Boyd, ‘A discourse on winning and losing’ in Grant T. Hammond (ed) (Air University Press 2018), 148, <https://www.airuniversity.af.edu/Portals/10/AUPress/Books/B_0151_Boyd_Discourse_Winning_Losing.pdf>.

14

Boyd (n 13) 148.

15

nato Standard ajp-3.9, ‘Allied Joint Doctrine For Joint Targeting’ (Edition B, version 1, nato Standardisation Office, November 2021) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1033306/AJP-3.9_EDB_V1_E.pdf>, 5-2.

16

McFarland (n 8) 35.

17

See also Ronald Arkin, ‘Lethal Autonomous Systems and the Plight of the Non-combatant,’ 137 (July 2013) aisb Quarterly, 1–2; Lena Trabucco and Kevin Jon Heller, ‘Beyond the Ban: Comparing the Ability of Autonomous Weapon Systems and Human Soldiers to Comply with ihl’ (2022) 46(2) The Fletcher Forum of World Affairs 21–23.

18

Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol i) (adopted 8 June 1977, entered into force 7 December 1978) 1125 unts 3 (‘ap i’) art 57(2)(a)(ii).

19

Brian Tice, ‘Unmanned Aerial Vehicles – The Force Multiplier of the 1990s’ (1991) 1 Airpower Journal 41 <https://web.archive.org/web/20090724015052/http:/www.airpower.maxwell.af.mil:80/airchronicles/apj/apj91/spr91/4spr91.htm>.

20

McFarland (n 8) 81.

21

Ibid 35–36.

22

nato (n 15) 1–1.

23

Ibid 1–12.

24

Targeting may also include other forms of operations, such as influence operations, but this is beyond the focus of this article.

25

nato (n 15) 1–5.

26

Hague Convention (iv) Respecting the Laws and Customs of War on Land and Its Annex: Regulations Concerning the Laws and Customs of War on Land (adopted 18 October 1907, entered into force 26 January 1910) (‘Hague Convention iv’), reg art 23(g). There is some disagreement on whether or not this provision has been replaced by art 52(2) of Additional Protocol i of 1977 to the Geneva Conventions of 1949 (ap i). The ila Study Group on the Conduct of Hostilities concluded that it has been, while the more recent Oslo Manual argues that art 23(g) regulates destruction not amounting to attacks. See the International Law Association’s Study Group on the Conduct of Hostilities in the 21st Century: International Law Association, ‘The Conduct of Hostilities and International Humanitarian Law: Challenges of 21st Century Warfare’ (2017) 93 International Law Studies 322, 347–349; and Yoram Dinstein and Arne Willy Dahl, Oslo Manual on Select Topics of the Law of Armed Conflict (Springer Open 2020), 93–97.

27

See also Camilla Cooper, ‘Programming systems like soldiers – using military control mechanisms to ensure aws are operated lawfully’ (Lieber Institute Articles of War, 7 November 2022) <https://lieber.westpoint.edu/using-military-control-mechanisms-ensure-aws-are-operated-lawfully>.

28

Most of the rules of loac expressly dealing with the protection of civilians from attack are set out in 1977 Additional Protocol i to the Geneva Conventions of 1949 (ap i). Although the protocol is only applicable to international armed conflicts, the relevant provisions are also considered customary international law applicable to non-international armed conflicts. icrc, ‘Customary International Humanitarian Law’ (International Humanitarian Law Databases) <https://ihl-databases.icrc.org/en/customary-ihl/v1>, rules 14–19.

29

ap i art 35(2).

30

ap i art 51(4)–(5).

31

ap i art 35(3).

32

icrc, ‘A guide to the legal review of new weapons, means and methods of warfare: Measures to Implement Article 36 of Additional Protocol i of 1977’ (January 2006) <https://www.icrc.org/en/doc/assets/files/other/icrc_002_0902.pdf>.

33

Sanders and Copeland (n 7).

34

Eric Talbot Jensen, ‘Autonomy and Precautions in the Law of Armed Conflict’ (2020) 96 International Law Studies Series 576, 596.

35

Sanders and Copeland (n 7).

36

For a useful analysis, see Damian Copeland, Rain Liivoja and Lauren Sanders, ‘The Utility of Weapons Reviews in Addressing Concerns Raised by Autonomous Weapon Systems’, (2023) 28(2) Journal of Conflict & Security Law 285–316.

37

McFarland (n 8) 93.

38

Ibid 93.

39

ap i art 1(2).

40

See e.g. Human Rights Watch, ‘Heed the Call – A Moral and Legal Imperative to Ban Killer Robots’ (21 August 2018) <https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots>.

41

McFarland (n 8) 111.

42

Arkin (n 17) 4.

43

ap i art 57(2)(a)(ii).

44

Schmitt (n 2) 24; Sanders and Copeland, (n 7).

45

See also Trabucco and Heller (n 17) 27.

46

Ibid 17.

47

On the question of manufacturers’ liability for such errors, see Sanders and Copeland (n 7).

48

According to the International Court of Justice, this is one of the cardinal principles of loac, the other being the prohibition on causing unnecessary suffering: Legality of the Threat of Use of Nuclear Weapons (Advisory Opinion) [1996] icj 226, para 78.

49

ap i art 48 and 57(1).

50

ap i art 48.

51

ap i art 51(4).

52

ap i art 48.

53

McFarland (n 8) 93.

54

Kurosaki (n 5) 431–434.

55

Sassoli (n 6) 324.

56

Protocol on Prohibitions or Restrictions on the Use of Incendiary Weapons (adopted 10 October 1980, entered into force 2 December 1983) UN, a/conf. 95/15, 27.10.1980, Annex i (‘ccw Protocol iii’), art 2. See also Sanders and Copeland (n 7).

57

Schuller (n 7) 416, and Trumbull (n 12) 573–574.

58

ap i art 52(2).

59

Trumbull (n 12) 576.

60

icrc, ‘icrc position on Autonomous Weapon Systems’ (12 May 2021) <https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems>.

61

ap i art 51(3).

62

ap i art 8(1).

63

Trumbull (n 12) 576.

64

Nils Melzer, ‘Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law’ (icrc, May 2009) <https://www.icrc.org/eng/assets/files/other/icrc-002-0990.pdf>. The experts involved in the project later published the following critique: Ryan Goodman et al, ‘Forum: The icrc Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law’, (2009–10) 42 New York University Journal of International Law and Policy 641.

65

ila (n 26) 357–359.

66

ap i art 35(2) and 41.

67

See also Boogard (n 6) 259. See, however, Trabucco and Heller (n 17) 25, where they argue that the determination is factual rather than context-dependent and therefore should not be problematic to program into an aws.

68

Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight (adopted 11 December 1868, entered into force 11 December 1868) (‘St Petersburg Declaration’); ccw Protocol iii.

69

Michael N. Schmitt and Eric Widmar, ‘The Law of Targeting’, in Paul AL Ducheine, Michael N. Schmitt and Frans Osinga (eds.), Targeting: The Challenges of Modern Warfare (Asser Press 2016), 128–129.

70

ap i art 57(2).

71

UK, Reservations to the 1977 Additional Protocol i <https://ihl-databases.icrc.org/ihl/NORM/0A9E03F0F2EE757CC1256402003FB6D2?OpenDocument>, §b. Similar or identical declarations or statements were made by several States, e.g. Canada, Germany, Netherlands, Algeria, Austria, Italy, Belgium, Ireland, and Spain. All reservations and declarations are available at <https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/States.xsp?xp_viewStates=XPages_NORMStatesParties&xp_treatySelected=470>. See also United States Department of Defense (‘U.S. DoD’), Law of War Manual (June 2015, updated July 2023, Office of the General Counsel of the Department of Defense, Washington) <https://media.defense.gov/2023/Jul/31/2003271432/-1/-1/0/DOD-LAW-OF-WAR-MANUAL-JUNE-2015-UPDATED-JULY%202023.PDF>, 192–193; Certain Conventional Weapons Convention (1980): Protocol ii on Prohibitions or Restrictions on the Use of Mines, Booby-Traps and Other Devices (Geneva, 10 October 1980, UN Doc a/conf.95/15, 27.10.1980), Annex i art 3(4); Protocol iii on Prohibitions or Restrictions on the Use of Incendiary Weapons (Geneva, 10 October 1980, UN Doc a/conf. 95/15, 27.10.1980), Annex i, art 1(5); Amended Protocol ii on Prohibitions or Restrictions on the Use of Mines, Booby-Traps and Other Devices (as amended on 3 May 1996, UN ccw/conf.i/16), art 3(10); and icrc, cihl, Commentary to Rule 15.

72

U.S. DoD (n 71) 193–194; Schmitt and Widmar (n 69) 137; and ila (n 26) 377–378.

73

Prosecutor vs Galić (Judgement, Trial Chamber) it-98-29-t (5 December 2003) para 58; Trumbull (n 12) 554; Schmitt and Widmar (n 69) 137.

74

Yves Sandoz, Christopher Swinarski, and Bruno Zimmermann (eds), Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949 (icrc/Martinus Nijhoff Publishers 1987), para 2198. On the relationship between feasible and reasonable, see ila (n 26) 375.

75

United States vs. List et al. (‘The Hostages Trial’) (Nuremberg, 1948) 11 nmt 1230, 1296–1297.

76

U.S. DoD (n 71) 195.

77

Dinstein and Dahl (n 26) 38.

78

Sassoli (n 6) 327.

79

Trabucco and Heller (n 17) 26.

80

McFarland (n 8) 119.

81

Schuller (n 7) 389.

82

Trumbull (n 12) 563.

83

Sassoli (n 6) 327 and Schmitt (n 2) 11.

84

Trabucco and Heller (n 17) 17. See also Arkin (n 17) 3.

85

Trumbull (n 12) 545–548; Eric Talbot Jensen, ‘The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict’ (2020) 96 International Law Studies Series 26, 56; Sassoli (n 6) 310; McFarland (n 8) 35–36.

86

See also McFarland (n 8) 121.

87

ap i art 57(2)(a)(ii).

88

ap i art 51(5)(b).

89

ap i art 57(2)(a)(iii) cf. 57(2)(b).

90

See also Jensen (n 85) 52.

91

Kurosaki (n 5) 416–417.

92

Boogard (n 6) 259.

93

Trumbull (n 12) 563.

94

McFarland (n 8) 119.

95

Sigrid Redse Johansen, On military necessity (cup 2019), 134–135.

96

McFarland (n 8) 98.

97

nato (n 15) 1–11.

98

Schuller (n 7) 391.

99

Trumbull (n 12) 575.

100

ap i art 57(2)(b).

101

Jensen (n 34) 599.

102

See also dod (n 10) 21.

103

See also McFarland (n 8) 118 and 126.

104

Trumbull (n 12) 548, and Boogard (n 6) 262.

105

Hague Convention iv, reg art 23(g).

106

Dinstein and Dahl (n 26) 93–94.

107

Chris Jenks and Rain Liivoja, ‘Machine Autonomy and the Constant Care Obligation’ (Humanitarian Law and Policy, 11 December 2018) <https://blogs.icrc.org/law-and-policy/2018/12/11/machine-autonomy-constant-care-obligation>.

108

Boogard (n 6) 254.

109

ap i art 57(1). See also Boogard (n 6) 274.

110

Boyd (n 13) 66.

111

ap i art 83 and 87(2).

112

ap i art 86 and 87. See also McFarland, (n 8) 123–124 and Schuller (n 7) 389 and 419–420.

113

See e.g. Stop Killer Robots Campaign: <https://www.stopkillerrobots.org/>.

114

Schmitt (n 2) 4.

115

See e.g. Merel Ekelhof, ‘Autonomous Weapons: Operationalizing Meaningful Human Control’ (Humanitarian Law & Policy, 15 August 2018) <https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/>.

116

Jensen (n 85) 28. See also Lena Trabucco, ‘What is Meaningful Human Control, Anyway? Cracking the Code on Autonomous Weapons and Human Judgment’, (Modern War Institute, 21 September 2023) <https://mwi.westpoint.edu/what-is-meaningful-human-control-anyway-cracking-the-code-on-autonomous-weapons-and-human-judgment/>.

117

For instance, as Ekelhof points out, fighter pilots rely to a large extent on the information provided by the plane’s computer systems: Ekelhof (n 115).

118

UK (n 4) appendix C: ‘Lethal Autonomous Weapon Systems (laws)’. Similar statements were made by the USA in the ccw negotiations: United States, ‘Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (2018) §8, 9 (U.N. Doc. ccw/gge.2/2018/wp.4, 28 August 2018) <https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2018/gge/documents/GGE.2-WP4.pdf>.

119

See also Trumbull (n 12) 570–571.

120

Schmitt (n 2) 24. See also Sanders and Copeland (n 7).

121

Schuller (n 7) 389.

122

Arkin (n 17) 2.

123

Trabucco and Heller (n 17) 21.

124

Sassoli (n 6) 310. See also Trumbull (n 12) 546–547.
