
Regulatory Choices at the Advent of Gig Warfare

In: Journal of International Humanitarian Legal Studies
Author:
Mark Klamberg, Professor and Subject Director of Public International Law, Stockholm University, Stockholm, Sweden
Visiting Scholar, American University School of International Service, Washington, D.C., USA (2023–2024)
Visiting Scholar, Carter School for Peace and Conflict Resolution at George Mason University, Arlington, VA, USA (2023–2024)

Open Access

Abstract

Regulation of military ai may take place in three ways. First, existing rules and principles of ihl already apply or could be extended via reinterpretation to apply to military ai; second, new ai regulation may appear via “add-ons” to existing rules; finally, regulation of military ai may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty. By introducing this typology, one may identify possible manners of regulation that are presently under-researched and/or ignored, for example how Rules of Engagement (roe) may be a way to control the use of military ai. Expanding on existing scholarship, the article discusses how military ai may operate under different forms of military command and control systems, how regulation of military ai is not only a question of “means” but also “methods” of warfare, and how the doctrine of supervisory responsibility may go beyond the doctrine of command responsibility. In the case that fully-automated Lethal Autonomous Weapons Systems (laws) are available and considered for use, it is suggested that their use should be prohibited in densely populated areas, following the same logic as incendiary weapons. Further, one could introduce certain export restrictions on fully-automated laws to prevent proliferation to non-state actors and rogue states.

1 Conceptualizing the Problem

Various metaphors are used to describe how artificial intelligence (ai) will influence and be utilized as part of warfare. The exhausted popular scenario of a digital overlord such as Skynet in the Terminator franchise appears unlikely; an alternative analogy for how ai is and will be used in warfare is existing gig services such as Uber. The gig analogy suggests that ai will operate as decision-support systems, providing one source of information amongst others under consideration. The human decision-makers and agents are free to follow the system’s “advice” or “directions” or to disregard them,1 similar to computer software used in retail and hospitality to algorithmically schedule workers.2 This has several implications for how armed forces are organized and regulated, and for the question of responsibility in case of unlawful actions.

This article will describe three ways of regulating military ai. By introducing this typology, one may identify possible manners of regulation that are presently under-researched and/or ignored, for example how roe may be a way to control the use of military ai. The article also seeks to expand upon existing scholarship by discussing how military ai may operate under different forms of military command and control systems (C2), how regulation of military ai is not only a question of “means” but also “methods” of warfare, and how the doctrine of supervisory responsibility may go beyond the doctrine of command responsibility.

Most of the present scholarship focuses on laws, weapon systems that, once activated, can select and engage targets without further intervention by a human operator.3 While most states probably understand the risks of using such weapons vis-à-vis the civilian population and civilian objects, states also have an incentive to maintain a comparative military advantage vis-à-vis existing and potential future rivals. Together with the technical complexity and rapid development, these factors make regulation at the international level difficult. Thus, the dilemma is how to regulate autonomous weapons systems in a manner which is acceptable for the states concerned while at the same time providing efficient and sufficient protection for protected persons and objects.

Turning to a more under-researched area and an illustrative example of how C2 operates in the context of military ai, consider how the Ukrainian armed forces claim to have developed an Android application for their artillery forces. It has allowed Ukrainian artillery to process targeting data for the Soviet-era D-30 122mm towed howitzer more rapidly, reducing targeting time from minutes to under 15 seconds.4 The Gis Arta command and control system for artillery is reported to be able to process data from drones, smartphones, rangefinders and the like. The system is described as having functionality equivalent to an electronic taxi-ordering system (for example, Uber), where the “call” for artillery support is “distributed” to the nearest available howitzer, which receives the exact coordinates of the target.5 The information is relayed between units on the ground without reaching higher C2 echelons at brigade or general staff level. The Gis Arta command and control system is neither unique nor the first of its kind; similar systems already exist or are under development.6 More recently, there are reports of how Israel is using the “Habsora” (the Gospel) system in the armed conflict with Hamas, an ai that can generate targets and allows the Israeli armed forces (the idf) to carry out strikes on the residential homes of individual Hamas members.7 The idf explains on its website that Habsora produces targets and gives recommendations to human analysts, information subsequently used by decision-makers at brigade and division level.8 These systems illustrate how machines and humans interact in a greater semi-artificial intelligent system where algorithms direct human operators to fire at their opponents.
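To make the taxi-ordering analogy concrete, the following is a minimal sketch of a nearest-available-unit dispatch of the kind described above. The data model, function names and distance logic are illustrative assumptions for the purpose of discussion; they are not drawn from Gis Arta, Habsora or any other fielded system.

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional


@dataclass
class ArtilleryUnit:
    callsign: str
    x: float          # grid easting, km (assumed flat-earth grid for simplicity)
    y: float          # grid northing, km
    range_km: float   # maximum effective range
    available: bool   # not already tasked, has ammunition, etc.


@dataclass
class FireRequest:
    target_x: float
    target_y: float
    observer: str     # drone, smartphone, rangefinder or other sensor


def dispatch(request: FireRequest, units: list[ArtilleryUnit]) -> Optional[ArtilleryUnit]:
    """Return the nearest available unit that can range the target, if any."""
    def distance(u: ArtilleryUnit) -> float:
        return hypot(u.x - request.target_x, u.y - request.target_y)

    candidates = [u for u in units if u.available and distance(u) <= u.range_km]
    if not candidates:
        return None  # no unit in range: the request is escalated or dropped
    return min(candidates, key=distance)
```

In such a design the algorithm only performs the matching: a human observer still generates the request and a human crew still fires, which is one reason why, as discussed below, responsibility for the resulting strike arguably remains in the human domain.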

These two parallel phenomena, semi-artificial intelligent command and control systems on the one hand and laws on the other, may coexist. What they have in common is that they will accelerate the speed of warfare and probably push decision-making downwards in military organizations. However, there is arguably also the possibility that just the opposite could happen and decision-making is pushed upwards, as core decisions could be made at the design level or through the roe.

Part 1 of this article will initially examine what is meant by autonomous weapon systems. They pose some common as well as diverging legal challenges. Next, the potential utility and dangers of military ai are considered, as well as who can be held legally responsible in the event of a war crime or a violation of ihl. Subsequently, two main alternatives for organizing military command are presented, relevant for the subsequent discussion in this article, including how to allocate legal responsibility in case of violations of the laws of war. Part 2 examines how legal regulation of military ai may take place in three distinct ways. Part 3 will conclude that we need to redefine our understanding of how modern warfare is and will be conducted.

1.1 Autonomous Weapon Systems Defined and Human Control

Military ai is already here through decision-support systems and drones, part of the effort to remove human personnel as far from the risk of harm as possible. Many may associate the word “autonomous” with a weapons system that seeks out an enemy target on its own and decides on its own initiative whether to use lethal force against the target. This is arguably a misconception. There is always some kind of human-computer interaction: the system has to be programmed by a human, and a human operator has to decide to deploy the system in a particular battlespace.9

Autonomous weapon systems may be defined as “weapon systems that incorporate autonomy into the critical functions of selecting, targeting, engaging and applying force to targets”.10 The icrc has a similar understanding and notes that autonomy in critical functions is already to be found, to a limited extent, in some existing weapons, such as air defence systems, active protection systems, and some loitering weapons.11 Autonomous weapon systems differ from other weapon systems in that the user, at the point of launch or activation of the system, does not know the exact timing, location and target to be chosen by the system.12

Autonomous weapon systems can be divided into the following three categories, depending on the degree of direct control exercised by a human operator. (1) Human-controlled (human-in-the-loop) systems: robotic weapons which are remotely controlled by a human operator. (2) Human-supervised (human-on-the-loop) systems: robotic weapons which can carry out a targeting process independently of human command, but which remain under the real-time supervision of a human operator who can override any decision to attack. (3) Autonomous (human-out-of-the-loop) systems: robotic weapons which can search, identify, select, and attack targets without real-time control by a human operator.13
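The practical difference between the three categories can be reduced to what happens in the absence of an operator signal. The following minimal sketch, whose enum names and gating logic are assumptions introduced here for illustration rather than a description of any fielded system, shows that an on-the-loop configuration engages by default unless vetoed, whereas an in-the-loop configuration requires affirmative approval.

```python
from enum import Enum, auto


class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # operator must actively approve each engagement
    HUMAN_ON_THE_LOOP = auto()      # system engages unless the operator vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # system engages autonomously within its parameters


def may_engage(mode: ControlMode, operator_approved: bool, operator_vetoed: bool) -> bool:
    """Decide whether an engagement proceeds under the given control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return operator_approved   # operator silence means no engagement
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoed # operator silence means the engagement proceeds
    return True                    # out-of-the-loop: no real-time human gate
```

Seen this way, the difference between categories (1) and (2) is only the default outcome when the operator stays silent, which is why the on-the-loop veto power can prove illusory in an acute situation, a point returned to in section 1.3.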

It is arguably uncontroversial that human “in-the-loop” systems have the capability of adhering to ihl principles because humans are involved in the targeting process.14 Instead of a comprehensive ban, the discussion on laws has over time converged around the concept of “meaningful human control”.15 The “meaningful human control” concept may appear paradoxical: if a legal framework rules out full autonomy over certain critical functions of a weapons system, then by definition there cannot be full autonomy. The principle entails human involvement at different stages, which in turn means that humans can be held accountable.16 In the context of the negotiations within the Certain Conventional Weapons (ccw) regime, it has been suggested that the concept of “human control” should include the following three elements (an illustrative sketch follows the list below).

  1. The ability to redefine or modify the weapon system’s objectives or missions or otherwise adapt it to the environment; to deactivate, abort, terminate, or interrupt its operation and use as needed; and to constrain its function to self-initiate;
  2. The ability to limit the scope and scale of use of the weapon system, including temporal and spatial limits, and to restrict its targeting parameters and targeting capability;
  3. The ability to understand and explain the weapon system’s functioning with the view to retrospectively providing an explanation that satisfies legal and other requirements regarding the operation of the weapon system, including the attribution of responsibility and accountability.17
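As an illustration of how these three elements might surface as operator-facing functions of a weapon system, consider the following sketch. The class names, fields and logging approach are assumptions made solely for the purpose of discussion; they are not drawn from the draft protocol text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MissionConstraints:
    # Element 2: temporal and spatial limits and targeting parameters
    area_of_operation: list[tuple[float, float]]  # polygon of permitted coordinates
    not_before: datetime
    not_after: datetime
    permitted_target_classes: set[str]            # e.g. {"armoured vehicle", "artillery"}


@dataclass
class SupervisedWeaponSystem:
    constraints: MissionConstraints
    deactivated: bool = False
    event_log: list[str] = field(default_factory=list)  # Element 3: basis for a retrospective explanation

    def abort(self, reason: str) -> None:
        # Element 1: deactivate, abort, terminate or interrupt operation as needed
        self.deactivated = True
        self.event_log.append(f"{datetime.now(timezone.utc).isoformat()} ABORT: {reason}")

    def update_mission(self, new_constraints: MissionConstraints, authorized_by: str) -> None:
        # Element 1: redefine or modify objectives/missions, adapt to the environment
        self.constraints = new_constraints
        self.event_log.append(
            f"{datetime.now(timezone.utc).isoformat()} constraints updated by {authorized_by}"
        )
```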

Moreover, even if humans are in-the-loop or on-the-loop, accidents may still happen. This was the experience with nuclear weapons accidents and close calls.18 Several features of laws make them vulnerable to accidents: the systems are highly complex, are tightly coupled with no slack or buffer, have multiple competing objectives beyond “safety”, and operate in a competitive context.19 The abovementioned matters ultimately relate to ethics and morals, to be discussed next.

1.2 The Utility and Dangers of Military ai

The use of military ai brings potential advantages for the actors having access to the technology as well as dangers.

ai may improve the accuracy, speed, and/or scale of machine decision-making in complex environments. The technology allows its users to exceed human capabilities in tasks such as pattern recognition, prediction, optimization, and (autonomous) decision-making, essential tasks in a military context.20 Statman notes that 1) other things being equal, military ai complies better than other tools of war with the requirements of discrimination and proportionality; 2) it enables states to reduce the risk to their own soldiers; 3) it weakens moral arguments against involvement in wars of humanitarian intervention; 4) it makes it possible to respond effectively against perceived aggression without the need to engage in a full-scale war; 5) it is cheaper in comparison to human-operated tools of war and thus leaves more public money available for other causes.21 Responding to the fear that military ai may have discriminatory bias relating to gender and race, it appears less likely that military ai will itself commit sexually related crimes, although it is certainly conceivable that it may be deployed in an environment where such crimes are committed.

Conversely, there is a fear that military ai may put civilians at risk from the unpredictable consequences of attacks,22 represent a loss of humanity,23 lead to moral disengagement24 and a responsibility gap,25 fit better with authoritarian regimes,26 and reinforce bias in decision-making,27 for example in relation to gender28 but also race. Such fears may be discussed both in terms of general societal impact and respect for individual rights.29 Societal impact may in this context relate to changes in the equality between states, justice,30 the likelihood of war,31 and the rational use of human, technological, economic and natural resources. The calls for regulating military ai are either explicitly or implicitly grounded in one or several of the concerns listed above. Thus, when discussing appropriate modes of regulation, we need to define which concern we seek to alleviate.

Military ai may also create asymmetry and riskless warfare in conflicts where only one or some sides have access to laws. This is arguably nothing new: do weapons really have to be fair in terms of allocating benefits and burdens?32 However, laws are distinctive in at least two ways. First, laws do not merely reduce the risks to their operators; they essentially eliminate such risks. Second, laws may be employed by states against actors that lack the resources to buy their own.33 The question of fairness is not merely moral; it will also affect the willingness of the states that have the technology, compared to those that do not, to regulate, a matter revisited in section 2.3.2 below.

1.3 Attribution of Responsibility and Supervisory Responsibility

How do you allocate responsibility in a situation of complex decision-making with several commanders using a number of weapon systems with a high degree of autonomous problem-solving capacity?34 The assumption is that robots have no moral agency and as a result cannot be held responsible in any recognizable way if they cause injuries or death in violation of ihl or ihrl. ai and robots are arguably tools of various kinds, albeit very special tools, and the responsibility of making sure they act lawfully and ethically must always lie with human beings.35 Robots will always operate within the limits of their software, designed by humans, and it is humans who create the robots. Thus, responsibility always needs to be attributed to a human.36

Candidates for legal responsibility include the software programmers, those who build or sell hardware, military commanders, subordinates who deploy these systems (front-line operators) and political leaders.37 The product liability framework is normally confined to a lawsuit and a potential monetary fine. It is not obvious how criminal accountability can be transposed onto a programmer or engineer.38

When considering responsibility, due account has to be given to both ihl and international criminal law (icl), bodies of law with significant overlap. By adding icl, the focus shifts to matters relating to individual criminal responsibility, including modes of liability such as aiding and abetting, ordering and command responsibility. Both ihl and icl require that the soldier thinks and acts somewhat independently, since “just following orders” is not a valid defence for committing war crimes. As already indicated, human accountability for the employment and effects of ai weapons may be derived from the doctrine of command accountability.39 However, traditional command responsibility may in certain cases be inapplicable because of the requirement that the commander has knowledge that a subordinate is committing or is about to commit a crime and fails to act.40 The question is whether the laws should be considered a subordinate and whether command responsibility as a concept is relevant at all; the laws is rather a tool, means or weapon at the disposal of the commander. As such, the commander’s responsibility is rather that of direct responsibility.41 Military commanders will not always be able to understand the programming of laws in a sufficient manner. Concepts of perpetration, the doctrine of command responsibility and the duty to take precautions in the context of laws all presume that the system has some degree of predictability, which is not always the case.42 However, soldiers may also be unpredictable, and superiors may not always understand why they act the way they do.43 Even when front-line operators are “in the loop”, their role may be reduced to wielding veto power over the use of the weapons systems, a power the operator may be unwilling to use in an acute situation.44 The use of autonomous weapons therefore involves a risk that military personnel will be held responsible for the actions of machines whose decisions they could not control. The more autonomous the systems are, the larger this risk looms. If the machines are really choosing their own targets, then we cannot hold the military commander responsible for the deaths that ensue. This fear is arguably unwarranted. Unless humanity has totally surrendered to the rule of robots, some person still has to take the decision to deploy the robot, and that person could be held accountable.

One option would be to assign responsibility in advance and/or share the responsibility.45 Another option would be to consider the so-called doctrine of supervisory responsibility (German: Erfolgsabwendungspflicht; Swedish: garantläran), which encompasses more than command responsibility. It also includes supervisory responsibility and criminal liability for omission in situations created by one’s own previous actions and/or for phenomena which are inherently dangerous.46 The doctrine of supervisory responsibility might be able to handle situations where the narrower doctrine of command responsibility is not suitable. The Ishaq case is an example of how the doctrine of supervisory responsibility is applied in an icl context; more specifically, it relates to the Stockholm district court conviction of a mother who exposed her children to danger by moving with them to an area controlled by isis/Daesh in Syria and failed to protect her son from being recruited as a child soldier.47 The two doctrines are similar in the sense that they both 1) are limited to certain groups of persons, such as military commanders, corporate managers, pool guards, mountain guides and parents; 2) contain a requirement of knowledge; 3) impose on the commander/supervisor a duty to prevent crime or danger; and 4) require a causal link between the inaction of the commander/supervisor and the occurrence of the crime/injury/damage. The doctrine of supervisory responsibility has a larger scope than the doctrine of command responsibility not only in relation to who may be held criminally liable, but also in the sense that the object causing the immediate crime/danger does not have to be a human. The doctrine may establish criminal responsibility in situations of accidents without a human directly triggering the accident, especially when the supervisory person has indirectly induced the situation: for example, a mountain guide bringing a group of untrained persons up a mountain is not acting criminally in itself, but criminal liability arises if the supervisory person remains passive at the moment of an accident. Thus, the same logic and legal principle which applies to a mountain guide could arguably also be applied to a military commander deploying a laws to the battlefield.

1.4 Centralized Command or Mission Command

The discussion on legal regulation of military ai is mainly focused on weapons platforms, while there is less focus on semi-artificial intelligent command and control systems (C2). This is important for the attribution of responsibility and for regulation.

A historical retrospect of military command and control shows that military organizations have ranged from individualized structures based on the social status of the soldier (knights in medieval times) to vertical hierarchies in which authority, responsibility, and accountability derive from one centre: a king, a single military commander or a group forming a joint military command. In such hierarchical structures, lower-ranking individuals are both subordinates and commanders; they are required to interpret the orders they are given and issue orders to their subordinates.48 Detailed political control over the armed forces obviously requires a more centralized and vertical military hierarchy.49 The conceptualization of centralized and vertical military command was challenged in Europe during the 1700s and 1800s; there was an understanding that the “fog of war” prevented efficient vertical and horizontal command. A key impetus for reform in Western military organization came in Prussia following the defeat at Jena (1806). Helmuth von Moltke the Elder (1800–1891) became the one to carry out the revolution: subordinates should be told what to do, not how to do it; they are only required to act within the purview of their commander’s intent, i.e. the goal of the overall operation and ultimately the greater objective of the war at hand.50

The German concept of Auftragstaktik, often translated as “mission command”, may be defined as the conduct of military operations through decentralized execution based on mission orders. It is a mode of command that requires and facilitates initiative and decision-making at all levels of command directly involved with events on the battlefield. It encourages and requires subordinates to exploit opportunities by empowering them to take the initiative and exercise judgment in pursuit of their mission; overall alignment of a military operation is maintained through adherence to the superior commander’s intent.51 Mission command is neither a tactic nor a means of warfare; as indicated, it is rather a mode of command.52 It is also described as a necessary form of command for efficiently conducting maneuver warfare (also called Blitzkrieg).53 A key idea is to operate inside (faster than) the adversary’s tactical and operational decision loops.54

Combined forces operations require greater coordination at a higher level, which decreases the space for mission command.55 Only some weapon systems, units and command networks allow for the use of mission command. They require a common situational awareness and understanding of the mission.56 Technology allows for greater centralized control; however, at the lower tactical level there will arguably always be a need for mission command.57 Mission command may also serve the function of translating political goals into military goals and guidance.58 This may be done through roe,59 a mode of regulation to be further discussed in section 2.2.

How is this relevant for military ai? There was a belief in the 1990s among Western militaries, held under the banner of the Revolution in Military Affairs (rma), that emerging communications, information, surveillance, and technical intelligence capabilities would lift the fog of war and “allow unprecedented awareness of every aspect of future operations”.60 It would permit all-seeing headquarters to plan in detail, make perfect decisions, control organizations closely, apply resources efficiently, and direct operations linearly toward mission accomplishment.61 rma would bring back the possibility of vertical and centralized command. However, constant and complex communication from a distance requires significant bandwidth and may thus be subject to intentional interference from an adversary and to delays (lag).62 Even though technology may allow a certain centralization, it also allows every soldier and commander in the field to be connected. Any information superiority63 that was previously only available to the higher tiers of command can now also be accessible to lower tiers of command.64 This challenges the assumptions of rma. In order to avoid such interruptions, there will be an incentive to make parts of the network or individual nodes self-contained and self-governing, without any immediate human involvement.

There might be actors who believe ai should be used in a centralized command structure. Freedberg argues that “Russia and China tend to see automation as a way of imposing central, top-down control and bypassing fallible human subordinates”.65 In contrast, the “US military is looking at ai as a tool to empower human beings all the way down to individual pilots in the cockpit and junior non-commissioned officers (nco s) in the trenches.”66 In other words, while technological advancement in the form of communications and surveillance may make more information available to higher echelons in command and control structures, ai might push a substantial part of command and control downwards in a military organization.

Regardless of the degree of autonomy granted, there will always be some human involvement at some stage, if only to program the system or to order the system to engage in hostile action against a specific actor. As long as this is the case, the moral, and arguably also the legal, responsibility stays in the human domain.67 Legal concepts of attribution and responsibility are normally grouped either in a vertical manner (superior-subordinate) or a horizontal manner (co-perpetration, joint criminal enterprise, aiding and abetting). The doctrine of supervisory responsibility may fill perceived gaps. There is somewhat of a presumption that, if there is control, a human will act as superior or controller of the lethal autonomous weapons system (laws) in a hierarchical, vertical and centralized military organization. However, it is conceivable and maybe more likely that sensors, autonomous weapons and human soldiers on the actual battlefield will all be nodes in a greater, horizontal network,68 a “Military Internet of Things”.69 Obviously, all military organizations will need some kind of command and control; however, such command may be restricted to establishing mission objectives while leaving the execution to a more decentralized structure. Thus, in order to understand the contours and the legal challenges of ai warfare it is necessary to understand how such warfare will be integrated into larger military command structures. It therefore becomes relevant to discuss the two main alternatives in military command structures indicated above: vertical command and mission command.70 It is conceivable that within the same military organization both types of command may exist in parallel or be joined at different points of interface.71 However, the absence of any command is not an option; then there would be no organization at all.

2 Three Ways of Regulating Military ai

The basic premise is that “[t]he right of belligerents to adopt means of injuring the enemy is not unlimited.”72 Further, in the development, acquisition or adoption of a new weapon, means or method of warfare, states parties are obligated to determine whether the weapon’s employment would, in some or all circumstances, be prohibited by international law.73 This means that laws have to be tested at various stages of their development: at the conception/design phase, during the development of prototypes and before the system is fielded.74 Each state party is legally obligated to decide on its own how this review is conducted.75 Considering that procurement of military equipment is mainly done by states, which are also the main addressees of ihl and related norms, part of the solution could lie in controlling procurement, especially when items are not bought off the shelf. This would in turn require the involvement of the relevant people in the armed forces in the early stages of design and procurement of laws.

The ihl regulatory framework and discourse distinguishes between “means” and “methods” of warfare, both relevant for military ai. The term “means” refers to the physical means that belligerents use to inflict damage on their enemies during combat, i.e. weapons, weapons systems or platforms employed for the purposes of attack in an armed conflict.76 In addition to specific treaty bans, there is a general prohibition on employing weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.77 A significant part of scholarship and policy discussion focuses on banning laws in general or on whether a specific weapons system can be deployed.78 That relates to the means of warfare; the premise of this article is that we should also focus on the methods of warfare.79

The term “methods” refers to the tactics or strategy used in hostilities to defeat the enemy by using available information on him together with weapons, movement and surprise. Examples of prohibited methods of warfare include:80 perfidy,81 terror,82 starvation,83 reprisals against non-military objectives,84 indiscriminate attacks,85 damage to the natural environment86 or to works and installations containing dangerous forces,87 ordering that there shall be no survivors,88 pillage,89 taking hostages,90 improper use of distinctive emblems and signs,91 and attacks on persons hors de combat92 or on persons parachuting from an aircraft in distress.93

One could imagine regulation of military ai both in terms of means and methods, elaborated upon in section 2.3 below. Moving one level of abstraction above the traditional ihl discourse and categorization, legal regulation of military ai may take place in three distinct ways: 1) existing rules and principles of ihl already apply or could be extended via reinterpretation to apply to military ai; 2) new ai regulation may appear via “add-ons” to existing rules; 3) regulation of military ai may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty.94 The following sections use this three-tiered categorization.

2.1 Applying Existing Rules and Principles of ihl

The principles of distinction, proportionality, and precautions in attack, which call for complex assessments based on the conditions present at the time of the attack decision as well as while the attack is underway, can be used to determine some limitations on the use of military ai. These assessments must be made by combatants close enough to the attack. Where these assessments are used as part of the planning process, they remain applicable and are to be respected during the attack’s implementation.95 Some human control is arguably needed in order to make the complex and context-specific judgments required by ihl.96

The principle of distinction requires that the parties to a conflict at all times distinguish between civilians and combatants.97 The principle also requires parties to the conflict to distinguish between civilian and military objects. This is a principle of customary international law.98 The principle is applied by laws through the system’s sensors, computers for processing and ultimately its effectors (weapons).99 A significant challenge is whether these systems can distinguish between combatants and non-combatants. The civilian-military distinction may vary depending on context, a particular challenge in “asymmetrical warfare” and “urban warfare”.100 The category of non-targetable persons also includes combatants who have surrendered or are wounded. Moreover, laws may have difficulties distinguishing between non-targetable civilians and lawfully targetable civilians, i.e. those directly participating in the hostilities or members of a levée en masse.101 This is compounded by robots’ inability to understand context and by the difficulty of applying the ihl language defining non-combatant status in practice, which must be translated into a computer program.102 This may be less of a problem in areas where there are no or very few civilians, for example in an armed conflict on the open sea.103

The use of laws may also be at odds with the principle of proportionality, which prohibits attacks “which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.”104 It is a rule of customary international law applicable in iacs as well as niacs.105 The principle relies on a contextual weighing of the potential harm to civilians and civilian objects on the one hand and the potential military advantage of the attack on the other. This requires a subjective assessment.106 The open-endedness of the rule of proportionality combined with the complexity of circumstances may result in undesired and unexpected behaviour by laws, with deadly consequences. Humans may be superior in the ability to ‘frame’ and contextualize the environment.107 The framing problem refers to determining the scope of what is relevant, which in turn will impact the ability to decide on appropriate action. laws have to be programmed to distinguish between relevant and irrelevant information, which may be at odds with their deployment in an open-ended environment. The framing problem is compounded by rules and thresholds that relate to reasonableness or doubt,108 for example article 50(1) of ap I, which provides that “[i]n case of doubt whether a person is a civilian, that person shall be considered to be a civilian.” A counter-argument would be that it is indeed desirable and possible to agree upon a formula for both human operators and laws on how to frame the environment and calculate proportionality. This must be done on a case-by-case basis but does not need to be an entirely subjective determination.109 The icty appears to use a “reasonable person” standard when assessing proportionality;110 such a standard is an objective rather than a subjective one.111
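By way of illustration only, a “formula” of the kind referred to above might look like the following sketch, in which doubt defaults to civilian status and an attack is flagged as excessive when expected civilian harm outweighs anticipated military advantage. The numerical scales, the threshold value and the function names are assumptions introduced here; ihl itself prescribes no such numbers, which is precisely the framing problem described above.

```python
from dataclasses import dataclass


@dataclass
class DetectedPerson:
    p_combatant: float  # classifier confidence that the person is a combatant, 0.0-1.0


def presumed_civilian(person: DetectedPerson, doubt_threshold: float = 0.95) -> bool:
    """In case of doubt, the person is treated as a civilian (cf. AP I art. 50(1))."""
    return person.p_combatant < doubt_threshold


def attack_is_excessive(expected_civilian_harm: float,
                        anticipated_military_advantage: float) -> bool:
    """Crude excessiveness test on two abstract scores.

    Real proportionality is a contextual, reasoned judgment rather than a
    comparison of two scalars; the scores here are purely illustrative.
    """
    if anticipated_military_advantage <= 0:
        return True  # no anticipated advantage: any expected civilian harm is excessive
    return expected_civilian_harm > anticipated_military_advantage
```

The sketch makes the difficulty visible: everything contentious is hidden in how the two scores and the doubt threshold are produced, which is exactly where human framing and contextual judgment are claimed to be superior.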

The principle of precaution requires that several precautionary measures be taken before launching an attack and against the effects of attacks.112 It arguably also extends to the whole planning phase of an armed deployment and concerns all persons involved in preparations, including commanders, the manufacturers and the programmers of laws.113 laws which do not endanger the operator’s life arguably permit a higher degree of precaution compared to scenarios where human operators or manned vehicles are used for reconnaissance.114 laws may in certain cases make it easier to comply with the abovementioned principles: they can act in a self-sacrificial manner where target identification is uncertain or where measures of self-defence would result in excessive civilian harm.115 They can also process more data more quickly, whereas a human combatant could be overwhelmed with information.116 laws need not necessarily rely on the same inputs as a human combatant; instead, a reasonable standard is that they should be able to perform at least with the same degree of reliability.117 Conversely, the need for continuous re-assessment of appropriate precautionary measures may require human involvement. This may limit the deployment of currently existing laws to military platforms (military aircraft, warships and military vehicles) in situations in which there is less risk of collateral damage to civilians or civilian objects.118

The principles of distinction, proportionality, and precautions are illustrative examples of how existing rules and principles are applicable to military ai. Without providing a conclusive list of existing rules and principles, the next section will describe the phenomenon of “add-ons” to existing rules.

2.2 roe Acting as “Add-Ons” to Existing Rules

An “add-on” in this context relates to rules which are specific to military ai and closely integrated into more general rules without creating a separate legal regime in the form of a treaty. An example from a field other than ihl could be the ai-related provisions that were added to the 1968 Vienna Road Traffic Convention through an amendment in 2015.119

As indicated above, the function of translating political goals into military goals and guidance may be performed through Rules of Engagement (roe).120 roe are a tool for command and control of the use of military force.121 The U.S. Department of Defense has concluded that, beyond taking account of ihl, roe will be essential when deploying laws in operational situations.122 ihl and roe have to be part of the coding that goes into laws; they can have built into their code international mandates for interventions, or restrictions imposed by the roe of the state concerned. In relation to permissible use of force, the standardised nato roe provide that “[t]he capability and preparedness to inflict damage can be taken to exist when certain tactical events occur. These may include … the deployment of remote targeting methods”,123 which arguably could include the use of military ai. In collective actions, this could even allow the use of laws belonging to one state by other states. For example, a loitering munition with the autonomous capacity to stay airborne for some time, identify a target, and then attack could be restricted by design to operate in certain areas by the owner of the munition, even if deployment and activation are carried out by the forces of another state. Similarly, states could provide support to another state without deploying troops, contributing only autonomous weapons, without the fear that they could be misused (for example, in Ukraine).

roe are the internal rules or directives of military forces (including individuals) that define the circumstances, conditions, degree, and manner in which force, or actions which might be construed as provocative, may be used. roe are not the subject of regulation in any multilateral treaty, nor in domestic law. They are military directives which play an important role in implementing ihl obligations. Most states have some form of roe to guide their combatants.124 roe are based on law, policy and operational concerns.125 roe thus narrow down what operators of ai applications may or may not do at the tactical level (i.e. on the ground). roe delegate the right to use force, coercion and other measures that could be perceived as provocative to different decision levels of a military organization.126

roe are in a sense similar to the “code of conduct” approach, i.e. national codes on the development and procurement of autonomous weapons. In comparison with multilateral conventions, national codes of conduct may be more flexible and capable of quickly adapting to technological advances. However, such codes are neither internationally binding, nor do they involve international oversight.127

Are roe a form of regulation or merely a tool for implementing law and policy? Normally, the latter would be the correct view. However, when specific treaty rules on certain means or methods of warfare are absent or sparse, roe may assist in identifying relevant customary international law (cil). If a larger group of states adopts the same roe, as the nato member states have done,128 this is not only applicable to those states; it may also contribute to the formation of cil. As Cooper points out, “roe may nonetheless affect customary law development, or at least interpretation of current rules, in so far as it provides an indication of what the States consider to be lawful”.129 Moreover, she also notes that “nato roe are developed to enable a unified approach to the use of force in order to accomplish [a distinct] common mission”,130 which ties well into the discussion in section 1.4 above on mission command.

There are proposals to constrain laws through programming, for example by designing an “ethical” algorithm with an embedded feedback loop that would either allow the laws to deploy its weapons in a particular instance or forbid it from doing so.131 This should be done in a manner that keeps the programming and the roe consistent with each other. The notion of “meaningful human control” mentioned above can be implemented when adopting roe.132 This is also consistent with Lessig’s message that computer code may regulate conduct in the same manner as law does,133 i.e. that technology may be a form of regulation, and with the idea that the design of technological systems may be used for the advancement of public policy: to govern “by design”.134 In the absence of human oversight, such as in highly autonomous laws, the ai-enabled weapon system must be programmed with mission-type orders, commander’s intent, roe, and ihl as well as the logic to discern the legality of an engagement.135 To be more concrete, the roe programmed into a laws would not only include algorithms aligned with the Geneva Conventions i–iv and their additional protocols;136 one could also add restrictions preventing the use of military ai in densely populated areas. As such, it would be an add-on, integrated into a more general legal framework.
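A minimal sketch of such regulation “by design” could look as follows: an engagement is permitted only inside a designated operating area and never inside designated concentrations of civilians. The geometry test, the data model and the names are assumptions for illustration, not an actual roe implementation.

```python
from dataclasses import dataclass

Point = tuple[float, float]
Polygon = list[Point]


def point_in_polygon(p: Point, polygon: Polygon) -> bool:
    """Standard ray-casting test: does point p lie inside the polygon?"""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


@dataclass
class RoeProfile:
    operating_area: Polygon           # restriction "by design" to an agreed area
    populated_areas: list[Polygon]    # no engagements inside concentrations of civilians


def engagement_permitted(target: Point, roe: RoeProfile) -> bool:
    """Hard-coded roe gate: inside the operating area, outside populated areas."""
    if not point_in_polygon(target, roe.operating_area):
        return False
    if any(point_in_polygon(target, area) for area in roe.populated_areas):
        return False
    return True
```

In this reading the roe constraint travels with the weapon: the owning state fixes the operating area and the populated-area exclusions at the design or hand-over stage, while deployment and activation may be left to another actor, as discussed above.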

2.3 New Legal Framework(s) for Military ai

A third option would be to regulate military ai via a completely new framework. Below, two conceivable avenues are explored: 1) subjecting military ai to an arms control regime or arms trade regime and/or 2) introducing new regulations on the methods of ai warfare.

2.3.1 Subjecting Military ai to an Arms Control Regime or Arms Trade Regimes

Military ai will create a comparative military advantage for the states in control of the technology. It may provide significant strategic advantages to the actor(s) controlling the technology, which can upset balances of power or disrupt previously stable global governance arrangements.137 There is a growing perception that the development of military ai is escalating into a strategic arms race. For instance, ai-based laws might alter how power is currently distributed worldwide and how international law governs the use of force. There is a risk of a regulatory arms race, since states with less regulation may have an advantage when creating these ai applications.138 However, a total ban on laws appears improbable.139 Moreover, the dilemma of strategic imbalances is arguably not solved through ihl, as these rules are not really designed to deal with questions of strategic stability; instead, this is a matter for arms control regimes.140

If military ai already is, or is about to, escalate into a strategic arms race, can it be compared with the nuclear arms race during the Cold War and be subjected to similar limitations? Although the threshold for acquiring military ai capabilities is lower than for constructing nuclear weapons, there is still a barrier; Maas notes that “cutting-edge ai still requires very large (and rapidly increasing) amounts of computational power”.141 Maas also asks how viable arms control regimes are for military ai.142 When comparing military ai with the nuclear weapons arms control regime, he notes some similarities: both offer a strong and “asymmetric” strategic advantage; both involve dual-use components, technologies, and applications, which makes a blanket global ban of the technology politically difficult to enforce and maybe even undesirable; and both involve an initially high technological threshold.143 There are also differences: while nuclear weapons, with the two exceptions of 1945, have never been used, it is likely that laws will see daily and regular use.144 Whereas existing arms control regimes on nuclear weapons are premised on the difficulty of hiding (for example uranium) enrichment facilities, missile launch sites, or nuclear tests, it is plausible that the development of ai is, amongst other things, more discreet.145

At this stage one needs to emphasize that law is not always able to restrain the powerful; sometimes, and maybe more often, law is used to preserve or reinforce power asymmetries. This is not necessarily bad: it could be good that access to military ai is limited to only a few state actors and not generally available. This can be achieved through public regulation, private business initiatives or a combination thereof.

Thus, the desirable regulation from the viewpoint of the few states that today possess the technology would be to introduce arms control and arms trade regimes to restrict transfer to non-state actors and rogue states. Such a regime could be introduced through different means: a treaty, joint policy decisions within regional organizations such as the EU, or bilateral agreements.

2.3.2 Introducing New Regulations on the Methods of ai Warfare

The restrictive framework of ihl also covers “methods”; Additional Protocol I to the Geneva Conventions provides that “[i]n any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited.”146 As stated above, the term “methods” refers to the tactics or strategy used in hostilities to defeat the enemy by using available information on him together with weapons, movement and surprise. Considering that a complete ban on laws as a means of warfare is unrealistic, and maybe also unwarranted, a more suitable approach would be to limit how they are used, which rather relates to a limitation of method. When searching for similar limitations, the prohibition on indiscriminate attacks appears most relevant for laws. The concern that most critics and “abolitionists” raise is that laws may attack members of the civilian population. One could argue that there is already a limit through the principle of distinction, which requires that the parties to a conflict at all times distinguish between civilians and combatants.147 However, this may not adequately address the risk of malfunction or lack of proper judgment in the operating system of the laws. This risk would appear highest in situations where military units operate close to or among a civilian population. If this is the case, the regulation of incendiary weapons (napalm, white phosphorus bombs) may provide inspiration.

The Protocol on Prohibitions or Restrictions on the Use of Incendiary Weapons to the Convention on Certain Conventional Weapons (Protocol iii) allows the use of such weapons while containing certain restrictions for such use.148 Article 2(1) of the Protocol applies the prohibition under general rules of ihl, of attacks on civilians, including by way of reprisals, to incendiary weapons, in essence repeating the principle of distinction. The substantial additional restriction is found in article 2(2) and (3) containing a prohibition “to make any military objective located within a concentration of civilians the object of attack by air-delivered incendiary weapons” and “to make any military objective located within a concentration of civilians the object of attack by means of incendiary weapons other than air-delivered incendiary weapons”.

During the negotiations of Protocol iii, several states argued for a total ban on the use of incendiary weapons. There was no support for that;149 hence the current regulation allows their use against military units, but not in populated areas. This could serve as a model for how to restrict the use of laws. Notably, Protocol iii does not regulate the production, transfer or stockpiling of incendiary weapons. This might be added to a future protocol on laws.

The ccw regime could allow progressive, modular, or iterative expansion of its scope. However, Maas has noted some general problems with the regime that are familiar in international law (lack of mechanisms for verification or enforcement of compliance) and more specific problems, since the regime is limited to indiscriminate or excessively injurious weapons, which does not fit well with some of the challenges associated with military ai in areas such as strategic stability and safety. Maas still argues that the idea of a modular treaty regime for military ai – directed by broader criteria and concerns than are present in the ccw – is at least somewhat promising.150

laws are primarily discussed in the context of the ccw, which indicates that the international community anticipates that the matter will be regulated under ihl, i.e. a regulation of how weapons are used. The main alternative would be to subject laws to an arms control regime and disarmament law. With such an alternative, the weapon is prohibited when it comes to development, production, stockpiling and use. This may prove difficult since the technology is of a dual-use character, where civilian use and development provide the main impetus.151

Within the ccw framework, there is a Group of Governmental Experts (gge) that has been involved in discussions on requiring some form of human control in the use of laws.152 The ccw gge has agreed on the principle that humans should retain and exercise control over weapons systems.153

If regulation within the ccw becomes impossible or unavailable, there are two conceivable alternative forms of regulation: 1) a UN convention, which would require a decision in the UN General Assembly in the form of a resolution and negotiations; or 2) regulation outside the UN framework, as was done with the convention on anti-personnel mines (Ottawa) and the convention on cluster munitions (Oslo). It is unlikely that the most important countries from a military and technological perspective would be involved and become committed.154 While there was wide recognition that the use of biological weapons, chemical weapons, blinding lasers, and anti-personnel landmines is contrary to the principles of ihl, the same cannot (yet) be said of autonomous weapons systems.155 With a moderate approach to regulation there may still be a chance that more militarily and technologically advanced countries will join a regulatory regime; ideally, this should happen within the ccw framework. If that is not possible, a UN convention would appear to be the second-best alternative; regulation outside the ccw and the UN should be avoided if possible.

3 Conclusions

In addition to mapping alternative regulatory strategies, this article has distinguished between two parallel phenomena: semi-artificial intelligent command and control systems on the one hand and laws on the other. Military ai already manifests itself, and will increasingly do so, in different forms. Thus, it is reasonable that regulation will take different forms.

Part of the mainstream discussion relates to the concept of “meaningful human control”, which refutes full autonomy for an ai over weapons systems. The logical consequence is to discuss which factors determine when human control is needed. Rather than introducing a total ban, the challenge is to determine the relevant degrees and modes of “meaningful human control” over military ai, especially in relation to the targeting process.156 However, even in the case of fully autonomous weapons systems, more stringent rules of engagement can be hard-coded into the system. When it comes to semi-artificial intelligent command and control systems, the fire designator who observes a potential target may still be a human who transmits their observations to the command and control system, where they are processed by an ai and a suggested course of action is presented to a human commander. As long as the semi-artificial intelligent command and control system has presented all the options, including opportunities and risks, in a correct manner, the fire designator and the human commander will arguably still be responsible for any potential errors they commit. In the case that fully-automated laws are available and considered for use, it is suggested that their use should be prohibited in densely populated areas, following the same logic as incendiary weapons. Further, one could introduce certain export restrictions on fully-automated laws to prevent proliferation to non-state actors and rogue states.

Given the rapid technological development, attempts at detailed regulation specifying which specific laws are to be allowed or prohibited will arguably be of only limited value. Instead, we should focus on how ai and laws are used. Ideally, this article will prompt a discussion of how mission command, roe and regulation of certain methods of warfare are interconnected when military ai is used or considered for use. Appropriate limitations on the means and methods of ai warfare have to be implemented into training, military units, computer code and roe in a seamless and integrated way.

Acknowledgements

I would like to thank Kevin Jon Heller, Jonas Tallberg, Johannes Geith, Magnus Lundgren, Eva Erman, Sonia Bastigkeit-Ericstam and Mikael Enberg for suggestions and views on the draft. This article is part of the project “The Global Governance of Artificial Intelligence” funded by The Wallenberg ai, Autonomous Systems and Software Program – Humanities and Society (wasp-hs), 2021–2023.

1

Binns, Reuben and Veale, Michael ‘Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the gdpr’, 11(4) International Data Privacy Law, 2021, 319–332, 322; Kraska, James, ‘Command Accountability for ai Weapon Systems in the Law of Armed Conflict’, 97 International Law Studies 2021, 407–447, 408.

2

Wood, A. J., Algorithmic Management: Consequences for Work Organisation and Working Conditions, Seville: European Commission, 2021, jrc124874, 4. https://joint-research-centre.ec.europa.eu/publications/algorithmic-management-consequences-work-organisation-and-working-conditions_en.

3

Michael R., Curtis, The Principles of Mission Command Applied to Lethal Autonomous Weapon Systems (US Army Command and General Staff College: School of Advanced Military Studies, 2020), 3.

4

Meyers, Adam, ‘Danger Close: Fancy Bear Tracking of Ukrainian Field Artillery Units’ Crowdstrike Blog, 2016; ‘Use of Fancy Bear Android Malware in Tracking of Ukrainian Field Artillery Units’ Crowdstrike Global Intelligence Team, 2016.

5

‘Как украинские программисты увеличили скорость ответа артиллерии в 40 раз (How Ukrainian programmers increased the speed of artillery response by 40 times)’ Inforesist, 2015; gis Arta website, https://gisarta.org/en/; Goncharenko, Roman, ‘Why is the US sending ‘downgraded’ weaponry to Ukraine?’, 25 March 2023.

6

esl Advanced Information Technology GmbH (Austria), accs – Artillery Command & Control System, available at <https://www.eslait.at/index.php/en/c4i-systems/c4isys-accs-en/185-c4i-accs-en>; ncia/nato accs Management Organisation, Air Command and Control System (accs) < https://npc.ncia.nato.int/Pages/accs.aspx>: “the first fully integrated system in nato, enabling planning, automatic tasking, battlespace management and task execution for all types of air operations.”

7

Yuval Abraham, ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza, +972 Magazine, 30 November 2023, available at https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/; Harry Davies, Bethan McKernan and Dan Sabbagh, ‘The Gospel’: how Israel uses ai to select bombing targets in Gaza, The Guardian, 1 December 2023.

8

idf website, הצצ למפעל המטרות של צה"ל הפועל מסביב לשעון (A glimpse of the idf’s target factory that operates around the clock), 2 November 2023 available at https://www.idf.il/144833/.

9

Solis, Gary D., The Law of Armed Conflict (Second Edition, Cambridge: Cambridge University Press, 2016), 537.

10

Draft Protocol on Autonomous Weapon Systems (Protocol vi), Submitted by Argentina, Ecuador, El Salvador, Colombia, Costa Rica, Guatemala, Kazakhstan, Nigeria, Palestine, Panama, Peru, Philippines, Sierra Leone and Uruguay, ccw/gge.1/2023/wp.6, 11 May 2023, article 2(1).

11

icrc, ‘International Humanitarian Law and the Challenges of Contemporary Armed Conflicts’, 2019, 29.

12

Ibid, 29.

13

Melzer, Nils, ‘Human rights implications of the usage of drones and unmanned robots in warfare’ Study for the European Parliament’s Subcommittee on Human Rights, 2013, 6; Klamberg, Mark, ‘International Law in the Age of Asymmetrical Warfare, Virtual Cockpits and Autonomous Robots’ in Ebbesson, Jonas and others (eds), International Law and Changing Perceptions of Security, 152–170 (Leiden and Boston: Brill Nijhoff, 2014), 165–166; Sassòli, Marco, International Humanitarian Law: Rules, Controversies, and Solutions to Problems Arising in Warfare (Cheltenham: Edward Elgar, 2019), 517.

14

Petman, 2017, 18.

15

Maas, Matthijs M., ‘How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons’, 40(3) Contemporary Security Policy, 2019, 285–311, 300; Petman, 2017, 58–60; Ett effektivt förbud mot dödliga autonoma vapensystem som är oförenliga med folkrättens krav, rapport till Folkrätts- och nedrustningsdelegationen, 14 April 2021, 12 and 20. Similarly, see icrc, 2019, 29–30 and 32 and Petman, 2017, 58 and 71. Compare with Trabucco, Lena and Heller, Kevin Jon, ‘Beyond the Ban: Comparing the Ability of ‘Killer Robots’ and Human Soldiers to Comply with ihl’, 46 Fletcher Forum of World Affairs, available at ssrn: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4089315, 2022, 2.

16

Petman, 2017, 59 and 71.

17

Draft Protocol on Autonomous Weapon Systems (Protocol vi), article 2(2).

18

Maas, 2019, 300.

19

Ibid, 301–302.

20

Ibid, 285–286; Curtis, 2020, 5–6.

21

Statman, Daniel, ‘Drones and Robots: On the Changing Practice of Warfare’ in Lazar, Seth and Frowe, Helen (eds), The Oxford Handbook of Ethics of War (Oxford: Oxford University Press, 2015); Arkin, Ronald, ‘Lethal Autonomous Systems and the Plight of the Noncombatant’, 137 aisb Quarterly, 2013, 4–10, 5–7 and Trabucco and Heller, 2022, 4–5.

22

icrc, 2019, 29.

23

Human Rights Watch, Report: Losing Humanity: The Case Against Killer Robots, 19 November 2012.

24

Sharkey, Noel, ‘Saying “No!” to Lethal Autonomous Targeting’, 9 Journal of Military Ethics, 2010, 369–383, 371–372.

25

Himmelreich, Johannes, ‘Responsibility for Killer Robots’, 22(3) Ethical Theory and Moral Practice, 2019, 731–747; Erman, Eva and Furendal, Markus, ‘The Global Governance of Artificial Intelligence: Some Normative Concerns’ Moral Philosophy and Politics, 2022, 8.

26

Maas, Matthijs M., ‘Innovation-Proof Global Governance for Military Artificial Intelligence? How I Learned to Stop Worrying, and Love the Bot’, 10(1) Journal of International Humanitarian Legal Studies, 2019, 129–157, 148.

27

Compare with Erman and Furendal, 2022, 17–19.

28

Rapport till Folkrätts- och nedrustningsdelegationen, 14 April 2021, 9.

29

Simmonds, Nigel E., Central Issues in Jurisprudence: Justice, Law and Rights (Fourth Edition, London: Sweet & Maxwell, 2013), 17–19 and 294–297.

30

Erman and Furendal, 2022, 16–21.

31

Klamberg, 2014, 167; Petman, 2017, 12; Maas, Innovation-Proof Global Governance, 2019, 140.

32

Compare with Erman and Furendal, 2022, 3.

33

Frowe, Helen, The Ethics of War and Peace: An Introduction (2nd Edition, New York: Routledge, 2016), 224.

34

Bergman, David, ‘Myten om de omoraliska drönarna: Hur autonoma vapensystem kan leda till högre moral i krigföring’, 4 Kungl Krigsvetenskapsakademiens Handlingar & Tidskrift, 2015, 43–57, 48.

36

Sassòli, 2019, 526.

37

Klamberg, 2014, 168.

38

Petman, 2017, 46.

39

Kraska, 2021, 432.

40

Protocol Additional to the Geneva Conventions of 12 August 1949 and Relating to the Protection of Victims of International Armed Conflicts (Protocol I) of 8 June 1977, articles 86(2) and 87.

41

Sassòli, 2019, 527.

42

Petman, 2017, 50–51.

43

Heller, Kevin Jon, ‘The Concept of ‘The Human’ in the Critique of Autonomous Weapons’, 14 Harvard National Security Journal, Forthcoming, 2023, 3.

44

Petman, 2017, 45.

45

Sparrow, Robert, ‘Killer Robots’, 24(1) Journal of Applied Philosophy, 2007, 62–77, 69–73; Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, 9 April 2013, Human Rights Council, Twenty-third session (Human Rights Council 2013), paras. 76–79 and 81.

46

Asp, Petter, Jareborg, Nils and Ulväng, Magnus, Kriminalrättens grunder (2nd edition, Uppsala: Iustus, 2013), at 106–116; Larsson, Frida, ‘Styvföräldrars garantansvar’, (3) Juridisk Tidskrift, 2013/14, 651–662; for German doctrine, see Kaufmann, Armin, Die Dogmatik der Unterlassungsdelikte (Göttingen: Schwartz, 1959), 14, 49, 51–54 and 306 et seq.; Beulke, Werner and Wessels, Johannes, Strafrecht Allgemeiner Teil – Die Straftat und ihr Aufbau (42nd edition, C.F. Müller, 2012), 288–289; Brammsen, Joerg, Die Entstehungsvoraussetzungen der Garantenpflichten (Berlin: Duncker & Humblot, 1986), 116 et seq.; Köndgen, Johannes, Selbstbindung ohne Vertrag (Tübingen: 1981), 163 et seq.; Gründewald, Anette, Zivilrechtlich begründete Garantenpflichten im Strafrecht? (Berlin: Duncker & Humblot, 2000), 46; commented by Sjögren, Anders, ‘Högsta domstolen prövar garantläran’, Svensk Juristtidning, 2014, 170–184.

47

Prosecutor v. Ishaq, Stockholm district court, B 20218–20, Judgement 4 March 2022, 4–5, 18–20, 34–37.

48

Shamir, Eitan, Transforming Command (Stanford Security Studies, 2011), 9; Wedin, Lars, ‘Uppdragstaktik i historia och nutiden’ in Ahlgren, Patrik, Engelbrekt, Kjell and Wedin, Lars (eds), Uppdragstaktik på svenska (Stockholm: The Royal Swedish Academy of War Sciences and the Swedish Defence University, 2016) 3–19, 6.

49

Wedin, 2016, 11; Holmqvist, Mathias, ‘Uppdragstaktik och informationsoperationer’ in Ahlgren, Patrik, Engelbrekt, Kjell and Wedin, Lars (eds), Uppdragstaktik på svenska (Stockholm: The Royal Swedish Academy of War Sciences and the Swedish Defence University, 2016) 87–110, 91.

50

Shamir, 2011, 14–15.

51

Ibid, xi and 3.

52

Wedin, 2016, 3.

53

Ibid, 3 and 7.

54

Ryan, Mike, “It is during this period of shock when Ukraine can seize the most ground, and destroy the largest number of enemy troops. And it is exactly what they are doing. The Ukrainians, using mission command, are operating inside the Russian tactical and operational decision loops”, Twitter post, 18 September 2022, available at https://twitter.com/WarintheFuture/status/1571563744710299648.

55

Wedin, 2016, 13.

56

Ibid, 11.

57

Ibid, 14 and 16.

58

Ibid, 10–11.

59

Ibid, 15.

60

Field Manual 1, The Army, Headquarters, Department of the Army, Washington, DC, 14 June 2001, 36; Shamir, 2011, xii.

61

Shamir, 2011, xii.

62

Bergman, 2015, 44.

63

Defined as “The operational advantage derived from the ability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same”, Joint Publication 3–13, Information Operations, Washington, D.C., incorporating change 1, 20 November 2014 (U.S. Joint Chiefs of Staff), gl-3.

64

Neretnieks, Karlis, ‘Uppdragstaktiken är död – leve uppdragstaktiken’ in Ahlgren, Patrik, Engelbrekt, Kjell and Wedin, Lars (eds), Uppdragstaktik på svenska (Stockholm: The Royal Swedish Academy of War Sciences and the Swedish Defence University, 2016) 20–34, 31.

65

Freedberg Jr., Sydney J., ‘Attacking Artificial Intelligence: How To Trick The Enemy’ in Artificial Intelligence: The Frontline of a New Age in Defense (Breaking Defense, 2019) 17–18, 18.

66

Ibid, 18. See Curtis, 2020, who has applied the principles of mission command to laws.

67

Bergman, 2015, 44.

68

Van Rompaey, Léonard, ‘Shifting from Autonomous Weapons to Military Networks’, 10 Journal of International Humanitarian Legal Studies, 2019, 111–128.

69

Kraska, 2021, 408.

70

Shamir, 2011, 15–17.

71

Holmqvist, 2016, 91.

72

Hague Convention iv – Respecting the Laws and Customs of War on Land and annexed regulations adopted 18 October 1907, article 22.

73

Additional Protocol I to the Geneva Conventions, Article 36.

74

Solis, 2016, 538.

75

Rapport till Folkrätts- och nedrustningsdelegationen, 14 April 2021. In Sweden it is regulated in Förordning (2007:936) om folkrättslig granskning av vapenprojekt.

76

icrc, ‘Means of warfare’ <https://casebook.icrc.org/glossary/means-warfare> accessed 12 August 2022.

77

ap i, article 35(2).

78

Klonowska, Klaudia, ‘Article 36: Review of ai Decision-Support Systems and Other Emerging Technologies of Warfare’, 23 Yearbook of International Humanitarian Law, 2020, 123.

79

Compare Van Rompaey, 2019.

80

Most examples were drawn from icrc, ‘Methods of warfare’ <https://casebook.icrc.org/glossary/methods-warfare> accessed 12 August 2022.

81

Hague Convention iv, article 23(b) and (f); ap i, articles 37 and 85(3)(f).

82

ap i, article 51(2); Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol ii), 8 June 1977, article 13(2).

83

ap i, article 54(1); ap ii, article 14.

84

Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field of 12 August 1949, article 46; Geneva Convention for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea of 12 August 1949, article 47; Geneva Convention Relative to the Treatment of Prisoners of War of 12 August 1949, article 13(3); Geneva Convention Relative to the Protection of Civilian Persons in Time of War of 12 August 1949, article 33(3), ap ii, article 20.

85

ap i, articles 51(4)(a)(b), 51(5)(a)(b).

86

Convention on the prohibition of military or any hostile use of environmental modification techniques, 10 December 1976; ap i, articles 35(3), 55.

87

ap i, articles 56 and 85(3)(c); ap ii, article 15.

88

Hague Convention iv, article 23; ap i, article 40; ap ii, article 4(1); Melzer, Nils, Targeted Killing in International Law (Oxford: Oxford University Press, 2008), 367–371.

89

gc i, article 15(1); gc ii, article 18(1); gc iv, articles 16(2) and 33(2); ap ii, articles 4(2)(g) and 8.

90

gc iv, articles 34 and 147; ap i, article 75(2)(c).

91

gc i, articles 38, 44, 53 and 54.

92

Common article 3 of the 1949 Geneva Conventions; ap i, article 41.

93

Ibid, article 42.

94

Tallberg, Jonas and others, ‘The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research’ ssrn, 2023, 11.

95

icrc, 2019, 30.

96

Petman, 2017, 26.

97

ap i, articles 48 and 51(2).

98

Legality of the Threat or Use of Nuclear Weapons, icj, Advisory Opinion, 8 July 1996, paras 78–79.

99

Petman, 2017, 16–17.

100

Ibid, 28–29.

101

Hague Convention iv, article 2; Klamberg, Mark, ‘Exploiting Legal Thresholds, Fault-Lines and Gaps in the Context of Remote Warfare’ in Ohlin, Jens David (ed), Research Handbook on Remote Warfare (Cheltenham: Edward Elgar Publishing, 2017) 186–209, 207–208; Klamberg, 2014, 163–164.

102

hr Council Report, 2013, para. 67.

103

Solis, 2016, 539; Petman, 2017, 31.

104

ap i, articles 51(5)(b) and 57(2)(a)(iii).

105

icrc, Customary International Humanitarian Law (Henckaerts, Jean-Marie and Doswald-Beck, Louise eds, Cambridge: Cambridge University Press, 2005), vol. i, 46–49, rule 14.

106

Petman, 2017, 35 and 36.

107

hr Council Report, 2013, para. 71.

108

Petman, 2017, 32.

109

Sassòli, 2019, 521.

110

Prosecutor v. Galić, (Case No. it-98-29-T), icty T. Ch., Judgement and opinion, 5 December 2003, para. 58.

111

Sassòli, 2019, 521.

112

ap i, articles 57 and 58.

113

Petman, 2017, 41.

114

Solis, 2016, 542.

115

Petman, 2017, 7.

116

Henderson, Ian S., Keane, Patrick and Liddy, Josh, ‘Remote and autonomous warfare systems: precautions in attack and individual accountability’ in Ohlin, Jens David (ed), Research Handbook on Remote Warfare (Cheltenham: Edward Elgar Publishing, 2017) 335–370, 341–342.

117

Ibid, 343.

118

Petman, 2017, 42.

119

Convention on Road Traffic 1042 unts 17, adopted in Vienna 8 November 1968, amendments of articles 8 and 39 accepted 6 October 2015; Kunz, Martina and Ó hÉigeartaigh, Seán, ‘Artificial Intelligence and Robotization’ in Geiss, Robin and Melzer, Nils (eds), Oxford Handbook on the International Law of Global Security (2020) 624–640, 629.

120

Wedin, 2016, 15. For definition, see nato, Military Decision on mc 362/1 – nato Rules of Engagement (downloaded 28 May 2023 from https://govtribe.com/file/government-file/rfpactsact1646-mc-362-1-nato-roe-dot-pdf): “[d]irectives to military forces that define the circumstances, conditions, degree, and manner in which force, or actions which might be construed as provocative, may be applied”. See also Sanremo Handbook on Rules of Engagement.

121

Cooper, Camilla Guldahl, nato Rules of Engagement: On roe, Self-Defence and the Use of Force during Armed Conflict (Brill Nijhoff, 2020), 25, 79–87.

122

U.S. Department of Defense, 8 May 2017.

123

Appendix 1 to Annex 1 of nato roe.

124

Solis, 2016, 474.

125

Roach, J. Ashley, ‘Rules of Engagement’, 36(1) Naval War College Review, 1983, 46–55, 46–49; Solis, 2016, 479.

126

Solis, 2016, 479.

127

Petman, 2017, 63.

128

As noted by Cooper, 2020, 38, “nato develops new roe for all its operations”.

129

Ibid, 49.

130

Ibid, 37.

131

Petman, 2017, 61.

132

Ibid, 71.

133

Lessig, Lawrence, Code and Other Laws of Cyberspace (Basic Books, 1999).

134

Mulligan, Deirdre K. and Bamberger, Kenneth A., ‘Saving Governance-by-Design’, 106 California Law Review, 2018, 697–784.

135

Curtis, 2020, 37.

136

gc i; gc ii; gc iii; gc iv; ap i; ap ii.

137

Maas, 2019, 285.

138

Tallberg et al., 2023, 9.

139

Solis, 2016, 535.

140

Maas, Innovation-Proof Global Governance, 2019, 141.

141

Maas, 2019, 290.

142

Ibid, 285.

143

Ibid, 288.

144

Ibid, 289.

145

Maas, Innovation-Proof Global Governance, 2019, 144.

146

ap i, article 35(1).

147

Ibid, articles 48 and 51(2).

148

Protocol on Prohibitions or Restrictions on the Use of Incendiary Weapons to the Convention on Certain Conventional Weapons (Protocol iii), Geneva, 10 October 1980.

149

Bring, Ove, Nedrustningens folkrätt (Stockholm: Norstedts, 1987), 260–263; Henckaerts and Doswald-Beck, 2005, vol. ii, Ch. 30 §§9–73; Bring, Ove and Körlof, Anna, Folkrätt i krig, kris och fredsoperationer (Fourth Edition, Stockholm: Norstedts Juridik, 2010), 182–184.

150

Maas, Innovation-Proof Global Governance, 2019, 153.

151

Rapport till Folkrätts- och nedrustningsdelegationen, 14 April 2021, 5–6; Maas, 2019, 289 and 294.

152

Group of Governmental Experts of the High Contracting Parties to the ccw ‘Report of the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (laws)’ (2017) UN Doc ccw/gge.1/2017/3; Group of Governmental Experts of the High Contracting Parties to the ccw ‘Report of the 2018 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (2018) UN Doc ccw/gge.1/2018/3 as referenced by Maas, Innovation-Proof Global Governance, 2019, 130–131.

153

Rapport till Folkrätts- och nedrustningsdelegationen, 14 April 2021, 12.

154

Ibid, 20.

155

Petman, 2017, 67.

156

Ibid, 73–74.
