Shifting from Autonomous Weapons to Military Networks

Journal of International Humanitarian Legal Studies

Abstract

The persistent anthropomorphism of lethal autonomous weapons systems (LAWS) as the replacement for human soldiers creates irrelevant expectations of physical embodiment and cognitive individualization. This anthropomorphism taints the analysis of, and discussions on, the adaptation of international humanitarian law (IHL) by excluding relevant technologies from their scope.

Shifting from LAWS to a network-centric sociotechnical systems perspective remedies the under-inclusiveness of the LAWS perspective by moving away from the salient features of LAWS in favour of a focus on the interactions between humans and technology, and on the influence that the technology has on human decision-making in warfare. By criticizing the relevance of the technological focus of the current diplomatic process, the paper argues that the network-centric perspective is not only more accurate, but also more helpful and practical in adapting IHL to the armed conflicts of the twenty-first century.

1 Introduction

The discussions on the emerging technologies in the area of lethal autonomous weapons systems (LAWS) have struggled to agree on a common framework of definitions. The discussions have been held within the structure of the Convention on Certain Conventional Weapons (CCW) since 2014, where the initial mandate and subsequent focus have been on LAWS.1 The initial informal Meeting of Experts recently became a Group of Governmental Experts (GGE), which has been ‘mandated with examining emerging technologies in the area of lethal autonomous weapons systems ... to allow for an exchange of views and expertise on this multi-faceted issue’.2 Even in its current GGE form, the CCW process has trouble agreeing on the very need for definitions.

For some delegations, a working definition of LAWS is essential to fully address the potential risks posed. For others, absence of an agreement on a definition should not hamper discussions or progress within the CCW.3

For instance, the concept of ‘autonomy’, central to the acronym LAWS, receives very different definitions from States, ranging from quasi-sentience and near-human intent4 to action without human control,5 while some state that autonomy and human control are mutually exclusive.6 The terms lethal, autonomous, weapons, and systems are treated as both cumulative and exclusive.7 This means that for a machine to fall under the scope of the mandate of the CCW, it would have to qualify under each individual term of the acronym, and not have a single feature that is in opposition to those same qualifiers. Any system that is sub- or non-lethal, non-weaponized or with non-kinetic effects, or that is piloted (or directed, or supervised, for those States that consider autonomy and human control as mutually exclusive) will be excluded by a number of delegations.8 The word ‘emerging’ is also contentious:9 some States consider that currently existing technologies are not part of the discussions, which excludes systems like drones. Regardless, States have been making proposals for regulating emerging technologies in warfare and for reinterpreting the principles of IHL. Those proposals include, inter alia, a new protocol to the CCW and a soft-law instrument such as a common political declaration.10

There has been considerable academic engagement with the topic of LAWS, criticizing the anthropomorphic nature of the concept,11 noting the communication problems that hamper discussions,12 and examining the analogy between human soldiers and machine soldiers.13 Some authors also push back against the very concept of LAWS, arguing that more matters than the individual autonomous weapon, which implies that LAWS as a concept is under-inclusive. This has been referred to with terms such as ‘network-enabled command and control’,14 ‘network-centric warfare’,15 ‘computer networks’,16 ‘human-machine configurations’,17 ‘war-algorithm’,18 technology that is ‘modular’,19 ‘systems that collaborate’,20 ‘complex distributed systems’ that are ‘networked together’,21 ‘AWS distributed across multiple machines’,22 and the ‘Internet of Battle Things’.23 While most of those references are critical of the CCW’s conceptualization of LAWS, the concepts of networks and distribution are mostly used as examples of how LAWS can be deployed in many different ways. Scholars in other fields have underlined the inappropriateness of the concept of ‘autonomy’ from a legal perspective,24 the deep and complex ties that exist between human beings and technology,25 and how science and technology prompt reactions from legal institutions – such as treaty negotiating bodies like the CCW – which in turn shape the furthering of scientific progress and the development of technology.26

This paper criticizes the relevance of the concept of LAWS and proposes a shift to a more inclusive perspective based on the interactions and influences that exist within networked sociotechnical systems. LAWS is not an accurate framework for discussions, negotiations, or regulation, because LAWS and autonomy are anthropomorphic concepts. LAWS are perceived as the replacement for human soldiers, which makes us believe they possess some of the inherent features of a human soldier. Those irrelevant features include physical embodiment, mental individualization, and weaponization. This makes the CCW’s discussions – and any resulting instruments – under-inclusive by excluding systems that could have very negative effects on the use of force, as a thought experiment below illustrates. The paper suggests taking a networks perspective instead. The technology used in warfare is part of a sociotechnical system, where the key is not the individual nodes – whether human or machine – but the possible interactions between different systems, which may be different bodies sharing a common mind, and between humans and machines. What becomes salient in such a perspective is the influence that the technology has on human decision-making. It does not matter whether a system is lethal or not, autonomous or not. What matters is that those systems ‘configure us’,27 and constrain the types of decisions that human beings can make, even when they have control over the systems. The networks perspective is then situated within a Science, Technology and Society framework in order to argue that it is not only more accurate, but also more helpful for driving discussions and negotiations forward.

2 Persistent Anthropomorphism of LAWS

The debate on LAWS has not properly revealed and cast aside the anthropomorphism that animates the discussions. The debates at the CCW regularly denounce anthropomorphism,28 and delegates often remind themselves that they are not talking about science-fiction scenarios.29 This does not stop them from comparing LAWS to soldiers, as LAWS become the replacement for human soldiers.30 For instance, the costs of producing, operating, or maintaining LAWS are compared with the costs of training, feeding, housing, and providing social security for human soldiers.31 In the narrative thus created, LAWS become the replacement for the soldier, in body and mind.

The first source of this anthropomorphism is the language used to describe the capabilities of the systems. Claims that LAWS will be either more or less capable of empathy, ethical behaviour, or humane behaviour make us attribute human properties and mental states to the systems.32 This ‘wishful mnemonic’33 created through the language we use to talk about LAWS also inserts the system into the legal framework of IHL in reference to human soldiers. What human soldiers can or should do becomes the benchmark for assessing autonomous systems. The characteristics of a human soldier as a weaponized, active, and individualized combatant become the legal framework of reference for LAWS. The problem is that the essential qualities of emerging or current technologies are not that they are weaponized, individualized, and embodied – that is, physically incarnated, a requirement that leads to discarding purely digital entities such as cyber-weapons. Talking about machines with an unchecked anthropomorphic vocabulary leads us to expect too much of the machines,34 but it also makes us expect irrelevant capabilities and overlook or disregard relevant elements such as the influence of the technology. This is because the legal expectations placed on LAWS mirror what we expect either of unintelligent rifles or of human soldiers.35

Framing LAWS as the replacement for human soldiers leads to projecting the human characteristics of embodiment and individualization onto LAWS. Human beings are embodied in the sense that we each possess a physical body that allows us to move and interact in the physical world. For us, body and mind form one related unit, instead of being distributed between different bodies. This may be contrasted with the cyber-world, where embodiment is not possible or required. ‘Embodiment’ is described by some as one of the most prominent and legally relevant aspects of robotics,36 so qualifying LAWS as embodied is not distinctive of the CCW, as this tendency is observable in other robotics debates. It is this requirement of embodiment that excludes cyber-weapons from the scope of the LAWS debates at the CCW.

This embodiment is tied to the individualization of different persons. Each individual person is a distinct entity from every other individual person. We do not have direct mental access to other people’s thoughts, but we can attribute mental states to others, persons we likewise perceive as individualized.37 Considering LAWS as individualized leads to overshadowing the direct cognitive interaction LAWS can have with other machines, which might provide them with data or orders. A system of systems might have one mind but many bodies. The difference between embodiment and individualization is that embodiment suggests a physical manifestation, whereas individualization can also describe cyber entities. Embodiment and individualization are irrelevant expectations that derive from anthropomorphizing LAWS.

The nature of IHL frames and feeds the anthropomorphism of LAWS. IHL focuses on active, weaponized combatants that are able to produce lethal kinetic force. Active combatants and weapons are intimately related, because soldiers only pose a real threat if they are weaponized and able to apply kinetic force. While non-lethal weapons are not discarded from general IHL enquiry, they are generally perceived as less relevant because they are more proportionate by nature.38 Legal institutions tend to give precedence to their own institutional self-understanding,39 and thus the CCW reapplies this fundamental institutional self-understanding to the new debate on LAWS. This restricts the scope of the debates to lethal and weaponized systems, excluding non- or sub-lethal weapons, non-weaponized systems, and cyber-systems from the enquiry. Bearing lethal weapons – as suggested by the cumulative lethal and weapons terms in the LAWS acronym – is a parameter of the scope of the CCW’s discussions on emerging technologies, and it is likely to become a legal parameter for defining the object of a would-be new protocol to the CCW. Individualization and embodiment are further reinforced because an illusory separation is created between the LAWS and the other bodies that constitute the one mind of the network of systems (such as command interfaces and intelligence-gathering systems).

In a way, the International Committee of the Red Cross interprets the two criteria of lethality and weaponization in relation to autonomy when it defines ‘critical functions’ as the ability to ‘select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention’.40 Although this definition does not strictly limit itself to lethality, as confirmed by the word ‘neutralize’, it is still based on kinetic force. That there are critical functions suggests that there are non-critical functions. Focusing on critical functions excludes non-critical functions,41 because the vocabulary used implicitly qualifies them as irrelevant or unimportant.

The focus of the LAWS debates is the loss of control over autonomous entities. Autonomy and control are pictured as a zero-sum game in which the level of human control is inversely proportional to the autonomy of the systems.42 In a way, this is what is implied by the concept of meaningful human control. This concept was first introduced by an NGO, Article 36,43 which states that total loss of control over weapons is unlikely, but that the control that is left should be meaningful. Article 36 urged States to consider the nature of the human control to be exercised over LAWS, and the point at which such control stops being meaningful.44 The networks perspective allows the observation that looking at the loss of human control and the increase of autonomy is not necessarily relevant or comprehensive. This is because the networking of many different technological nodes, at a military and non-military level, does not merely remove or allow control by humans. The network more or less visibly supersedes that control through influence over the human decision-makers.45

The notion of autonomy defined as the absence of human control is untenable. Stating that there should be human control over LAWS amounts to making control external. It is the ultimate objective of ‘behaviour-based robotics’ to be rid of external control.46 The problem is that within this philosophical tradition and within the CCW, external control is equated with human control. This is also tied to the notion of embodiment, where the autonomous system is the one that has a body directly connected to its decision-making ‘mind’, to the exclusion of any other actor.47 While this vision may be relevant to civilian systems that exist in a vacuum, it is not the case for military systems, which will always be integrated into a sociotechnical network. Positing that there is external – or human – control reduces control to its external factors, such as the human controller’s hand. This misrepresents the direct connection that exists between different machines, where there would also be internal control between different systems. This control would be internal because it would be part of the one mind. LAWS might be under human control, but that might not be the only source of control over the system’s behaviour.

3 Shifting Towards the Networks Perspective

Instead of trying to understand the gap between autonomy and control, we need to examine the effects that ‘particular human-machine configurations’48 integrated into sociotechnical systems have on the battlefield and on decision-making in warfare. This is what the ‘networks perspective’ tries to do: recentre the human beings within the networks as the main subject of focus, instead of looking at the features of the technological platforms and at how those features insert themselves into the IHL framework. What becomes visible is that the interaction between human beings and machines is no longer a mere two-way relationship in which humans control the technology and the technology produces a usable output. Instead, the relationship becomes more diffuse and circular: the technology is configured by us, but also configures us.49 The proposed networks perspective is better suited to exposing how different technologies, and not just LAWS, configure or influence us, and how this affects warfare. Influence is the capacity of the technology to ‘configure us’ at more or less visible levels.50

The relevance of the networks perspective, as well as some of the shortcomings of the CCW’s current perspective, can be illustrated with a thought experiment. The US military currently uses a Disposition Matrix for suggesting drone strike targets: software that can suggest profiles of interest and make connections between individuals on the basis of drone footage and biometric data, among other sources.51 For the sake of the thought experiment, let us suppose it is developed and improved well beyond its current form. It now has direct access to live satellite feeds and improvised explosive device (IED) history, in addition to its earlier access to biometric data and other social data, and a multitude of other sources of information. It is given full control over some non-weaponized systems, and partial control over some LAWS, which it can reroute to a point of interest but is otherwise unable to order to attack. This suggests that even though the Matrix could oversee part of the behaviour of the system, this oversight would exclude control of the LAWS’ weapons or critical functions, and would thus be irrelevant to a legal regime drafted in continuation of the current framework of the CCW’s discussions, which uses ‘weaponization’ as one of its definitional cornerstones.

The Matrix analyses the live feed provided by satellites over a territory where an armed conflict is taking place. It detects a convoy of vehicles and determines that it might be involved in an enemy operation. It directs its reconnaissance systems in the area to obtain better footage of the convoy, and cross-references the new information with biometric data, social media activity, and the IED concentration history of the zone. At this stage the Matrix determines that there is a very high chance that it is a military convoy and reroutes one of the nearby loitering LAWS. Even at this point, it is doubtful whether this situation would fall under the scope of a legal regime based on weaponization, because the interaction between the Matrix and the LAWS would be limited to movement, and would exclude targeting or attacking.

Once the LAWS arrives in the zone, it recommends the target for attack to a human controller. The controller has access to the live feed from the LAWS, and to the Matrix’s analysis, which indicates that the combination of data from social media, biometrics, local IED history, and satellite footage, among others, identifies the convoy as a military one with 85 per cent certainty. The human controller authorizes the attack, which ends up destroying eight school buses: negative outcomes remain possible even when the systems are under human control.

The problem is that the live footage and the requirement for confirmation (as opposed to leaving a window of opportunity for cancellation) could be enough to legally qualify the system as being under human control. The decision thus controlled might even legally disqualify the system as an autonomous system, excluding the operation from a potential CCW protocol’s regulatory grasp. Because the Disposition Matrix is a cyber-system, it would fall outside a legal regime exclusively addressing physically embodied systems. This is a corollary of the focus on kinetic force central to IHL, which favours physical entities over cyber-systems that cannot project force in the same way. If the Matrix had direct control over the use of the weapons of another system, it might fall under the scope of a weapons-based protocol. But in this example, the Matrix can only ‘order’ the LAWS to move to a certain location, effectively setting up the potential for violence while not pulling the trigger. This sets the scene for the LAWS’ effect on the battlefield, and it configures the types of decisions that the human controller can make. This suggests that even non-weaponized cyber capabilities can have considerable influence over the conduct of warfare.

In this case, the information and analysis provided by civilian satellites,52 reconnaissance drones,53 and cyber-espionage systems is sent to the command interface far away from the battlefield, through civilian cloud services commissioned by the US military,54 each technology further shaping the interaction and the influence over the human decision-makers. The command interface analyses the data provided by the other machine nodes, orders the satellites and drones to focus their data gathering on a certain zone, and suggests a course of action to a human commander. The different data-gathering nodes mentioned earlier feed continuous live data, which LAWS on the battlefield use for highlighting targets to the human soldiers supervising them; the system on the battlefield might command a cyber capability to shut down the enemy’s electricity grid at the moment it engages.55 It does not make sense to think of the different systems present on the battlefield as individualized, embodied entities, because those two qualities are only loosely correlated with the capacity of the network to project force. Once we shift to a networks perspective, we can assess the whole network of systems, and not just the tip-of-the-iceberg LAWS, which do not make decisions in cognitive independence from other systems. This allows better observation of how human beings’ decisions are influenced by the technologies.

LAWS with very low levels of autonomy can still have a strong influence on the conduct of warfare and on the decisions made by human beings. For instance, a system similar to the SGR-A1 could have a mode of functioning in which it can only draw the human agent’s attention towards a trespasser and ask for authorization to fire, with no set time limit. The technology still mediates the interaction between the supervisor and the target. First, it is the system that flags potential targets, and in doing so the system is already making a decision that affects the outcome of the operation, even if the critical functions are entirely supervised by a human controller. Second, depending on the functioning and communication capacity of the system, the type of information the human supervisor receives also has a certain constraining effect on the final decision. Yet at no point is this system making a decision to select and attack targets without human control.

The networks perspective observes that the sociotechnical system that incorporates LAWS is much wider than what is described in the debates on autonomous weapons. Sociotechnical systems theory recognizes the inherent interaction that exists between people and technologies, and studies avenues for optimizing those interactions.56 It describes how we are shifting from platform-centric computing towards network-centric computing.57 The interactions in those sociotechnical systems create the conditions for success, and comprise linear, causal relationships (those originating from classical technology) and chaotic relations (humans).58 With network-centric technologies, the behaviour of the technology also becomes chaotic.59 Applying sociotechnical systems theory to LAWS is a welcome advance,60 and can be used as a stepping stone towards the networks perspective.

The military network comprises machine and human nodes, military and civilian nodes. Those are divided geographically by the battlefield – the zone where most of the force projection is done, notwithstanding very relevant criticisms of the very notion of the battlefield as a spatially and temporally defined concept.61 The machine nodes are divided between systems on the battlefield (LAWS, BigDog,62 swarms, drones, etc.) and systems off the battlefield (satellites, cloud systems, command interfaces, etc.). The human nodes are divided between persons on and away from the battlefield. There are military nodes all across the network (human soldiers, human commanders, LAWS, command interfaces, etc.). On the other hand, the civilian nodes (human operators of the civilian satellites, the system that allows the continuous updating of the data in the cloud, etc.) are primarily located outside the battlefield. It is useful at this stage to note that the term node can further anthropomorphize the technology and dehumanize human beings, insofar as it equates humans and machines. At the same time, node allows the focus to be on the interactions rather than on the components of the network.

The machine nodes can constrain the behaviour of other machine nodes (commanding action), influence it (sharing data and analysis), or recommend actions. They can constrain the behaviour of the human nodes by the same means, except that the machines would probably not give commands to human beings. The human nodes are influenced by the data and suggestions produced by the system, and can in turn require the systems to perform certain actions; but the orders that the human commanders would give to either LAWS or soldiers will be influenced and shaped by the technology’s capacities and interactions. Taken individually, the machine nodes might not be autonomous but, taken as a whole, the system of systems exhibits autonomy.63 There might be no autonomous system – whether weaponized or not – in the network, but the influence from the machines might set the possibilities and limits for action on the battlefield, whether those actions are performed by LAWS or by human soldiers.

4 Networks in the Service of Diplomacy

The networks perspective contests the legal and policy choices made by the diplomats at the CCW. Viewed through a Science, Technology and Society studies framework, one could say that the CCW, as a legal and diplomatic institution, interprets the social impacts of the technology and constructs the environment in which the science and technology may exist – even if this construction is made through inaction.64 Law, and science and technology, have in common an interest in the authority of knowledge.65 At the CCW, framing LAWS as lethal, autonomous, weaponized, embodied, and individualized entities is the perspective with the most authority. The purpose of this article has been to contest the diplomats’ and lawyers’ choices in defining the relevant technology. Scientists preoccupied with the distributed and networked nature of the technology have spoken at the CCW’s earlier Meetings of Experts,66 but their interventions did not manage to shift the conversations away from an embodied and individualized understanding of LAWS.

There is a tension between the quality of scientific knowledge and the diplomatic objectives of the CCW. Sheila Jasanoff describes a ‘cascade of deference’ that law should show to science.67 The more objective and strong a scientific claim is, the more deference the law should show to science.68 Conversely, if the scientific claims are weaker, the law should show more deference to its own institutional self-understanding and values.69 There are four levels in this cascade of deference, based on diminishing levels of strength of the scientific claims: objectivity, consensus, precaution, and subsidiarity. Objectivity and consensus cannot describe the CCW’s debates, because that would mean that, at the very least, the ‘relevant scientific communities have been able to set aside all theoretical and methodological disagreements to come together on a shared position’.70 In comparison, insofar as the diplomatic process recognizes potential ‘severe and irreversible harm’ with the use and deployment of LAWS,71 one can say that the objective of those discussions is to achieve precaution.72 While many public figures have raised awareness of the risks of developing LAWS,73 there do not seem to be strong scientific claims of a ‘significant probability of grave, possibly irreparable, harm’.74 By comparison, climate change is described as belonging to the precaution level of deference.75 Subsidiarity, however, seems to describe more accurately the strength of the scientific claims concerning autonomy: facts are either non-existent or greatly contested; there is no epistemic consensus and the ‘situations reflect participants’ inability to agree on a common framing of a problem that would allow parties to engage in rational technical debate’; or there is insufficient knowledge to rule out either of two opposing positions as invalid.76 Diplomats and commentators seem to realize intuitively that there is a significant and grave risk of harm, but it does not seem at this stage that there is the necessary consensus to establish whether precaution or subsidiarity better describes the level of knowledge.

A networks approach to the emerging technologies in warfare would be a more serviceable truth than autonomy. A serviceable truth departs from truth claims towards useful science; it furthers the shift from science in action towards science for action by aiding the law in delivering justice instead of exclusively trying to speak the truth.77 A networks approach serves justice by providing not merely a mirror of nature, but additionally a service to those who need IHL’s protection.78 There is epistemic uncertainty surrounding the (in)ability of machine learning and other artificial intelligence-based techniques to enforce the IHL principles of proportionality and distinction, and it is therefore hard to allow those systems to autonomously make decisions in the application of these principles. Instead of focusing on the features and abilities of the technology – as autonomy does – the networks perspective focuses on the end results of the deployment of the technology, both on potential targets and on its users’ ability to make informed decisions. It is a more serviceable truth because it would allow the focus to be on a technological and scientific level where there is less uncertainty. The uncertainty in autonomy is not eliminated by this shift, but it is made less relevant.

If we want to properly regulate the means and methods of warfare of the twenty-first century, it is important to shift away from the autonomy paradigm, and based on the debates, there is reason to believe that such a turn could occur.79 It is important that the discussions also consider how the technology constrains the types of decisions that can be made on the battlefield, even when those decisions are made by human beings. The CCW process has had good conversations on the potential lowering of conflict thresholds and the increase in insecurity that emerging technologies could bring. Talks on how technology could be hacked and misappropriated by non-State actors have also been fruitful, and the member States are showing interest in discussing those relevant issues further.80 The process has one substantial victory: it has raised awareness of the importance of Article 36 reviews of new weapons, which a growing number of States are developing.81 Technology has been described as a modality of constraint on human behaviour, just like law, markets, and social norms.82 It enables new behaviours as much as it restrains others, more or less visibly, and sometimes not as a voluntary choice of its designers.83 This is no less true for LAWS and emerging technologies in warfare than it is for the internet.84 If we persist in this individualized, embodied, and anthropomorphic perspective on technology in warfare, we will be unable to assess relevant decision-making or decision-influencing elements of the networks that might be essential to the reinterpretation and adaptation of IHL standards. Those elements might ultimately prove irrelevant for ensuring the maintenance of IHL principles in armed conflict, but we need to go through this assessment before we can be rid of this concern.

5 Conclusion

We can better reinterpret IHL to govern new technologies of warfare by shifting to a networks perspective instead of focusing on LAWS. This is because LAWS are anthropomorphized as human soldiers, which restricts the discussions to irrelevant features. While human soldiers are individuals with a physical body, usually bearing weapons, those features are not inherently relevant to new technologies in warfare, whether autonomous or not. Since those features nonetheless animate the discussions on LAWS, the process becomes under-inclusive by excluding relevant technologies. The networks perspective portrays the technology, and its influence on human decision-making, more accurately, because it turns away from the irrelevant features associated with LAWS (autonomy, individualization, embodiment, weaponization) and focuses instead on the interactions between different technological and human nodes, and on the influence that the technologies have on our ability to make decisions. Such a shift would be helpful and more practical for reinterpreting IHL for the armed conflicts of the twenty-first century. Networks are more serviceable for reinterpreting IHL than autonomy, because networks would reconcile the precautionary objectives of disarmament fora with the state of scientific and technological development by focusing on a less contentious perspective.

The States Parties to the CCW need to amend the mandate of the Group of Governmental Experts on emerging technologies in the area of LAWS. This new mandate should tone down the lethal, autonomous, and weapons elements, and refocus on the systems element. This would be a good starting point for reducing the weight of our anthropomorphic expectations of robot soldiers, and it would allow us to scrutinize more closely the influence of systems on military decision-making. Using systems as a stepping stone towards the networks perspective makes the change easier for the diplomatic process because it is less radical: the process is already familiar with the notion of systems, and systems can easily be described as part of a sociotechnical system of systems, ie a network. If the emerging part of the mandate is toned down as well, it would become possible to look at how currently existing technologies that are already part of an Internet of Battle Things are used, such as the US military command system that includes surveillance drones, weaponized drones, and the Disposition Matrix. Slowly the process can then turn to less intuitively relevant technologies, such as semi-civilian satellites.85 The risk of sticking to LAWS is ending up with another Amended Protocol II (AP II) on mines and booby-traps (1980, amended 1996).86 AP II was a failure because its scope was under-inclusive and limiting, and it had so many exceptions that in the end its signatories could keep on using mines in many different contexts. Proof of its failure lies in the subsequent adoption of the Anti-Personnel Mine Ban Convention, also known as the Ottawa Convention (1997).87 If a new protocol to the CCW were to be drafted based on a definition of LAWS that refers to embodiment, individualization, weaponization, and human control, such an instrument would be under-inclusive. It would have a narrow and inflexible scope of application that would ultimately prove inapplicable or easily bypassed.

An initial version of this argument was presented at the conference Beyond Killer Robots: Networked Artificial Intelligence Disrupting the Battlefield, organized on 15–16 November 2018 by the University of Copenhagen (Faculty of Law, Centre for International Law, Conflict and Crisis, and the AI and Legal Disruption research group). I would like to thank the attendees of the conference – in particular Hin-Yan Liu, Matthijs Maas, and Anders Henriksen – for valuable feedback on the original idea, as well as Niél Conradie, Amin Parsa, and Gary Schaub for thought-provoking discussion during the panel session. I would also like to thank two anonymous reviewers for key input, and the editors for their feedback. Any remaining errors or inaccuracies are my own.

1

Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW MSP), ‘Final Report: 2013 Session (Geneva, 14–15 November 2013)’ (16 December 2013) UN Doc CCW/MSP/2013/10, para 32. All documents are available on the UNOG website under ‘Discussions on emerging technologies in the area of LAWS’.

2

See the ‘2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)’ subsection of the ‘Discussions on emerging technologies in the area of LAWS’ section of the UNOG website, accessed 27 March 2019.

3

Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW GGE), ‘Report of the 2018 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (Geneva, 9–13 April 2018 and 27–31 August 2018)’ (23 October 2018) UN Doc CCW/GGE.1/2018/3, para 27a. See also CCW GGE, ‘Report of the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) (Geneva, 13–17 November 2017)’ (22 December 2017) UN Doc CCW/GGE.1/2017/3, para 16f.

4

United Kingdom, ‘Working towards a Definition of LAWS’ (Statement, Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 11–15 April 2016) para 4, stating that a fully autonomous LAWS ‘is capable of understanding, interpreting and applying higher level intent and direction based on a precise understanding and appreciation of what a commander intends to do and perhaps more importantly why’; see also United Kingdom, ‘Statement for the General Exchange of Views’ (CCW GGE, 9 April 2018) para 2.

5

Noel Sharkey (Expert Presentation on Technical Issues, Informal Meeting of Experts on Lethal Autonomous Weapon Systems, Geneva, 13–14 November 2014).

6

France, ‘Characterization of a LAWS’ (Non-Paper, Informal Meeting of Experts on Lethal Autonomous Weapon Systems, Geneva, 11–15 April 2016) 1: ‘LAWS should be understood as implying a total absence of human supervision’.

7

See, eg, France and Germany, ‘Statement (under Agenda Item “General Exchange of Views”)’ (CCW GGE, Geneva, 9–13 April 2018) para 7.

8

CCW GGE, ‘Report of the 2018 Session’ (n 3) Annex III, paras 3–12.

9

Ibid Annex III, paras 5–6.

10

Ibid para 33.

11

Noel Sharkey and Lucy Suchman, ‘Wishful Mnemonics and Autonomous Killing Machines’ [2013] AISB Quarterly 14; Karolina Zawieska, ‘Do Robots Equal Humans? Anthropomorphic Terminology in LAWS’ (Expert Presentation, Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13–17 April 2015); Lucy Suchman and Jutta Weber, ‘Human-Machine Autonomies’ in Nehal Bhuta and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (Cambridge University Press 2016).

12

Merel A C Ekelhof, ‘Complications of a Common Language: Why It Is So Hard to Talk about Autonomous Weapons’ (2017) 22 Journal of Conflict and Security Law 311.

13

Hin-Yan Liu, ‘Categorization and Legality of Autonomous and Remote Weapons Systems’ (2012) 94 International Review of the Red Cross 627.

14

Guy H Walker, Neville A Stanton and Daniel P Jenkins, Command and Control: The Sociotechnical Perspective (Ashgate 2009).

15

Ibid; Sharkey and Suchman (n 11).

16

Heather Roff, ‘To Ban or Regulate Autonomous Weapons: A US Response’ (2016) 72 Bulletin of the Atomic Scientists 122.

17

Suchman and Weber (n 11).

18

Dustin A Lewis, Gabriella Blum and Naz K Modirzadeh, ‘War-Algorithm Accountability’ (Research Briefing, Harvard Law School Program on International Law and Armed Conflict 2016).

19

Ibid 16.

20

Ibid 18.

21

Maya Brehm, ‘Defend the Boundary: Constraints and Requirements on the Use of Autonomous Weapons Systems under International Humanitarian and Human Rights Law’ (Geneva Academy of International Humanitarian Law and Human Rights 2017) 20 and 44.

22

Heather M Roff and David Danks, ‘“Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems’ (2018) 17 Journal of Military Ethics 8.

23

Alexander Kott, Ananthram Swami and Bruce J West, ‘The Internet of Battle Things’ (2016) 49 Computer 70.

24

Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 California Law Review 513; Jack M Balkin, ‘The Path of Robotics Law’ (2015) 6 California Law Review Circuit 45.

25

Walker, Stanton and Jenkins (n 14).

26

Sheila Jasanoff, Science at the Bar: Law, Science and Technology in America (Harvard University Press 1995); Sheila Jasanoff, ‘Serviceable Truths: Science for Action in Law and Policy’ (2015) 93 Texas Law Review 1723.

27

Suchman and Weber (n 11) 100.

28

CCW GGE, ‘Report of the 2018 Session’ (n 3) para 26h; Zawieska (n 11).

29

France (n 6).

30

See, eg, Ronald Arkin, ‘The Case for Banning Killer Robots: Counterpoint’ (2015) 58 Communications of the ACM 46; see also Bonnie Lynn Docherty, Mind the Gap: The Lack of Accountability for Killer Robots (Human Rights Watch 2015).

31

Michael Schmitt, ‘War, Technology and the Law of Armed Conflict’ in Anthony M Helm (ed), The Law of War in the 21st Century: Weaponry and the Use of Force (US Naval War College 2006) 149.

32

Sharkey and Suchman (n 11); Zawieska (n 11).

33

Sharkey and Suchman (n 11).

34

Zawieska (n 11).

35

Liu (n 13).

36

Calo (n 24) 531.

37

Alan M Leslie, ‘Theory of Mind’ in International Encyclopedia of the Social and Behavioral Sciences (Elsevier 2001).

38

David Fidler, ‘The International Legal Implications of “Non-Lethal” Weapons’ (1999) 21 Michigan Journal of International Law 51.

39

Jasanoff, ‘Serviceable Truths’ (n 26) 1736.

40

ICRC, ‘Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons’ (August 2016) 8.

41

CCW GGE, ‘Report of the 2018 Session’ (n 3) para 33, where only ‘critical functions’ are mentioned.

42

For a criticism of autonomy, see also Calo (n 24); Balkin (n 24).

43

Article 36, ‘Structuring Debate on Autonomous Weapons Systems: Memorandum for Delegates to the Convention on Certain Conventional Weapons (CCW)’ (Briefing Paper, 2013).

44

Ibid.

45

Matthijs M Maas, ‘Regulating for “Normal AI Accidents”: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment’, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society: AIES ’18 (ACM Press 2018).

46

Suchman and Weber (n 11) 86.

47

Ibid 86–87.

48

Ibid 100.

49

Ibid.

50

Ibid.

51

Amin Parsa, Knowing and Seeing the Combatant: War, Counterinsurgency and Targeting in International Law (Lund University 2017).

52

Nathan E Clark, ‘Blurred Lines: Dual-Use Networks for Satellite Remote Sensing’ (Beyond Killer Robots: Networked Artificial Intelligence Disrupting the Battlefield, University of Copenhagen, 15 November 2018).

53

Amin Parsa, ‘Knowing and Seeing the Combatant. Drone Targeting in US Counterinsurgency’ (Beyond Killer Robots: Networked Artificial Intelligence Disrupting the Battlefield, University of Copenhagen, 15 November 2018).

54

Sophie-Charlotte Fischer, ‘The Deployment of Civilian-Developed AI Technologies in Increasingly Autonomous Weapons Systems’ (Beyond Killer Robots: Networked Artificial Intelligence Disrupting the Battlefield, University of Copenhagen, 15 November 2018).

55

Kott, Swami and West (n 23).

56

Walker, Stanton and Jenkins (n 14).

57

Ibid 3.

58

Ibid 6–7.

59

Ibid.

60

See, eg, Kyle J Behymer and John M Flach, ‘From Autonomous Systems to Sociotechnical Systems: Designing Effective Collaborations’ (2016) 2 She Ji: The Journal of Design, Economics, and Innovation 105; Ezio Di Nucci and Filippo Santoni de Sio (eds), Drones and Responsibility: Legal, Philosophical and Sociotechnical Perspectives on Remotely Controlled Weapons (Routledge 2016).

61

See, eg, Frédéric Mégret, ‘War and the Vanishing Battlefield’ (2011) 9 Loyola University Chicago International Law Review 131.

62

See the Boston Dynamics ‘BigDog’ webpage, accessed 27 December 2018.

63

Hin-Yan Liu, ‘The Opposite of Autonomy’ (Beyond Killer Robots: Networked Artificial Intelligence Disrupting the Battlefield, University of Copenhagen, 15 November 2018).

64

Jasanoff, Science at the Bar (n 26) 16.

65

Ibid 19–20.

66

Mark Hagerott (Expert Presentation on Operational and Military Aspects, Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13–16 May 2014); Andrea Omicini, ‘The Distributed Autonomy: Software Abstractions and Technologies for Autonomous Systems’ (Expert Presentation, Informal Meeting of Experts on Lethal Autonomous Weapon Systems, Geneva, 13–17 April 2015).

67

Jasanoff, ‘Serviceable Truths’ (n 26).

68

Ibid 1736–37.

69

Ibid 1736.

70

Ibid 1741.

71

Ibid 1743.

72

See, eg, CCW GGE, ‘Report of the 2018 Session’ (n 3) para 26, which mentions ‘potential challenges’.

73

Samuel Gibbs, ‘Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots’ The Guardian (20 August 2017).

74

Jasanoff, ‘Serviceable Truths’ (n 26) 1745.

75

Ibid 1744.

76

Ibid 1746.

77

Ibid 1729–31.

78

Ibid 1730.

79

See, eg, CCW GGE, ‘Report of the 2018 Session’ (n 3) para 27c.

80

Ibid para 32; see also the programmes of the 2014, 2015 and 2016 Meetings of Experts, which had various sessions on Human Rights, Security Issues, etc.

81

Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3 (AP I) art 36.

82

Lawrence Lessig, ‘The Law of the Horse: What Cyberlaw Might Teach’ (1999) 113 Harvard Law Review 501; Roger Brownsword, ‘In the Year 2061: From Law to Technological Management’ (2015) 7 Law, Innovation and Technology 1.

83

See, eg, adult-content zoning on the internet in Lessig (n 82) 503.

84

Balkin (n 24); Calo (n 24); Margot E Kaminski, ‘Authorship, Disrupted: AI Authors in Copyright and First Amendment Law’ (2017) 51 UC Davis Law Review 589; Hin-Yan Liu, ‘Irresponsibilities, Inequalities and Injustice for Autonomous Vehicles’ (2017) 19 Ethics and Information Technology 193.

85

Nathan Edward Clark, ‘Blurred Lines: Multi-Use Dynamics for Satellite Remote Sensing’ (2019) 10 Journal of International Humanitarian Legal Studies 171.

86

Protocol on Prohibitions or Restrictions on the Use of Mines, Booby-Traps and Other Devices (adopted 10 October 1980, entered into force 2 December 1983) 1342 UNTS 137 (amended 3 May 1996, entered into force 3 December 1998) 2048 UNTS 93 (Amended Protocol II).

87

Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction (adopted 18 September 1997, entered into force 1 March 1999) 2056 UNTS 211.
