[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]
Here’s my latest dispatch from the second major diplomatic conference on Lethal Autonomous Weapons Systems, or “killer robots” as the less pretentious know them. (A UN employee, for whom important-sounding meetings are daily background noise, approached me in the cafeteria to ask where she could get a “Stop Killer Robots” bumper sticker like the one I had on my computer, and said she’d have paid no attention to the goings-on if that phrase hadn’t caught her eye.) The conference continued yesterday with what those who make a living out of attending such proceedings like to describe as “the hard work.”
Expert presentations in the morning session centered on why militaries are interested in autonomous systems in general, and in autonomous weapons systems in particular. As Heather Roff of the International Committee for Robot Arms Control (ICRAC) put it, this is not just a matter of assisting or replacing personnel and reducing their exposure to danger and stress; militaries are also pursuing these systems for "strategic, operational, and tactical advantage."
Roff traced the origin of the current generation of “precision-guided” weapons to the doctrine of “AirLand Battle” developed by the United States in the 1970s, responding then to perceived Soviet conventional superiority on the European “central front” of the Cold War. Similarly, Roff connected the U.S. thrust toward autonomous weapons today with the doctrine of “AirSea Battle,” responding to the perceived “Anti-Access/Area Denial” capabilities of China (and others).
Some background: The traditional American way of staging an overseas intervention is to park a few aircraft carriers off the shores of the target nation, from which to launch strikes on land and naval targets, and to mass troops, armor, and logistics at forward bases in preparation for land warfare. But shifts in technology and economic power are undermining this paradigm, particularly with respect to a major power like China, which can produce thousands of ballistic and cruise missiles, advanced combat aircraft, mines, and submarines. Together, these weapons are capable of disrupting forward bases and "pushing" the U.S. Navy back out to sea. This is where the AirSea Battle concept comes in. As first articulated by military analysts connected with the Center for Strategic and Budgetary Assessments and the Pentagon's Office of Net Assessment, the AirSea Battle concept holds that at the outset of war, the United States should escalate rapidly to massive strikes against military targets on the Chinese mainland (predicated on the assumption that this will not lead to nuclear war).
Now, from the narrow perspective of a war planner, this changing situation may seem to support a case for moving toward autonomous weapon systems. For Roff, however, the main problems with this argument are arms races and proliferation. The “emerging technologies” that underlie the advent of autonomous systems are information technology and robotics, which are already widely proliferated and dispersed, especially in Asia. Every major power will be getting into this game, and as autonomous weapon systems are produced in the thousands, they will become available to lesser powers and non-state actors as well. Any advantages the United States and its allies might gain by leading the world into this new arms race will be short-term at best, leaving us in an even more dangerous and unstable situation.
Afternoon presentations yesterday focused on how to characterize autonomy. (I have written a bit on this myself; see my recent article on "Killer Robots in Plato's Cave" for an introduction and further links.) I actually like the U.S. definition of autonomous weapon systems as simply those that can select and engage targets without further human intervention (after being built, programmed, and activated). The problems arise when you ask what it means to "select" targets, and when you add in the concept of "semi-autonomous" weapons, which are actually fully autonomous except that they are only supposed to attack targets that a human has "selected." I think this is like saying that your autonomous robot is merely semi-autonomous as long as it does what you wanted; that is, as long as it hasn't malfunctioned yet.
I would carry the logic of the U.S. definition a step further and simply say that any system is (operationally) autonomous if it operates without further intervention. I call this autonomy without mystery. It leads to the conclusion that what we actually want is not to ban everything that qualifies as an autonomous weapon, but simply to avoid a coming arms race. This can be done by presumptively banning autonomous weapons, with a list of exceptions for things that are too simple to be of concern, or that we want to allow for other reasons.
Implementing a ban of course raises other questions, such as how to verify that systems are not capable of operating autonomously. This might seem a very thorny problem, but I think it makes sense to reframe it: instead of trying to verify that systems cannot operate autonomously, we should seek to verify that weapons are, in fact, being operated under meaningful human control. For instance, we could ask compliant states to maintain encrypted records of each engagement involving any remotely operated weapons (such as drones). About two years ago, other ICRAC members and I produced a paper that explores this proposal; I would commend it to anyone who might have felt frustrated by some of the confusion and babble during the conference yesterday afternoon.
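To make the idea concrete, here is a minimal sketch of what tamper-evident engagement records might look like. This is my own illustration, not the mechanism specified in the ICRAC paper: each record is encrypted and chained to its predecessor by a hash, so an inspecting body can confirm that no records have been altered or deleted without needing to decrypt them. The field names and the use of Python's cryptography package are assumptions for illustration only.

```python
# Illustrative sketch only, not ICRAC's actual proposal: each engagement by
# a remotely operated weapon is logged as an encrypted, hash-chained record,
# so a reviewing body can later check that a human authorized the engagement
# and that no records were altered or removed. Field names (operator_id,
# authorization, timestamp) are hypothetical. Requires the third-party
# "cryptography" package.
import hashlib
import json
from cryptography.fernet import Fernet

class EngagementLog:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records = []                # list of (ciphertext, chain_hash)
        self._prev_hash = b"\x00" * 32    # genesis value for the hash chain

    def record(self, operator_id: str, authorization: str, timestamp: str) -> None:
        """Encrypt one engagement record and chain it to its predecessor."""
        plaintext = json.dumps({
            "operator_id": operator_id,      # the human who made the decision
            "authorization": authorization,  # e.g., an order reference
            "timestamp": timestamp,
        }).encode()
        ciphertext = self._fernet.encrypt(plaintext)
        # The chain hash covers the previous hash plus this ciphertext, so
        # any later edit or deletion breaks every subsequent link.
        chain_hash = hashlib.sha256(self._prev_hash + ciphertext).digest()
        self._records.append((ciphertext, chain_hash))
        self._prev_hash = chain_hash

    def verify_chain(self) -> bool:
        """An inspector without the key can still confirm record integrity."""
        prev = b"\x00" * 32
        for ciphertext, chain_hash in self._records:
            if hashlib.sha256(prev + ciphertext).digest() != chain_hash:
                return False
            prev = chain_hash
        return True

# Usage: the state holds the key; inspectors only need the chained records.
log = EngagementLog(Fernet.generate_key())
log.record("operator-07", "order-2015-114", "2015-04-14T09:32:00Z")
assert log.verify_chain()
```

The appeal of a scheme along these lines is that integrity checking and content disclosure can be separated: inspectors could audit the chain routinely, while decryption keys would be surrendered only under agreed procedures, say, when a violation is suspected.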