[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]
In a well-attended lunchtime side event yesterday (don’t go to UN meetings for the free food; plastic-wrapped sandwiches and water or pop were the offerings, and these quickly disappeared at the hands of the horde of hungry delegates), Canadian robotics entrepreneur Ryan Gariepy spoke about why his company, Clearpath Robotics, declared last year that it does not and will not produce killer robots. With about eighty employees, Clearpath is a young, aggressive developer of autonomous ground and maritime vehicle systems, putting about equal emphasis on hardware and software. The company’s name reflects its original goal of developing mine-clearing robots, and Clearpath is by no means allergic to military robotics in general; its client list includes “various militaries worldwide” and major military contractors. Nevertheless, in a statement released in August 2014, Gariepy, as co-founder and Chief Technology Officer, wrote, “To the people against killer robots: we support you…. Clearpath Robotics believes that the development of killer robots is unwise, unethical, and should be banned on an international scale.”
[Photo: Ryan Gariepy’s presentation]
At lunch yesterday, Gariepy explained some of his reasons. He sees a general tradeoff in robotic systems between “flexibility” or “capability” and “predictability” or “controllability,” and worries that military imperatives will drive autonomous weapons toward the former goals. He talked about recent findings that the same “deep learning” neural networks that Professor Stuart Russell had earlier described as displaying “superhuman” performance in visual object classification tasks are also prone to bizarre errors: uniform patterns misclassified as images of familiar objects, and images that the machines recognize correctly but misclassify once an amount of engineered (non-random) noise imperceptible to a human is added. This is one example of the “Black Swan” phenomenon that characterizes complex systems in general. Gariepy also talked about the low cost of the subcomponents that would go into killer robots, implying that they could be produced in massive numbers.
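For readers who want to see what that “engineered noise” looks like in practice, here is a minimal sketch of the fast-gradient-sign idea, assuming PyTorch is available; the tiny untrained classifier and random input are stand-ins for illustration only, not any system discussed at the meeting. With a real trained network, a perturbation this small routinely flips the predicted class even though a human cannot see the difference.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: a tiny untrained linear model over 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28)   # placeholder input image
label = torch.tensor([3])          # placeholder "true" class

# Gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = loss_fn(model(image), label)
loss.backward()

# Fast gradient sign method: nudge every pixel slightly in the direction
# that increases the loss. An epsilon of 0.05 is well below what a human notices.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("prediction on original: ", model(image).argmax(dim=1).item())
print("prediction on perturbed:", model(adversarial).argmax(dim=1).item())
```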
Gariepy believes in a “robotics revolution” that can be purely benevolent: “After all, the development of killer robots isn’t a necessary step on the road to self-driving cars, robot caregivers, safer manufacturing plants, or any of the other multitudes of ways autonomous robots can make our lives better.” I and, I suspect, many readers of this blog have some questions about what kind of care robots will be able to give, and whether manufacturing plants are going to be “safer” or just not have people working in them at all (and why those people shouldn’t then be doing the caregiving). But it’s clear that we are no longer living in the military spin-off economy of the Cold War era; the flow of technology from military R&D to civilian application has largely reversed. This makes it doubtful that Clearpath really has “more to lose” than it has to gain from the free publicity that came with its declaration, and Gariepy admits it has actually helped him to recruit top-notch engineers who would rather work with a clear conscience.
In contrast with those who find they must wrestle with complexity and nuance in their quest for the meaning of autonomy (see my previous post), Gariepy’s statement took a pretty straightforward approach to defining what he was talking about: “systems where a human does not make the final decision for a machine to take a potentially lethal action.” That’s the no-go, but otherwise, he pledged that “we will continue to support our military clients and provide them with autonomous systems — especially in areas with direct civilian applications such as logistics, reconnaissance, and search and rescue.”
[Photo: Ryan Gariepy, on Lake Geneva]
Fair enough, but in a conversation over beers on the quay at Lake Geneva at day’s end, I pressed Gariepy on just where he would draw the line. For example, I asked, what if a client came to him and said, “We’ve got an autonomous tank, but we don’t want you to work on the fire controls, just the vehicle navigation so it doesn’t run over anybody.” Gariepy was categorical: “You just admitted it’s a lethal autonomous weapon, so I won’t work on it.” What about a “nonlethal” weapon? Suppose somebody wants to arm a drone with a taser and have it patrol their estate? Or suppose they have a missile of some sort, and they want to use an algorithm you own a patent on, not to make the missile home in on a target, but to divert it away in case it detects the presence of a human being? It would only be saving lives, then.
Gariepy threw up his hands at such questions and said, “I don’t want to think about all that. I have a business to run.” And in fairness, he is probably the only person who was sitting in the plenary sessions with his laptop open, coding. Referring to the community that has nothing to do but brainstorm and debate the fine print of a killer-robot ban, he added, “You guys think about it, and tell me what to do.”
One of the advantages of being a private entrepreneur, he explained, is not having to make policy to govern such cases in advance. “I can change my mind, or decide as the situation arises.” Unless, that is, there is a law about the matter, and Gariepy wants a law. So he doesn’t have to think about all that.
(Edit: Expanded the penultimate paragraph to add more detail.)
1 Comment
Thanks for your informative posts. I tend to agree with the simpler definitions of autonomy.
Type I "in the loop" LAWS. Robot decides to shoot, human presses Agree button. Robot shoots.
Type II "on the loop" LAWS. Robot decides to shoot. If human does not press Cancel button, robot shoots. (If shooting is really fast, human can hit Stop button.)
Type III "off the loop" LAWS. Robot decides to shoot. Robot shoots. Human reads log file.
These definitions capture existing weapons (Patriot Type I, Phalanx Type II). I think it's a real mistake to define "autonomy" in the future tense.
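As a purely illustrative sketch of the commenter's three loop types, the distinction can be written down in a few lines of Python; the class and function names here are hypothetical and not drawn from any real weapon system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class LoopType(Enum):
    IN_THE_LOOP = auto()    # Type I: human must press Agree before the robot shoots
    ON_THE_LOOP = auto()    # Type II: robot shoots unless the human presses Cancel in time
    OFF_THE_LOOP = auto()   # Type III: robot shoots; human only reads the log afterwards

@dataclass
class Engagement:
    loop_type: LoopType
    human_pressed_agree: bool = False
    human_pressed_cancel: bool = False

def robot_may_fire(e: Engagement) -> bool:
    """Decide whether the robot fires, given the human's (in)action."""
    if e.loop_type is LoopType.IN_THE_LOOP:
        return e.human_pressed_agree          # fires only with explicit approval
    if e.loop_type is LoopType.ON_THE_LOOP:
        return not e.human_pressed_cancel     # fires unless vetoed in time
    return True                               # off the loop: fires regardless

# Example: an on-the-loop system where the human did not cancel in time.
print(robot_may_fire(Engagement(LoopType.ON_THE_LOOP)))  # True
```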