Forget your robot overlords: Watch out for Lethal Autonomous Systems that make mistakes

A new paper on AI-driven LASs argues they can't always be right and that we'd better think hard about whether we should build them at all


Robot overlords! Pah! The biggest military danger in the near future will be hunter-killer autonomous robots. No, not far-future Terminator-type ’droids but small, cheap bots that will be able to challenge you, figure out if you’re us or them, and, if you’re not, kill you.

We’ve already seen such devices, though they’ve been neither cheap nor especially smart. For example, eight years ago Samsung Techwin announced the all-weather SGR-A1, a 5.56 mm robotic machine gun with an optional grenade launcher, for $200,000 (it wasn't available, as far as I know, in the 2006 Neiman Marcus Christmas catalog).

If you doubt that something worse is coming, just consider the remarkable advances in autonomous robots, from flying drones for delivering goods (or ordnance) through to Boston Dynamics’ “Big Dog” and “WildCat,” which can navigate rough terrain and, in the case of the latter, run you down.

Now, imagine devices like these with, perhaps, a power source such as Rossi’s E-Cat (assuming it ever gets to commercial production) or maybe a miniaturized hydrogen fuel cell for long-term operation. Make ’em cheap enough, give them durability and a weapon, and they’d be some of the most dangerous machines on earth. They’d make land mines look tame, because these machines would lie in wait and then, if they decided you were the enemy when you came near, get up and chase you down. And kill you. Then hide and wait for the next person to come into range. And kill them. Rinse and repeat.

But would these killerbots, or Lethal Autonomous Systems (LASs), make mistakes? Absolutely. A recent paper, Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons, by Matthias Englert, Sandra Siebert, and Martin Ziegler of Technische Universität Darmstadt, explores this question using mathematical logic and theoretical computer science.

We start with well-known and obvious quandaries such as contradicting goals … and then gradually refine the setting to less apparent conflicts. This leads to a hierarchical classification based on four dilemmas, culminating in a thought experiment where an artificial intelligence (AI) based on a Turing Machine is presented with two choices: one is morally preferable over the other by construction; but a machine, constrained by Computability Theory and in particular due to the undecidability of the Halting problem, provably cannot decide which one. We thus employ mathematical logic and the theory of computation in order to explore the limits, and to demonstrate the ultimate limitations, of Machine Ethics.

A trolley car (image: Wikipedia)

A key element of the paper concerns the Trolley Problem:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

The authors use this problem in several increasingly evil versions, each presenting greater ethical and practical complexity, and then add the observation that the conundrums arising from such scenarios can never be solved computationally, thereby ensuring that mistakes will be made.
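The impossibility claim rests on the classic halting-problem reduction. Here is a rough sketch of that reasoning (my own illustration, not code from the paper; moral_oracle, halts, and diagonal are made-up names): if a machine could always pick the morally preferable option in the authors' constructed scenario, it could also decide whether an arbitrary program halts, and Turing's diagonalization argument shows that no program can do that.

```python
# Hypothetical illustration of the reduction (not from the paper).
# In the constructed scenario, the ethically correct action is rigged,
# by construction, to depend on whether some machine eventually halts.

def moral_oracle(program_source: str, program_input: str) -> str:
    """Assumed perfect moral decider: returns 'intervene' exactly when the
    machine described by program_source halts on program_input (the morally
    preferable act in the constructed scenario). No such total, always-correct
    function can exist."""
    raise NotImplementedError  # placeholder for the impossible oracle

def halts(program_source: str, program_input: str) -> bool:
    # If moral_oracle were always right, it would decide the halting problem.
    return moral_oracle(program_source, program_input) == "intervene"

def diagonal(program_source: str) -> None:
    # The standard contradiction: loop forever exactly when we're told we halt.
    if halts(program_source, program_source):
        while True:
            pass  # run forever
    # otherwise halt immediately

# Feeding diagonal its own source code leads to a contradiction either way,
# so moral_oracle cannot exist: some morally loaded decisions are undecidable.
```

In other words, the mistakes aren't an engineering defect that a better model could eliminate; under the paper's construction they are a mathematical certainty.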

The paper is a fascinating and deep analysis that concludes with some trenchant remarks concerning not only the ethical implications but also the legal ones, such as “Lacking an operator, who is liable for damage caused by a malfunctioning [Lethal Autonomous System]: producer or owner?”, “If both the latter two cannot be identified, who gets charged with compensation: the AI?” and “Who is guilty when an AI commits a murder? How can AIs be deterred and possibly punished?”

The paper concludes with “Recommended Regulations concerning AIs,” and while many of the recommendations are sensible and moral, several of them are pretty much guaranteed to be ignored in practice, for example:

1) Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans 

 …

5) The person with legal responsibility for a robot should be attributed.

Lethal Autonomous Systems are being developed and their use is right around the corner … perhaps the next one you come to. I hope you’re wearing the right clothes and have the right password. And that the AI doesn’t make a mistake …
