29 July 2009

Robots Don't Kill People, People do

Rod Furlan twittered a day in the life of a Singularity University student. At 12:08:21 PM he asked, "When a robot kills, who pulled the trigger?"

This question cycles through the public consciousness every year or so and is well illustrated by the South African National Defence Force's 'little' 2007 accident with an automated Oerlikon GDF-005 (it sprayed 500 rounds of 35mm anti-aircraft ammunition around its firing position, killing 9 and wounding 11).

Oerlikon GDF-005, A.K.A. the T-001

At the moment no one (except maybe the Koreans) seriously considers holding the auto-turret responsible for the killings, because the system that controls the mechanical stuff isn't complicated enough to be plausibly sentient.
...as backed up by empirical research by Friedman and Millett (1997), and by Moon and Nass (1998), humans do attribute responsibility to computers. Of course, that we may be inclined to blame computers does not entail that we are justified in so doing. Although computer systems may clearly be causally responsible for the injuries and deaths that resulted from their flawed operation, it is not so clear that they can be held morally responsible for these injuries or deaths.
However, some people are really excited about the possibility that computers will eventually (sooner rather than later, yay!) be complicated enough for us to blame things on them. Without going into the background on this topic, the basic requirement for something to be responsible for its actions is that it be consciously aware of the difference between right and wrong. Since computers just do what they are programmed to do, and have no ability to understand the concept of "should," they are not responsible for anything. Computers just follow orders.

Computers totally would have let the Nuremberg defendants off the hook.

The human brain is a system, and a computer is a system, so it is plausible that computer systems can increase in complexity until they are on a par with the human brain. So, at some point we will probably have to deal with computers that actually do understand morality. Since we'll still be human, we'll probably give them a gun and tell them to go kill our enemies. Until we can pull the "the robot did it on its own" card, however, we'll be stuck using old-fashioned computers to kill people.

Dr. Ronald Arkin wrote a book about this (Governing Lethal Behavior in Autonomous Robots), gave a few interviews, and worked on a prototype computer-based morality system he calls an "ethical governor." His thesis is that robots can be more moral on the battlefield than humans because they are capable of making fewer mistakes. They won't make decisions based on fear, anger, or recklessness, and they will evaluate every situation on its own merits instead of suffering from 'scenario fulfillment' and jumping to conclusions.
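To make that idea concrete, here is a minimal, hypothetical sketch of the kind of rule-based veto layer Arkin describes: a check that sits between target selection and the trigger and can only withhold fire, never initiate it. This is not Arkin's actual system, and every name below is invented for illustration.

# A hypothetical "ethical constraint" check, sketched in Python.
# All names are invented; this illustrates the idea, not Arkin's code.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # assumed to come from some (fallible) classifier
    near_protected_site: bool   # e.g. a hospital or school
    expected_collateral: int    # estimated civilian casualties

def permitted_to_fire(target: Target, max_collateral: int = 0) -> bool:
    """Return True only if every hard constraint is satisfied.
    The key property: this layer can only withhold fire that the human
    chain of command already ordered; it never decides to attack."""
    if not target.is_combatant:
        return False
    if target.near_protected_site:
        return False
    if target.expected_collateral > max_collateral:
        return False
    return True

# Usage: the launcher asks permission before every shot.
contact = Target(is_combatant=True, near_protected_site=False, expected_collateral=0)
print(permitted_to_fire(contact))  # True, and a human still owns the order to engage

The point of the sketch is that the 'morality' here is just a list of constraints somebody programmed in, which is exactly why the responsibility still flows back to a person.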

From a systems standpoint it seems fairly obvious that computers will eventually be more complicated than humans, and at that point they will probably have to start taking responsibility for their own actions (and for cleaning up that pigsty they call a room). Until then, however, we humans will have to continue taking responsibility for robots that are put in increasingly complicated situations. Dealing with this transition period will require innovations that have not appeared yet. At some point it becomes difficult to hold a person responsible for the actions of a system they own but can't possibly understand well enough to predict its behavior in every situation. Isaac Asimov built part of his career exploring the ways a robot could do totally unexpected things while blindly obeying the Three Laws of Robotics.

We need an innovative way to decide who is responsible for the actions that autonomous (but unconscious) systems take. Even when some computers truly are unequivocally responsible for their own actions, the vast majority of computer systems will continue to be unconscious. Inevitably, some of the moral computers that we declare responsible for their own actions will assume control of non-moral computers that still aren't.

The question is: 'In the future, when a moral computer tells a non-moral computer to kill, who can I sue?'
