
Monday, 13 May 2013

Robot Rights



“As long as humans or animals are still being tortured on this earth, we have bigger problems to tackle than the ethical situation of robots!” read one of the many outraged comments under a German newspaper article I came across recently. I can’t say I’m entirely unsympathetic to this view, but Kate Darling’s argument for why we should think about robot rights now is motivated in an interestingly different way.

“We should give robots rights similar to those of animals – they should, for example, not be allowed to be tortured,” says Darling, who is an IP Research Specialist at the MIT Media Lab and a Ph.D. candidate in Intellectual Property and Law & Economics.

I suspect a lot of people stopped reading after this statement – so if you were about to as well, just bear with me a little longer.
Because yes, we all know (and so does Darling) that there is a far from insignificant difference between animals and machines: animals feel pain, robots don’t. Robots are still just machines and (even though some films try to suggest otherwise) far from having anything like conscious experience. Coming back to Darling’s suggestion, I’m not even sure what a prohibition on torturing robots would mean – can a thing that cannot feel be tortured at all? I don’t think so; humans might show behaviour that looks like torturing it, but that’s about it.

Why, then, should we still protect robots? Darling’s underlying assumption is this: it’s not about the robots (just as, if we are honest, it’s not really about the animals either, she thinks) – it is all about us.
She has observed that many people (children, the elderly) are unsure whether social robots – which seem to have some freedom in their behaviour and can interact and communicate – can feel pain. More surprisingly, even perfectly sane adults anthropomorphise social robots after interacting with them for as little as an hour, to the point that they often refuse to “harm” or destroy them when asked to do so – even though these people know the thing is only a machine.

Darling thinks this behaviour can be explained by our “natural tendencies”, which push us to discourage behaviour that could be harmful in other contexts. We feel uneasy about such behaviour towards robots (and animals) because we assume – at least partly justifiably – that behaviour translates: someone who is capable of cruelty towards an animal or a robot is more likely to show similar behaviour towards humans as well. Darling and our gut feeling might well be right in many cases – there is, for example, a significant link between animal abuse, child abuse and domestic abuse – but…

“It’s not about protecting the objects, it’s about protecting our societal values, protecting ourselves, because it hurts us to hurt them,” says Darling.
But what would a legal system look like that bans actions because they could be harmful if carried out in other domains? Well, it would forbid computer games that involve hurting other virtual or animated players (and thus nearly all computer games – yes, I know this position is popular with quite a few people nowadays), forbid kids playing cowboys and Indians, forbid all books and films containing violence, forbid thoughts involving …

Where would we stop? Could we really limit the ban on cruel behaviour to embodied robots once we find that other forms of violent action (in games or hypothetical situations) might also translate into a higher likelihood that the person in question will behave similarly towards humans in real life?

I do not think anyone should encourage such a legal system (and it was all the more surprising to hear this understanding of the law proposed by someone who is herself a law Ph.D. student).
So while we may of course draw inferences about people who show obvious enjoyment when “torturing” robots – we probably can’t help doing that anyway – I think we should not base our prohibitions and legal regulations on false beliefs about things just because those beliefs are held by many people, or are our natural tendencies, as Darling labels them. Put differently, I also don’t see a reason why we should make it a law to apologise to a tree before we cut it down. As long as robot rights are not about robots but only about humans, I think property laws that forbid destroying someone else’s robot – because it is their property – are all we need.
