
Tuesday, 5 March 2013

Can robots behave more morally than humans?



In response to the debate initiated by Ruairi concerning the fear of drones trying to take over the world, I thought it could be interesting to write a post about artificial intelligence and ways of simulating decision making. People usually think that drones can actually think and pick their targets without any human intervention. Although little information about the design of those drones is available, since it is classified, the advances in neural networks show that we are still very far from creating machines that can embed a large-scale neural network to process visual input (e.g. Google X’s neural network), so we can safely guess that drones do not analyse and “understand” all the input data by themselves.

Therefore, we can also suggest that drones only process the input data they need to fly to a given location, with no proper decision making involved. Drones are simple deterministic systems. Moreover, even if drones were equipped with large-scale neural networks, their reactions would be assessed during the testing phase, so no new behaviour could emerge from encountering new stimuli. Of course, this would mean that their systems wouldn’t be able to learn anything, but it is the only safe way to keep machines predictable and, therefore, under control.


The discussion about drones reminds me of a paper that I read a while ago. Written by Matthijs Pontier and Johan Hoorn, it reviews previous research on ethical decision making in AI and then proposes a way of implementing ethical decision making. Of course, their concept is not perfect and easily attracts criticism, but it is a very good start. Most existing systems learn to resolve an ethical problem by looking at similar previous problems, thanks to web-based technology. That is to say, the program looks for information from millions of people instead of only getting information from the developer. Some systems currently use neural networks to be more flexible and learn from previous problems in order to treat new kinds of issues.

The ethical decision making model designed by Pontier and Hoorn is based on three factors: autonomy, non-maleficence and beneficence. The autonomy score reflects the level of constraint applied to the subject (by people, injuries…) and is more important than the non-maleficence factor, which is itself more important than the beneficence score. For each problem, a set of solutions is presented with values for the three factors, and the solution with the best overall score represents the morally superior path to take in order to solve the issue.
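
To make this concrete, here is a minimal sketch, in Python, of such a weighted reasoner. The weights and the fields of the Solution class are my own illustrative assumptions, chosen only so that autonomy outweighs non-maleficence, which in turn outweighs beneficence; they are not the values used in the paper.

    from dataclasses import dataclass

    # Illustrative weights: they only respect the ordering
    # autonomy > non-maleficence > beneficence; they are not the paper's values.
    WEIGHTS = {"autonomy": 1.0, "non_maleficence": 0.75, "beneficence": 0.5}

    @dataclass
    class Solution:
        name: str
        autonomy: float          # how much the subject's autonomy is preserved
        non_maleficence: float   # how little harm the solution causes
        beneficence: float       # how much good the solution does

    def moral_score(s: Solution) -> float:
        """Weighted sum of the three ethical factors."""
        return (WEIGHTS["autonomy"] * s.autonomy
                + WEIGHTS["non_maleficence"] * s.non_maleficence
                + WEIGHTS["beneficence"] * s.beneficence)

    def best_solution(solutions: list[Solution]) -> Solution:
        """Return the solution with the highest overall moral score."""
        return max(solutions, key=moral_score)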

To check the reliability of the moral reasoner, it was confronted with several ethical dilemmas in which a health care worker has to help a patient make a decision. For example, in a case where a patient (=subject) has cancer and decides not to be treated with chemotherapy, the health care worker (=constraint) can decide whether to accept the patient’s decision or try to influence them into accepting the treatment. While accepting the decision would be considered not beneficent and quite maleficent, because the patient would die, influencing the decision would reduce the patient’s autonomy in the decision making process. For this dilemma, it is slightly better to try to make the patient change their decision, even though the decision would then not be fully independent. (All the simulations are fully detailed with values in the paper.)
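
Plugging made-up numbers for this dilemma into the sketch above (the real figures are in the paper) shows how the reasoner would reach that conclusion:

    # Hypothetical scores, not the paper's: accepting the refusal fully
    # preserves autonomy but scores low on the other two factors, while
    # persuading the patient trades some autonomy for much less harm.
    accept = Solution("accept the patient's refusal",
                      autonomy=1.0, non_maleficence=0.2, beneficence=0.1)
    persuade = Solution("try to change the patient's mind",
                        autonomy=0.6, non_maleficence=0.8, beneficence=0.9)

    print(best_solution([accept, persuade]).name)
    # -> try to change the patient's mind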

One of the quotes that I will retain from this paper is: “AI makes philosophy honest”. That is to say, decomposing cognitive phenomena in order to make them computable gives us a better understanding of all the components of reasoning. It may seem strange to try to model human moral reasoning when we know all too well that humans are far from being angels. On the other hand, considering that machines are more rational than human beings, we can expect machines to behave more ethically than the human beings on whom their behaviour is based.

3 comments:

  1. If morality can be boiled down to a formula or an algorithm, then yes, robots would be the ultimate arbiters of justice, free from any bias or prejudice whatsoever. You make a convincing argument, but we know that right and wrong are not clear concepts. This is why different legal systems around the world rely to varying degrees on the laws themselves, on juries, on elders or just on someone with a funny wig making their own call. I wonder what the robot would decide on the dilemma of Jim and the Indians http://www.e-mago.co.il/e-magazine/jatiamq-kgng.html

    1. I think that's why the non-maleficence score is more important than the beneficence score. Basically, the more people it can save, the more likely it will proceed to the sacrifice... Sure, it's a cold and straightforward way to make a decision, but it is honest and rational.

  2. This work seems to borrow heavily from this:
    http://ieet.org/archive/IEEE-Anderson.pdf
