Interesting article in this month's Scientific American by Michael Anderson and Susan Leigh Anderson on machine ethics (the online version is behind a paywall, otherwise I'd link to it).
The basic gist of their argument is that machines - robots, more specifically - that will be interacting with humans on a frequent basis in the near future will need some kind of ethical code programmed into them. Seems like a reasonable enough premise. Where the Andersons really impress, though, is in revealing that they programmed a simplistic version of such an ethical code into a humanoid robot, Nao.
Their Nao was programmed to help administer medication to a patient, and to notify a physician when the patient had gone without that medication long enough for the lapse to cause harm. Simple, but still pretty interesting stuff! The Andersons are certainly not the first people to consider ethical robots, though - Asimov's Three Laws of Robotics are over fifty years old now.
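Back to the Nao demo for a moment - the core of that reminder behaviour needs surprisingly little machinery. Here's a rough sketch of the "remind, then escalate" rule as I imagine it; the thresholds and the function itself are hypothetical, not taken from the article:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds - a real system would get these from a clinician.
REMIND_AFTER = timedelta(hours=4)   # nudge the patient after this long
NOTIFY_AFTER = timedelta(hours=12)  # a lapse this long could plausibly cause harm

def next_action(last_dose_taken: datetime, now: datetime) -> str:
    """Decide what the robot should do about an overdue medication dose."""
    lapse = now - last_dose_taken
    if lapse >= NOTIFY_AFTER:
        return "notify_physician"
    if lapse >= REMIND_AFTER:
        return "remind_patient"
    return "wait"

# A dose taken thirteen hours ago is overdue enough to escalate.
print(next_action(datetime(2010, 10, 20, 7, 0), datetime(2010, 10, 20, 20, 0)))
```

The interesting part, of course, isn't the code - it's deciding where those thresholds should sit, and who gets to decide.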
Here's my question, though: what sort of ethical principles should we be programming into our future robots? Let's assume that Asimov's laws won't be used as a starting point. One answer is that we should merely attempt to emulate the moral psychology found in humans and other animals - in a sense, putting our cognitive scientific theories of moral development to the test.
Alternatively, we could program the robots with an ideal normative theory of our choosing. Just as the first option poses a challenge to cognitive science's view of morality, this one presents a challenge to our philosophical theories. In a sense, it forces those putting forth normative theories to put their money where their mouth is - if your proposal isn't both (semi-) rigorous and practically useful, then it's just not going to work in our hypothetical robot! Naive utilitarianism, for instance, is right out - how could the robot compute all the possible consequences of its actions?
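To put a rough (and entirely made-up) number on that objection, consider how quickly exhaustive consequence evaluation grows with the planning horizon:

```python
# Toy illustration of the combinatorial blow-up behind naive utilitarianism:
# an agent with 10 possible actions per step, planning 10 steps ahead, would
# have to evaluate 10**10 distinct outcome sequences before choosing.
actions_per_step = 10
planning_horizon = 10
print(actions_per_step ** planning_horizon)  # 10000000000
```

Any workable proposal has to say something about how the robot cuts that space down.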
The ethical principles that the Andersons used seemed to be a vague mix of deontological and consequentialist theories - the robot was programmed with three "duties", and the probability of harm coming to the patient was treated as relevant. The principles were, needless to say, very simple, designed to be programmed into a robot that had a relatively easy task to accomplish. There was no need to take into account the complexities that a more socially integrated robotic agent would have to be prepared for.
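The article doesn't spell out how those duties actually get traded off against one another, but to make the idea concrete, here's a purely hypothetical sketch of what weighing a few duties against an estimated probability of harm might look like. The duty names, weights, and numbers are all my own invention, not the Andersons' system:

```python
# Purely illustrative duty-weighing sketch - not the Andersons' actual algorithm.
# Each candidate action is scored on how well it satisfies each duty, and the
# duty to prevent harm is scaled by the estimated probability that harm occurs.

DUTY_WEIGHTS = {"benefit": 1.0, "prevent_harm": 2.0, "respect_autonomy": 1.0}

# Hypothetical duty-satisfaction levels for each action, in [-1, 1].
CANDIDATE_ACTIONS = {
    "wait":             {"benefit": 0.0, "prevent_harm": 0.0, "respect_autonomy": 1.0},
    "remind_patient":   {"benefit": 0.3, "prevent_harm": 0.3, "respect_autonomy": 0.5},
    "notify_physician": {"benefit": 0.5, "prevent_harm": 1.0, "respect_autonomy": -0.5},
}

def choose_action(prob_of_harm: float) -> str:
    """Pick the action whose weighted duty score is highest."""
    def total(duties):
        return sum(
            DUTY_WEIGHTS[d] * (v * prob_of_harm if d == "prevent_harm" else v)
            for d, v in duties.items()
        )
    return max(CANDIDATE_ACTIONS, key=lambda a: total(CANDIDATE_ACTIONS[a]))

print(choose_action(prob_of_harm=0.1))  # "wait" - low risk, so autonomy wins
print(choose_action(prob_of_harm=0.9))  # "notify_physician" - harm prevention wins
```

Even a toy like this makes the hard questions obvious: where do the weights come from, and who estimates the probability of harm?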
It will most likely be at least a decade - if not longer - before robots are developed that require more complicated ethical programming. I fully expect the debate over how we should supply machines with ethics to become substantially more heated in that time period.
Further reading:
Anderson, M. & Anderson, S. L. (2007), The status of machine ethics: a report from the AAAI Symposium, Minds and Machines, 17(1), 1-10
Anderson, S. L. (2007), Asimov’s “three laws of robotics” and machine metaethics, AI & Society, 22(4), 477-493
A couple of links you might find interesting:
http://www.youtube.com/watch?v=ZLdvCDFriTQ
http://www.aaai.org/ojs/index.php/aimagazine/article/view/2065/2052
Michael,
Thank you for the links - that video demonstration is particularly impressive!