Sunday 31 October 2010

Update on the Marc Hauser Case

Commentary on Marc Hauser is still surfacing every now and then. Some articles from the past week:

It's perhaps interesting (although hardly surprising, with the Black Sheep effect springing to mind) that the Harvard Crimson strikes the harshest tone of the three links above.

In general it seems that the case - and discussion of it - is getting increasingly convoluted. It should be interesting to see what happens with it in future.

Wednesday 27 October 2010

Damasio on YouTube

It looks like Antonio Damasio has a new(ish) profile on YouTube, with a few videos already uploaded. Particularly interesting was his explanation of the thought processes behind his new book, Self Comes to Mind, which is being released in the UK in early November.

Wednesday 20 October 2010

Computing Virtue and Vice

Interesting article in this month's Scientific American by Michael and Susan Leigh Anderson on machine ethics (the online version is behind a paywall, otherwise I'd link to it).

The basic gist of their argument is that machines - robots, more specifically - which will be interacting with humans frequently in the near future will need some kind of ethical code programmed into them. Seems like a pleasant enough premise. Where the Andersons really impress, though, is in their reveal that they have programmed a simplistic version of such an ethical code into a humanoid robot, Nao.

Their Nao was programmed to help administer medication to a patient, and to notify a physician when the patient had lapsed in taking said medication for long enough to risk harm. Simple, but still pretty interesting stuff! The Andersons are certainly not the first people to consider ethical robots, though - Asimov's Three Laws of Robotics are over fifty years old now.
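Out of curiosity, here's roughly how I imagine the core of that medication-minding behaviour might look. To be clear, this is purely my own toy sketch in Python - the function names and thresholds are invented for illustration, and it is emphatically not the Andersons' actual implementation:

    from datetime import datetime, timedelta

    # Invented thresholds - purely illustrative, not the Andersons' values.
    REMIND_AFTER = timedelta(hours=4)   # dose late enough to warrant a gentle reminder
    NOTIFY_AFTER = timedelta(hours=12)  # dose so overdue that harm becomes a real risk

    def decide_action(last_dose_taken, now):
        """Choose what the robot should do about a possibly missed dose."""
        overdue = now - last_dose_taken
        if overdue >= NOTIFY_AFTER:
            return "notify_physician"  # preventing harm now outweighs the patient's autonomy
        if overdue >= REMIND_AFTER:
            return "remind_patient"    # nudge the patient, but respect their choice
        return "wait"                  # no duty is currently pressing

    # Example: a dose taken thirteen hours ago triggers a notification.
    print(decide_action(datetime(2010, 10, 20, 8, 0), datetime(2010, 10, 20, 21, 0)))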

Here's my question, though: what sort of ethical principles should we be programming into our future robots? Let's assume that Asimov's laws won't be used as a starting point. One answer is that we should simply attempt to emulate the moral psychology found in humans and other animals - in a sense, putting our cognitive scientific theories of moral development to the test.

Alternatively, we could program the robots with an ideal normative theory of our choosing. Just as the first option issues a challenge to cognitive science's theories of morality, this option presents a challenge to our philosophical theories. In a sense, it forces those putting forward normative theories to put their money where their mouth is - if your proposal isn't both (semi-)rigorous and practically useful, then it's just not going to work in our hypothetical robot! Naive utilitarianism, for instance, is right out - how could the robot possibly compute all the consequences of its actions?
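A quick back-of-the-envelope illustration of that problem (the numbers here are plucked out of the air): even a modest number of options per decision, projected only a few steps into the future, yields an absurd number of outcome branches for the robot to evaluate.

    # Toy calculation: if the robot could take any of 10 actions at each step,
    # and tried to look a mere 10 steps ahead, the number of possible futures is:
    options_per_step = 10
    steps_ahead = 10
    print(options_per_step ** steps_ahead)  # 10,000,000,000 branches to evaluate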

The ethical principles that the Andersons used seemed to be a vague mix of deontological and consequentialist theories - the robot was programmed with three "duties", and the probability of harm coming to the patient was considered relevant. The principles were, needless to say, very simple, designed for a robot with a relatively easy task to accomplish. There was no need to take into account the complexities that a more socially integrated robotic agent would have to be prepared for.
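To make that a little more concrete, here is a minimal sketch of how duties and a probability of harm could be combined to pick an action. I should stress that the duties, weights and scores below are all my own inventions, assuming a simple weighted-sum scheme for the sake of illustration rather than the Andersons' published principle:

    # Each candidate action gets a score for how well it satisfies each duty,
    # from -1 (violates it) to +1 (fully satisfies it). Duties, weights and
    # scores are all made-up assumptions, purely for illustration.
    DUTY_WEIGHTS = {"prevent_harm": 5.0, "promote_good": 1.0, "respect_autonomy": 1.5}

    def evaluate(action_scores, p_harm):
        """Weighted sum of duty satisfaction; preventing harm counts for more
        the more likely harm actually is."""
        total = 0.0
        for duty, weight in DUTY_WEIGHTS.items():
            if duty == "prevent_harm":
                weight *= p_harm  # scale by the probability of harm to the patient
            total += weight * action_scores[duty]
        return total

    candidates = {
        "remind_patient":   {"prevent_harm": 0.2, "promote_good": 0.5, "respect_autonomy": 0.8},
        "notify_physician": {"prevent_harm": 0.9, "promote_good": 0.5, "respect_autonomy": -0.5},
    }

    for p_harm in (0.2, 0.7):  # low versus high chance that the missed dose causes harm
        best = max(candidates, key=lambda a: evaluate(candidates[a], p_harm))
        print(p_harm, "->", best)  # flips from reminding to notifying as the risk rises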

It will most likely be at least a decade - if not longer - before robots are developed that do require more complicated ethical programming. I fully expect the debate over how we should supply machines with ethics to become substantially more heated in that time period.


Further reading:

Anderson, M. & Anderson, S. L. (2007), The status of machine ethics: a report from the AAAI Symposium, Minds and Machines, 17(1), 1-10

Anderson, S. L. (2007), Asimov’s “three laws of robotics” and machine metaethics, AI & Society, 22(4), 477-493

Tuesday 12 October 2010

Personal Identity and Death: A Reflection

With the recent, unexpected death of an old friend, I've apparently been using thoughts about personal identity as a coping mechanism. More specifically, I've been thinking about what different theories of personal identity have to say about death.

Physiological and somatic theories of personal identity are, in general, quite firm on what happens at death: the person dies. End of story. It doesn't matter whether the individual theory claims that the brain or the organism as a whole bears the label "person"; if that thing physically dies, then the person dies with it.

Psychological theories, however, can respond in a more complex way.

The first example that sprang to my mind was Douglas Hofstadter's view of the soul, which he explicates in I Am A Strange Loop, the "sequel" to his much-vaunted (and much misunderstood) earlier work Gödel, Escher, Bach: An Eternal Golden Braid. Hofstadter's soul, it must be noted, is not the standard dualistic fare:
"The central aim of this book is to try to pinpoint the nature of that "special kind of subtle pattern" that I have come to believe underlies, or gives rise to, what I have here been calling a "soul" or an "I". I could just as well have spoken of "having a light on inside", "possessing interiority", or that old standby, "being conscious"." (p. 23)
So, in Hofstadter's view, saying that something is "ensouled" is simply a rather poetic way of saying "this thing possesses consciousness" - and certainly not a claim that there is some form of non-physical substance attached to the thing in some manner. I admit to not having made the time to read GEB fully yet, but I doubt that his views differ substantially between the two works (if, indeed, he discusses souls at all in the earlier book).

The relevance of I Am A Strange Loop to this post is that Hofstadter argues that our souls can live on after death; he begins with a discussion of how Chopin's music allows a small fragment of Chopin's mind to live on after his death, but the primary case study of the book involves Hofstadter's wife, Carol, who died suddenly of a brain tumour. Hofstadter argues that aspects of her soul lived on in his mind after her death; they shared a number of their desires and beliefs, and many years of memories.

This is a reasonable claim to make, yet it should be pointed out that, by necessity, Hofstadter and his wife did not share all of their mental states. Those mental states representing the body (Damasio's "proto-self", roughly) are an obvious example. The question to ask, then, is: how many shared mental states were there, and how significant is the loss of the unshared ones to the continuation of his wife's personal identity?

I'm not immediately sure how to answer those questions, nor am I in any frame of mind to do so adequately at the moment. My intuition is that, although it would technically be possible for there to be enough psychological connections between two individuals for the death of either not to matter in terms of psychological continuity, such an event rarely (if ever) occurs. Although some of the deceased individual's psychological states will - almost inevitably - be continuous with the psychological states of others, there won't be enough of the deceased individual left, as it were, to claim that the individual pre-death is psychologically continuous with the "fragments" in others' minds post-death.

All that remains are aspects, small pieces, of the individual's self. That might change in future - advances in science could lead to methods of "saving" the majority of an individual's self post-biological death, such as the ever-popular "mind uploading" concept - but for now, I can't help but think that it is a sad fact of life.

Thursday 7 October 2010

Not Quite Nobel...

So it turns out that an old supervisor of mine, Richard Stephens, won this year's Ig Nobel Peace Prize for his research on how swearing lessens the individual's sensation of pain.

Here's a link to a PDF copy of the study; for those without the inclination (or, should that link be deleted, the ability) to read the full paper, Stephens et al.'s design was relatively simple.

They began by asking each participant for a list of words they would use if their thumb was struck by a hammer, and then for a list of words they would use to describe a table. Each participant was then subjected to two variants of a cold pressor test: in one, they were instructed to repeat the first swear word on their 'hammer' list whilst their hand was submerged, and in the other, to repeat the corresponding word on their 'table' list. (Given that the participants were students, it's not terribly surprising that only one of the sixty-seven failed to claim that they would utter a swear word if a hammer struck their thumb.) The order of the two conditions was randomised, and prior to each trial the participants held their hand in room-temperature water for three minutes.

The results went against their initial hypothesis: swearing increased pain tolerance (the amount of time participants kept their hand in the icy water) and reduced the reported sensation of pain. Their suggestion in the final paragraphs of the paper is that swearing might increase levels of aggression, which in turn might induce hypoalgesia (a reduced sensitivity to pain).
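For anyone curious about what the core comparison boils down to: in a within-subjects design like this, each participant acts as their own control, so the swearing and neutral-word submersion times can be compared directly. Here's a small sketch of that kind of analysis with invented numbers (emphatically not the paper's data), assuming a simple paired t-test:

    from scipy import stats

    # Invented submersion times in seconds for five hypothetical participants -
    # NOT the actual data from Stephens et al., just an illustration.
    swearing_times = [85, 120, 95, 140, 110]
    neutral_times  = [70, 100, 90, 115, 95]

    # Paired t-test: test whether each person's swearing-condition time is
    # reliably longer than their neutral-condition time.
    t_stat, p_value = stats.ttest_rel(swearing_times, neutral_times)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")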

It's certainly a nice piece of research, although I'm wondering whether the effect size might have differed had alternative methods of inducing pain been used. That suggestion comes from a few papers I've read that found evidence that redheads have a lower tolerance for thermal pain (e.g., the icy water used by Stephens et al.), yet a higher tolerance for pain caused by electric shock. It's very, very tenuous, but it suggests to my mind that we shouldn't hastily assert that all forms of pain are of the same kind - or, at least, experienced in the same way.

Regardless, it's good to see the work get such public recognition.