Monday, June 4, 2007

A Phrenology of Utilitarianism?

The Washington Post reports on the startling neurological genesis of moral reasoning -- actually not only of moral reasoning, but of a particular moral perspective. It would appear that there is, literally, a region of the brain that imposes moral side constraints, and that when it is disabled, utilitarian moral reasoning emerges.

Moral decisions can often feel like abstract intellectual challenges, but a number of experiments such as the one by Grafman have shown that emotions are central to moral thinking. In another experiment published in March, University of Southern California neuroscientist Antonio R. Damasio and his colleagues showed that patients with damage to an area of the brain known as the ventromedial prefrontal cortex lack the ability to feel their way to moral answers.

When confronted with moral dilemmas, the brain-damaged patients coldly came up with "end-justifies-the-means" answers. Damasio said the point was not that they reached immoral conclusions, but that when confronted by a difficult issue -- such as whether to shoot down a passenger plane hijacked by terrorists before it hits a major city -- these patients appear to reach decisions without the anguish that afflicts those with normally functioning brains.

Are utilitarians, then, mentally disabled?

The most interesting part of this is that folks with the damaged ventromedial prefrontal cortex still had moral answers -- they were just dispassionately delivered. Does this mean that morality to them is simply a learned algorithm that pairs certain outcomes with the words "should" and "good" and "bad"? Or do they still fully understand and believe in the concept of "should" but simply reach different conclusions?

On behalf of my utilitarian brothers and sisters, I think the latter is the more likely explanation of the experiment -- we can learn an algorithm of side constraints (never kill) just as we can learn a utilitarian equation. The difference between the reasoning that emerges from this disability and the reasoning more commonly seen thus appears to lie not in the depth of appreciation for the concept of moral duty, but in the results. Alternatively, I think it is entirely possible that morality itself -- as a form of belief -- may be nothing more than an algorithmic pairing of circumstance and outcome. Either way, I don't think it is fair to describe that form of moral reasoning as [ACCESSING DATA, ACCESSING DATA] effectively inhuman.


1 comment:

slickdpdx said...

That is really seriously about the most ephing interesting science bit I have read in years!