The Mismeasure of Morals
August 20, 2011 10:03 AM
People with antisocial personality traits are more likely to have utilitarian ethics [PDF]
Also Science Direct link
From the abstract:
a study in which participants responded to a battery of personality assessments and a set of dilemmas that pit utilitarian and non-utilitarian options against each other. Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of psychopathy, Machiavellianism, and life meaninglessness.
The authors also conclude that "These results question the widely-used methods by which lay moral judgments are evaluated, as these approaches lead to the counterintuitive conclusion that those individuals who are least prone to moral errors also possess a set of psychological characteristics that many would consider prototypically immoral", which I don't necessarily agree with, but the results of the study are still interesting.
By the way, it's worth reading the "Discussion" section at the end of the paper. It covers a lot of the possible objections and over-interpretations of these results quite nicely.
posted by DRMacIver at 10:13 AM on August 20, 2011 [1 favorite]
From Part 5 of the PDF (Discussion):
Our study illustrates that the widely adopted use of sacrificial dilemmas in the study of moral judgment fails to distinguish between people who are motivated to endorse utilitarian moral choices because of underlying emotional deficits (such as those captured by our measures of psychopathy and Machiavellianism) and those who endorse it out of genuine concern for the welfare of others and a considered belief that utilitarianism is the optimal way of achieving the goals of morality. Consistent with what is known about the emotional deficits in a clinical population that endorses utilitarian responses to moral dilemmas (i.e., patients with damage to their VMPFC), we found that non-clinical individuals who indicated utilitarian preferences scored higher on measures of psychopathy and Machiavellianism. In addition, these participants also appear to perceive less meaning in life.
posted by philip-random at 10:14 AM on August 20, 2011
Utilitarianism is a stupid philosophy. Even if you get both utilities - Water Works and Electric Company - you are fucked as soon as another player builds a Hotel on Mayfair. That's where the real money is. It's a "dog eat top-hat eat giant shoe" world out there.
posted by the quidnunc kid at 10:22 AM on August 20, 2011 [93 favorites]
Dear science: Did you know that many years these people involved things like punctuation to include commas and semicolons these things make it easier to parse long and I don't want to say rambling but quite possibly rambling sentiments and, in so doing, allow you to better present your information in such a way that people don't just get bored and just looking at cat videos on the interoh kitteh
posted by kittens for breakfast at 10:24 AM on August 20, 2011 [14 favorites]
This is an interesting study. It's obviously not possible to put people in these situations in real life to test this hypothesis. Nevertheless, it makes me wonder what would transpire in a strong simulation, where the experimental subject didn't know it was a simulation. I wonder if things would be different.
posted by Vibrissae at 10:25 AM on August 20, 2011
Appendix A. Sacrificial dilemmas
You are the captain of a small military submarine traveling underneath a large iceberg
"whats an iceberg."
"ever see ice cube in a glass of water?"
"yeah"
"same thing."
You and some other soldiers were captured. After a year in a prison camp, your group tried to escape but was caught.
"whatta you mean i don't want to help"
posted by clavdivs at 10:26 AM on August 20, 2011 [3 favorites]
Reminds me of the Trolley Problem. Would you pull a lever to divert a train to save its ten passengers, if it killed one man on the track? Most/many/some people say "yes!". Would you push the same man in front of the same train to have the same effect? Most people say "no!" People aren't good at utilitarianism.
There was a TV programme a few years ago, where members of the general public pretended to be British Government officials making security decisions. One of the scenarios was "a passenger aircraft has dropped out of radio contact, and changed course towards the centre of London. Do you order it shot down?" I suspect twenty years ago, as a teenager, I would have said "Yes! Of course! Don't be so STUPID! One hundred passengers against London?" Now, I'm less sure. Even if thousands die because I do not shoot down the 'plane, none of them were killed by me. If I shoot down the 'plane, they died because of me. That makes a difference. I am responsible for myself and the acts I do.
I recall a sci-fi short story, where a traveller finds a blissful, happy, harmonious society. No poverty, no ill-will, all is happy. But he finds that, through some mechanism, this is purchased at the cost of one child - just one! - who is imprisoned in a basement and hurt. Utilitarian perfect sense. Morally dubious. I think you'll agree.
This all led me, personally, to reject utilitarianism as a moral system. Do you pull the lever? Do you shoot down the 'plane? Do you acquiesce in the child torture? Do you nuke Hiroshima? Do you hang murderers? Do you launch a pogrom to unify your country and give people a sense of purpose? Do you legalise driving without seatbelts? Do you shut down the welfare state? All good utilitarian arguments for these positions. I'm sure I can find one that presses your particular buttons, if none of these did. Utilitarianism can excuse too many evils.
posted by alasdair at 10:26 AM on August 20, 2011 [9 favorites]
Gah this study reminds me of a "Facebook friend" of one of my Facebook friends who is a real troll. "Data supports the hypothesis that jobless people do not have jobs because they have lower IQs." etc.
posted by KokuRyu at 10:33 AM on August 20, 2011
This all led me, personally, to reject utilitarianism as a moral system...Do you shut down the welfare state? All good utilitarian arguments for these positions.
Actually, utilitarianism was set up by Bentham as a political argument in the direction of a welfare state. Everyone's pleasures count equally in his system, the poor as much as the rich. Would that we lived in such a sociopathic society.
posted by justsomebodythatyouusedtoknow at 10:33 AM on August 20, 2011 [3 favorites]
Do you acquiesce in the child torture?
A quick read of the various scenarios offered revealed that many (not all) could perhaps be resolved by self-sacrifice. That is, don't sacrifice some other guy, offer up yourself instead. Which gets us to the child torture. Is there an option to replace the child with yourself? If not, why not? What kind of sick god/tyrant/monster/ruling-elite would devise such a system? Fuck 'em. Tear the whole thing down, or die trying.
posted by philip-random at 10:34 AM on August 20, 2011 [3 favorites]
So the problem I have with a lot of specific flavours of utilitarianism is what I refer to as "transfer utilitarianism" (there may be an official word for it that I don't know). The idea that you can freely transfer "good" amongst people without changing the ethical status of the situation (crudely: If I give bob a book and jim a kitten, this is ethically the same as giving bob a kitten and a book and jim nothing). This is particularly common when you're adopting economist ideas like utility functions and are trying to maximize "expected utility". I don't think this is essential to adopting a utilitarian framework of ethics.
If you regard utilitarianism as the statement "greater good for the greatest number of people", then you can still judge something as "more ethical" if it increases the number of people or the amount of good for the same (or greater) number of people. It's just that you then need additional guidelines for e.g. the circumstance of the child in the basement (which I'm perfectly comfortable saying is completely morally unsound, regardless of the fact that I consider myself more or less utilitarian in my ethics).
posted by DRMacIver at 10:36 AM on August 20, 2011 [2 favorites]
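As a rough illustration of the "transfer" objection above (a sketch only; the names, goods, and utility numbers are invented for the example, in Python): under a pure sum-of-utilities view, moving the kitten from jim to bob leaves the total unchanged, so the two allocations come out morally equivalent.

# Hypothetical sketch: a pure "transfer" view scores an outcome by total utility
# alone, so reallocating the same goods between people never changes the score.

def total_utility(allocations):
    # Sum each person's utility over the goods they hold.
    return sum(sum(goods.values()) for goods in allocations.values())

# Toy utilities: a book is worth 1, a kitten is worth 2 (illustrative numbers only).
even_split = {"bob": {"book": 1}, "jim": {"kitten": 2}}
lopsided   = {"bob": {"book": 1, "kitten": 2}, "jim": {}}

print(total_utility(even_split))  # 3
print(total_utility(lopsided))    # 3 -- the same total, although jim now gets nothing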
I mean, say what you like about the tenets of utilitarianism, Dude, at least it's an ethos.
posted by AsYouKnow Bob at 10:38 AM on August 20, 2011 [10 favorites]
So this study involved.. researchers free from emotional attachment to the participants? (low affect - tick).. some level of blind or double blind testing (deceit - tick).. might result in someone in real life putting psychopaths in positions where they evaluate the moral judgements of others (disregard for both consequences of actions and safety of others - tick, and tick again).. doesn't seem to have resulted in any unhappiness on the part of the researchers (no remorse - tick)..
Still, they must have thought it would have a certain utility or they wouldn't have published..
(tick)..
posted by Ahab at 10:38 AM on August 20, 2011 [6 favorites]
The counterintuitive conclusion would be exactly what I would have suspected. But that's because the so-called utilitarian solutions are those that value objectivity over feelings (which are subjective.)
posted by Obscure Reference at 10:42 AM on August 20, 2011
Commented w/o reading but very unsurprising -- being utilitarian requires you to reject a lot of commonsense morality and appeals to analytical / systematizing minds.
I wouldn't be surprised if a similar result holds for adherents of any systematic moral view. . . .
posted by grobstein at 10:42 AM on August 20, 2011 [1 favorite]
I didn't read it yet, but I saved it as a PDF for reading later. What I'm concerned about is how someone could study psychology and apply those concepts to the whole population. One reason why one person may be antisocial could be a totally different reason for the next person.
posted by mitrieD at 10:43 AM on August 20, 2011
I doubt you really understand Utilitarianism, alasdair. Try some Peter Singer books like Animal Rights. And try considering real world scenarios rather than contrived thought experiments.
You know, the utility function was designed precisely to prevent your book & kitten swap example, DRMacIver.
posted by jeffburdges at 10:43 AM on August 20, 2011
jeffburdges: Yes, I know. The kitten and book example is specifically chosen as things which are reasonably orthogonal and thus it's reasonable to regard the utility of both as the sum of the individual utilities, but it's still not a very good example. I retract the example, but not the underlying point.
The place where I've encountered this that I find particularly egregious is in voting systems, where some people (usually those pushing range voting) argue that the overall best result for the population is the one which maximizes the expected utility of the voters. The problem is that this is tantamount to maximizing the sum of individual people's utility functions, and is thus prone to marginalizing the needs of minorities (if you have unbounded utility functions you're prone to Pascal's mugger; if you don't, then it's very easy for majorities to crowd out the needs of minorities).
posted by DRMacIver at 10:49 AM on August 20, 2011
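A minimal sketch of the range-voting worry above (hypothetical ballots and scores, not anyone's real proposal): because the winner maximizes the sum of bounded scores, a large majority with a trivially mild preference can outweigh a small minority for whom the outcome is catastrophic.

# Hypothetical range-voting sketch: the winner is whoever maximizes the sum of
# voters' scores, so intensity of preference is capped by the score range.

def range_winner(ballots):
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    return max(totals, key=totals.get), totals

# 95 voters mildly prefer A (one point of difference on a 0-10 scale);
# 5 voters regard A as catastrophic, but can express that at most as a 10-0 spread.
ballots = [{"A": 10, "B": 9}] * 95 + [{"A": 0, "B": 10}] * 5

winner, totals = range_winner(ballots)
print(winner, totals)  # A {'A': 950, 'B': 905} -- the mild majority crowds out the minority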
Book title: "Why kittens will ruin your life, and why you are better off not having one"
posted by found missing at 10:49 AM on August 20, 2011 [1 favorite]
I recall a sci-fi short story, where a traveller finds a blissful, happy, harmonious society. No poverty, no ill-will, all is happy. But he finds that, through some mechanism, this is purchased at the cost of one child - just one! - who is imprisoned in a basement and hurt. Utilitarian perfect sense. Morally dubious. I think you'll agree.
The story you're thinking of is by Ursula LeGuin, I think. But you could just as well be talking about Christianity. This perspective is brought out in the Brothers Karamazov. A number of years ago I was disappointed to discover I wasn't the first to notice this. . . .
posted by grobstein at 10:51 AM on August 20, 2011 [9 favorites]
The problem I have with utilitarianism is in implementation where people have to predict the future consequences of actions. This is where the implementation breaks down for me, it isn't the sociopathic consequences, it is the fact that all these moral choice questions seem to involve a small evil choice now in return for a larger good payoff later. I think from what I have read that there is empirical consensus that humans are just terrible at predicting the future. I mean we can even look at recent history. How have predictions of what would have happened in Iraq and Afghanistan turned out? Those short term evils were definitely worth tolerating considering all the good that came out of them!
You just are never in this situation where you "know" what will happen as you are in these thought experiments. I propose that judging only the morality of the direct actions is a much safer system. Utilitarianism has a much clearer useful application to me when the question is something like, "how much money should we put towards cancer research vs Mars expeditions".
posted by SomeOneElse at 10:51 AM on August 20, 2011 [16 favorites]
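A hedged illustration of the prediction problem above (all numbers invented for the example): the expected-value case for "a small harm now in exchange for a big payoff later" flips sign with a fairly modest error in the forecast probability.

# Invented numbers only: expected utility of a "certain small harm now for a
# probabilistic large payoff later" decision, under two different forecasts.

def expected_net_good(harm_now, payoff_later, p_payoff):
    # Certain harm now, payoff later with probability p_payoff.
    return -harm_now + p_payoff * payoff_later

print(expected_net_good(harm_now=10, payoff_later=20, p_payoff=0.8))  #  6.0 -> looks worth doing
print(expected_net_good(harm_now=10, payoff_later=20, p_payoff=0.4))  # -2.0 -> looks like a mistake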
This is a very interesting paper. I tend to agree that there is an underlying factor of level of trust in the experiment itself that is present here. Most people's morality is based on a very present recognition that they don't know with certainty what the future holds. The experiments are presented as factual with concrete cause and effect actions. However, even when told this, most folks will tend to react with the rules pertaining to the ambiguity they expect in life.
It also shows a significant agency flaw in human thinking. That doing something gives you agency, while standing by and choosing to do nothing makes the outcome not your responsibility. I'd be interested in the intersection of those who engage actively by habit versus standing by, and how it relates to utilitarian choices.
posted by meinvt at 10:58 AM on August 20, 2011 [2 favorites]
In fact this result was anticipated by Sidgwick, who argued that utilitarianism could be the true morality but should remain unknown to and unaccepted by most people (the idea known as "esoteric morality" or, more pejoratively, as "Government House utilitarianism").
posted by grobstein at 11:08 AM on August 20, 2011 [1 favorite]
A fun read, but I want to point out that this study was based on a convenience sample of 208 undergraduate students. I know I'm stating the obvious to anyone with even the most basic grasp of statistics, but these findings certainly can't be generalized on any level.
posted by pecanpies at 11:10 AM on August 20, 2011 [1 favorite]
I like to pray to the god of random sampling as much as the next person, but to argue that results are entirely ungeneralizable it helps to speculate about the nature of the unobserved heterogeneity between the study sample and wider populations.
posted by found missing at 11:24 AM on August 20, 2011 [1 favorite]
The story referenced above is The Ones Who Walk Away From Omelas [PDF!!!!] by Ursula Le Guin.
SPOILER - the idea is that when you're a teen they show you this room where a child is imprisoned in solitude and pain, and if you accept it you just go on living in utopia, and some people...um...walk away.
I couldn't put my finger on why this story bugged me, and then I realized that it should be called "The Ones Who Got Together, Shared Their Disgust and Took Direct Action To Free The Child From The Room In Omelas". Seriously, you're not even talking about a high security prison, here. The idea that the choices are acquiescence or departure is a bit silly just as a plot device.
posted by Frowner at 11:25 AM on August 20, 2011 [4 favorites]
jeffburdges I doubt you really understand Utilitarianism, alasdair. Try some Pieter Singer books like Animal Rights. And try considering real world scenarios rather than contrived thought experiments.
[Reads Wikipedia article] No, sorry, I'm not getting you. I've read some Singer things, and I completely agree with him on some of his points. But I don't get how I'm mis-characterising Utilitarianism, as it's commonly understood, nor indeed as it is described in the post itself - which in fact references the exact same trolley problem as an example of utilitarianism. Can you elucidate?
Real world scenarios: that would be the scenarios from "nuking Japan" onwards at the end of my comment. Do you need me to clarify any of these? I'm not a good writer. They look pretty real-world to me.
posted by alasdair at 11:29 AM on August 20, 2011
You should check out Donald Saari's geometric decomposition of voting systems, DRMacIver. A priori, the Borda count sounds as ridiculous as range voting to my ear. Yet, apparently the Borda count has some nice universality properties.
In any case, we know that strategic voting is rather easy with range voting, meaning that Arrow's theorem applies in spirit, contrary to its proponents' claims. And approval voting gets downright wonky.
Btw, there has been empirical work suggesting that Arrow-related difficulties decrease when the number of winners increases, favoring multi-winner election systems like single transferable vote, or perhaps a variation on the Schwartz/Smith set with a minimum size.
posted by jeffburdges at 11:30 AM on August 20, 2011
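For readers unfamiliar with the Borda count mentioned above, here is a minimal sketch of the method itself (not of Saari's analysis; the ballots are invented): each voter ranks all candidates, and a candidate receives n-1 points for each first place, n-2 for each second, and so on.

# A minimal Borda count. The ballots below are invented.

from collections import Counter

def borda(ballots):
    scores = Counter()
    for ranking in ballots:
        n = len(ranking)
        for place, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - place
    return scores

ballots = (
    [["A", "B", "C"]] * 4 +  # 4 voters: A > B > C
    [["B", "C", "A"]] * 3 +  # 3 voters: B > C > A
    [["C", "B", "A"]] * 2    # 2 voters: C > B > A
)

print(borda(ballots))  # Counter({'B': 12, 'A': 8, 'C': 7})
# Plurality on first choices alone would elect A (4 first-place votes), while the
# Borda count elects B, who also beats A and C in head-to-head comparisons.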
Ah, Metafilter: where you can't even state the obvious without running into someone who wants to start an argument. I remember now why I primarily lurk rather than comment.
posted by pecanpies at 11:30 AM on August 20, 2011 [2 favorites]
Thank-you grobstein and Frowner for finding the story for me.
posted by alasdair at 11:31 AM on August 20, 2011
Ah, Metafilter: where you can't even state the obvious without running into someone who wants to start an argument.
[tongue-in-cheek] I'm sure that's not true. I think it's a problem with your idea of "obvious". [/tongue-in-cheek]
posted by alasdair at 11:33 AM on August 20, 2011
My point is that it isn't obvious, so lurk away!
posted by found missing at 11:33 AM on August 20, 2011 [2 favorites]
jeffburdges: Know if any of that is online? I vaguely recall having seen visualisations of the discontinuities in voting before, which this reminds me of, but can't find them. Also, what are the Borda count universality properties?
I don't actually have a particular horse to back in the voting system race (although I quite like various randomized methods). I have various practical objections to range voting which could probably be overcome easily enough, but my objection is less to range voting and more about the arguments about why it's the Only Correct Thing being ethically unsound.
(actually this is getting pretty off topic. Perhaps it should be moved elsewhere? As a relative newbie I'm not sure what the metiquette is here)
posted by DRMacIver at 11:40 AM on August 20, 2011
I think people are taking a cartoon version of 'utilitarianism' here, something like "Whatever solution results in the least death/discomfort is the most moral". But that's not what Utilitarianism is actually about. If it was, you could argue that throwing everyone into a prison and giving them happy pills so that they can't hurt themselves or others is the most 'moral' thing to do.
You have to pick what you consider a 'moral good'. You can define things like justice, fairness, and freedom as moral goods as well as life and comfort. But in this thread we have people discussing Utilitarianism without even bothering to define what ideals we are trying to maximize.
So for example the 'throwing the dude in front of the train' example, or the Omelas, or whatever might not be moral if fairness and justice are something you're trying to optimize.
If voters take fairness/justice into consideration when they vote, then you don't have the problem of crushing minorities.
posted by delmoi at 11:42 AM on August 20, 2011 [1 favorite]
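A small sketch of delmoi's point (the options, amounts, and weights are invented purely for illustration): what a "utilitarian" calculation recommends depends entirely on which goods go into the objective and how they are weighted.

# Invented options, amounts, and weights: a "utilitarian" score is only defined
# once you decide which goods count and how much each one weighs.

def score(option, weights):
    return sum(weights.get(good, 0) * amount for good, amount in option.items())

options = {
    "push the man": {"net_lives_saved": 4, "rights_violation": 1},
    "do nothing":   {"net_lives_saved": 0, "rights_violation": 0},
}

lives_only     = {"net_lives_saved": 1, "rights_violation": 0}
lives_and_fair = {"net_lives_saved": 1, "rights_violation": -10}

for weights in (lives_only, lives_and_fair):
    best = max(options, key=lambda name: score(options[name], weights))
    print(weights, "->", best)
# {'net_lives_saved': 1, 'rights_violation': 0} -> push the man
# {'net_lives_saved': 1, 'rights_violation': -10} -> do nothing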
Can't read the study right now, which doesn't matter because even if the authors are careful I'm pretty sure this research will be abused. But I have a quick question. How hard do they work to avoid the following conclusion:
"Psychopaths are more likely to be utilitarian, therefore utilitarians are probably psychopaths."
If anyone can read the article and wants to increase my happiness at low cost to themselves, I'd really appreciate it!
posted by anotherpanacea at 11:50 AM on August 20, 2011
anotherpanacea: From the article:
Nor do our results show that endorsing utilitarianism is pathological, as it is unlikely that the personality styles measured here would characterize all (or most) proponents of utilitarianism as an ethical theory (nor is the measure of psychopathic personality traits we used sufficient to conclude that any respondents reach clinical levels of psychopathy).
posted by DRMacIver at 11:53 AM on August 20, 2011
delmoi: The problem is that when designing a voting system according to a code of ethics, you can't assume that the individual voters adopt that code of ethics (they probably don't). If you want fairness you have to build it into the voting system, not assume that the voters will value fairness over self-interest.
posted by DRMacIver at 11:54 AM on August 20, 2011
Thanks so much! Can I prevail upon your goodwill again? With that caveat in place, what then is the implication of this study? Does it tell us anything about utilitarianism? Or does it only tell us about psychopaths? Would someone with an antisocial personality actually be able to engage in utility-maximizing behavior at their own expense, as for instance Peter Singer advocates?
posted by anotherpanacea at 12:00 PM on August 20, 2011
The story referenced above is The Ones Who Walk Away From Omelas [PDF!!!!] by Ursula Le Guin
Echoed in the series 5 Dr Who episode The Beast Below; wonder if the writers based it on the Le Guin story.
posted by weston at 12:06 PM on August 20, 2011
There have been some other articles on the blue claiming that mildly anti-social personality traits relate to various adaptive behaviors. One suggested that mild levels give you leadership potential, a la Bill Gates, while higher levels give you sociopaths like the Enron guys.
DRMacIver : I think the original papers were Saari's two "Mathematical structure of voting paradoxes" articles, but maybe his preprint 'Explaining All Three-Alternative Voting Outcomes' has a cleaner expository presentation. I never completely bought that Saari's results said exactly what he claimed they said myself, but still.
alasdair : Umm, I thought Japan was nuked to frighten the Russians, right? I'd disagree that even the stated justification was Utilitarian, given the soldiers vs. civilians distinction.
posted by jeffburdges at 12:08 PM on August 20, 2011
"Ethics" and the resultant use of short hand terminology to cover complex issues drives me nuts. You could have just as easily asked the study group of undergrads if their preferred short story was the above referenced "Those who walk away..." or Tom Godwin's The Cold Equations wherein the young teenager stows away on a rocket ship which is on a medical rescue mission, and the ADULTS have to decide whether it's her or the mission because the fuel/trajectory is highly calculated for timeliness and the pilot is not expendable. ADULTS make tough decisions every day and the underlying reasoning is incredibly complex and not necessarily consistent. And refusing to make adult decisions (the above cited, albeit extreme, example of deciding btwn the crashing plane vs the city of innocents) because it's "tough" or "it's in God's hands" is as morally weak/selfish as the "psychopathism" this study reportedly paints utilitarianism to be, imo.
posted by beaning at 12:35 PM on August 20, 2011
alasdair : Umm, I thought Japan was nuked to frighten the Russians, right? I'd disagree that even the stated justification was Utilitarian, given the soldiers vs. civilians distinction.
Hmmm. I Google "truman utilitarian". First result: "President Harry S. Truman, for example, used a utilitarian calculus when deciding to drop atomic bombs on the Japanese cities of Hiroshima and Nagasaki in 1945." Second result: "I’ll argue that President Truman’s decision to use the atomic bomb against Japan was a utilitarian decision" - Philosophy Now. Third result: Truman and utilitarianism and the decision to bomb Japan. And so on.
Look, you clearly have some exciting definition of utilitarianism that's beyond me and 99% of the population. Possibly you're a philosopher, and this is an area I don't understand? But I think my points have been reasonable. I guess I'll just have to leave you to it.
posted by alasdair at 12:54 PM on August 20, 2011
Btw, there has been empirical work suggesting that Arrow related difficulties decrease when the number of winners increases
Not my field, but what's the connection between Arrow's (im)possibility theorem and this? (I'm assuming you're referring to a result that a system approximates characteristics A better as it approaches characteristics B.) While fascinating and depressing, Arrow's Theorem isn't informative about specific tradeoffs in a given system, is it? It spells out what cannot be achieved.
posted by ~ at 1:00 PM on August 20, 2011
anotherpanacea: I would struggle to summarize the conclusion in a few sentences. There's a very good discussion at the end of it. I suppose a rough approximation would be "Most of the obvious things you could conclude from such a result are unwarranted, but we should be careful about using the ability to make valid utilitarian judgements as a yardstick of peoples' moral character".
posted by DRMacIver at 1:05 PM on August 20, 2011 [1 favorite]
Moral, social, and anti-social are very squishy words. Many acts and ideas that are reprehensible pose as moral. Oh, try female genital mutilation, or paying large sums to governments in the form of foreign aid that actually supports the removal of indigenous peoples from lands with attractive natural resources. Moral, moral, moral, many modalities of morality pose as perfection on this little planet of ours. Ours being the operating reality, and ours is a huge item, encompassing everything that we know lives here, and things we have not found yet. Most humans are not moral enough to grant living rights to the web of life.
Anti-social might actually be anti the morality of a group one does not share association with. Utilitarian could be what saves the situation if it is just the math of largest number of survivors. Defining morality is a game for people with nothing better to do, since it is undefinable in any concrete sense. It is an abstraction, a hope for continuity of the immediate norm, embraced by any group, for any reason.
When interracial marriage was illegal in the American South, it was the acceptable norm to hang people who were involved in it. It was the moral thing to do. Morals shift with the social needs of any group. Humans migrate from group to group, and adopt the morals of the new group, to assure membership. It is possible to choose the new group based on moral rectitude, but more likely to choose because of survival needs. Morality plays into to the perceived behavior needed for survival.
Anti-social behavior can represent the highest moral road when that behavior counters extremely mortal behavior parading as morality. Wasn't it Monty Python that said, "NOBODY expects the Spanish inquisition!"
posted by Oyéah at 1:21 PM on August 20, 2011 [2 favorites]
There was some work arguing that increasing the number of winners decreased the opportunities for strategic voting in single transferable vote elections, but I've forgotten the authors and never read the article closely.
I just liked the idea that representing more ideas in your legislature decreases the opportunities for complicated strategic voting. I dunno if it's true.
You might of course claim any such solution merely pushes the voting paradoxes into the legislature, simply hiding them from public view. You might even imagine that political parties eventually organize themselves around whatever opportunities for strategic voting exist, filling some kinda predatory niche. whatever.
posted by jeffburdges at 1:44 PM on August 20, 2011
Anti-social might actually be anti the morality of a group one does not share association with.
I think this context is crucial. I do not believe there are any universal and timeless examples of what must constitute a moral act (or an immoral one for that matter.) If you happened to be born an Inca, then balking at ritual child sacrifice would be highly anti-social and seen as dangerous to a tradition that helped bond the community. It is not unlikely that the child being sacrificed would have shared this community value and may even have felt honoured rather than discriminated against. Yes, this is an example of the dreaded moral relativism, but give me that rather than the dangerous dogma of moral absolutes.
posted by binturong at 2:02 PM on August 20, 2011 [2 favorites]
I'll raise you a dangerous dogma, with a serious stigma, or, that's better than a poke in the eye with a sharp stigma. Or get your dangerous dogma on a leash.
posted by Oyéah at 2:43 PM on August 20, 2011
I do not believe there are any universal and timeless examples of what must constitute a moral act (or an immoral one for that matter.)
Bingo. For one thing, the wording seems chosen to avoid scenarios in which utilitarian choices are deemed acceptable by our society. Try something a la 24 ("the terrorist knows where the bomb is. If you torture him to get the information, twenty innocent children in the embassy day-care will be saved. Do you torture him?") or try making the single victim a criminal ("the death penalty is said to deter crime. If you apply it to the serial murderer, you may prevent umpty-ump future deaths. Do you sentence the murderer to death?") and see what happens...
posted by vorfeed at 2:53 PM on August 20, 2011
The point of "Omelas" is that you're living it; we've all made our peace with the suffering and degradation of not one child but millions of children and adults in the name of our happy and comfortable lives. Only a vanishingly small number of people "walk away" from contemporary consumer capitalism and the widespread structural injustices that fuel it; by and large, those people aren't commenting on MetaFilter. The rest of us find some way to deal with it.
Le Guin says (I think) that she got the idea for the story while driving in her car; Omelas is Salem, Oregon, in the rearview mirror.
posted by gerryblog at 3:14 PM on August 20, 2011 [7 favorites]
The rest of us find some way to deal with it.
This, in a nutshell. Perhaps the philosophy is not so much Utilitarianism as Pragmatism. Since no society is ideal and no society is unchanging, how can we do otherwise than compromise? Re: this article and the discussion, I think that for some of these decisions the characteristic is best described as asocial rather than anti-social. That's a very important distinction.
posted by binturong at 3:46 PM on August 20, 2011
This would appear to fit me. My morality is broadly utilitarian and it seems I have a pretty antisocial personality. Also, my anecdotal single data point rules.
posted by Decani at 4:17 PM on August 20, 2011
I do not believe there are any universal and timeless examples of what must constitute a moral act (or an immoral one for that matter.)
So depending on the society, practices like clitorectomies and sati are moral acts? No, they may not be as contemptible in some contexts where the actors have more of an excuse for failing to recognize their victim's humanity. Give me the so called "dangerous dogma of moral absolutes" instead of this total failure to distinguish good from evil.
It is not unlikely that the child being sacrificed would have shared this community value and may even have felt honoured rather than discriminated against.
It is indeed rather unlikely that any particular child would feel honored rather than discriminated against, but with a large enough population you could probably find a couple that believed in the communal lie right up to the end. There is no community value here for the child to have shared; there may have been a belief in the necessity of the act that the child may have internalized. Note the difference between those two claims. In the latter we can recognize both superstition and that commonly (I would say universally, but there are always some who are simply wrong) acknowledged virtue, duty.
-----
It also shows a significant agency flaw in human thinking. That doing something gives you agency, while standing by and choosing to do nothing makes the outcome not your responsibility.
This might not be a flaw. From the section on "Critique of Utilitarianism" in the Wikipedia article on Bernard Williams:
"Williams was particularly critical of utilitarianism, a consequentialist position, the simplest version of which is that actions are good only insofar as they promote the greatest happiness of the greatest number.
One of his best-known arguments against utilitarianism centers on Jim, a botanist doing research in a South American country led by a brutal dictator. One day Jim finds himself in the central square of a small town facing 20 Indians who have been randomly captured and tied up as examples of what will happen to rebels. The captain who has arrested the Indians says that if Jim will kill one of them, the others will be released in honor of Jim's status as a guest, but if he does not, all the Indians will be killed.[11] For most consequentialist theories, there is no moral dilemma in a case like this; all that matters is the outcome. Simple act utilitarianism would therefore favour Jim killing one of the men. Against this, Williams argued that there is a crucial moral distinction between a person being killed by me, and being killed by someone else because of an act or omission of mine. The utilitarian loses that vital distinction, turning us into empty vessels by means of which consequences occur, rather than preserving our status as moral actors and decision-makers. He argued that moral decisions must preserve our psychological identity and integrity.
We do not, in fact, judge actions by their consequences, he argued. To solve parking problems in London, a utilitarian would have to favour threatening to shoot people who parked illegally. If only a few people were shot for this, illegal parking would soon stop; thus the utilitarian calculus could justify the shootings by the happiness the absence of parking problems would bring. Any theory with this as a consequence, Williams argued, should be rejected out of hand, no matter how plausible it feels to argue that we do judge actions by their consequences. In an effort to save the utilitarian account, a rule utilitarian — a version of utilitarianism that promotes not the act, but the rule that tends to lead to the greatest happiness of the greatest number — would ask what rule could be extrapolated from the parking example. If the rule were: "Anyone might be shot over a simple parking offense," the utilitarian would argue that its implementation would bring great unhappiness. For Williams, this argument simply proved his point. We do not need to calculate why threatening to shoot people over parking offenses is wrong, he argued, and any system that shows us how to make the calculation is one we should reject. Indeed, we should reject any system that reduces moral decision-making to a few algorithms, because any systematization or reductionism will inevitably distort its complexity."
posted by BigSky at 5:31 PM on August 20, 2011 [3 favorites]
But I thought I was agreeing with you! I really enjoyed the double d of dangerous dogma. I was just kidding around.
posted by Oyéah at 8:47 PM on August 20, 2011
So depending on the society, practices like clitorectomies and sati are moral acts? No, they may not be as contemptible in some contexts where the actors have more of an excuse for failing to recognize their victim's humanity. Give me the so called "dangerous dogma of moral absolutes" instead of this total failure to distinguish good from evil.
But don't cultures with these practices already have the "dangerous dogma of moral absolutes"? And isn't that precisely why they fail to recognize their victim's humanity?
posted by fleetmouse at 9:08 PM on August 20, 2011
First you say "give me the so called "dangerous dogma of moral absolutes" instead of this total failure to distinguish good from evil.", and then you claim that "we should reject any system that reduces moral decision-making to a few algorithms, because any systematization or reductionism will inevitably distort its complexity"? Universal morality ("good from evil") would seem to involve a tremendous reduction of the complexity of moral decision-making. Same for the idea that "we do not, in fact, judge actions by their consequences" (if this were literally true, perceived risk would have no effect on behavior whatsoever, and we'd have gone extinct shortly after our ancestors decided to hug the nearest lion).
Besides, the point is not that "practices like clitorectomies and sati are moral acts", everywhere and at all times. The point is that some "immoral" practices are always viewed as moral acts within a given society, as with my examples from 24 and the criminal justice system. The list of "acceptable" violations of the local moral rules may change depending on the society, as may the rules themselves, but there are always exceptions. We don't shoot men for parking violations, but many of us will sentence them to die for treason or murder, or kill them in self-defense. We don't harm children, but when potentially-harmful practices are seen as sufficiently beneficial, we tend to give them a pass. Change the number of lives involved from five to five million, and most people will give more consideration to the article's supposedly "psychopathic" responses. When the perceived consequences become serious or personal enough, the idea that we don't judge actions by their consequences often breaks down.
In short: it is as impossible (and, ultimately, as undesirable) to live in a world with a single set of universal morals as it is to live in one where consequences are the sole moral yardstick. Neither paints a descriptive (as opposed to prescriptive) picture of the way human morality appears to work.
posted by vorfeed at 10:03 PM on August 20, 2011 [1 favorite]
It's my understanding that people who develop antisocial personality disorder do so because of both a genetic tendency and abuse or really bad parenting as a child. As a survival strategy, the child's brain responds to the abuse by shutting down, to some significant extent, the limbic system - the more emotional seat of the brain, connected with love - and becoming more neocortically gifted, especially strategically.
This is, I think, why sociopaths are usually found in political/military and corporate/business leadership roles. They are driven to strategize successfully. Being radically less encumbered by emotional attachments, compassionate empathy or well-wishing consideration for others, they make amazing strategists. They are limbically challenged. Because their core sense of self is not integrated with love as it is in emotionally healthier people, sociopaths lack integrity, have a slippery inner narrative (rather than an authentic autobiography), are incapable of honesty (although they can speak the truth, often with malice) and are not morally compelled in the way emotionally endowed people are.
posted by nickyskye at 11:31 PM on August 20, 2011 [1 favorite]
But don't cultures with these practices already have the "dangerous dogma of moral absolutes"? And isn't that precisely why they fail to recognize their victim's humanity?
No.
Your question, like my earlier comment, presumes the importance of recognizing others. If there is no implicit acceptance of a standard for one and all, then what does it matter if some dominant members of a different culture, or for that matter our own, victimize others, e.g. engage in slavery? They're just doing their thing, right?
In fact, if ethics are relative to a time and place, then there is nothing all that dangerous about dogma. It's only because we all believe that a dogma can be wrong (wrong, not just different) that there is any reason to question its content.
Sure, the self righteous have committed many travesties, and their conviction fuels their acts. But that's a call to reflect on one's own behavior, not a reason to deny virtue.
-----
First you say "give me the so called "dangerous dogma of moral absolutes" instead of this total failure to distinguish good from evil.", and then you claim that "we should reject any system that reduces moral decision-making to a few algorithms, because any systematization or reductionism will inevitably distort its complexity"?
???
The first part is something I wrote, the second was quoted from the Wikipedia article on Bernard Williams.
Same for the idea that "we do not, in fact, judge actions by their consequences" (if this were literally true, perceived risk would have no effect on behavior whatsoever, and we'd have gone extinct shortly after our ancestors decided to hug the nearest lion).
Again, this is in reference to Bernard Williams and limited to his criticism of utilitarianism. The man was not such a simpleton as to claim that possible consequences have no effect on behavior(!).
On another occasion he does discuss the importance of consequences in forming our judgment of an action: moral luck.
Besides, the point is not that "practices like clitorectomies and sati are moral acts", everywhere and at all times.
LOL.
My point was the opposite, that yes there actually are acts which are universally immoral.
In short: it is as impossible (and, ultimately, as undesirable) to live in a world with a single set of universal morals as it is to live in one where consequences are the sole moral yardstick. Neither paints a descriptive (as opposed to prescriptive) picture of the way human morality appears to work.
You seem to think that I hold to some set of propositions stating with finality what is and is not moral behavior, something along the lines of the Ten Commandments. I do not. My claim is that there is a real standard out there - the Good, whose precise articulation perhaps escapes us - and that virtue is both real and common, common in the sense of shared. Note that my examples of universally immoral acts are not behaviors like killing or lying, but wholly gratuitous ones.
posted by BigSky at 11:45 PM on August 20, 2011
If there is no implicit acceptance of a standard for one and all, then what does it matter if some dominant members of a different culture, or for that matter our own, victimize others, e.g. engage in slavery? They're just doing their thing, right? [...] It's only because we all believe that a dogma can be wrong (wrong not just different), that there is any reason to question the content.
You seem to be conflating moral relativism in general with normative relativism, but the one does not entail the other. Relative systems of morality are relative, but they do not have to be all-accepting. A moral relativist can believe that slavery, dogma, etc. are wrong according to his or her own morals and/or society's morals, just as a moral absolutist can believe that they are wrong according to some supposedly-universal standard.
Note that my examples of universally immoral acts are not behaviors like killing or lying, but wholly gratuitous.
...to you. To other people, they can be the very height of morality, and your so-called virtue can be the deepest vice. That's the problem. By any reasonable standard, your "common" universal standard of Good isn't actually universal -- if it were, "wholly gratuitous" acts would not be so commonly accepted and even lauded by a wide variety of human cultures throughout history, including our own.
posted by vorfeed at 12:37 AM on August 21, 2011
RE the debate on moral relativism vs moral absolutism: The stance I take is that morality is entirely relative, but that that shouldn't stop me from imposing mine on other people. If someone is committing acts I believe to be profoundly immoral, I will attempt to stop them. I have no inherent right to do so, but it would be unethical (according to my personal standards) to do otherwise.
(I suppose from the link above that this is an instance of meta-ethical relativism)
posted by DRMacIver at 2:27 AM on August 21, 2011
WTF is Mayfair?
Softcore porn magazine of the 70s and 80s, m'lud. Named after a street where rich people live, as though the association would somehow drag the amateur gynaecology magazine upmarket. Among their competitors were a batch seemingly named after Ford cars (Escort and Fiesta - no Cortina, which would have clinched it). Another competitor, Parade, referred to its models as "paradames", which struck me as a cry for help - "Here I am, brain the size of a planet, extensive classical education, and the only job I can find is editor of a magazine about pudenda!"
HTH
posted by Grangousier at 3:15 AM on August 21, 2011 [6 favorites]
The problem with most of these questions is the "sink or swim" argument fallacy -- putting on pressure to make it appear as if there is no other option but death one way or another. For an anti-social type, it's a dream scenario -- a moral justification to kill someone, while sparing their own butt every time.
But I don't go for false dichotomies, and in most cases I can think of a solution that keeps everyone alive and breathing. The authoritative tone of the questions is an interesting misdirection that reminds me of the Milgram experiments -- the guy in the lab coat says I have no choice but to continue, and I continue like a big dummy instead of getting up, kicking lab coat boy in the shins before freeing the subject and reporting the torturefest to the police. Not everyone defers to an authority who tells them they only have two choices.
The study is interesting in that it makes assumptions about human morals based on those same two fallacies. I would have liked to have seen a study that provided the same scenarios but left it open for subjects to write out what they would do. I bet some of the answers would have been more telling and useful...
posted by Alexandra Kitty at 12:09 PM on August 21, 2011 [1 favorite]
"Knowing so little of human feelings, he knew still less of the influences by which those feelings are formed: all the more subtle workings both of the mind upon itself, and of external things upon the mind, escaped him; and no one, probably, who, in a highly instructed age, ever attempted to give a rule to all human conduct, set out with a more limited conception either of the agencies by which human conduct is, or of those by which it should be, influenced."
posted by robself at 10:10 AM on August 20, 2011 [12 favorites]