Auros (auros) wrote,
Experimental Ethics

So, I was arguing with somebody the other day about the meaning of the Milgram and Zimbardo experiments, how we assess responsibility/blame, and how a rationalist formulates those ideas in a non-dualist/physicalist universe. He's a Christian who likes Gould's silly Non-Overlapping Magisteria thing.

As some of you may know, I regard Utilitarianism as the only reasonable starting point for making ethical public policy. There are four factors in it that you have to be very flexible about.

  1. Constituency: Whose "happiness" counts? (Like, should we count animals and trees as part of what should be "kept happy"?)
  2. Desires: What factors make up happiness? (For a person, a three-member vector for wealth / "standard of living", freedom, and mental well-being is a good start. For an animal, you might get away with a scalar representing how much of its normal life in its natural habitat has been impacted.)
  3. Aggregation: How do we "sum up" happiness across a group? (Clearly we don't want to say that a situation where most people live just above the poverty line, under one or two "infinitely happy" overlords, is just as good as one where the overlords' goods are divided roughly equally among the population. There's a toy sketch of this after the list.)
  4. Values: Given that happiness may be made up of more than one factor, how do we weight them to come up with a final scalar value to represent the success of the society?
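
Just to make that machinery concrete, here's a deliberately crude toy sketch (in Python) of where the four factors plug in. Every number, weight, and function in it is invented purely for illustration -- the point is the shape of the calculation, not any particular choice.

    import math

    # Factor 1, Constituency: who gets an entry in these lists at all.
    # Factor 2, Desires: each person is a (wealth, freedom, well-being)
    # vector. All numbers are made up for illustration.
    overlord_world = [(1000.0, 9.0, 9.0)] + [(1.0, 3.0, 3.0)] * 99
    flat_world = [(2.0, 5.0, 5.0)] * 100

    # Factor 4, Values: an aesthetic choice of weights for the desire factors.
    VALUES = (0.4, 0.3, 0.3)

    def happiness(person):
        """Weighted scalar happiness for one person."""
        return sum(w * x for w, x in zip(VALUES, person))

    # Factor 3, Aggregation: two candidate ways to "sum up" across the group.
    def plain_sum(society):
        """Indifferent to distribution -- overlords count at face value."""
        return sum(happiness(p) for p in society)

    def log_sum(society):
        """Sum of logs, which favors flatter distributions."""
        return sum(math.log(happiness(p)) for p in society)

    for agg in (plain_sum, log_sum):
        print(agg.__name__, round(agg(overlord_world), 1),
              round(agg(flat_world), 1))

Run it and the plain sum actually prefers the overlord world, while the log-sum aggregation prefers the flat one -- which is exactly the kind of disagreement factor 3 is about.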


Now, obviously I do not think it's worth trying to actually define a Utilitarian function. But Utilitarianism as a methodology is the only ethical way to practice politics. When politics is working right, the sort of things we argue over are precisely the four factors above. Liberals tend to make arguments suggesting that they're inclined towards aggregation functions that favor flatter distributions. Free-market libertarians think that the weighting of values should favor liberty over standard-of-living issues (and also tend not to acknowledge that, for the poor, the two are linked). The argument over tradeoffs between freedom and security really comes down to whether the mental well-being and economic risks averted are worth the reduction in freedom; this has both an experimental aspect (have security cameras in shopping malls actually been shown to prevent crime?) and an aesthetic one -- because that's what the final Values factor is: an aesthetic choice about how to weight the other factors.

I happen to find that many people whose choices about Values are radically different from mine also offer programs that, experimentally, are failures -- e.g. abstinence-only sex ed -- but these are two independent observations. I acknowledge that there could, in theory, be a reasonable Christian who felt that preserving virginity had some sort of transcendent importance, and wanted to find sex ed techniques that would promote that goal.

For personal ethics, as opposed to the public ethics we should use for directing the action of an entire society, I regard Epicureanism as a very similar model to Utilitarianism, but with an acknowledgement built in that the importance we assign to a thing is inversely proportional to its "emotional distance" from us. We can actually do a pretty good job of characterizing how ethical a person is by integrating the function that maps that distance to concern. Somebody who only cares about hirself has a 1 at d = 0, with an almost immediate drop to 0 for any d > 0, and thus has almost no area under the curve. A serious pantheist tries to experience the universe as unitary, valuing everything in it -- implying an almost flat curve, and a potentially limitless area underneath it. A more realistic humanist (religious or otherwise) tends to regard the survival of the species as desirable, and one would expect to see a gradual drop-off, approaching 0 as you got to things like, say, the well-being of hypothetical bacteria living on comets in the Oort cloud. *g*
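
To make that integral concrete, here's a little sketch. The three concern curves are pure inventions of mine (any similarly shaped functions would do); the point is just that the egoist's curve has almost no area under it, the humanist's has a modest finite area, and the pantheist's would grow without bound if the domain weren't truncated.

    import numpy as np

    def area_under(curve, x):
        """Trapezoid-rule integral of a concern curve over distance."""
        return float(np.sum(0.5 * (curve[1:] + curve[:-1]) * np.diff(x)))

    # Emotional distance d, truncated at 10 for the sake of a finite integral.
    d = np.linspace(0.0, 10.0, 1001)

    curves = {
        # 1 at d = 0, dropping to ~0 almost immediately: nearly zero area.
        "egoist": np.exp(-50.0 * d),
        # Gradual drop-off, approaching 0 out at the Oort cloud: finite area.
        "humanist": np.exp(-0.5 * d),
        # Almost flat: on an unbounded domain the area would be limitless.
        "pantheist": np.full_like(d, 0.95),
    }

    for name, curve in curves.items():
        print(f"{name:>9}: area ~ {area_under(curve, d):.2f}")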

Now, here's the thing. During the discussion mentioned up at the top, one of the things brought up was that ethics can't ever really be empirical. I let that slide at the time. But I don't think it's actually true. The truth is, it can't be empirical while still being ethical or practical. You could, in theory, raise millions of clones in almost-identical Truman-Show environments, in order to use them as test subjects to see what kinds of personal interactions make them happy or hurt them. You could build whole societies in such bubbles to see where things end up under various policies and starting conditions (stuff like native language, clothing styles, etc.). But obviously this is prohibitively expensive, and violates the precept of informed consent.

Somehow, it hadn't occurred to me to make the obvious connection. It hit me this morning while I was making breakfast.