Wednesday, December 16, 2009

Utility Monsters

I doubt that any utilitarian sympathizer has managed to evade the question of the utility monster. As a complete utilitarian (when in the role of outside observer; in the role of active player, I'm a self-acknowledged selfish altruist, i.e. I like other people to be happy, but only because I tend to be happier when the people around me are happy, so basically I'm totally selfish), I've decided to simply tackle the question head on. First, though, I need to explain my utilitarian ideas and the idea of the utility monster.

My utilitarian idea is simply that the optimal result for the whole is the optimal result for the parts; optimal results are calculated in units of "happiness*time*capacity to experience". Happiness ranges from -1 to 1, from "completely unhappy" (would prefer to not exist) to "completely happy", with 0 defined as "indifferent about own existence". This is what that means: each entity in the universe has a utility function that assigns how much the entity "likes" any given outcome of the universe. Non-sentient entities don't actually care, so their utility is defined to be a constant 0 over all outcomes. Sentient entities define their utility functions for themselves. A typical human utility function combines material wealth, emotional stability, social interactions, and a number of other factors. Utility functions increase in magnitude as "capacity to experience" increases - i.e. a snake may "like" a certain outcome that a human would not like, but the human overrides the snake, since the human is more fully aware and thus better able to appreciate its optimal utility. I add the "*time" factor since I think this happiness should last.

More precisely, total utility from time t = a to b is U(a, b) = ∫ from a to b [ Σ over all entities of (happiness × capacity to experience) ] dt - a fairly simple equation if you understand calculus.
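For the programmatically inclined, here is a minimal numerical sketch of that integral. The entities, their happiness curves, and the capacity weights are invented purely for illustration; nothing beyond the sum and the integral is prescribed by the formula itself.

```python
# Minimal numerical sketch of the total-utility formula above.
# Every entity, happiness curve, and capacity weight here is an invented
# illustration, not something the formula itself prescribes.

def total_utility(entities, a, b, steps=1000):
    """Approximate U(a, b) = integral over [a, b] of
    (sum over entities of happiness(t) * capacity), via a Riemann sum."""
    dt = (b - a) / steps
    total = 0.0
    for k in range(steps):
        t = a + k * dt
        total += sum(e["happiness"](t) * e["capacity"] for e in entities) * dt
    return total

# Illustrative population: happiness(t) is in [-1, 1], capacity is a
# non-negative weight standing in for "capacity to experience".
population = [
    {"capacity": 1.0, "happiness": lambda t: 0.5},    # a fairly content human
    {"capacity": 0.1, "happiness": lambda t: -0.2},   # a mildly unhappy snake
    {"capacity": 0.0, "happiness": lambda t: 0.0},    # a rock: non-sentient, constant 0
]

print(total_utility(population, a=0.0, b=10.0))  # approx. (0.5*1.0 - 0.2*0.1) * 10 = 4.8
```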

A utility monster is a being that experiences a massive burst of utility which is mutually exclusive with others' optimal results: for example, a typical utility monster is a man who experiences millions of times the happiness a normal human is capable of when he kills babies. The question: by my formula, giving babies to this man to kill gruesomely will indeed increase total utility. Is it therefore the right thing to do?
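To see what the formula actually does here, extend the sketch above with invented numbers - one monster against one ordinary victim (again, the figures are illustrative, nothing more):

```python
# Invented numbers only: one monster whose capacity dwarfs a single victim's.
monster_scenario = [
    {"capacity": 1_000_000.0, "happiness": lambda t: 1.0},   # the utility monster, completely happy
    {"capacity": 1.0,         "happiness": lambda t: -1.0},  # the victim, completely unhappy
]
print(total_utility(monster_scenario, a=0.0, b=1.0))  # approx. 999999.0: the total still goes up
```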

Well, it depends on "right thing to do". I certainly wouldn't sacrifice myself to the utility monster - I'm selfish. And I wouldn't sacrifice others to it either - it's not my business, and I'd rather avoid it. But I do believe that it is, from society's point of view, the "right thing to do". After all, we do it already. In fact, unless you're vegetarian, YOU are a utility monster (or at least you should hope you are - if you're not, it means eating meat is increasing total suffering in the world). I don't try to moralize meat-eating at all, really. I do it because I like it and I don't give a flying shit about what happens to animals, except for my friends' pets and the fish in my mother's pond. I accept that I am the utility monster, or worse, that I'm pure selfishness propagating misery, and I eat meat. It's delicious. And I certainly wouldn't begrudge meat to someone else. We all have the 'right' to do anything we can physically get away with (i.e. not be caught or punished for) and emotionally get away with (i.e. not feel guilty about afterwards), and I intend to exercise this right continuously. It's what rational living is all about.

Believing that feeding the utility monster is 'wrong' while continuing to eat meat is a contradiction. A side-effect of this conclusion is that if machines ever achieve greater sentience than humans, they are 100% justified in wiping out humanity to make room for their superior pleasure centers. If they try to wipe ME out, though, I'll fight back. Because it's what's good for ME.

Look out for yourself. Few others will.
