Adventures of blasphemy, anger, and failure in philosophy

Wednesday, November 18, 2009

The Axiom of Utility and the Non-Importance of Truth

I hope my next axiom is somewhat justifiable on its own (as axioms ideally should be), and it is the basis for pretty much everything to come. It states simply that:

Any philosophy I choose to adopt must give me some direction as to how to make decisions. Otherwise, I have no reason to adopt said philosophy.

And that's it! I can start deducing immediately, and get some pretty nontrivial results too. Here's one:

Theorem 1 (yes, math has influenced me that much. So what? It's better than not justifying things): I exist.

Proof: If my philosophy has the clause "I don't exist", it logically cannot assign actions to me. Therefore, by the Axiom of Utility, it's worthless for me to adopt it.

Theorem 2: There must exist things I can do, collectively called my Action Set. The Action Set must have cardinality greater than 1, and it must somehow influence which element of the set of possible outcomes (the imaginatively-named Outcome Set) comes about - i.e. there exist two distinct actions that assign different probabilities (as in, how likely each outcome is to actually happen given that action) to the members of the Outcome Set. Basically, this says that I can choose actions which will cause some change in my real-world experiences.

Proof: If there weren't such actions and outcomes, no two actions would be at all distinguishable and so the philosophy would not help me decide between actions. Thus any philosophy I adopt must have such actions and outcomes.
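
To make this concrete, here's a minimal sketch in Python. The actions, outcomes, and probabilities are made up purely for illustration; the point is just that each action carries a probability distribution over the Outcome Set, and Theorem 2 demands that at least two of these distributions differ:

```python
# Illustrative Action Set: each action maps to a probability
# distribution over the (made-up) Outcome Set.
actions = {
    "stay_home": {"rested": 0.8, "bored": 0.2},
    "go_out":    {"rested": 0.1, "bored": 0.9},
}

def distinguishable(actions):
    """True if at least two actions induce different outcome
    distributions - exactly the condition Theorem 2 demands."""
    dists = list(actions.values())
    return any(d1 != d2 for i, d1 in enumerate(dists) for d2 in dists[i + 1:])

print(distinguishable(actions))  # True: these two actions can be told apart
```

If `distinguishable` returned False, every action would look identical from the inside, and no philosophy could help choose between them.
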

And another:
Theorem 3: Some outcomes are to be preferred over others. (The mapping from the set of outcomes to the set of values they hold for me is called my Utility Function. I'll talk about that later on.)

Proof: If no outcome is preferred to any other outcome, no action is preferred, since actions both partially determine and are part of the outcomes. Thus no action can be logically prescribed by my philosophy, and so some outcomes must be preferred over others.
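
Theorems 2 and 3 together already yield a decision rule. Here's a minimal sketch, reusing the made-up example from above - the particular probabilities and utility values are illustrative assumptions, not anything the argument commits to:

```python
# Illustrative Action Set (Theorem 2): distributions over outcomes.
actions = {
    "stay_home": {"rested": 0.8, "bored": 0.2},
    "go_out":    {"rested": 0.1, "bored": 0.9},
}

# Illustrative Utility Function (Theorem 3): outcomes mapped to values.
utility = {"rested": 10.0, "bored": 2.0}

def expected_utility(dist, utility):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * utility[outcome] for outcome, p in dist.items())

best = max(actions, key=lambda a: expected_utility(actions[a], utility))
print(best)  # "stay_home" under these made-up numbers (8.4 vs 2.8)
```

The numbers don't matter; what matters is that once some outcomes are preferred to others, the preference flows back through the probabilities to rank the actions themselves.
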

Note that none of these theorems have anything to do with truth! If reality really excludes me, it still doesn't help me to believe in my own non-existence - so of course I believe in my existence. By believing in my existence I allow two outcomes to happen: (a) I exist and I am right, or (b) I don't exist, so, although I seem wrong, I'm not since I don't exist and therefore can't hold this wrong opinion. Thus either I'm right, or I don't exist so it doesn't matter anyhow. This is provably superior to the belief that I don't exist, which is either wrong or moot.
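
That case analysis can be laid out explicitly. A small sketch, with purely illustrative labels - "moot" marks the cases where there is no believer around to be right or wrong:

```python
# The four (belief, state) cases from the paragraph above.
cases = {
    ("believe I exist", "I exist"):       "right",
    ("believe I exist", "I don't exist"): "moot",
    ("believe I don't", "I exist"):       "wrong",
    ("believe I don't", "I don't exist"): "moot",
}

for belief in ("believe I exist", "believe I don't"):
    verdicts = [v for (b, _), v in cases.items() if b == belief]
    print(f"{belief}: {verdicts}")
# believe I exist: ['right', 'moot']   - never wrong in any state
# believe I don't: ['wrong', 'moot']   - possibly wrong
```

"Believe I exist" is never wrong in any state while "believe I don't" can be, which is the sense in which the first belief is provably superior.
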

These are the first theorems that can be deduced from the axiom of utility. Theorem 2 is essentially my statement on the loaded question of Free Will; however, I feel Free Will to be important enough to merit its own post, which I will write next time.

2 comments:

  1. This comment has been removed by the author.

  2. (I wanted this to be easier to read)
    Unfortunately I believe that these theorems are presuppositions - that these "theorems" are more basic than the Axiom of Utility, and thus the Axiom of Utility is based on them.

    For instance, the semantics of the Axiom as stated presuppose that you exist. If you rephrase it as "Any philosophy one chooses ... must give one some ... Otherwise, one has...." then only someone has to exist, not you specifically. In addition, this only works if there is someone to exist in the first place. Someone could exist even if the Axiom didn't hold - for example, if the world were deterministic on your view - so existence is too basic an assumption for the Axiom to prove.

    Think of a person, let's call him Albert, who didn't have this Axiom. This would mean that Albert is willing to adopt philosophies that don't affect his decisions. He still makes decisions and has multiple actions he can take; it's just that certain parts of his philosophy are incredibly ill-equipped to deal with them. For example, if Albert's deepest belief is that there are aliens on the Moon, that doesn't help him decide between drinking tea or hot chocolate. He could still decide to have tea, but he most likely just flipped a coin or used some other method rather than deducing the choice logically from the fact that aliens are on the Moon.

    Similarly, Albert wants to pick the drink which tastes best to him. He doesn't use the lunar-alien belief to make the decision, but he still wants the outcome of having the best drink. From another perspective, bacteria don't have brains or beliefs, but there are outcomes which are better for them (as in, better for their long-term survival, which seems to be the basis upon which they live). In essence, trying to make the best decision is predicated on there being a best decision: that there is a best outcome, and that there can be a system by which the best outcome is determined.

