
PPE 110

PPE 110. Lecture on preferences over uncertainty. Again we observe that there is more to decision-making of people than is being captured here, but again we proceed because of the same reasons.



  1. PPE 110 Lecture on preferences over uncertainty

  2. Again we observe that there is more to decision-making of people than is being captured here, but again we proceed because of the same reasons. • In addition to these simplifying assumptions, we will need to impose a little more structure on the set of probability trees ∆(X) • We will assume that whenever the decision-maker is faced with a pair-wise comparison between two elements of ∆(X), the decision-maker can say if any one tree is at least as good as another, or is not at least as good as the other.

  3. Since this is a mouthful, we will use the terminology that, given α and β ∈∆(X) • α ≽β will imply that α is at least as good as β • ¬(α ≽β) will imply that α is not at least as good as β • Again remember that α and β are both situations of uncertainty/probability-tree/probability distribution/lottery (the terms are used interchangeably), while ∆(X) is the set of all such situations/probability-trees/probability-distributions/lotteries

  4. Note that we may define α ∼ β, or "α is exactly as good as β", by saying this holds if α ≽ β and β ≽ α • Similarly, note that we may define "α is strictly better than β", or α ≻ β, by saying this holds if α ≽ β and ¬(β ≽ α) • In other words, if individuals can compare pair-wise in order to be able to say "this situation is at least as good as the other" or "this situation is not at least as good as the other", then that is tantamount to giving them the ability to say: "the situations are equally good" or "one situation is strictly better."

  5. Again a reminder: always remember what X, ∆(X), α and β are. • You are used to comparing objects (the saying this is like “comparing apples to oranges..”). • Now we are comparing probability distributions over objects rather than comparing the objects themselves

  6. Here then are the weak order axioms: • II (a) For all α, β ∈ ∆(X), either α ≽ β or β ≽ α or both. This is known as the completeness axiom • (b) For all α ∈ ∆(X), α ≽ α (this is known as the reflexivity axiom) • (c) For all α, β, γ ∈ ∆(X), if α ≽ β and β ≽ γ then α ≽ γ (this is known as the transitivity axiom). This implies the strict version of the property as well: if α ≻ β and β ≻ γ then α ≻ γ
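For a finite set of lotteries, the three weak-order axioms can be checked mechanically. The sketch below is illustrative (the three-element set and the relation are assumptions, not from the slides): the relation is stored as a set of ordered pairs, with (a, b) meaning "a is at least as good as b".

```python
from itertools import product

# Hypothetical finite set of lotteries and a weak preference relation
lotteries = ["alpha", "beta", "gamma"]
weakly_prefers = {("alpha", "alpha"), ("beta", "beta"), ("gamma", "gamma"),
                  ("alpha", "beta"), ("beta", "gamma"), ("alpha", "gamma")}

def is_complete(rel, items):
    # II(a): every pair is ranked in at least one direction
    return all((a, b) in rel or (b, a) in rel for a, b in product(items, items))

def is_reflexive(rel, items):
    # II(b): every item is at least as good as itself
    return all((a, a) in rel for a in items)

def is_transitive(rel, items):
    # II(c): a >= b and b >= c together imply a >= c
    return all((a, c) in rel
               for a, b, c in product(items, items, items)
               if (a, b) in rel and (b, c) in rel)

print(is_complete(weakly_prefers, lotteries),
      is_reflexive(weakly_prefers, lotteries),
      is_transitive(weakly_prefers, lotteries))  # True True True
```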

  7. The first axiom says that the decision-maker can always compare two probability trees, and is able to say that one is at least as good as the other. The rational person never throws their hands up in the air and says the two cannot be compared. • This assumes you always have some idea of what the two trees are about. • Consider the following two probability distributions. In the picture, let b = the number of mountains over 10,000 meters in the solar system

  8. [Trees reconstructed from the slide: one tree pays $10 with probability 0.5 and $3b with probability 0.5; the other pays $4 with probability 0.7 and $9 with probability 0.3.]

  9. Most people will not know which to prefer, even weakly, because they do not know what b is. However, rational choice forces that person to be able to make pairwise comparisons of the "at least as good as" sort. • The logic is: even if you do not know what b is, use your best possible guess, and then rank the two situations as better, worse, or equal.

  10. The reflexivity axiom is relatively innocuous; it is imposed for technical reasons – to complete a mathematical proof. We will not worry too much about that axiom • The transitivity axiom, however, is very powerful. Can you think of a situation where it may not be obeyed?

  11. Here is one example. Suppose a person prefers more sugar in a cup of tea to less, up to 5000 grains of sugar. • A sequence of cups of tea is presented to this person. The sequence is ordered. Let the kth cup have k grains of sugar and be denoted by Ck. Assume there are 5001 such cups – C0, C1, C2, C3, …, C5000.

  12. Assume that to be able to distinguish a cup of tea as containing more sugar than another, the preferred cup must have at least 10 grains more sugar. Otherwise, the difference in sweetness falls below the person's threshold of distinction. • Adjacent cups differ by only one grain, so each cup is at least as good as the next: C0 ≽ C1 ≽ C2 ≽ C3 ≽ … ≽ C5000 • And yet C5000 ≻ C0, since a 5000-grain difference is easily noticed – transitivity would force C0 ≽ C5000, a contradiction.
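The sugar-threshold story can be sketched directly. In this hypothetical model (the 10-grain just-noticeable difference is from the slide; the code itself is an illustration), cup a is weakly preferred to cup b unless b is noticeably sweeter:

```python
# Cup k contains k grains of sugar; more sugar is preferred, but
# differences under the just-noticeable difference cannot be detected.
JND = 10

def at_least_as_good(a, b):
    # a >= b fails only when cup b is noticeably sweeter than cup a
    return not (b - a >= JND)

# Every adjacent pair in the chain C0, C1, ..., C5000 satisfies Ck >= Ck+1 ...
chain_holds = all(at_least_as_good(k, k + 1) for k in range(5000))
# ... and yet C5000 is strictly better than C0:
strictly_better = at_least_as_good(5000, 0) and not at_least_as_good(0, 5000)
print(chain_holds, strictly_better)  # True True -- so transitivity fails
```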

  13. Another famous example involves a context outside decision theory, but is still very useful to look at. The issues illustrated by this paradox translate to our context as well. • Suppose there are 3 voters (x, y, z) and 3 candidates (A, B, C). Voters rank their choices as best, middle, worst, or as 1, 2, 3 respectively. • The group makes its decision by majority rule: one candidate (say A) is preferred to another (say B) by the entire group if at least 2 people prefer A over B. • Consider the following situation

  14. For the entire group, note that A ≻ B and B ≻ C, but C ≻ A. • Thus, this is a failure of transitivity • What we are ruling out by imposing transitivity is the existence of such choice cycles. • The last set of axioms involves the substitution axiom and the Archimedean axiom. They will appear next as III (a) and III (b) respectively.
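The original slide's preference table did not survive the transcript, but one standard profile that produces exactly this cycle under majority rule can be checked in a few lines (the particular assignment of rankings to voters x, y, z below is an assumption):

```python
# One profile of rankings (best to worst) that generates a majority cycle
rankings = {"x": ["A", "B", "C"],
            "y": ["B", "C", "A"],
            "z": ["C", "A", "B"]}

def majority_prefers(p, q):
    # p beats q if at least 2 of the 3 voters rank p above q
    votes = sum(r.index(p) < r.index(q) for r in rankings.values())
    return votes >= 2

print(majority_prefers("A", "B"),   # True
      majority_prefers("B", "C"),   # True
      majority_prefers("C", "A"))   # True -- a choice cycle
```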

  15. III (a) (Substitution Axiom): For any X and corresponding ∆(X), assume that the preferences of the individual over elements of ∆(X) obey the axioms I (simplifying axioms) and II (weak-order axioms) stated previously. Further, assume that given α, β, γ ∈ ∆(X), it is true that α ≻ β. Then for any number x strictly between 0 and 1, it will be true that • xα + (1-x)γ ≻ xβ + (1-x)γ • Here is an illustration of the axiom at work. Suppose X is the set of all dollar amounts.

  16. Let the trees be as follows (reconstructed from the slide): β pays $2 with probability 0.3 and $10 with probability 0.7; α pays $3 with probability 0.2, $7 with probability 0.4, and -$9 with probability 0.4; γ pays $2 with probability 0.8 and $4 with probability 0.2.

  17. Then it is true that for any fraction x strictly between 0 and 1, the compound lottery giving α with probability x and γ with probability 1-x is strictly preferred to the one giving β with probability x and γ with probability 1-x: xα + (1-x)γ ≻ xβ + (1-x)γ

  18. What this means is that if one situation of uncertainty α is preferred to another β (for whatever reason), then a more complex situation of uncertainty which results in α with probability x and any third situation of uncertainty γ with probability 1-x is to be preferred to another, more complex situation of uncertainty which gives β, also with probability x, and γ with probability 1-x • In a sense, the γ cancels out in the reckoning. This property is similar to that of real numbers, with the ≻ replaced by >.

  19. We will assume that the reverse is also true: • Given any set of outcomes X, any set of lotteries/probability trees over X denoted as Δ(X), any p, q, r ∈ Δ(X) and any α ∈ (0,1) • p ≻ q => αp + (1-α)r ≻ αq + (1-α)r and αp + (1-α)r ≻ αq + (1-α)r => p ≻ q
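One way to see why the substitution axiom is natural is that any expected-utility ranking automatically satisfies it, because mixing is linear in probabilities. A small sketch, where the particular lotteries and the square-root utility are illustrative assumptions:

```python
def mix(x, p, q):
    # Compound lottery xp + (1-x)q, with lotteries as {outcome: probability}
    outcomes = set(p) | set(q)
    return {o: x * p.get(o, 0) + (1 - x) * q.get(o, 0) for o in outcomes}

def expected_utility(lottery, u):
    return sum(prob * u(o) for o, prob in lottery.items())

# Illustrative lotteries over dollar amounts, and a sample utility
p = {10: 0.5, 0: 0.5}
q = {1: 1.0}
r = {100: 0.1, 0: 0.9}
u = lambda w: w ** 0.5

# p is preferred to q under this utility ...
assert expected_utility(p, u) > expected_utility(q, u)
# ... and the ranking survives mixing with any third lottery r:
for x in (0.1, 0.5, 0.9):
    assert expected_utility(mix(x, p, r), u) > expected_utility(mix(x, q, r), u)
```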

  20. The substitution axiom is not innocuous. In fact, in an experiment, several of the people who contributed significantly to the theory we are studying violated it. • The experiment they were given is: • Experiment 5, part 1. • Choose between the following options. All prizes are in dollars. • Lottery A • 27,500 with probability 0.33 • 24,000 with probability 0.66 • 0 with probability 0.01 • Lottery B • 24,000 with probability 1 • Experiment 5, part 2 • Choose between the following options. All prizes are in dollars. • Lottery C • 27,500 with probability 0.33 • 0 with probability 0.67 • Lottery D • 24,000 with probability 0.34 • 0 with probability 0.66 • A majority chose B in part 1 and C in part 2. Are these answers consistent with the substitution axiom?

  21. In the slide's figure, A and B are rewritten as compound lotteries built from two simpler sub-lotteries x and y together with a common component, while C and D are built from the same x and y with a different common component. Since B ≻ A, the substitution axiom implies y ≻ x; but then D should be ≻ C. Choosing B in part 1 and C in part 2 therefore violates the substitution axiom.
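The inconsistency can also be verified algebraically: for any utility function u, EU(B) - EU(A) and EU(D) - EU(C) reduce to the same expression, 0.34·u(24000) - 0.33·u(27500) - 0.01·u(0), so B ≻ A forces D ≻ C. A quick numerical check (the sample utility functions below are arbitrary choices):

```python
import math

def eu(lottery, u):
    # Expected utility of a lottery given as (outcome, probability) pairs
    return sum(p * u(x) for x, p in lottery)

# The four lotteries from experiment 5
A = [(27500, 0.33), (24000, 0.66), (0, 0.01)]
B = [(24000, 1.0)]
C = [(27500, 0.33), (0, 0.67)]
D = [(24000, 0.34), (0, 0.66)]

for u in (math.sqrt, lambda x: x, lambda x: math.log(1 + x)):
    # EU(B) - EU(A) equals EU(D) - EU(C) for every utility function,
    # so choosing B in part 1 and C in part 2 is inconsistent.
    assert abs((eu(B, u) - eu(A, u)) - (eu(D, u) - eu(C, u))) < 1e-6
```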

  22. III (b) (Archimedean axiom): For any X and corresponding ∆(X), assume that the preferences of the individual over elements of ∆(X) obey the axioms I and II stated previously. Further, assume that given α, β, γ ∈ ∆(X), it is true that α ≻ β ≻ γ. Then there exist numbers m and n strictly between 0 and 1, such that • mα + (1-m)γ ≻ β ≻ nα + (1-n)γ • This is illustrated next

  23. Let the trees be as follows (to begin with, they are arbitrarily chosen; reconstructed from the slide): α pays $2 with probability 0.5 and $10 with probability 0.5; β pays $3, $7, and -$9, each with probability 1/3; γ pays $2 with probability 0.8 and -$8 with probability 0.2. Here α ≻ β ≻ γ.

  24. Then there exist numbers m and n (this property will not hold for all numbers, just some) such that the following preferences will hold true: mα + (1-m)γ ≻ β ≻ nα + (1-n)γ

  25. This implies that if we take a lot of the best and a little bit of the worst, then that is better than the middle. Likewise, if we take a lot of the worst and a little bit of the best, then that is worse than the middle. • These axioms may sound innocuous and obvious. But they are all we need to state the following theorem.
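For an expected-value maximizer the Archimedean property is easy to exhibit numerically. The lottery numbers below are one reading of the sample trees on slide 23, and m = 0.9, n = 0.01 are one pair that happens to work (both choices are assumptions for illustration):

```python
def eu(lottery, u=lambda x: x):
    # Expected (here: expected-value) utility of (outcome, probability) pairs
    return sum(p * u(x) for x, p in lottery)

def mix(x, best, worst):
    # Compound lottery: best with probability x, worst with probability 1-x
    return [(o, x * p) for o, p in best] + [(o, (1 - x) * p) for o, p in worst]

alpha = [(2, 0.5), (10, 0.5)]             # expected value 6
beta  = [(3, 1/3), (7, 1/3), (-9, 1/3)]   # expected value 1/3
gamma = [(2, 0.8), (-8, 0.2)]             # expected value 0

m, n = 0.9, 0.01
# A lot of the best plus a little of the worst beats the middle,
# and a little of the best plus a lot of the worst loses to it:
assert eu(mix(m, alpha, gamma)) > eu(beta) > eu(mix(n, alpha, gamma))
```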

  26. Theorem (informally stated): Given any X and ∆(X), and any elements α, β ∈ ∆(X), where α and β are probability distributions (or probability trees) over X, α ≻ β iff the expected utility of α is greater than the expected utility of β. And α ∼ β iff the expected utility of α equals the expected utility of β. • What is the expected utility of α or β? It is simply the expected value of the probability tree after replacing the outcomes with the utilities of the outcomes. • You might say, this is obvious – why bother with writing down the axioms? Here is why:
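The recipe in the theorem – replace each outcome by its utility, then take the expected value – is a one-line computation. The two lotteries and the square-root utility below are illustrative assumptions:

```python
import math

def expected_utility(lottery, u):
    # Replace outcomes by their utilities, then take the expected value
    return sum(p * u(x) for x, p in lottery)

alpha = [(100, 0.5), (0, 0.5)]  # expected value 50
beta  = [(49, 1.0)]             # expected value 49
u = math.sqrt                   # a sample concave utility

print(expected_utility(alpha, u))  # 5.0
print(expected_utility(beta, u))   # 7.0 -- beta preferred despite the lower EV
```

Note that the ranking by expected utility can reverse the ranking by expected value, which is exactly the point of the bar-game example that follows.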

  27. Expected utility is the cornerstone of decision theory. It is used everywhere: actuarial science, finance, computer science, economics, indeed, whenever a rational agent is presumed to be in a position to maximize utility, and the situation can be quantified. • The theorem states that whenever it is justified to use expected utility, what we are really assuming is that the agent is acting as if they obey these axioms. Thus, we have a handle on the implicit cognitive processes and value judgments that manifest themselves in such behavior. And wherever they do not end up being expected utility maximizers (such as in a lot of psychology experiments), their failure to use expected utility is directly attributable to their failure to obey one or more of the axioms we have written down.

  28. The student might wonder: why use expected utility instead of expected value? • To illustrate the necessity of using utility, we look at a particular bar game. • A popular bar is charging customers for the right to play a particular game. • In a computer simulation, a fair coin is tossed for the person playing the game. • The tossing continues until the first time a head occurs. If the first time this occurs is on the nth toss, the person playing is given $2^n

  29. Toss on which the first head occurs, and the corresponding payoff: 1 → $2, 2 → $4, 3 → $8, …, n → $2^n, and so on.

  30. How much would you be willing to pay to play this game? (What is the maximum possible cover charge you would be willing to pay?) • Small winnings are possible with high probability. Large winnings are possible with low probability. • A rational way to add up the worth of playing this game should be to weigh each reward by the probability of that reward occurring. • In other words, use expected value.

  31. And yet, the expected value is: • (1/2)(2) + (1/4)(4) + … + (1/2^n)(2^n) + … • = 1 + 1 + 1 + … = ∞ • Thus, expected value tells us we should be willing to pay any amount necessary to be able to play this game. • Clearly this is absurd. How do we get some insight into what might be going on in the typical behavior of people faced with such a hypothetical choice, who usually are willing to pay only a very small amount to play this game?
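A Monte Carlo sketch of the game (the seed and sample size are arbitrary choices) shows why the infinite expected value feels so detached from experience: sample averages stay modest because the enormous payoffs almost never occur.

```python
import random

def play(rng):
    # Toss until the first head; pay $2^n if it arrives on toss n
    n = 1
    while rng.random() < 0.5:  # tails with probability 1/2
        n += 1
    return 2 ** n

rng = random.Random(0)
payoffs = [play(rng) for _ in range(100_000)]
# Every payoff is a power of two, at least $2 ...
# ... but the sample mean stays small, growing only logarithmically
# in the number of plays despite the infinite expected value.
print(sum(payoffs) / len(payoffs))
```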

  32. If we assume that people value each additional dollar less than the previous one, we have a possible clue to their behavior. • In other words, every additional dollar still gives people a "utility", but this utility is less than what the previous dollar contributed. • The 100th dollar is worth more than the 101st, the millionth dollar is worth more than the million-and-first, and so on. • Here is what the utility function would then look like.

  33. [Figure: utility of $ plotted against $ – a curve rising but flattening as dollars increase.]

  34. One utility function that has such a property is u(x) = √x • Then the expected utility is what should be calculated, not the expected value. • This expected utility = (1/2)(√2) + (1/4)(√4) + (1/8)(√8) + … + (1/2^n)(√(2^n)) + … = (1/√2) + (1/√4) + (1/√8) + … + (1/√(2^n)) + … which is a geometric series with first term (1/√2) and common ratio also (1/√2)

  35. This series sums to (1/√2)/(1 - 1/√2) = √2 + 1 ≈ 2.41 – a finite expected utility, which yields a willingness to pay much more consistent with what people actually offer
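Closing the geometric series from the previous slide takes two lines; squaring the result converts the expected utility back into the dollar amount whose sure utility matches it (a hypothetical certainty equivalent under u(x) = √x):

```python
# Geometric series: sum over n of (1/2^n) * sqrt(2^n) = sum of (1/sqrt(2))^n
a = r = 2 ** -0.5               # first term and common ratio, both 1/sqrt(2)
expected_utility = a / (1 - r)  # closed form of the geometric series
certainty_equivalent = expected_utility ** 2  # dollars with matching sqrt-utility

print(round(expected_utility, 4))      # 2.4142
print(round(certainty_equivalent, 2))  # 5.83
```

So under this (illustrative) utility function, a player should pay at most about six dollars to enter – far closer to observed behavior than the infinite expected value.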

  36. There really is no one utility function that is able to explain human behavior in all contexts. • Sometimes it is one kind, sometimes another. Part of the goal of the cognitive sciences is to provide some idea of when one kind is appropriate, and which type that is. • It is a qualified goal, but perhaps better than assuming all kinds are equally likely at any possible juncture.
