
A Non-Probabilistic Generalization of the Agreement Theorem







  1. A Non-Probabilistic Generalization of the Agreement Theorem

  2. Knowledge

Ω – a state space.
Π – a partition of Ω.
Π(ω) – the element of Π that contains state ω.

At ω the agent knows Π(ω) ...and also every event E that contains Π(ω).

K(E) – the event that the agent knows E.
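The partition model of knowledge can be sketched in a few lines of Python. The state space, partition, and event below are an assumed toy example, not taken from the slides:

```python
OMEGA = frozenset(range(6))                                     # state space Ω
PI = [frozenset({0, 1}), frozenset({2, 3, 4}), frozenset({5})]  # partition Π of Ω

def cell(w):
    """Π(ω): the element of the partition that contains state ω."""
    return next(c for c in PI if w in c)

def K(E):
    """K(E) = {ω | Π(ω) ⊆ E}: the event that the agent knows E."""
    return frozenset(w for w in OMEGA if cell(w) <= E)

E = frozenset({0, 1, 2})
print(K(E))    # → frozenset({0, 1}): at ω = 0 or 1, Π(ω) = {0, 1} ⊆ E
```

At states 2, 3, 4 the agent only knows the coarser cell {2, 3, 4}, which is not contained in E, so she does not know E there.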

  3. Knowledge

Jaakko Hintikka, Knowledge and Belief – An Introduction to the Logic of the Two Notions.

K(E) = {ω | Π(ω) ⊆ E} – the event that the agent knows E.

K : 2^Ω → 2^Ω satisfies:
1. K(Ω) = Ω
2. K(E) ∩ K(F) = K(E ∩ F)
3. K(E) ⊆ E
4. ¬K(E) = K(¬K(E))

Conversely: if K satisfies 1–4, then there exists a partition Π such that K(E) = {ω | Π(ω) ⊆ E}.
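The four properties can be checked exhaustively on a small partitional model (the 4-state example below is assumed for illustration):

```python
from itertools import combinations

OMEGA = frozenset(range(4))
PI = [frozenset({0, 1}), frozenset({2}), frozenset({3})]

def K(E):
    """K(E) = {ω | Π(ω) ⊆ E} for the partition PI."""
    return frozenset(w for w in OMEGA if next(c for c in PI if w in c) <= E)

def neg(E):                        # complement ¬E = Ω \ E
    return OMEGA - E

# enumerate all 2^4 events and verify Hintikka's axioms 1-4
events = [frozenset(s) for r in range(5) for s in combinations(sorted(OMEGA), r)]
assert K(OMEGA) == OMEGA                                              # 1.
assert all(K(E) & K(F) == K(E & F) for E in events for F in events)   # 2.
assert all(K(E) <= E for E in events)                                 # 3.
assert all(neg(K(E)) == K(neg(K(E))) for E in events)                 # 4.
print("axioms 1-4 hold")
```

Axiom 4 holds because ¬K(E) is always a union of partition cells, hence an event the agent knows whenever it obtains.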

  4. Knowledge and Probability

P – a prior probability. Fix an event E.

The posterior probability of E:
d : Ω → R, d(ω) = P(E | Π(ω))

[d = p] – the event {ω | d(ω) = p}, e.g. [d = 2/3].

[Figure: a seven-state example with prior weights 2/14, 1/14, 4/14, 1/14, 2/14, 2/14, 2/14 and posteriors 0, 1/2, 2/3 on the partition cells.]
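The posterior decision function d(ω) = P(E | Π(ω)) can be computed directly. The prior weights below echo the slide's figure, but the partition and the event E are assumed for illustration:

```python
from fractions import Fraction as F

OMEGA = list(range(7))
P = {0: F(2, 14), 1: F(1, 14), 2: F(4, 14), 3: F(1, 14),
     4: F(2, 14), 5: F(2, 14), 6: F(2, 14)}         # prior probability P
PI = [{0, 1, 2}, {3, 4}, {5, 6}]                    # assumed partition Π
E = {0, 2, 4}                                       # fix an event E

def cell(w):
    return next(c for c in PI if w in c)

def d(w):
    """Posterior of E given the agent's information: P(E | Π(ω))."""
    c = cell(w)
    return sum(P[x] for x in c & E) / sum(P[x] for x in c)

def event_d_equals(p):
    """[d = p] = {ω | d(ω) = p}."""
    return {w for w in OMEGA if d(w) == p}

print(d(0))                      # → 6/7, i.e. (2/14 + 4/14) / (7/14)
print(event_d_equals(F(2, 3)))   # → {3, 4}
```

Note that d is constant on each cell of Π, so every event [d = p] is a union of cells.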

  5. Common Knowledge

K(E) := K1(E) ∩ K2(E)

Kc(E) = ∩_{n=1}^∞ K^n(E)

Πc – coarser than Π1 and Π2, and the finest among all such partitions: the common knowledge partition.
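The common knowledge partition (the meet of Π1 and Π2) can be computed as the connected components of the graph that links two states whenever they share a Π1-cell or a Π2-cell. The two partitions below are an assumed toy example:

```python
OMEGA = set(range(6))
PI1 = [{0, 1}, {2, 3}, {4, 5}]
PI2 = [{0}, {1, 2}, {3}, {4, 5}]

def cell(PI, w):
    return next(c for c in PI if w in c)

def meet(PA, PB, omega):
    """Finest partition coarser than both PA and PB."""
    cells, seen = [], set()
    for w in sorted(omega):
        if w in seen:
            continue
        comp, frontier = set(), {w}
        while frontier:                    # grow a component by alternating cells
            x = frontier.pop()
            comp.add(x)
            frontier |= (cell(PA, x) | cell(PB, x)) - comp
        cells.append(frozenset(comp))
        seen |= comp
    return cells

PIc = meet(PI1, PI2, OMEGA)
print(PIc)    # → [frozenset({0, 1, 2, 3}), frozenset({4, 5})]
```

An event is then common knowledge at ω exactly when it contains the Πc-cell of ω.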

  6. The probabilistic agreement theorem

P – a common prior probability.
d1(ω) = P(E | Π1(ω))
d2(ω) = P(E | Π2(ω))

If p1 ≠ p2, then Kc([d1 = p1] ∩ [d2 = p2]) = ∅.

It is impossible … to agree … to disagree.
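The theorem can be checked numerically on a toy model (all numbers below are assumed): with a common prior, the event that the two posteriors are common knowledge and differ is always empty.

```python
from fractions import Fraction as Fr

OMEGA = set(range(4))
P = {w: Fr(1, 4) for w in OMEGA}     # common prior, assumed uniform
PI1 = [{0, 1}, {2, 3}]               # agent 1's partition
PI2 = [{0, 2}, {1, 3}]               # agent 2's partition
E = {0}

def cell(PI, w):
    return next(c for c in PI if w in c)

def post(PI, w):
    c = cell(PI, w)
    return sum(P[x] for x in c & E) / sum(P[x] for x in c)

def Kc(F):
    # in this model the meet of PI1 and PI2 is the trivial partition {Ω},
    # so an event is common knowledge somewhere only if it is all of Ω
    return OMEGA if OMEGA <= F else set()

vals = {post(PI1, w) for w in OMEGA} | {post(PI2, w) for w in OMEGA}
disagree = [(p1, p2) for p1 in vals for p2 in vals if p1 != p2]
for p1, p2 in disagree:
    ev = {w for w in OMEGA if post(PI1, w) == p1 and post(PI2, w) == p2}
    assert Kc(ev) == set()           # impossible to agree to disagree
print("checked", len(disagree), "disagreement pairs")
```

Here the posteriors take the values 1/2 and 0, yet at no state can a disagreement be common knowledge.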

  7. The non-probabilistic agreement theorem

A – a set of decisions.
d1 : Ω → A
d2 : Ω → A

If δ1 ≠ δ2, then Kc([d1 = δ1] ∩ [d2 = δ2]) = ∅?

+ conditions on d1 and d2, satisfied by the posterior probability functions.

  8. Virtual decision functions

D : 2^Ω → A

A decision function di : Ω → A is derived from the virtual decision function D if di(ω) = D(Πi(ω)).

Interpretation: D(E) is the decision made if E were the information given to the agent.

Agents are like-minded if all individual decision functions are derived from the same virtual decision function.

Cave, J. (1983), Learning to agree, Economics Letters, 12.
Bacharach, M. (1985), Some extensions of a claim of Aumann in an axiomatic model of knowledge, J. Econom. Theory, 37(1).

  9. The Sure Thing Principle (STP)

A businessman contemplates buying a certain piece of property. He considers the outcome of the next presidential election relevant. So, to clarify the matter to himself, he asks whether he would buy if he knew that the Democratic candidate were going to win, and decides that he would. Similarly, he considers whether he would buy if he knew that the Republican candidate were going to win, and again finds that he would. Seeing that he would buy in either event, he decides that he should buy, even though he does not know which event obtains, or will obtain, as we would ordinarily say.

It is all too seldom that a decision can be arrived at on the basis of this principle, but, except possibly for the assumption of simple ordering, I know of no other extralogical principle governing decisions that finds such ready acceptance.

The sure-thing principle cannot appropriately be accepted as a postulate in the sense that P1 is, because it would introduce new undefined technical terms referring to knowledge and possibility that would render it mathematically useless without still more postulates governing these terms. It will be preferable to regard the principle as a loose one that suggests certain formal postulates well articulated with P1.

Savage, L. J. (1954), The Foundations of Statistics.

  10. Virtual decision functions

D : 2^Ω → A

A decision function di : Ω → A is derived from the virtual decision function D if di(ω) = D(Πi(ω)).

Agents are like-minded if all individual decision functions are derived from the same virtual decision function.

The virtual decision function D satisfies the STP when for any two disjoint events E, F: if D(E) = D(F) = δ, then D(E ∪ F) = δ.
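A natural example of a virtual decision function satisfying the STP is the posterior itself, D(E) = P(H | E) for a fixed event H: if two disjoint events yield the same posterior, their union (a weighted average of the two) yields it as well. The prior and H below are assumed; the check is exhaustive over all nonempty events of a 5-state space:

```python
from fractions import Fraction as Fr
from itertools import combinations

OMEGA = list(range(5))
P = {0: Fr(1, 10), 1: Fr(1, 10), 2: Fr(2, 10), 3: Fr(2, 10), 4: Fr(4, 10)}
H = {0, 2, 4}                       # fixed hypothesis event

def D(E):
    """Virtual decision: the posterior of H if E were the information given."""
    return sum(P[w] for w in E & H) / sum(P[w] for w in E)

# count STP violations: disjoint E, F with D(E) = D(F) but D(E ∪ F) different
events = [frozenset(s) for r in range(1, 6) for s in combinations(OMEGA, r)]
violations = sum(1 for E in events for F in events
                 if not (E & F) and D(E) == D(F) and D(E | F) != D(E))
print(violations)   # → 0: the posterior satisfies the Sure Thing Principle
```

This is exactly the sense in which the probabilistic agreement theorem is a special case of the decision-theoretic one.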

  11. An agreement theorem

• If the agents are like-minded with virtual decision function D, and
• D satisfies the STP,

then it is impossible to agree to disagree. That is, if the decisions of the agents are common knowledge, then they coincide.

  12. A detective story

A murder has been committed. To increase the chances of conviction, the chief of police puts two detectives on the case, with strict instructions to work independently, to exchange no information. The two, Alice and Bob, went to the same police school; so given the same clues, they would reach the same conclusions. But as they will work independently, they will, presumably, not get the same clues. At the end of thirty days, each is to decide whom to arrest (possibly nobody).

Like-mindedness

  13. A detective story

On the night before the thirtieth day, they happen to meet … and get to talking about the case. True to their instructions, they exchange no substantive information, no clues; but … feel that there is no harm in telling each other whom they plan to arrest. Thus, … it is common knowledge between them whom each will arrest.

Conclusion: They arrest the same people; and this, in spite of knowing nothing about each other's clues.

Curtain

  14. A detective story

Aumann, R. J. (1999), Notes on interactive epistemology, IJGT.
Aumann, R. J. (1988), Notes on interactive epistemology, unpublished.

  15. How do virtual decision functions fit in a partitional knowledge setup?

Syntactically, they involve knowledge that cannot be expressed in terms of the actual knowledge operators Ki. Semantically, at a given state ω the agent's knowledge is given by Πi(ω) and not by any other event. (Moses & Nachum (1990))

Is the STP captured? Is the agent more knowledgeable in ω than in ω′?

[Figure: two information cells Πi(ω) and Πi(ω′); at ω: Ki p and ¬Ki ¬Ki p; at ω′: ¬Ki p and Ki ¬Ki p.]

A remedy

  16. Comparison of knowledge

Personal, inter-state: i knows at ω … more than … i knows at ω′.
Interpersonal, intra-state: i knows at ω … more than … j knows at ω.

The event that i is more knowledgeable than j:
[i ≽ j] := ∩_E (¬Kj(E) ∪ Ki(E))

that is, ω ∈ [i ≽ j] iff ω ∈ ¬Kj(E) ∪ Ki(E) for each event E.
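The event [i ≽ j] can be computed by intersecting ¬Kj(E) ∪ Ki(E) over all events E. In the assumed toy model below, i's partition refines j's only at states 0 and 1, so that is exactly where i is more knowledgeable:

```python
from itertools import combinations

OMEGA = frozenset(range(4))
PI_i = [frozenset({0}), frozenset({1}), frozenset({2, 3})]   # agent i's partition
PI_j = [frozenset({0, 1}), frozenset({2}), frozenset({3})]   # agent j's partition

def K(PI, E):
    """Knowledge operator derived from the partition PI."""
    return frozenset(w for w in OMEGA if next(c for c in PI if w in c) <= E)

# [i ≽ j] = the intersection over all events E of ¬Kj(E) ∪ Ki(E)
events = [frozenset(s) for r in range(5) for s in combinations(sorted(OMEGA), r)]
ev = OMEGA
for E in events:
    ev &= (OMEGA - K(PI_j, E)) | K(PI_i, E)
print(ev)    # → frozenset({0, 1})
```

For partitional knowledge this intersection coincides with {ω | Πi(ω) ⊆ Πj(ω)}: i is more knowledgeable at ω exactly when her cell there is contained in j's.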

  17. Interpersonal Sure Thing Principle (ISTP)

The decision functions (d1, …, dn) satisfy the ISTP if for each i and j:
Kj([i ≽ j] ∩ [di = δ]) ⊆ [dj = δ]

Define [i = j] := [i ≽ j] ∩ [j ≽ i].

If the decision functions satisfy the ISTP, then the agents are like-minded: for each i and j, [i = j] ⊆ [di = dj].
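The ISTP condition can be verified exhaustively on a small model. Below, both decisions are posteriors of the same event under a common prior (all ingredients assumed for illustration), and the inclusion Kj([i ≽ j] ∩ [di = δ]) ⊆ [dj = δ] is checked for every pair of agents and every decision value:

```python
from fractions import Fraction as Fr

OMEGA = frozenset(range(4))
P = {w: Fr(1, 4) for w in OMEGA}    # common prior
H = {0, 3}                          # event whose posterior is the "decision"
PI = {1: [frozenset({0}), frozenset({1}), frozenset({2, 3})],
      2: [frozenset({0, 1}), frozenset({2, 3})]}

def cell(i, w):
    return next(c for c in PI[i] if w in c)

def K(i, E):
    return frozenset(w for w in OMEGA if cell(i, w) <= E)

def d(i, w):
    c = cell(i, w)
    return sum(P[x] for x in c & H) / sum(P[x] for x in c)

def more_knowledgeable(i, j):
    """[i ≽ j] = {ω | Πi(ω) ⊆ Πj(ω)} in a partitional model."""
    return frozenset(w for w in OMEGA if cell(i, w) <= cell(j, w))

ok = True
for i, j in [(1, 2), (2, 1)]:
    for delta in {d(i, w) for w in OMEGA}:
        di_eq = frozenset(w for w in OMEGA if d(i, w) == delta)
        dj_eq = frozenset(w for w in OMEGA if d(j, w) == delta)
        ok &= K(j, more_knowledgeable(i, j) & di_eq) <= dj_eq
print(ok)   # → True
```

Intuitively: whenever j knows that i is more knowledgeable and knows i's decision, j adopts it.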

  18. Expandability

The decision functions (d1, …, dn) on the model (Ω, K1, …, Kn) are ISTP-expandable if for each expansion (Ω, K1, …, Kn, Kn+1), where agent n+1 is an epistemic dummy, there exists a decision function dn+1 for agent n+1 such that (d1, …, dn, dn+1) satisfy the ISTP.

An agent is an epistemic dummy if it is common knowledge that each other agent is more knowledgeable.

Officer E. P. Dummy

  19. A non-probabilistic generalization of the agreement theorem

If the decision functions (d1, …, dn) on the model (Ω, K1, …, Kn) are ISTP-expandable, then the agents cannot agree to disagree.

  20. Why ISTP?

A ken is the list of all known sentences. The decision δ depends only on the ken.

Alice's ken is K; Binmore's ken is K′. Alice knows that Binmore is more knowledgeable and that Binmore's decision is δ.

K = K′ is consistent with K ⊆ K′: Binmore's decision is δ for each of the kens K′ consistent with Alice's knowledge.
