
  1. Common Knowledge and Handshakes in Computer-Mediated Cooperation Albert Esterline Dept. of Computer Science North Carolina A&T State University

  2. Introduction • Goal: • Model human and artificial agents formally and uniformly in systems where they collaborate • Gain insight into the conditions for coordination that such modeling offers.

  3. Start with a simple distributed game that displays a common interface. • Players collaborate to move proxy agents around a grid. • Requires making agreements—entails common knowledge. • Formal characterization and interpretation of common knowledge. • New common knowledge and simultaneous actions.

  4. Handshakes and process algebras • Process-algebraic agent abstraction • Must add account of common knowledge and deontic notions. • Co-presence heuristics for establishing common knowledge • Grounding (human-computer dialog) • Back to the simple distributed game • Virtual agents

  5. Simple Distributed Cooperative System • Users move proxy agents on a grid. • Each player participates at his own workstation. • But system ensures that grid state is displayed in exactly the same way to all players. • Each agent visits several goal cells specific to it in an unspecified order. • Single-cell moves are made in round-robin fashion. • Object: cooperate so as to minimize the total number of single-cell moves taken by all proxy agents to visit all their goal cells.

  6. Free space on the grid tends to occur in long corridors. • Need agreements to avoid lengthy backtracking when two agents travel in opposite directions on a corridor. • Interface has features that allow the players to suggest and agree on itineraries. • All interaction is by clicking—easy interpretation of communication

  7. A player can make a suggestion when it's his/her turn. • All players can negotiate. • Agreement must be unanimous. • An agreement obligates the player of the proxy agent in question. • It must be common knowledge.

  8. Three Approaches to Common Knowledge: Iterate Approach • Assume n agents named 1, 2, …, n; let G = {1, …, n}. • Introduce n modal operators Ki, 1 ≤ i ≤ n. • Kiφ is read "agent i knows that φ". • EGφ is read "everyone in group G knows that φ". • EG^k φ is the EG operator iterated k times. • CGφ: φ is common knowledge in group G.
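The iterate approach can be made concrete over a toy Kripke model. This is a minimal sketch, not from the talk: the worlds, accessibility relations, and proposition are all invented for illustration. A proposition is the set of worlds where it holds; agent i knows φ at a world iff φ holds at every world i considers possible, and EG is the intersection over the group.

```python
# Toy Kripke model for the iterate approach (invented illustration).
WORLDS = {"w1", "w2", "w3"}
# ACCESS[i][w]: worlds agent i considers possible at world w.
ACCESS = {
    1: {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}},
    2: {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
}
G = [1, 2]

def K(i, prop):
    """Worlds where agent i knows prop: every i-accessible world satisfies it."""
    return {w for w in WORLDS if ACCESS[i][w] <= prop}

def E(prop):
    """EG(prop): worlds where everyone in G knows prop."""
    result = set(WORLDS)
    for i in G:
        result &= K(i, prop)
    return result

def E_k(prop, k):
    """The EG operator iterated k times."""
    for _ in range(k):
        prop = E(prop)
    return prop

phi = {"w1", "w2"}           # phi holds at w1 and w2
print(E_k(phi, 1))           # everyone knows phi: {'w1'}
print(E_k(phi, 2))           # everyone knows that everyone knows phi: set()
```

Note how each extra nesting can strictly shrink the set of worlds; common knowledge CGφ would require membership at every depth.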

  9. Fixed-point Approach • View CGφ as a fixed-point of the function f(x) = EG(φ ∧ x). • Specifically (derivable in augmented S5), CGφ ↔ EG(φ ∧ CGφ)
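In a finite model the fixed-point approach is directly computable: CGφ is the greatest fixed-point of x ↦ EG(φ ∧ x), reached by iterating downward from the set of all worlds. A sketch over an invented toy model (not from the talk):

```python
# Invented toy model; C(phi) computes the greatest fixed-point of
# x -> E(phi & x) by iterating down from the set of all worlds.
WORLDS = {"w1", "w2", "w3"}
ACCESS = {
    "A": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
    "B": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}},
}

def E(prop):
    """Worlds where both agents know prop."""
    return {w for w in WORLDS
            if all(ACCESS[i][w] <= prop for i in ACCESS)}

def C(phi):
    """Greatest fixed-point of x -> E(phi & x); E is monotone and the
    model is finite, so the descending iteration stabilizes."""
    x = set(WORLDS)
    while True:
        nxt = E(phi & x)
        if nxt == x:
            return x
        x = nxt

print(C(set(WORLDS)))        # a tautology is common knowledge everywhere
print(C({"w1", "w2"}))       # here the fixed-point is empty: set()
```

The descending iteration mirrors the approximation sequence discussed later: each pass adds one more layer of "everyone knows".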

  10. Shared Situation Approach • Assume that A and B are rational. • We may infer common knowledge among A and B that φ if there is a situation σ such that • A and B know that situation σ holds. • σ indicates to both A and B that both A and B know that σ holds. • σ indicates to both A and B that φ.

  11. Barwise on the Three Approaches • Barwise contrasts the 3 approaches within his situation theory. • An infon is an (n+1)-tuple of a relation and n (minor) constituents. • Its polarity is 1 if the minor constituents are related as per the relation. • A set of infons is a situation (small world). • An infon with polarity 1 is a “fact” (of some situation, not others).

  12. Minor constituents may be situations, even one where the infon itself occurs. • Example: ⟨⟨H, pi, 3♣⟩⟩ (player i has the 3 of clubs); ⟨⟨S, pi, s⟩⟩ (player i sees situation s); s = {⟨⟨H, p1, 3♣⟩⟩, ⟨⟨S, p1, s⟩⟩, ⟨⟨S, p2, s⟩⟩}, the situation where player 1 has the 3 of clubs and this is publicly perceived by both player 1 and player 2

  13. Define classes INFON (of infons) and SIT (of situations) by mutual induction. • Consider the fixed-points of a monotone increasing operator Γ corresponding to this inductive definition. • If a standard set theory (e.g., ZFC) is used as the metatheory, there's a unique fixed-point. • But Barwise considers a variant of ZFC giving multiple fixed-points.

  14. Two intuitions about sets: I1. Sets are collections got by collecting together things already at hand to get something new (a set). I2. Sets arise from independently given structured situations by dropping the structure—“forgetful situations.” • I1 generates the cumulative hierarchy characteristic of, e.g., ZFC. • I2 gives a richer universe of sets.

  15. b ∈ s: b is a constituent of situation s (a minor constituent of some infon in it). • Reality is wellfounded iff every situation is wellfounded. • A situation is wellfounded iff it's neither circular nor ungroundable. • Situation s is circular if s ∈ … ∈ s. • s is ungroundable if there's an infinite sequence … ∈ s″ ∈ s′ ∈ s
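In a finite constituent graph, both circularity and ungroundability reduce to a membership cycle reachable from the situation, so wellfoundedness can be checked by cycle detection. A sketch with an invented graph (s2 is circular):

```python
# Invented finite constituent graph; in a finite graph a situation fails
# to be wellfounded exactly when a membership cycle is reachable from it.
CONSTITUENTS = {        # s -> situations that are constituents of s
    "s1": ["s2"],
    "s2": ["s2"],       # circular: s2 ∈ ... ∈ s2
    "s3": [],
}

def wellfounded(s, seen=()):
    """True iff no membership cycle is reachable from s."""
    if s in seen:
        return False    # found s ∈ ... ∈ s
    return all(wellfounded(t, seen + (s,)) for t in CONSTITUENTS[s])

print(wellfounded("s3"))    # True
print(wellfounded("s1"))    # False: s1's constituent s2 is circular
```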

  16. These notions also apply to sets. • The Axiom of Foundation of ZFC: • A set contains no infinitely decreasing membership sequence. • Rules out circular and ungroundable sets. • Barwise proves: The universe of sets is wellfounded iff the universe of situations is. • So we must replace the Axiom of Foundation of ZFC with something that • admits non-wellfounded sets and • supports unique construction of sets.

  17. Take Aczel's Anti-Foundation Axiom, AFA. • When this replaces the Axiom of Foundation in ZFC, get ZFC/AFA set theory. • A tagged graph is a directed graph where each node without children is tagged with an atom or ∅. • A decoration for a tagged graph is a recursive function δ mapping a node x to a set. • If x is childless, then δ(x) is its tag. • Otherwise δ(x) = {δ(y) : y is a child of x}. • A tagged graph G is wellfounded if the child-of relation on G is wellfounded (no circular or infinite directed paths).
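Decorating a wellfounded tagged graph is a straightforward recursion. A sketch with an invented three-node graph, using frozensets in the role of sets; for a cyclic graph this recursion would not terminate, and AFA is precisely what guarantees a unique decoration still exists in that case:

```python
# Invented wellfounded tagged graph; frozensets play the role of sets.
CHILDREN = {                 # acyclic child-of relation
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],                 # childless
}
TAGS = {"c": frozenset()}    # the childless node is tagged with ∅

def decorate(x):
    """delta(x) = tag if x is childless, else {delta(y) : y a child of x}."""
    if not CHILDREN[x]:
        return TAGS[x]
    return frozenset(decorate(y) for y in CHILDREN[x])

print(decorate("c"))   # ∅
print(decorate("b"))   # {∅}
print(decorate("a"))   # {∅, {∅}}
```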

  18. Without AFA, can prove that every wellfounded tagged graph has a unique decoration in the universe of sets. • AFA asserts that every tagged graph has a unique decoration.

  19. With ZFC/AFA as our metatheory, there are many fixed-points of Γ. • Least fixed-point gives collection of wellfounded infons and situations. • Interested in greatest fixed-point. • Includes all the non-wellfounded infons and situations as well.

  20. Want to compare iterate and fixed-point approaches. • Show how infon σ gives rise to a transfinite sequence of wellfounded infons σ^α, α a finite or infinite ordinal. • Requires a sequence s^α for any situation s as well. • These are sequences of approximations. • Members of a sequence approximating a non-wellfounded situation have increasingly deep nestings. • Corresponds to increasingly deep nestings of the "everyone knows that" operator.

  21. For circular infon σ, approximations σ^α get ever stronger but never as strong as σ. • Yet the totality of all approximations captures σ. • If each σ^α holds in a situation, so does σ. • The finite approximations of a circular infon are equivalent to it w.r.t. finite situations. • But this doesn't hold for infinite situations. • In this sense, iterate approach is weaker than fixed-point approach.

  22. In shared-situation approach, characterize common knowledge in terms of existence of a real situation meeting a certain condition. • Introduce a second-order language to express the existential conditions. • Variables range over situations, may be bound by existential quantifiers. • Semantics stated in terms of assignment of situations to free situation variables in a condition. • A model for a condition is an assignment making it true.

  23. Two conditions with the same free variables are strongly equivalent if they have the same models. • A condition entails a sequence of infons if that sequence is a list of facts, each holding in the situation assigned to a given variable in any assignment satisfying the condition. • Two conditions with the same free variables are informationally equivalent if they entail the same sequences of infons. • A model M of a condition θ is a minimal model of θ if each situation in M has no more information than the corresponding situation in any other model of θ. • A condition generally has several minimal models.

  24. Can be shown that 2 conditions are informationally equivalent iff they have a minimal model in common. • So, suppose we start with shared-situation approach, formulating a condition. • Situations in a minimal model of this condition give a handle for fixed-point approach. • But 2 conditions can be informationally equivalent and not strongly equivalent. • Conditions are more discriminating than the situations that are their minimal models. • 2 conditions may be different but equally correct ways a group comes to have shared information.

  25. Barwise’s Conclusions • Fixed-point approach is correct analysis of common knowledge. • Common knowledge generally arises via shared situations. • Iterate approach characterizes how common knowledge is used? • Progress through sequence of approximations corresponds to inferring ever deeper nestings of “everyone knows that”? • But doubt about a given inference blocks next step.

  26. Knowing that φ is stronger than carrying the info that φ. • Involves carrying the info in a way relating to ability to act. • Possible-worlds semantics of standard epistemic logic requires we know all logical consequences of what we know.

  27. Common knowledge (per fixed-point approach) is a necessary but not sufficient condition for action. • Useful only when arising in a straightforward shared situation. • A situation works not just by giving rise to common knowledge. • It also “provides a stage for maintaining common knowledge through the maintenance of a shared situation.” • The shared interface of our system is a common artifact in Devlin’s sense.

  28. Common Knowledge and Simultaneous Action • Agents A and B communicate over a channel. • It's common knowledge that • delivery of a message is guaranteed and • a message A sends to B arrives either immediately or after ε time units. • At time mS, A sends B a message μ that doesn't specify the sending time. • Let • mD denote the message arrival time and • sent(μ) the proposition that μ has been sent.

  29. KB sent(μ) is true at mD. • But A can't be sure that KB sent(μ) before mS+ε. • So KA KB sent(μ) isn't true until mS+ε. • And B knows this. • But μ may have been delivered immediately. • So B doesn't know that mS+ε time has elapsed until mD+ε. • So KB KA KB sent(μ) doesn't hold until mD+ε. • And A knows this. • But it may take ε time for μ to be delivered. • So mD could (for all A knows) be mS+ε. • So KA KB KA KB sent(μ) does not hold until mS+2ε.

  30. [Timeline figure: sending-side times mS, mS+ε, mS+2ε, mS+3ε, … aligned against arrival-side times mD, mD+ε, mD+2ε, mD+3ε, …]

  31. A straightforward induction shows that, for any natural number k, • before mS+kε, (KA KB)^k sent(μ) doesn't hold, while • at mS+kε it does. • Common knowledge requires infinitely deep nesting of KA KB. • So common knowledge of sent(μ) is never attained no matter how small ε.
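The pattern in the induction can be read off directly: the formulas KB sent(μ), KA KB sent(μ), KB KA KB sent(μ), … first hold at mD, mS+ε, mD+ε, mS+2ε, … A sketch with invented concrete numbers (mS = 0, mD = 3, ε = 5):

```python
# Earliest times at which the nested knowledge formulas hold, following
# the pattern on the slides; mS, mD, and eps below are invented numbers.

def earliest(depth, mS, mD, eps):
    """Earliest time the depth-fold nesting K_B, K_A K_B, K_B K_A K_B, ...
    applied to sent(mu) holds: odd depths land on mD + k*eps, even
    depths on mS + k*eps, where k = depth // 2."""
    k, odd = divmod(depth, 2)
    return (mD + k * eps) if odd else (mS + k * eps)

mS, mD, eps = 0, 3, 5        # message sent at 0, delivered at 3, bound 5
print([earliest(d, mS, mD, eps) for d in range(1, 6)])  # [3, 5, 8, 10, 13]
```

Each extra nesting costs another ε, so the infinitely nested formula, i.e., common knowledge of sent(μ), is never reached.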

  32. But suppose that • A attaches the sending time mS to μ, giving message μ′, and • A and B use the same global clock. • When B receives μ′, he knows it was sent at mS. • Because of the global clock, it is common knowledge at time mS+ε that it is mS+ε. • Since it is also common knowledge that a message received at mS+ε was sent at mS, CG sent(μ′), G = {A, B}, holds at mS+ε.

  33. Can model the global clock with another agent. • An action by any other agent is always simultaneous with one of this agent's actions (a "tick"). • More parsimoniously: • Require that an agent have a different state at each point in a run. • It always knows what time it is.

  34. A thesis of standard epistemic logic: CGφ ↔ EG CGφ. • So the transition • from φ not being common knowledge • to it being common knowledge must involve simultaneous changes in the knowledge of all agents in the group. • I.e., information becomes shared in the required sense at the same time for all agents sharing it. • No surprise—all the agents are involved in the circularity.

  35. Common Knowledge Inherent in Agreement and Coordination • Suppose that A and B agree to something φ. • For there to be an agreement, every party in group G = {A, B} must know there's agreement: agreeG(φ) → EG agreeG(φ) (**) • By idempotence of ∧, this is equivalent to agreeG(φ) → EG (agreeG(φ) ∧ agreeG(φ)) • But standard epistemic logic includes the inference rule From φ1 → EG (φ2 ∧ φ1) infer φ1 → CGφ2 • Substituting agreeG(φ) for both φ1 and φ2 in the rule and using (**) for the premise, we infer agreeG(φ) → CG agreeG(φ)

  36. To show formally that coordination implies common knowledge requires extensive development. • But the result is just as direct.

  37. Process Algebras and Handshakes • The standard epistemic-logic framework explicates the notion of simultaneous actions. • But the notion it provides of a joint action performed by n agents is simply: • an (n+1)-tuple whose components are the simultaneous actions of the environment and the n agents. • One thing critical to a joint action is: • the agents must time their contributions so that each contributes only when all are prepared.

  38. A handshake in process algebras is a joint communication action that happens only when both parties are prepared for it. • A process algebra (e.g., π-calculus, CCS, CSP) is a term algebra. • Terms denote processes. • Combinators apply to processes to form more complex processes. • Combinators typically include • alternative and parallel composition and • a prefix combinator that forms a process from a given process and a name.

  39. Names come in complementary pairs. • A prefix offers a handshake. • A handshake results in an action identified by the prefix of the selected alternative. • Resulting process consists of only the selected alternative with its prefix removed. • Parallel processes may handshake if they have alternatives with complementary prefixes. • Only way a process can evolve is as a result of handshakes.
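The handshake rule can be sketched with a toy encoding (the mini-syntax and process names here are invented, not any real process-algebra tool): a process is a list of alternatives, each a (prefix, continuation) pair, and a parallel pair may act only on complementary prefixes.

```python
# Invented mini-syntax: a process is a list of alternatives, each a
# (prefix, continuation) pair; "a" and "~a" are complementary names.

def complement(name):
    return name[1:] if name.startswith("~") else "~" + name

def handshake(p, q):
    """First possible handshake of p | q: returns (action, p', q'), where
    each side keeps only the selected alternative with its prefix removed,
    or None if no complementary prefixes are on offer."""
    for ap, p_next in p:
        for aq, q_next in q:
            if aq == complement(ap):
                return ap, p_next, q_next
    return None

P = [("a", "P1"), ("b", "P2")]   # P = a.P1 + b.P2
Q = [("~b", "Q1")]               # Q = ~b.Q1
print(handshake(P, Q))           # ('b', 'P2', 'Q1')
```

If neither side offers a complementary prefix, the composition is stuck: neither party can act until both are prepared, which is the synchronization point the slides emphasize.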

  40. Handshakes between parallel components can happen only when they have evolved to have alternatives beginning with complementary prefixes. • In this sense, they can handshake only when both are prepared. • Handshakes synchronize the behavior of components • They thereby coordinate behavior. • Handshakes are like speech acts. • Contemporary analysis of face-to-face conversation emphasizes the active role of addressees (e.g., nods).

  41. Process-Algebraic Agent Abstraction • Some of the combinators (and their syntactic patterns) persist through transitions— • e.g., parallel composition and restriction (or hiding) combinators. • Other combinators (e.g., alternative composition and prefix) don't thus persist. • Processes corresponding to agents persist through transitions. • So a multiagent system is modeled as • a parallel composition. • Each component models an agent and involves a recursively defined process identifier.

  42. This view of agents is simpler than that of standard epistemic logic. • Handshakes are primitives, so no need for assumptions about agents’ states or a global clock to support joint actions. • State of an agent given simply by the current form of the term denoting it. • A process algebra is more concrete than epistemic logic. • A logic lets us assert abstract properties of an agent or system of agents. • Using a process algebra, we specify the behavior of agents.

  43. What's Missing in the Process-Algebraic Agent Abstraction • Tempting to view process-algebraic terms as possible plans an agent or a person may undertake. • But the notion that humans execute predefined plans in interacting with technology or with each other has been heavily criticized by ethnomethodologists. • They emphasize how situated behavior is determined in an ongoing way.

  44. Certain speech acts occur only to establish common knowledge. • Nearly all contributions in a conversation advance our common knowledge. • So what future actions might be appropriate is determined as a joint project unfolds. • And patterns of joint communication actions have nothing to say about behavior that deviates from them.

  45. What was missing in our agent abstraction was the persisting effects of speech acts. • Within a conversation speech acts can establish common knowledge. • Also, certain speech acts have deontic effects, such as obligations, prohibitions, and permissions.

  46. Deontic Logic • Modal operators of standard deontic logic: • Oφ, "φ is obligatory", • Pφ, "φ is permitted", and • Fφ, "φ is forbidden (or prohibited)". • Pφ ↔ ¬O¬φ ↔ ¬Fφ • Development driven by certain paradoxes that arise when there's a conflict between • the logical status (valid, satisfiable, etc.) of a deontic-logic formula and • the intuitive understanding of the natural-language reading of the formula.
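The standard semantics behind these operators can be sketched over a toy model with a designated set of "ideal" worlds (the worlds and propositions are invented): Oφ holds iff φ is true at every ideal world, Pφ iff at some, Fφ iff at none, which makes the equivalence Pφ ↔ ¬O¬φ ↔ ¬Fφ immediate.

```python
# Invented toy model: IDEAL is the set of deontically ideal worlds;
# a proposition is the set of worlds where it holds.

IDEAL = {"i1", "i2"}
ALL = {"i1", "i2", "w3"}

def O(phi):   # obligatory: phi holds at every ideal world
    return IDEAL <= phi

def P(phi):   # permitted: phi holds at some ideal world
    return bool(IDEAL & phi)

def F(phi):   # forbidden: phi holds at no ideal world
    return not (IDEAL & phi)

# P(phi) <-> not O(not phi) <-> not F(phi), for every proposition phi:
for phi in ({"i1"}, {"i1", "i2"}, {"w3"}, set()):
    assert P(phi) == (not O(ALL - phi)) == (not F(phi))
print("equivalences hold")
```

Note that Oφ says nothing about the actual (possibly non-ideal) world, which is the distinction between the ideal and the actual drawn two slides below.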

  47. Dyadic deontic logic—e.g., • O(ψ/φ): "Given φ, it is obligatory that ψ." • Special obligations, permissions, and prohibitions—e.g., • OAφ: "It is obligatory for A that φ." • Directed obligations, etc.—e.g., • OA,Bφ: "A is obligated to B that φ." • Deontic operators derived from operators that make action explicit—e.g., • "A sees to it that φ" • operators of dynamic logic.

  48. Deontic notions are appropriate whenever we distinguish between • what is ideal (obligatory) and • what is actual. • Reject Oφ → φ as a thesis. • Obligation may be violated.

  49. Some application areas of computer science: • formal specification • Modern software is so complex, we must cover non-ideal cases too in specifications. • fault tolerance • Non-ideal behavior introduces obligations to correct the situation. • database integrity constraints—distinguish between • deontic constraints: may be violated • necessity constraints: largely analytically true.