
The probabilistic asynchronous π-calculus




  1. The probabilistic asynchronous π-calculus. Catuscia Palamidessi, INRIA Futurs, France. Imperial College

  2. Motivations
• Main motivation behind the development of the probabilistic asynchronous π-calculus: to use it as an intermediate language for the fully distributed implementation of the π-calculus with mixed choice
Plan of the talk
• The π-calculus: operational semantics, expressive power
• The probabilistic asynchronous π-calculus: probabilistic automata, operational semantics, distributed implementation
• Encoding the π-calculus into the probabilistic asynchronous π-calculus

  3. The π-calculus
• Proposed by [Milner, Parrow, Walker '92] as a formal language to reason about concurrent systems
• Concurrent: several processes running in parallel
• Asynchronous: every process proceeds at its own speed
• Synchronous communication: aka handshaking
• Mixed guarded choice: input and output guards, like in CSP and CCS. The implementation of guarded choice is aka the binary interaction problem
• Dynamic generation of communication channels
• Scope extrusion: a channel name can be communicated and its scope extended to include the recipient
[Figure omitted: processes P, Q, R and channels x, y, z illustrating scope extrusion]

  4. π: the π-calculus (with mixed choice)
Syntax:
g ::= x(y) | x̄y | τ        prefixes (input, output, silent)
P ::= Σᵢ gᵢ.Pᵢ              mixed guarded choice
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name
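For instance (an illustrative term, not from the slides): x(y).A + z̄w.B is a mixed guarded choice that offers either an input on x or an output of w on z, and (x)(x(y).A | x̄z.B) runs an input and an output on the restricted name x in parallel.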

  5. Operational semantics
• Transition system: P --α--> Q
• Rules

Choice:
    Σᵢ gᵢ.Pᵢ --gᵢ--> Pᵢ

Open:
    P --x̄y--> P′
    __________________
    (y) P --x̄(y)--> P′

  6. Operational semantics
• Rules (continued)

Com:
    P --x(y)--> P′        Q --x̄z--> Q′
    ______________________________________
    P | Q --τ--> P′[z/y] | Q′

Close:
    P --x(y)--> P′        Q --x̄(z)--> Q′
    ______________________________________
    P | Q --τ--> (z)(P′[z/y] | Q′)

Par:
    P --g--> P′
    ______________________    fn(Q) and bn(g) disjoint
    Q | P --g--> Q | P′
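A small worked instance of Com, using illustrative process names A and B: since x(y).A --x(y)--> A and x̄z.B --x̄z--> B, the Com rule gives x(y).A | x̄z.B --τ--> A[z/y] | B, i.e. the received name z replaces the bound name y in the continuation of the input.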

  7. Features which make π very expressive, and cause difficulty in its distributed implementation
• (Mixed) guarded choice
  • Symmetric solutions to certain distributed problems involving distributed agreement
• Link mobility
  • Network reconfiguration
  • It allows expressing higher-order computation (e.g. the λ-calculus) in a natural way
  • In combination with guarded choice, it allows solving more distributed problems than those solvable by guarded choice alone

  8. The expressive power of π
[Figure omitted: two processes P and Q connected by channels x and y]
• Example of distributed agreement: the leader election problem in a symmetric network
• Two symmetric processes must elect one of them as the leader
  • In a finite amount of time
  • The two processes must agree
• A symmetric and fully distributed solution in π, using guarded choice:
    x.P_wins + ȳ.P_loses  |  y.Q_wins + x̄.Q_loses
  which evolves either to P_loses | Q_wins or to P_wins | Q_loses

  9. Example of a network where the leader election problem cannot be solved by guarded choice alone
For the network shown on this slide there is no (fully distributed and symmetric) solution in CCS or in CSP

  10. A solution to the leader election problem in π
[Figure omitted: the network of the previous slide, with each node labelled winner or loser after the election]

  11. Approaches to the implementation of guarded choice in the literature
• [Parrow and Sjodin 92], [Knabe 93], [Tsai and Bagrodia 94]: asymmetric solutions based on introducing an order on processes
• Other asymmetric solutions based on differentiating the initial state
• Plenty of centralized solutions
• [Joung and Smolka 98] proposed a randomized solution to the multiway interaction problem, but it works only under an assumption of partial synchrony among processes
• Our solution is the first that is fully distributed, symmetric, and uses no synchrony assumptions

  12. State of the art in π
• Formalisms able to express distributed agreement are difficult to implement in a distributed fashion
• For this reason, the field has evolved towards variants of π which retain mobility, but have no guarded choice
• One example of such a variant is the asynchronous π-calculus proposed by [Honda-Tokoro '91, Boudol '92] (asynchronous = asynchronous communication)

  13. πa: the asynchronous π (version of [Amadio, Castellani, Sangiorgi '97])
Syntax:
g ::= x(y) | τ              prefixes
P ::= Σᵢ gᵢ.Pᵢ              input guarded choice
    | x̄y                    output action
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name

  14. Characteristics of πa
• Asynchronous communication:
  • we can't write a continuation after an output, i.e. no x̄y.P, but only x̄y | P
  • so P will proceed without waiting for the actual delivery of the message
• Input-guarded choice: only input prefixes are allowed in a choice.
  Note: the original asynchronous π-calculus did not contain a choice construct. However, the version presented here was shown by [Nestmann and Pierce '96] to be equivalent to the original asynchronous π-calculus
• It can be implemented in a fully distributed fashion (see for instance the PiLib project of Odersky's group)

  15. Towards a fully distributed implementation of π
• The results of the previous pages show that a fully distributed implementation of π must necessarily be randomized
• A two-step approach:

    π  --[[ ]]-->  probabilistic asynchronous π  --<< >>-->  distributed machine

• Advantage: the correctness proof is easier, since [[ ]] (which is the difficult part of the implementation) is between two similar languages

  16. πpa: the probabilistic asynchronous π
Syntax:
g ::= x(y) | τ              prefixes
P ::= Σᵢ pᵢ gᵢ.Pᵢ           probabilistic input guarded choice, with Σᵢ pᵢ = 1
    | x̄y                    output action
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name

  17. The operational semantics of πpa
[Figure omitted: a probabilistic automaton whose transitions are grouped and labelled with probabilities such as 1/2, 1/3 and 2/3]
• Based on the probabilistic automata of Segala and Lynch
• Distinction between
  • nondeterministic behavior (choice of the scheduler), and
  • probabilistic behavior (choice of the process)
• Scheduling policy: the scheduler chooses the group of transitions
• Execution: the process chooses probabilistically the transition within the group
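A small Java sketch of this two-level choice (purely illustrative; the class and names below are invented for this example and are not part of the formal model). The scheduler's nondeterministic selection of a group is simulated here by a random pick only so that the sketch is executable; the transition within the chosen group is then sampled according to its probabilities.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch of one step of a probabilistic automaton:
// the scheduler picks a group of transitions, the process picks within it.
class ProbabilisticStep {
    record Transition(String label, double prob, String target) {}

    static Transition step(List<List<Transition>> groups, Random rnd) {
        // Nondeterministic choice of the scheduler (simulated by a random pick here).
        List<Transition> group = groups.get(rnd.nextInt(groups.size()));

        // Probabilistic choice of the process within the chosen group.
        double u = rnd.nextDouble(), acc = 0.0;
        for (Transition t : group) {
            acc += t.prob;
            if (u < acc) return t;
        }
        return group.get(group.size() - 1);  // guard against rounding error
    }
}
```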

  18. The operational semantics of πpa
• Representation of a group of transitions:
    P { --gᵢ--> pᵢ Pᵢ }ᵢ
  (P performs gᵢ and becomes Pᵢ with probability pᵢ)
• Rules

Choice:
    Σᵢ pᵢ gᵢ.Pᵢ { --gᵢ--> pᵢ Pᵢ }ᵢ

Par:
    P { --gᵢ--> pᵢ Pᵢ }ᵢ
    ______________________________
    Q | P { --gᵢ--> pᵢ (Q | Pᵢ) }ᵢ

  19. The operational semantics of πpa
• Rules (continued)

Com:
    P { --xᵢ(yᵢ)--> pᵢ Pᵢ }ᵢ        Q { --x̄z--> 1 Q′ }
    ______________________________________________________________________________
    P | Q { --τ--> pᵢ (Pᵢ[z/yᵢ] | Q′) }_{xᵢ=x}  ∪  { --xᵢ(yᵢ)--> pᵢ (Pᵢ | Q) }_{xᵢ≠x}

Res:
    P { --xᵢ(yᵢ)--> pᵢ Pᵢ }ᵢ
    ______________________________________    (the qᵢ are the pᵢ renormalized)
    (x) P { --xᵢ(yᵢ)--> qᵢ (x) Pᵢ }_{xᵢ≠x}
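A small worked instance of Res, with illustrative probabilities: if
    P { --x(y)--> 1/2 P₁ , --z(w)--> 1/4 P₂ , --z(v)--> 1/4 P₃ }
then restricting x removes the transition on x and renormalizes the remaining probabilities (1/4 out of a remaining mass of 1/2 becomes 1/2), giving
    (x) P { --z(w)--> 1/2 (x) P₂ , --z(v)--> 1/2 (x) P₃ }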

  20. Implementation of πpa
• Compilation in Java:  << >> : πpa --> Java
• Distributed:  << P | Q >>  =  << P >>.start(); << Q >>.start();
• Compositional:  << P op Q >>  =  << P >> jop << Q >>  for every operator op
• Channels are one-position buffers with test-and-set (synchronized) methods for input and output
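A minimal Java sketch of such a channel (the class and method names are invented for this illustration, not the actual compiler output): a one-position buffer whose input and output are synchronized test-and-set operations that either succeed immediately or report failure.

```java
// Hypothetical one-position channel buffer, illustrating the idea on this slide.
public class Channel<T> {
    private T slot = null;                    // the single buffer position

    // Output: test-and-set style deposit; fails if the slot is occupied.
    public synchronized boolean trySend(T msg) {
        if (slot != null) return false;
        slot = msg;
        return true;
    }

    // Input: test-and-set style removal; fails if the slot is empty.
    public synchronized T tryReceive() {
        if (slot == null) return null;
        T msg = slot;
        slot = null;
        return msg;
    }
}
```

Under such a scheme, << P | Q >> would start << P >> and << Q >> as separate threads, each retrying (or suspending) when trySend or tryReceive fails.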

  21. Encoding π into πpa
• [[ ]] : π --> πpa
• Fully distributed:  [[ P | Q ]] = [[ P ]] | [[ Q ]]
• Preserves the communication structure:  [[ Pσ ]] = [[ P ]]σ
• Compositional:  [[ P op Q ]] = C_op[ [[ P ]], [[ Q ]] ]
• Correct wrt a notion of probabilistic testing semantics:  P must O  iff  [[ P ]] must [[ O ]] with probability 1

  22. Encoding π into πpa
[Figure omitted: choices P, Q, R, S decomposed into branch processes Pᵢ, Qᵢ, Rᵢ, R′ᵢ, Sᵢ sharing locks f]
• Idea (based on an idea in [Nestmann '97]):
  • Every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f
  • The input processes compete for acquiring both their own lock and the lock of the partner
  • The input process which succeeds first establishes the communication; the other alternatives are discarded
• The problem is reduced to a generalized dining philosophers problem, where each fork (lock) can be adjacent to more than two philosophers
• Further, we can reduce the generalized DP problem to the classic case, and then apply the randomized algorithm of Lehmann and Rabin for the dining philosophers

  23. Dining Philosophers: the classic case
Each fork is shared by exactly two philosophers

  24. The algorithm of Lehmann and Rabin
1. think;
2. choose probabilistically first_fork in {left, right};
3. if not taken(first_fork) then take(first_fork) else goto 3;
4. if not taken(second_fork) then take(second_fork) else { release(first_fork); goto 2 };
5. eat;
6. release(second_fork);
7. release(first_fork);
8. goto 1
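A Java sketch of one philosopher's loop under this algorithm (Fork, tryTake and release are names invented for this illustration; the actual encoding is of course expressed in πpa, not in Java):

```java
import java.util.Random;

// A fork that can be tested and taken atomically, and released.
class Fork {
    private boolean taken = false;
    synchronized boolean tryTake() { if (taken) return false; taken = true; return true; }
    synchronized void release()   { taken = false; }
}

// One philosopher running the Lehmann-Rabin loop of the slide above.
class Philosopher implements Runnable {
    private final Fork left, right;
    private final Random rnd = new Random();

    Philosopher(Fork left, Fork right) { this.left = left; this.right = right; }

    public void run() {
        while (true) {
            think();                                             // step 1
            boolean hasEaten = false;
            while (!hasEaten) {
                Fork first  = rnd.nextBoolean() ? left : right;  // step 2: probabilistic choice
                Fork second = (first == left) ? right : left;
                while (!first.tryTake()) { /* step 3: busy-wait ("goto 3") */ }
                if (second.tryTake()) {                          // step 4: second fork was free
                    eat();                                       // step 5
                    second.release();                            // step 6
                    first.release();                             // step 7
                    hasEaten = true;                             // step 8: back to think
                } else {
                    first.release();                             // step 4 failed: back to step 2
                }
            }
        }
    }

    private void think() { /* ... */ }
    private void eat()   { /* ... */ }
}
```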

  25. Dining Philosophers: the generalized case
• Each fork can be shared by more than two philosophers
• The classical algorithm of Lehmann and Rabin, as it is, does not work in the generalized case. However, we can transform the problem into the classic case
• Transformation into the classic case: each fork is initially associated with a token. Each philosopher needs to acquire a token in order to participate in the competition. The competing philosophers determine a set of subgraphs in which each subgraph contains at most one cycle

  26. Generalized philosophers
• Another problem we had to face: the solution of Lehmann and Rabin works only for fair schedulers, while πpa does not provide any guarantee of fairness
• Fortunately, it turns out that fairness is required only in order to avoid a busy-waiting livelock at instruction 3. If we replace busy-waiting with suspension, then the algorithm works for any scheduler
• This result was also achieved independently by [Duflot, Fribourg, Picarronny 02]

  27. The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness
Modified algorithm (step 3 suspends instead of busy-waiting):
1. think;
2. choose probabilistically first_fork in {left, right};
3. if not taken(first_fork) then take(first_fork) else wait;
4. if not taken(second_fork) then take(second_fork) else { release(first_fork); goto 2 };
5. eat;
6. release(second_fork);
7. release(first_fork);
8. goto 1

Original algorithm, for comparison:
1. think;
2. choose probabilistically first_fork in {left, right};
3. if not taken(first_fork) then take(first_fork) else goto 3;
4. if not taken(second_fork) then take(second_fork) else { release(first_fork); goto 2 };
5. eat;
6. release(second_fork);
7. release(first_fork);
8. goto 1
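A sketch of how the modification can be realized with Java monitors (again with invented names): instead of spinning at step 3, the philosopher suspends in wait() on the fork and is woken by notifyAll() when the fork is released, so no fairness assumption on the scheduler is needed to avoid the livelock.

```java
// Fork with a suspending take(), replacing the busy-wait of step 3.
class SuspendingFork {
    private boolean taken = false;

    // Step 3 with suspension: block until the fork is free, then take it.
    synchronized void take() throws InterruptedException {
        while (taken) wait();          // suspend instead of busy-waiting
        taken = true;
    }

    // Still a non-blocking attempt for the second fork (step 4).
    synchronized boolean tryTake() {
        if (taken) return false;
        taken = true;
        return true;
    }

    synchronized void release() {
        taken = false;
        notifyAll();                   // wake any philosopher suspended in take()
    }
}
```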

  28. Conclusion
• We have provided an encoding of the π-calculus into a probabilistic version of its asynchronous fragment which is
  • fully distributed
  • compositional
  • correct wrt a notion of testing semantics
• Advantages:
  • high-level solutions to distributed algorithms
  • easier to prove correct (no reasoning about randomization required)
