Distributed Algorithms: Asynch R/W SM Computability

Presentation Transcript


  1. Distributed Algorithms: Asynch R/W SM Computability Eli Gafni, UCLA Summer Course, CRI, Haifa U, Israel

  2. Computational Models • Serial von Neumann • Turing Machines, RAM • PRAM • Interleaving • ND Turing Machine - there exists an accepting path • Asynch Distributed - for all paths

  3. Message-Passing Interleaving • N processors, each with its own program of the type “upon receiving message X in state Z do Y”. • Y may involve sending messages to certain other processors. • Special message type ``start’’ that can be received (or acted upon) only as the first message.

  4. MP Cont’ed • ``Configuration’’ - messages in the ``ether’’ destined to processors + states of processors. • ``Next configuration’’ - choose a message from the ``ether’’ and deliver it to its destination processor; run the ``upon’’ procedure, change the processor’s state, and place the new messages in the ``ether’’. • Initial configuration - a Start message in the ``ether’’ for each processor; processors in their initial states.
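
A minimal interleaving simulator for this model, sketched in Python (the names Processor, upon and run are illustrative, not from the slides): a configuration is the set of messages in the ``ether’’ plus the processors’ states, and one step delivers an arbitrarily chosen message and runs the ``upon’’ procedure.

```python
import random

class Processor:
    """Program of the form: upon receiving message msg in state s, do upon(s, msg)."""
    def __init__(self, pid, upon):
        self.pid = pid
        self.state = "initial"
        self.upon = upon                       # upon(state, msg) -> (new_state, [(dest, msg), ...])

def run(processors, seed=0):
    procs = {p.pid: p for p in processors}
    # Initial configuration: a ``start'' message in the ether for each processor.
    ether = [(p.pid, "start") for p in processors]
    rng = random.Random(seed)
    while ether:
        # Next configuration: pick any pending message and deliver it.
        dest, msg = ether.pop(rng.randrange(len(ether)))
        p = procs[dest]
        p.state, sent = p.upon(p.state, msg)   # run the ``upon'' procedure, change state
        ether.extend(sent)                     # newly sent messages join the ether
```

Every choice of which pending message to deliver next is a legal schedule; an asynchronous algorithm must be correct for all such schedules, which is the ``for all paths’’ quantifier of slide 2.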

  5. Full-Information • The history of a processor determines its state. • History contains more info than ``state.’’ • Processors’ programs are ``common knowledge.’’ • W.l.o.g., when interested in computability rather than efficiency, a message is the full history of the sender.

  6. Computability (Problems/Tasks) • Each processor eventually halts with an ``output’’. • Some n-tuples of outputs are valid, some are not. • To prevent ``default’’ solutions, the output tuples are parametrized by the set of processors that ``started.’’

  7. Fail-Free Model • Every message eventually delivered, and all processors eventually respond. • All problems are solvable: • Receive/send message to/from all • Determine the set of ``starters’’ • Apply default output to that set of starters
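
A sketch of this generic fail-free solution for a single processor, in Python; send_to_all, receive_from_all and default_output are assumed helpers standing in for the fail-free communication primitives and the task's default output.

```python
def solve(my_id, my_input, send_to_all, receive_from_all, default_output):
    # 1. Announce participation and input to everyone.
    send_to_all(("started", my_id, my_input))
    # 2. Fail-free: every message is delivered and every processor responds,
    #    so we can wait for an answer from all and learn who started.
    replies = receive_from_all()
    starters = {pid for (tag, pid, _) in replies if tag == "started"}
    # 3. Output this processor's component of the default valid tuple
    #    associated with that set of starters.
    return default_output(starters, my_id)
```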

  8. Faults • Communication: ``channel’’ goes down. • Processor: Fail-Stop, Byzantine. • Many other faults possible: Discuss.

  9. Communication • Synchronous: proceed by ``rounds’’. Every message sent at the beginning of a round is received by the end of the round. • Assume 2 processors / 2 channels, and one of the channels may fail to deliver.

  10. Consensus • Procs vote ``attack’’/``retreat.’’ • If both vote the same and there is no communication failure, both eventually decide their vote. • Else they both decide either ``attack’’ or ``retreat.’’

  11. 2 procs synch cons w/ single channel failure in a round: Impossible. • Cannot do it with no communication. • Cannot do it in 1 round: • A_1 that does not receive must decide A, since its view is compatible with a run against A_2 whose own view is that of the no-fault run (where A_2 decides A). • Similarly R_2 must decide R. • Say w.l.o.g. the no-fault run R_2/A_1 decides R. • Fail the message to A_1 to get a contradiction.

  12. Impossibility Cont’ed • In general, A_1 communicating with A_2 has to decide A whether it receives or does not receive the last message. • So it must have committed to A at the end of the previous round, and so has A_2; hence the last round is not necessary. Contradiction.

  13. N>2 • No alg even for N>2 when N-1 channels may fail - one of the 2 procs emulates N-1 procs and the other emulates the remaining one. • Alg for fewer than N-1 channel failures: • Send input to all, receive • Send all that was received to all, receive • Repeat until you have heard the input of all: decide. • (prove liveness…)
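
A round-based simulation sketch of this flooding algorithm in Python; cut_channels_per_round is an assumed adversary callback returning the directed channels that fail in a round (fewer than N-1 of them). Every processor forwards everything it has heard in every round and decides a fixed function of the inputs, here the minimum, once it has heard from everyone.

```python
def flooding_consensus(inputs, cut_channels_per_round, max_rounds=100):
    n = len(inputs)
    known = [{i: inputs[i]} for i in range(n)]         # inputs each proc has heard of
    decided = [None] * n
    for r in range(max_rounds):
        cut = cut_channels_per_round(r)                # set of failed (sender, receiver) pairs
        incoming = [dict() for _ in range(n)]
        for s in range(n):                             # send all that was received to all
            for d in range(n):
                if s != d and (s, d) not in cut:
                    incoming[d].update(known[s])
        for d in range(n):
            known[d].update(incoming[d])
            if decided[d] is None and len(known[d]) == n:
                decided[d] = min(known[d].values())    # heard input of all: decide
        if all(v is not None for v in decided):
            break
    return decided
```

With fewer than N-1 channels down, the surviving channel graph of the complete graph stays connected, so every input eventually floods to everyone; and since every processor decides the same function of the full input vector, they agree.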

  14. From Synch to ``Asynch’’ • What if the faults are fewer than N-1 in each round, but the set of faults may ``jump around’’? • The correctness of the previous alg does not depend on the faults being static.

  15. Proc fail-stop failure • What if synch, no comm failure, but procs may fail-stop? • What if at most t can fail-stop? • In t+1 rounds there is a ``clean’’ round, after which the view of all inputs is shared by all.
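
A simulation sketch of the (t+1)-round argument in Python; crash_schedule is an assumed adversary callback mapping a round to the processors that crash in it, together with the receivers each one still reaches before crashing (at most t processors crash in total).

```python
def crash_consensus(inputs, crash_schedule, t):
    n = len(inputs)
    known = [{(i, inputs[i])} for i in range(n)]       # (proc, input) pairs heard so far
    alive = set(range(n))
    for r in range(t + 1):
        crashes = crash_schedule(r)                    # {crashing proc: receivers it still reaches}
        msgs = [set() for _ in range(n)]
        for s in alive:
            reach = crashes.get(s, set(range(n)))      # a crashing proc reaches only some receivers
            for d in reach:
                msgs[d] |= known[s]
        alive -= set(crashes)
        for d in alive:
            known[d] |= msgs[d]
    # In t+1 rounds at least one round is crash-free (``clean''); after it every
    # surviving proc holds the same view, so deciding e.g. the minimum known input
    # yields agreement.
    return {d: min(v for (_, v) in known[d]) for d in alive}
```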

  16. What if a single failure that ``Jumps’’? • In each round some or all messages from a SINGLE proc may not be delivered. • If N=2, obviously cannot do cons, since this can emulate a comm failure. • What if N=3? ---- suspense.

  17. SWMR Async Shared Memory • n procs p_1,…,p_n and n cells C_1,…,C_n. • Proc p_k writes to C_k and can read all cells, one at a time. • Configuration: each proc is at a state that enables either writing its cell or reading some cell. • A proc is chosen; it reads or writes and changes state; then another is chosen, etc.
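
A sketch of this model in Python, using generators as the processors’ programs (run_swmr and the yielded ('write', v)/('read', j) protocol are invented for the illustration): a scheduler repeatedly picks a processor and lets it take exactly one read or write step.

```python
import random

def run_swmr(programs, seed=0):
    n = len(programs)
    cells = [None] * n                         # cell C_k is written only by proc p_k
    procs = [programs[k](k) for k in range(n)] # each program yields ('write', v) or ('read', j)
    reply = [None] * n                         # value handed back on the proc's next step
    live = set(range(n))
    rng = random.Random(seed)
    while live:
        k = rng.choice(sorted(live))           # arbitrary (adversarial) choice of who moves
        try:
            op = procs[k].send(reply[k])       # resume p_k; it performs one operation
        except StopIteration:
            live.discard(k)
            continue
        if op[0] == 'write':
            cells[k] = op[1]                   # single writer: p_k writes its own cell
            reply[k] = None
        else:
            reply[k] = cells[op[1]]            # read one cell; returned at p_k's next step
```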

  18. Properties of SM • If all procs write and then read all cells in arbitrary order, then each p_k returns the set of procs S_k it has seen write: • What is the property of the sets that makes them a realization of an SM execution? • At least one sees all; continue inductively • Fat Immediate Snapshots
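
Using the run_swmr sketch above, the ``write and then read all cells’’ pattern of this slide looks as follows (a hypothetical 3-processor example). In any interleaving, the processor whose write is scheduled last sees everyone, which gives the ``at least one sees all’’ property.

```python
def write_then_collect(k, n=3):
    yield ('write', k)                    # announce myself in my own cell
    seen = set()
    for j in range(n):                    # then read all cells, one at a time
        v = yield ('read', j)
        if v is not None:
            seen.add(v)
    print(f"S_{k} = {sorted(seen)}")      # the set of procs p_k has seen write

run_swmr([write_then_collect] * 3)        # every run prints sets S_k; at least one S_k is everybody
```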

  19. Immediate Snapshots • p_k \in S_k (a proc reads itself) • S_k \subseteq S_j or vice versa • If p_j \in S_k then S_j \subseteq S_k • Can you implement IS in SWMR SM? • Atomic Snap is IS without the last property • IS \subseteq AS
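
The three conditions can be stated as a small Python check on a family of sets (a sketch; is_immediate_snapshot is an invented name, and dropping the last test gives the atomic-snapshot condition).

```python
def is_immediate_snapshot(S):
    """S maps each proc k to the set S_k of procs it saw write."""
    for k, Sk in S.items():
        if k not in Sk:                          # p_k in S_k: a proc reads itself
            return False
        for j, Sj in S.items():
            if not (Sk <= Sj or Sj <= Sk):       # S_k and S_j are comparable
                return False
            if j in Sk and not (Sj <= Sk):       # immediacy: p_j in S_k implies S_j subset of S_k
                return False
    return True

# Example: p_1 and p_2 take their snapshot "together", p_3 strictly after them.
print(is_immediate_snapshot({1: {1, 2}, 2: {1, 2}, 3: {1, 2, 3}}))   # True
```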

  20. How do you take an AS in the middle of a computation rather than as a task? • Use sequence numbers with each new value written; double scan until success. • AS as a model - the read operation returns the whole memory rather than a single cell.
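
A sketch of the double-scan idea in Python (read_cell/write_cell are assumed accessors to the SWMR cells, each of which now holds a (sequence number, value) pair): a scan repeats a full collect twice and succeeds once the two collects are identical, meaning no write intervened.

```python
def update(write_cell, seq, value):
    """Write a new value together with a fresh sequence number to my own cell."""
    seq += 1
    write_cell((seq, value))
    return seq

def scan(read_cell, n):
    """Double collect until two successive collects agree; the common collect is
    then a valid snapshot of all n cells. (This simple version can retry forever
    under continual writes; the full atomic-snapshot construction adds helping,
    which is beyond this sketch.)"""
    while True:
        first = [read_cell(j) for j in range(n)]       # read all n cells, one at a time
        second = [read_cell(j) for j in range(n)]
        if first == second:                            # no write intervened: success
            return second                              # list of (seq, value) pairs
```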
