CPSC 668 Distributed Algorithms and Systems


  1. CPSC 668 Distributed Algorithms and Systems, Fall 2006, Prof. Jennifer Welch. Set 11: Asynchronous Consensus

  2. Impossibility of Asynchronous Consensus • Show impossible in read/write shared memory with n processors and n - 1 faults • prove directly: not hard since so many faults • implies there is no 2-proc algorithm for 1 fault • Show impossible in r/w shared memory with n processors and 1 fault. Use reduction: • use a hypothetical n-proc algorithm for 1 fault as a subroutine to design a 2-proc algorithm for 1 fault • Show impossible in message passing with n processors and 1 fault. Use reduction: • use a hypothetical message passing algorithm as a subroutine to design a shared memory algorithm

  3. Modeling Asynchronous Systems with Crash Failures • Let f be the maximum number of faulty processors. • For both shared memory (SM) and message passing (MP): all but f of the processors must take an infinite number of steps in an admissible execution. • For MP: also require that all messages sent must eventually be delivered, except for those sent by a faulty processor in its last step, which might or might not be delivered.

  4. Wait-Free Algorithms • An algorithm for n processors is wait-free if it can tolerate n - 1 failures. • Intuition is that a nonfaulty processor does not wait for other processors to do something: it cannot, because it might be the only processor left alive. • First result is to show that there is no wait-free consensus algorithm in the asynchronous r/w shared memory model.

  5. Impossibility of Wait-Free Consensus • Suppose in contradiction there is an n-processor algorithm for n - 1 faults in the asynchronous read/write shared memory model. • Proof is similar to that showing f + 1 rounds are necessary in the synchronous message passing model. [Diagram: a chain of configurations, starting from a bivalent initial configuration, in which every configuration is bivalent.]

  6. Modified Notion of Bivalence • In the synchronous round lower bound proof, valency referred to which decisions are reachable in failure-sparse admissible executions. • For this proof, we are concerned with which decisions are reachable in any execution, as long as it is admissible (for the asynchronous shared memory model with up to n - 1 failures).

  7. Univalent Similarity Lemma (5.15): If C1 and C2 are both univalent and they are similar w.r.t. pi, then they have the same valency. Proof: Say C1 is v-valent and C2 is w-valent. Because the algorithm is wait-free, there is a pi-only execution from C1 in which pi decides v. Since C1 and C2 are similar w.r.t. pi, the same pi-only execution applied from C2 also leads pi to decide v; but C2 is w-valent, so v = w.

  8. Bivalent Initial Configuration Lemma (5.16): There exists a bivalent initial configuration. Proof is a simpler version of what we did for the synchronous f + 1 round lower bound proof.

  9. Critical Processors • If C is bivalent and i(C) (the result of pi taking one step from C) is univalent, then pi is critical in C. Lemma (5.17): If C is bivalent, then at least one processor is not critical in C. Proof: Suppose in contradiction all processors are critical. [Diagram: C is bivalent; a step by pi leads to i(C), which is 0-valent, and a step by pj leads to j(C), which is 1-valent.] Rest of proof is a case analysis of what pi and pj do in their two steps.

  10. Critical Processors • Case 1: pi and pj access different registers. Then their steps commute: j(i(C)) = i(j(C)). But the first is reachable from the 0-valent i(C) and the second from the 1-valent j(C), so the same configuration would have two different valencies: contradiction. • Case 2: pi and pj read the same register. Reads also commute, so the same proof applies. [Diagram: from bivalent C, taking pi's step then pj's step reaches the same configuration as pj's step then pi's step.]

  11. Critical Processors • Case 3: pi writes to a register R and pj reads from R. Compare i(C), which is 0-valent, with i(j(C)), which is 1-valent because j(C) is. Since pj's read of R changes no register and no other processor's state, i(C) and i(j(C)) look the same to pi. Two univalent configurations that are similar w.r.t. pi but have different valencies contradict the Univalent Similarity Lemma. [Diagram: from C, pi writes R giving i(C), 0-valent; alternatively pj reads R giving j(C), 1-valent, and then pi writes R giving i(j(C)), 1-valent.]

  12. Critical Processors • Case 4: What if pi and pj both write to the same shared variable? • Can "assume away" the problem by assuming we only have single-writer shared variables. • Or, can do a similar proof for this case.

  13. Finishing the Impossibility Proof • Create an admissible execution C0,i1,C1,i2,C2,… in which all configurations are bivalent. • contradicts termination requirement • Start with bivalent initial configuration. • Suppose we have bivalent Ck. To get bivalent Ck+1: • Let pik+1 be a proc. that is not critical in Ck. • Let Ck+1 be ik+1(Ck).

  14. Impossibility of 1-Resilient Consensus Even if the ratio of nonfaulty processors becomes overwhelming, consensus still cannot be solved in asynchronous SM (with read/write registers). • Assume there exists an algorithm A for n processors and 1 failure. • Use A as a subroutine to design an algorithm A' for 2 processors and 1 failure. • We just showed such an A' cannot exist. • Thus A cannot exist.

  15. Features of Assumed Algorithm Assume in contradiction that there exists algorithm A for processors q0, q1, …, qn-1 that solves consensus with 1 failure. W.l.o.g., assume A satisfies: • each qj has a single shared register Rj which it writes and others read; initially empty • code of each qj alternates reads and writes, beginning with a read • each write step of qj writes qj's entire current local state into Rj

  16. Idea of Simulation Construct algorithm A' for 2 processors, p0 and p1, that solves consensus with 1 failure: • Each pi goes through the qj's in round-robin order, trying to simulate their steps. • When pi begins the simulation for each qj, it uses its own input as the input for qj. • If pi ever simulates a decision step by some qj, it decides the same value. • How do p0 and p1 keep their simulations consistent? • Steps are grouped into pairs, a read and the following write. • p0 and p1 need to "agree" on the value of each qj's local state after each pair of steps by qj.

  17. Keeping Simulations Consistent • For qj's k-th pair, p0 and p1 each have a Flag shared variable and a Suggest shared variable. • Assume qj's (k-1)-st pair has been computed. • pi calculates its suggestion for qj's state after the k-th pair (more details shortly). • pi checks if p1-i has made a suggestion for this state of qj. • If not, then pi sets its Flag to 1. • If so, then pi sets its Flag to 0.

  18. Winner for a Pair Interpretation of the Flag shared variables for qj's k-th pair (see the sketch below): • If pi's Flag is 1, then pi is the winner. • If both are 0, then consider p0 the winner. • If one is 0 and the other is not yet set, then the winner is not determined. • If neither is set yet, then the winner is not determined. • Not possible for both to be 1. (Convince yourself.) In situations 1 and 2, qj's k-th pair is said to be computed.
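
A minimal sketch of this winner rule as a pure function, assuming Flag values are encoded as 1, 0, or None (not yet set); the function name and encoding are illustrative, not from the course materials.

    def pair_winner(flag0, flag1):
        # Interpret the two Flag variables for q_j's k-th pair.
        if flag0 == 1:
            return 0                 # p_0's Flag is 1: p_0 is the winner
        if flag1 == 1:
            return 1                 # p_1's Flag is 1: p_1 is the winner
        if flag0 == 0 and flag1 == 0:
            return 0                 # both 0: p_0 is considered the winner
        return None                  # otherwise the winner (and the pair) is not yet determined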

  19. Calculating Suggestion for Simulated State How does pi calculate its suggestion for qj's state after qj's k-th pair? • Get qj's state after qj's (k-1)-st pair: • determine the winner for that pair • get the winner's suggestion (in the winner's Suggest variable for that pair) • (if k - 1 = 0 then use qj's initial state with pi's input) • Determine (from the state just obtained) which processor's register (say qr's) qj is to read in its k-th pair

  20. Calculating Suggestion for Simulated State (continued; see the sketch below) • Get the current value of qr's register: • find the largest m such that qr's m-th pair has been computed • get the winning suggestion for that pair • Apply qj's transition function to qj's state after the (k-1)-st pair and the current value of qr's register to get qj's state after its k-th pair.
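
Putting slides 18-20 together, a rough sketch of how pi might compute its suggestion. Every helper here (winner_suggestion, computed_pairs, reads_register, transition, initial_state) and the EMPTY constant are hypothetical stand-ins for the machinery the slides describe, not the course's actual code.

    def suggestion(j, k, my_input, winner_suggestion, computed_pairs,
                   reads_register, transition, initial_state, EMPTY):
        # q_j's state after its (k-1)-st pair
        if k == 1:
            state = initial_state(j, my_input)      # use p_i's own input as q_j's input
        else:
            state = winner_suggestion(j, k - 1)     # the winner's Suggest value for pair k-1
        r = reads_register(state)                   # index of the register q_j reads in its k-th pair
        done = computed_pairs(r)                    # indices m of q_r's already-computed pairs
        value = winner_suggestion(r, max(done)) if done else EMPTY
        return transition(state, value)             # q_j's state after its k-th pair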

  21. Correctness of 2-Proc. Algorithm • Each admissible execution of A' (by p0 and p1) simulates an execution of A (by q0 through qn-1). • If pi observes some qj make a decision in the simulated execution, then pi makes the same decision. • If the simulated execution of A were to be admissible, then it would satisfy • termination: eventually the qj's decide • agreement: the qj's make the same decision • validity: if all qj's have input v, then the decision is v. Remember that the simulated execution's inputs are based on the real execution's inputs. • Then the actual execution of A' would be correct.

  22. Simulated Execution is Admissible • Must show that at most one of the qj's fails to take an infinite number of steps. • How can a simulation of a qj get stuck? • If p0 or p1 crashes while simulating qj's k-th pair for some k. For instance: • p0 writes a suggestion and crashes • p1 sees p0's suggestion and writes 0 to its Flag • p0's Flag remains unset forever • So qj's k-th pair is never computed. • But the crash of one pi can only block the simulation of one qj! • In the example, p1 will be able to progress on its own, simulating the steps of all processors other than qj

  23. Impossibility of Consensus in Message Passing Strategy: • Assume there exists an n-processor 1-resilient consensus algorithm A for the asynchronous message passing model. • Use A as a subroutine to design an n-processor 1-resilient consensus algorithm A' for asynchronous shared memory (with read/write variables). • Previous result shows A' cannot exist. • Thus A cannot exist.

  24. Impossibility of Consensus in MP Idea of A': • Simulate message channels with read/write registers. • Then run algorithm A on top of these simulated channels. To simulate the channel from pi to pj (sketch below): • Use one register to hold the sequence of messages sent over the channel • pi "sends" a message m by writing the old value of the register with m appended • pj "receives" a message by reading the register and checking for new values at the end
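
A concrete illustration of that channel simulation, as a Python sketch under the stated idea (the class and field names are made up; the register field stands in for an atomic single-writer register that pi writes and pj reads):

    class SimulatedChannel:
        """Channel from p_i to p_j built from one single-writer register."""
        def __init__(self):
            self.register = ()      # p_i's register: the whole sequence of messages sent so far
            self.read_so_far = 0    # local to p_j: how many messages it has already consumed

        def send(self, m):
            # p_i writes the old register value with m appended (one write step)
            self.register = self.register + (m,)

        def receive(self):
            # p_j reads the register and returns any messages it has not seen yet
            contents = self.register            # one read step
            new = contents[self.read_so_far:]
            self.read_so_far = len(contents)
            return list(new)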

  25. Randomized Consensus • To get around the negative results for asynchronous consensus, we can: • weaken the termination condition: nonfaulty processors must decide with some nonzero probability • keep the same agreement and validity conditions • This version of consensus is solvable, in both shared memory and message passing!

  26. Motivation for Adversary • Even without randomization, in an asynchronous system there are many executions of an algorithm, even when the inputs are fixed, depending on when processors take steps, when they fail, and when messages are delivered. • To be able to calculate probabilities, we need to separate out variation due to causes other than the random choices • Group executions of interest so that each group differs only in the random choices • Perform probabilistic calculations separately for each group and then combine somehow

  27. Adversary • Concept used to account for all variability other than the random choices is that of the "adversary". • An adversary is a function that takes an execution prefix and returns the next event to occur (see the toy example below). • The adversary must obey the admissibility conditions of the relevant model • Other conditions might be put on the adversary (e.g., what information it can observe, how much computational power it has)
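
A toy example of the adversary-as-a-function view, purely illustrative (the event encoding and names are invented here): given the execution prefix, the adversary names the next event.

    def make_round_robin_adversary(n):
        # One concrete adversary: schedule p_0, p_1, ..., p_{n-1} in turn and crash no one.
        def adversary(prefix):
            return ("step", len(prefix) % n)    # next event: a step by this processor
        return adversary

    # usage: adv = make_round_robin_adversary(3); adv([]) == ("step", 0)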

  28. Probabilistic Definitions • An execution of a specific algorithm, exec(A,C0,R), is uniquely determined by • an adversary A • an initial configuration C0 • a collection R of random numbers • Given a predicate P on executions and a fixed adversary A and initial config C0, Pr[P] is the probability of {R : exec(A,C0,R) satisfies P} • Let T be a random variable (e.g., running time). For a fixed A and C0, the expected value of T is ∑_x x · Pr[T = x], where the sum ranges over the values x of T

  29. Probabilistic Definitions • We define the expected value of a complexity measure to be the maximum over all admissible adversaries A and initial configurations C0, of the expected value for that particular A and C0. • So this is a "worst-case" average: worst possible adversary (pattern of asynchrony and failures) and initial configuration, averaging over the random choices.

  30. A Randomized Consensus Algorithm • Works in message passing model • Tolerates f crash failures • more complicated version handles Byzantine failures • Works in asynchronous case • circumvents asynchronous impossibility result • Requires n > 2f • this is optimal

  31. Consensus Algorithm Code for processor pi — initially r = 1 and prefer = pi's input:

    while true do
        votes := get-core(<VOTE,prefer,r>)
        let v be the majority of the phase r votes
        if all phase r votes are v then decide v
        outcomes := get-core(<OUTCOME,v,r>)
        if all phase r outcome values received are w then prefer := w
        else prefer := common-coin()
        r := r + 1

    (get-core ensures a high level of consistency between what different procs get; common-coin uses randomization to imitate tossing a coin)
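
A compact Python rendering of this loop may help; it is a sketch only (deciding is modeled as returning, get_core is assumed to return an n-entry array with None for missing entries, and common_coin takes no arguments, matching the properties on the next slides):

    def consensus(my_input, get_core, common_coin):
        r, prefer = 1, my_input
        while True:
            # first exchange of phase r: collect votes (drop empty array entries)
            votes = [val for tag, val, rr in filter(None, get_core(("VOTE", prefer, r)))]
            v = max(set(votes), key=votes.count)     # a majority value among the phase-r votes
            if all(val == v for val in votes):
                return v                             # decide v
            # second exchange of phase r: collect outcome values
            outcomes = [val for tag, val, rr in filter(None, get_core(("OUTCOME", v, r)))]
            if len(set(outcomes)) == 1:
                prefer = outcomes[0]                 # every outcome received was the same w
            else:
                prefer = common_coin()               # otherwise fall back on the common coin
            r += 1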

  32. Properties of Get-Core • Executed by n processors, at most f of which can crash. • Input parameter is a value supplied by the calling processor. • Return parameter is an n-array, one entry per processor. • Every nonfaulty processor's call to get-core returns. • There exists a set C of more than n/2 processors such that every array returned by a call to get-core contains the input parameter supplied by every processor in C.

  33. Properties of Common-Coin • Subroutine implements an f-resilient common coin with bias ρ > 0. • Executed by n processors, at most f of which can crash. • No input parameter. • Return parameter is a 0 or 1. • Every nonfaulty processor's call to common-coin returns. • Probability that all the values returned are 0 is at least ρ. • Probability that all the values returned are 1 is at least ρ.

  34. Correctness of Consensus Algorithm • For now, don't worry about how to implement get-core and common-coin. • Assuming we have subroutines with the desired properties, we'll show • validity • agreement • probabilistic termination (and expected running time)

  35. Unanimity Lemma Lemma (14.6): If all procs. that reach phase r prefer v, then all nonfaulty procs decide v by phase r. Proof: • Since all prefer v, all call get-core with a vote for v • Thus every phase r vote returned by get-core is for v • Thus all nonfaulty procs. decide v

  36. Validity • If all processors have input v, then all prefer v in phase 1. • By the Unanimity Lemma, all nonfaulty processors decide v by phase 1.

  37. Agreement Claim: If pi decides v in phase r, then all nonfaulty procs. decide v by phase r + 1. Proof: Suppose r is the earliest phase in which any proc. decides. • pi decides v in phase r => • all its phase r votes are v => • pi's call to get-core(<VOTE,prefer,r>) returns more than n/2 non-nil entries and all are <VOTE,v,r> => • all entries for procs. in C are <VOTE,v,r>

  38. Agreement • Thus every pj receives more than n/2 <VOTE,v,r> entries => • pj does not decide a value other than v in phase r • Also, if pj calls get-core a second time in phase r, it uses input <OUTCOME,v,r> => • Every pk gets only <OUTCOME,v,r> as a result of its second call to get-core in phase r => • pk sets its preference to v at the end of phase r => • in phase r + 1, all prefer v and the Unanimity Lemma implies they all decide v in that phase.

  39. Termination Lemma (14.10): The probability that all nonfaulty procs decide by any particular phase is at least ρ. Proof: Case 1: All nonfaulty procs set their preference in that phase using common-coin. • Prob. that all get the same value is at least 2ρ (ρ for all getting 0 plus ρ for all getting 1), by the property of common-coin • Then apply the Unanimity Lemma (14.6)

  40. Termination Case 2: Some processor does not set its preference using common-coin. • All procs. that don't use common-coin to set their preference for that phase have the same preference (convince yourself) • Probability that the common-coin subroutine returns that same value to all procs. that use it is at least ρ. • Then apply the Unanimity Lemma (14.6).

  41. Expected Number of Phases • What is the expected number of phases until all nonfaulty processors have decided? • Probability of all deciding in any given phase is at least ρ. • Probability of not having finished within the first i - 1 phases is at most (1 - ρ)^(i-1). • So the number of phases is dominated by a geometric random variable whose expected value is 1/ρ.
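
In symbols, writing T for the number of phases and ρ for the per-phase bound from Lemma 14.10, since each successive phase fails to finish everyone with probability at most 1 - ρ:

    E[T] \;=\; \sum_{i \ge 0} \Pr[T > i] \;\le\; \sum_{i \ge 0} (1-\rho)^{i} \;=\; \frac{1}{\rho},
    \qquad\text{so } \rho = \tfrac{1}{4} \text{ gives } E[T] \le 4 .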

  42. Implementing Get-Core • Difficulty in achieving consistency of messages is due to the combination of asynchrony and the possibility of crashes: • a processor can only wait to receive n - f messages • the first n - f messages that pi gets might not be from the same set of processors as pj's first n - f messages • Overcome this by exchanging messages three times

  43. Get-Core First exchange ("round"): • send argument value to all • wait for n - f first round msgs Second exchange ("round"): • send values received in first round to all • wait for n - f second round msgs • merge data from second round msgs Third exchange ("round"): • send values received in second round to all • wait for n - f third round msgs • merge data from third round msgs • return result
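
A rough Python sketch of these three exchanges. The broadcast/collect helpers, which send to all processors and block until a given number of messages of a given exchange have arrived, are assumptions of this sketch rather than the course's API; the factory returns a one-argument get_core as used in the consensus sketch above.

    def make_get_core(i, n, f, broadcast, collect):
        def merge(a, b):
            # keep every array entry that either view has filled in
            return [x if x is not None else y for x, y in zip(a, b)]

        def get_core(my_value):
            view = [None] * n
            view[i] = my_value
            for exchange in (1, 2, 3):
                broadcast(exchange, view)                  # exchange 1 effectively sends own value
                for received in collect(exchange, n - f):  # wait for n - f messages of this exchange
                    view = merge(view, received)
            return view

        return get_core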

  44. Analysis of Get-Core • Lemmas 14.4 and 14.5 show that it satisfies the desired properties (termination and consistency). • Time is O(1) (using the standard way of measuring time in an asynchronous system)

  45. Implementing Common-Coin A simple algorithm: • Each processor independently outputs 0 with probability 1/2 and 1 with probability 1/2. • Bias ρ = 1/2^n • Advantage: simple, no communication • Disadvantage: expected number of phases until termination is 2^n

  46. A Common Coin with Constant Bias

    c := 0 with probability 1/n, 1 with probability 1 - 1/n
    coins := get-core(<FLIP,c>)
    if there exists j s.t. coins[j] = 0 then return 0 else return 1
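
The same coin as a Python sketch (get_core is the one-argument subroutine sketched above; the factory form and names are this sketch's assumptions, not the course's code):

    import random

    def make_common_coin(n, get_core):
        def common_coin():
            c = 0 if random.random() < 1.0 / n else 1   # 0 with probability 1/n, else 1
            coins = get_core(("FLIP", c))
            # return 0 if any processor's recorded flip is 0, otherwise return 1
            if any(entry is not None and entry[1] == 0 for entry in coins):
                return 0
            return 1
        return common_coin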

  47. Correctness of Common Coin Lemma (14.12): Common-coin implements a (n/2 - 1)-resilient coin with bias 1/4. Proof: Fix any admissible adversary that is weak (cannot see the contents of messages) and any initial configuration. All probabilities are calculated with respect to them.

  48. Probability of Flipping 1 • Probability that all nonfaulty processors get 1 for the common coin is at least the probability that they all set c to 1. • This probability is at least (1 - 1/n)^n • When n = 2, this function is 1/4 • This function increases up to its limit of 1/e. • Thus the probability that all nonfaulty processors get 1 is at least 1/4.
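
The arithmetic behind the last two bullets, for n ≥ 2 (a short check added here, not from the slides):

    \left(1-\tfrac{1}{n}\right)^{n} \text{ is nondecreasing in } n,\qquad
    \left(1-\tfrac{1}{2}\right)^{2} = \tfrac{1}{4},\qquad
    \lim_{n\to\infty}\left(1-\tfrac{1}{n}\right)^{n} = \tfrac{1}{e},
    \qquad\text{hence}\qquad
    \Pr[\text{all flip } 1] \;\ge\; \left(1-\tfrac{1}{n}\right)^{n} \;\ge\; \tfrac{1}{4}
    \quad\text{for all } n \ge 2.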

  49. Probability of Flipping 0 • Let C be the set of core processors (whose existence is guaranteed by the properties of get-core). • If any processor in C sets c to 0, then all the nonfaulty processors will observe this 0 after executing get-core, and thus return 0. • Probability that at least one processor in C sets c to 0 is 1 - (1 - 1/n)^|C|. • This expression is at least 1/4 (by arithmetic).
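
One way to do that arithmetic, using |C| > n/2 from the get-core property and the bound 1 - 1/n ≤ e^{-1/n} (this derivation is added here, not from the slides):

    1-\left(1-\tfrac{1}{n}\right)^{|C|}
    \;\ge\; 1-\left(1-\tfrac{1}{n}\right)^{n/2}
    \;\ge\; 1-e^{-1/2}
    \;\approx\; 0.39
    \;\ge\; \tfrac{1}{4}.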

  50. Summary of Randomized Consensus Algorithm • Using the given implementations for get-core and common-coin, we get a randomized consensus algorithm for f crash failures with • n > 2f • O(1) expected time complexity • expected number of phases is 4 • time per phase is O(1)
