
Achieving Byzantine Agreement and Broadcast against Rational Adversaries



  1. Achieving Byzantine Agreement and Broadcast against Rational Adversaries Adam Groce Aishwarya Thiruvengadam Ateeq Sharfuddin CMSC 858F: Algorithmic Game Theory University of Maryland, College Park

  2. Overview • Byzantine agreement and broadcast are central primitives in distributed computing and cryptography. • The original paper (LSP1982) proves that these primitives can be achieved only if fewer than one-third of the players are adversarial (t < n/3).

  3. Our Work • We take a game-theoretic approach to this problem. • We analyze rational adversaries with preferences on the outputs of the honest players. • We define utilities rational adversaries might have. • We then show that with these utilities, Byzantine agreement is possible with fewer than half the players being adversaries. • We also show that broadcast is possible for any number of adversaries with these utilities.

  4. The Byzantine Generals Problem • Introduced in 1980/1982 by Leslie Lamport, Robert Shostak, and Marshall Pease. • Originally used to describe distributed computation among fallible processors in an abstract manner. • Has been applied to fields requiring fault tolerance or with adversaries.

  5. General Idea • The n generals of the Byzantine empire have encircled an enemy city. • The generals are far away from each other, necessitating the use of messengers for communication. • The generals must agree upon a common plan (to attack or to retreat).

  6. General Idea (cont.) • Up to t generals may be traitors. • If all good generals agree upon the same plan, the plan will succeed. • The traitors may mislead the generals into disagreement. • The generals do not know who the traitors are.

  7. General Idea (cont.) • A protocol is a Byzantine Agreement (BA) protocol tolerating t traitors if the following conditions hold for any adversary controlling at most t traitors: • All loyal generals act upon the same plan of action. • If all loyal generals favor the same plan, that plan is adopted.

  8. General Idea (broadcast) • Assume general i acts as the commanding general, sending his order to the remaining n-1 lieutenant generals. • A protocol is a broadcast protocol tolerating t traitors if these two conditions hold: • All loyal lieutenants obey the same order. • If the commanding general is loyal, then every loyal lieutenant general obeys the order he sends.
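These conditions can be phrased as predicates over the honest players' inputs and outputs. The following is a minimal sketch of our own illustration (the names ba_secure and broadcast_secure are not from the paper):

```python
def ba_secure(honest_inputs, honest_outputs):
    """Byzantine agreement: agreement plus validity."""
    agreement = len(set(honest_outputs)) == 1
    # Validity: if every honest input is the same, that value must be output.
    unanimous = len(set(honest_inputs)) == 1
    validity = (not unanimous) or all(o == honest_inputs[0] for o in honest_outputs)
    return agreement and validity

def broadcast_secure(sender_is_loyal, sender_bit, honest_outputs):
    """Broadcast: agreement plus validity whenever the sender is loyal."""
    agreement = len(set(honest_outputs)) == 1
    validity = (not sender_is_loyal) or all(o == sender_bit for o in honest_outputs)
    return agreement and validity
```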

  9. Impossibility • Shown impossible for t ≥ n/3. • Consider n = 3, t = 1, with a malicious Sender and all players initialized with 1: the Sender equivocates, sending 1 toward A and 0 toward B, and both the Sender and B claim to be the honest one (“I’m Spartacus!”). A’s view is identical whether the Sender or B is the traitor, so A cannot decide correctly in both cases. [The original slide illustrates this with a three-node figure: Sender, A, and B, with the conflicting bits on the edges.]

  10. Equivalence of BA and Broadcast • Given a protocol for broadcast, we can construct a protocol for Byzantine agreement. • All players use the broadcast protocol to send their input to every other player. • All players output the value they receive from a majority of the other players.
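A minimal sketch of this reduction, assuming a hypothetical broadcast(sender, bit) primitive that returns the common value all players receive from that sender. When the broadcast is secure, every player sees the same values and therefore computes the same majority:

```python
from collections import Counter

def ba_from_broadcast(inputs, broadcast):
    """inputs: dict mapping player id -> input bit; returns player id -> output."""
    # Step 1: every player broadcasts its input to everyone.
    received = [broadcast(p, b) for p, b in inputs.items()]
    # Step 2: every player outputs the majority of the values received.
    majority = Counter(received).most_common(1)[0][0]
    return {p: majority for p in inputs}
```

Slide 28 revisits this equivalence: against rational adversaries, this direction (BA from broadcast) fails, while the reverse direction succeeds.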

  11. Equivalence of BA and Broadcast Given a protocol for Byzantine agreement, we can construct a protocol for broadcast. The sender sends his message to all other players. All players use the message they received in step 1 as input in a Byzantine agreement protocol. Players output whatever output is given by the Byzantine agreement protocol.
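A corresponding sketch, assuming hypothetical send(src, dst, bit) point-to-point channels and a ba(inputs) primitive that returns the honest players' agreed output:

```python
def broadcast_from_ba(sender, bit, players, send, ba):
    # Step 1: the sender sends his message to every player.
    received = {p: send(sender, p, bit) for p in players}
    # Step 2: players run Byzantine agreement on what they received,
    # and everyone outputs the agreed value.
    return ba(received)
```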

  12. Previous Works • It was shown in PSL1980 that algorithms can be devised to guarantee broadcast/Byzantine agreement if and only if n ≥ 3t + 1. • If traitors cannot falsely report messages they relay (for example, if a digital signature scheme exists), broadcast can be achieved for any number of traitors (n > t ≥ 0). • PSL1980 and LSP1982 demonstrated an algorithm with exponential communication for reaching BA in t + 1 rounds for t < n/3.

  13. Previous Works • Dolev et al. presented a (2t + 3)-round BA protocol with polynomially bounded communication for any t < n/3. • Probabilistic BA protocols tolerating t < n/3 have been shown to run in expected O(t / log n) rounds, though worst-case running time remains high. • Faster algorithms tolerating (n - 1)/3 faults have been shown when both cryptography and trusted parties are used to initialize the network.

  14. Rational adversaries • All results stated so far assume general, malicious adversaries. • Several cryptographic problems have been studied with rational adversaries • Have known preferences on protocol output • Will only break the protocol if it benefits them • In MPC and secret sharing, rational adversaries allow stronger results • We apply rational adversaries to Byzantine agreement and broadcast

  15. Definition: Security against rational adversaries A protocol for BA or broadcast is secure against rational adversaries with a particular utility function if, for any adversary A1 against which the protocol does not satisfy the security conditions, there is another adversary A2 such that: • When the protocol is run with A2 as the adversary, all security conditions are satisfied. • The utility achieved by A2 is greater than that achieved by A1.

  16. Definition: Security against rational adversaries (cont.) • This definition requires that, for the adversary, following the protocol strictly dominates all strategies resulting in security violations. • Guarantees that there is an incentive not to break the security. • We do not require honest execution to strictly dominate other strategies.

  17. Utility Definitions • Meaning of “secure against rational adversaries” is dependent on preferences that adversaries have. • Some utility functions make adversary-controlled players similar to honest players, with incentives to break protocol only in specific circumstances. • Other utility functions define adversaries as strictly malicious. • We present several utility definitions that are natural and reasonable.

  18. Utility Definitions (cont.) • We limit attention to protocols that attempt to broadcast or agree on a single bit. • The output can be one of the following: • All honest players output 1. • All honest players output 0. • Honest players disagree on output.

  19. Utility Definitions (cont.) • We assume that the adversary knows the inputs of the honest players. • Therefore, the adversary can choose a strategy to maximize utility for that particular input set.

  20. Utility Definitions (cont.) • Both protocol and adversary can act probabilistically. • So, an adversary’s choice of strategy results not in a single outcome but in a probability distribution over possible outcomes. • We establish a preference ordering on probability distributions.

  21. Strict Preferences • An adversary with “strict preferences” is one that will maximize the likelihood of its first-choice outcome. • For a particular strategy, let a1 be the probability of the adversary achieving its first-choice outcome and a2 be the probability of achieving its second-choice outcome. • Let b1 and b2 be the same probabilities for a second potential strategy. • We say that the first strategy is preferred if and only if a1 > b1, or a1 = b1 and a2 > b2.
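In code, this is a lexicographic comparison of the pairs (a1, a2) and (b1, b2); a small illustrative sketch (not from the paper):

```python
def strictly_preferred(a, b):
    """a, b: (first-choice probability, second-choice probability) pairs.
    True iff the strategy summarized by a is preferred to that of b."""
    a1, a2 = a
    b1, b2 = b
    return a1 > b1 or (a1 == b1 and a2 > b2)

# Example: equal first-choice probability, so the second choice decides.
assert strictly_preferred((0.5, 0.3), (0.5, 0.2))
```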

  22. Strict Preferences (cont.) • Not a “utility” function in the classic sense • Provides a good model for a very single-minded adversary.

  23. Strict Preferences (cont.) • We will use shorthand to refer to the ordering of outcomes: • For example: 0s > disagreement > 1s. • Denotes an adversary who prefers that all honest players output 0, whose second choice is disagreement, and last choice is all honest players output 1.

  24. Linear Utility • An adversary with “linear utilities” has its utility defined by: Utility = u1 · Pr[all honest players output 0] + u2 · Pr[all honest players output 1] + u3 · Pr[honest players disagree], where u1 + u2 + u3 = 1.
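A minimal sketch of this utility computation; the weights and the outcome distribution below are hypothetical example values:

```python
def linear_utility(u, p):
    """u = (u1, u2, u3) with u1 + u2 + u3 = 1;
    p = (Pr[all output 0], Pr[all output 1], Pr[disagreement])."""
    return sum(ui * pi for ui, pi in zip(u, p))

# An adversary that weights disagreement most heavily:
print(linear_utility((0.1, 0.2, 0.7), (0.25, 0.25, 0.5)))  # 0.425
```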

  25. Definition (0-preferring) • A 0-preferring adversary is one for which Utility = E[number of honest players outputting 0]. • This is not a refinement of the strict-preference adversary with ordering 0s > disagreement > 1s.

  26. Other possible utility definitions • The utility definitions above do not cover all preference orderings. • With n players, there are 2^n possible output combinations. • There are infinitely many probability distributions over those outcomes. • Any ordering of these probability distributions is a valid set of preferences against which security could be guaranteed.

  27. Other possible utility definitions • Our utility definitions preserve symmetry among the players, but this is not necessary. • It is also possible that the adversary’s output preferences are a function of the input. • The adversary could be risk-averse. • Adversarial players might not all be centrally controlled, and could have conflicting preferences.

  28. Equivalence of Broadcast to BA, revisited • The reductions that work against malicious adversaries don’t always carry over to rational ones • Building BA from broadcast fails • Building broadcast from BA succeeds

  29. Strict preferences case • Assume a preference ordering of 0s > 1s > disagreement. • t < n/2 • Protocol: • Each player sends his input to everyone. • Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0.
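A sketch of each player's local decision step, assuming the player has already collected one bit from every player (its own included):

```python
def decide(received_bits):
    ones = sum(received_bits)
    zeros = len(received_bits) - ones
    # Output the strict majority; if there is no strict majority, default to 0.
    return 1 if ones > zeros else 0
```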

  30. Strict preferences case (cont.) • Proof: • If all honest players hold the same input, they form a strict majority (t < n/2), so the protocol terminates with the honest players agreeing on that input regardless of what the adversary sends. • If no input value is held by more than n/2 of the honest players, the adversary’s votes can decide the outcome, and it is in the adversary’s interest to send 0s: every honest player then sees at least as many 0s as 1s and outputs 0, the adversary’s first-choice outcome.

  31. Generalizing the proof • Same protocol: • Each player sends his input to everyone. • Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0. • Assume any preference set with all-zero output as first choice. • Proof works as before.

  32. A General Solution • Task: to define another protocol that will work for strict preferences with disagreement > 0s > 1s. • We have not found a simple, efficient solution for this case. • Instead, we define a protocol based on Fitzi et al.’s work on detectable broadcast. • It solves the problem for a wide variety of preferences.

  33. Definition: Detectable Broadcast • A protocol for detectable broadcast must satisfy the following three conditions: • Correctness: All honest players either abort or accept and output 0 or 1. If any honest player aborts, so does every other honest player. If no honest players abort then the output satisfies the security conditions of broadcast. • Completeness: If all players are honest, all players accept (and therefore, achieve broadcast without error). • Fairness: If any honest player aborts then the adversary receives no information about the sender’s bit (not relevant to us).

  34. Detectable broadcast (cont.) • Fitzi et al.’s protocol requires t + 5 rounds and O(n^8 (log n + k)^3) total bits of communication, where k is a security parameter. • It assumes a computationally bounded adversary. • This compares to one round and n^2 bits for the previous protocol. • Using detectable broadcast is much less efficient. • However, this is not as bad when compared to protocols that achieve broadcast against malicious adversaries.

  35. General protocol • Run the detectable broadcast protocol. • If an abort occurs, output the adversary’s least-preferred outcome. • Otherwise, output the result of the detectable broadcast protocol. • Works any time the adversary has a known least-favorite outcome. • Works for t < n.
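A sketch of this wrapper, assuming a hypothetical detectable_broadcast() primitive that returns the broadcast bit or None on abort, and the adversary's known least-preferred output bit:

```python
def general_protocol(detectable_broadcast, least_preferred):
    result = detectable_broadcast()
    if result is None:
        # Punish an abort: every honest player outputs the adversary's
        # least-preferred outcome, so forcing an abort never pays off.
        return least_preferred
    # Otherwise output the result of the detectable broadcast.
    return result
```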

  36. Conclusion • Rational adversaries do allow improved results on BA/broadcast. • For many adversary preferences, we have matching possibility and impossibility results. • More complicated adversary preferences remain to be analyzed.
