
Byzantine fault tolerance


Presentation Transcript


  1. Byzantine fault tolerance Jinyang Li Slides adapted from Liskov

  2. What we’ve learnt so far: tolerate fail-stop failures • Traditional RSM tolerates benign failures • Node crashes • Network partitions • An RSM with 2f+1 replicas can tolerate f simultaneous crashes

  3. Byzantine faults • Nodes fail arbitrarily • A failed node performs incorrect computation • Failed nodes may collude • Causes: attacks, software/hardware errors • Examples: • A client asks the bank to deposit $100; a Byzantine bank server subtracts $100 instead. • A client asks the file system to store f1=“aaa”; a Byzantine server returns f1=“bbb” to clients.

  4. Strawman defense • Clients sign inputs. • Clients verify computation based on signed inputs. • Example: C stores a signed file f1=“aaa” with the server; C verifies that the returned f1 is signed correctly. • Problems: • Clients have to perform the computation themselves • Even for storage systems: • A Byzantine node can falsely report non-existence of a file • A Byzantine node can return stale but correctly signed data • E.g. the client stores signed f1=“aaa” and later stores signed f1=“bbb”; a Byzantine node can keep returning the old f1=“aaa”.
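
  A minimal sketch of the staleness problem above. The names (sign, ByzantineStore) are hypothetical illustrations, not from the paper; the point is only that a stale value still carries a valid client signature, so signature checks alone cannot detect it:

    # Why signed inputs alone cannot detect stale replies (illustrative names).
    import hmac, hashlib

    KEY = b"client-secret"

    def sign(value: bytes) -> bytes:
        return hmac.new(KEY, value, hashlib.sha256).digest()

    class ByzantineStore:
        """A faulty server that remembers every signed value it has ever seen."""
        def __init__(self):
            self.history = {}
        def put(self, name, value, sig):
            self.history.setdefault(name, []).append((value, sig))
        def get(self, name):
            # Returns the OLDEST signed value instead of the latest one.
            return self.history[name][0]

    store = ByzantineStore()
    store.put("f1", b"aaa", sign(b"aaa"))
    store.put("f1", b"bbb", sign(b"bbb"))

    value, sig = store.get("f1")
    # The stale value still verifies, so the client cannot tell it is outdated.
    assert hmac.compare_digest(sig, sign(value))
    print(value)  # b'aaa' -- stale but correctly signed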

  5. PBFT ideas • PBFT, “Practical Byzantine Fault Tolerance”, M. Castro and B. Liskov, SOSP 1999 • Replicate a service across many nodes • Assumption: only a small fraction of nodes are Byzantine • Rely on a super-majority of votes to decide on the correct computation • PBFT property: tolerates <= f failures using an RSM with 3f+1 replicas

  6. RSM Review: normal operation • The client sends write W to the primary N1 • The primary assigns <View=1, seqno=25> to W and forwards it to backups N2 and N3 • The primary waits for responses from all backups before applying the write and replying to the client

  7. RSM review: view change • When the primary N1 fails, the backups use Paxos to agree on the next view • N2 proposes viewno=2 with <primary=N2, backup=N3> • N3 proposes viewno=2 with <primary=N3, backup=N2>

  8. Why doesn’t traditional RSM work with Byzantine nodes? • Malicious primary is bad • Send wrong result to clients • Send different ops to different replicas • Assign the same seqno to different requests! • Cannot use Paxos for view change • Paxos uses a majority accept-quorum to tolerate f benign faults out of 2f+1 nodes • With malicious interfering, Paxos does not ensure agreement among honest nodes

  9. Paxos under Byzantine faults (nodes N0, N1, N2; N2 is Byzantine) • N0 sends propose vid=1, myn=N0:1 to N1 and N2 • Both reply OK with val=null and record nh=N0:1

  10. Paxos under Byzantine faults • N0 sends accept vid=1, myn=N0:1, val=xyz • N2 replies OK, but the accept to N1 is lost (marked X) • With a majority of accepts (N0 and N2), N0 decides on Vid1=xyz

  11. Paxos under Byzantine faults • N1, unaware of xyz, sends prepare vid=1, myn=N1:1 intending to propose val=abc • The Byzantine N2 replies OK with val=null, hiding the xyz it accepted in the previous round

  12. Agreement conflict! Paxos under Byzantine faults • N1 sends accept vid=1, myn=N1:1, val=abc • N2 replies OK, so N1 decides on Vid1=abc • But N0 has already decided Vid1=xyz: the two honest nodes disagree
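
  A small sketch of why this scenario breaks Paxos: with N = 2f+1 (here N=3, f=1) the two majority quorums can intersect only in the single Byzantine node, which tells each proposer a different story. Node names follow the slides:

    # Two majority quorums whose only common member is the Byzantine node.
    nodes = {"N0", "N1", "N2"}
    byzantine = {"N2"}

    quorum_xyz = {"N0", "N2"}   # quorum that let N0 decide xyz
    quorum_abc = {"N1", "N2"}   # quorum that let N1 decide abc

    overlap = quorum_xyz & quorum_abc
    honest_overlap = overlap - byzantine
    print(overlap)         # {'N2'}
    print(honest_overlap)  # set() -- no honest node witnesses both decisions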

  13. PBFT main ideas • Static configuration (same 3f+1 nodes across views) • To deal with malicious primary • Use a 3-phase protocol to agree on sequence number • To deal with loss of agreement • Use a bigger quorum (2f+1 out of 3f+1 nodes) • Need to authenticate communications

  14. BFT requires a 2f+1 quorum out of 3f+1 nodes • Four servers (f=1); server 4 is unresponsive (marked X) • The client sends write A to all servers; servers 1-3 store A in their state • For liveness, the quorum size must be at most N - f

  15. BFT Quorums • Later, server 1 is unresponsive; the client sends write B to all, and servers 2-4 store B (server 2 now holds both A and B) • For correctness, any two quorums must intersect in at least one honest node: (N-f) + (N-f) - N >= f+1, hence N >= 3f+1
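
  A short sketch checking the quorum arithmetic on slides 14-15 (the function name is illustrative). It searches for the smallest N such that a quorum of size N - f (liveness) still forces any two quorums to overlap in at least f+1 nodes (so the overlap contains an honest node):

    # Liveness: a quorum must be reachable with f nodes silent  -> Q <= N - f.
    # Safety:   any two quorums must overlap in >= f+1 nodes    -> 2Q - N >= f + 1.
    def smallest_valid_n(f: int) -> int:
        n = f + 1
        while True:
            q = n - f                      # largest quorum that preserves liveness
            if 2 * q - n >= f + 1:         # overlap contains at least one honest node
                return n
            n += 1

    for f in range(1, 5):
        n = smallest_valid_n(f)
        print(f"f={f}: N={n} (=3f+1), quorum={n - f} (=2f+1)")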

  16. PBFT Strategy • The primary runs the protocol in the normal case • Replicas watch the primary and do a view change if it fails

  17. Replica state • A replica id i (between 0 and N-1): replica 0, replica 1, … • A view number v#, initially 0 • The primary is the replica with id i = v# mod N • A log of <op, seq#, status> entries • Status = pre-prepared, prepared, or committed
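
  A minimal sketch of this per-replica state in Python; the field and type names are illustrative, not the paper's exact data structures:

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        PRE_PREPARED = 1
        PREPARED = 2
        COMMITTED = 3

    @dataclass
    class LogEntry:
        op: str
        seq: int
        status: Status

    @dataclass
    class Replica:
        i: int                 # replica id, 0 .. N-1
        n: int                 # total number of replicas, N = 3f + 1
        view: int = 0          # current view number v#
        log: dict = field(default_factory=dict)   # seq# -> LogEntry

        def primary(self) -> int:
            # The primary of view v# is the replica with id v# mod N.
            return self.view % self.n

        def is_primary(self) -> bool:
            return self.i == self.primary()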

  18. Normal Case • The client sends a signed request to the primary (or to all replicas)

  19. Normal Case • The primary sends a pre-prepare message to all replicas • Pre-prepare contains <v#, seq#, op> • The primary records the operation in its log as pre-prepared • Keep in mind that the primary might be malicious: • It can send different seq# for the same op to different replicas • It can reuse a seq# for a different op

  20. Normal Case • Replicas check the pre-prepare and, if it is OK: • Record the operation in their log as pre-prepared • Send prepare messages to all • Prepare contains <i, v#, seq#, op> • This is all-to-all communication

  21. Normal Case • Replicas wait for 2f+1 matching prepares • Record the operation in the log as prepared • Send a commit message to all • Commit contains <i, v#, seq#, op> • What does this stage achieve: all honest nodes that reach the prepared state have prepared the same value

  22. Normal Case: • Replicas wait for 2f+1 matching commits • Record operation in log as committed • Execute the operation • Send result to the client

  23. Normal Case • Client waits for f+1 matching replies
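
  A tiny sketch of the client-side rule on this slide (function name is illustrative): a result is accepted once f+1 replicas report the same value, since at most f of them can be faulty, so at least one honest replica vouches for it:

    from collections import Counter

    def client_result(replies: dict, f: int):
        """replies maps replica id -> result; accept a result backed by f+1 replicas."""
        counts = Counter(replies.values())
        value, votes = counts.most_common(1)[0]
        return value if votes >= f + 1 else None   # None: keep waiting / retransmit

    # Example with f = 1: one faulty replica lies, but 2 (= f+1) replicas agree.
    print(client_result({0: "ok", 2: "ok", 3: "bad"}, f=1))   # "ok"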

  24. BFT normal-case message flow: Request → Pre-Prepare → Prepare → Commit → Reply, among the client, the primary, and replicas 2-4 • Replicas wait for 2f+1 matching messages in the prepare and commit phases • The client waits for f+1 matching responses

  25. BFT normal-case message flow (continued) • Commit point: when 2f+1 replicas have prepared
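
  A compressed, hypothetical sketch of the replica-side quorum counting described in slides 19-25 (message formats are simplified; signatures, digests, and view changes are omitted):

    from collections import defaultdict

    class ReplicaProtocol:
        def __init__(self, i, n, f):
            self.i, self.n, self.f = i, n, f
            self.prepares = defaultdict(set)   # (v, seq, op) -> replica ids seen
            self.commits = defaultdict(set)
            self.status = {}                   # (v, seq, op) -> phase reached

        def on_pre_prepare(self, v, seq, op):
            # Accept at most one op per (view, seq#): reject a malicious primary
            # that reuses a sequence number for a different operation.
            if any(k[:2] == (v, seq) and k[2] != op for k in self.status):
                return None
            self.status[(v, seq, op)] = "pre-prepared"
            return ("PREPARE", self.i, v, seq, op)        # broadcast to all

        def on_prepare(self, j, v, seq, op):
            self.prepares[(v, seq, op)].add(j)
            if len(self.prepares[(v, seq, op)]) >= 2 * self.f + 1:
                self.status[(v, seq, op)] = "prepared"    # commit point reached
                return ("COMMIT", self.i, v, seq, op)     # broadcast to all

        def on_commit(self, j, v, seq, op):
            self.commits[(v, seq, op)].add(j)
            if len(self.commits[(v, seq, op)]) >= 2 * self.f + 1:
                self.status[(v, seq, op)] = "committed"
                return ("EXECUTE", op)                    # apply op, reply to client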

  26. View Change • If the primary is honest, can faulty replicas prevent progress? • If the primary is faulty, can honest replicas detect problems? • Replicas watch the primary • Request a view change when progress is stalled

  27. View Change Challenges • Which primary is next? The new primary is the replica with id = new v# mod N. Why not pick an arbitrary primary? • Can any one replica initiate a view change? • How do we ensure that requests executed by good nodes in the old view are reflected in the new view?

  28. View Change • Each replica sends the new primary a VIEW-CHANGE request containing 2f+1 prepares for recent ops • The new primary waits for 2f+1 VIEW-CHANGE messages • The new primary sends NEW-VIEW to all, containing: • The complete set of VIEW-CHANGE messages • The list of every op for which some VIEW-CHANGE contained 2f+1 prepares
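
  A simplified sketch of the new primary's view-change bookkeeping on this slide (hypothetical names; a VIEW-CHANGE is reduced here to the set of ops its sender proved prepared):

    class NewPrimary:
        def __init__(self, new_view, f):
            self.new_view, self.f = new_view, f
            self.view_changes = {}             # sender id -> set of prepared ops

        def on_view_change(self, sender, prepared_ops):
            self.view_changes[sender] = set(prepared_ops)
            # Wait for 2f+1 VIEW-CHANGE messages before announcing the new view.
            if len(self.view_changes) < 2 * self.f + 1:
                return None
            # Carry forward every op that some replica proved prepared in the old
            # view, so requests executed by good nodes survive into the new view.
            carried = set().union(*self.view_changes.values())
            return ("NEW-VIEW", self.new_view, sorted(carried))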

  29. Additional Optimizations • State transfer • Checkpoints (garbage collection of the log) • Timing of view changes

  30. Possible improvements • Lower latency for writes (4 messages) • Replicas respond at prepare • Client waits for 2f+1 matching responses • Fast reads (one round trip) • Client sends to all; they respond immediately • Client waits for 2f+1 matching responses

  31. Practical limitations of BFT • Expensive • Protection is achieved only when <= f nodes fail • Assumes independent machine failures • Does not prevent many classes of attacks: • Turning a machine into a botnet node • Stealing SSNs from servers
