
Practical Byzantine Fault Tolerance and Proactive Recovery

Practical Byzantine Fault Tolerance and Proactive Recovery. Miguel Castro and Barbara Liskov, ACM TOCS '02. Presented by: Imranul Hoque.


Presentation Transcript


  1. Practical Byzantine Fault Tolerance and Proactive Recovery Miguel Castro and Barbara Liskov ACM TOCS ‘02 Presented By: Imranul Hoque

  2. Problem • Computer systems provide crucial services • Computer systems fail • Natural disasters • Hardware failures • Software errors • Malicious attacks • Need highly available service → Replicate to increase availability

  3. Assumptions are a Problem

  4. Contributions • Practical replication algorithm • Weak assumptions • Good performance [Really?] • Implementation • BFT: a generic replication toolkit • BFS: a replicated file system • Performance evaluation → Byzantine Fault Tolerance in an Asynchronous Environment

  5. Challenges Request A Request B Client Client

  6. Challenges Client Client 1: Request A 2: Request B

  7. State Machine Replication Client Client How to assign sequence number to requests? 1: Request A 1: Request A 1: Request A 1: Request A 2: Request B 2: Request B 2: Request B 2: Request B

  8. Primary Backup Mechanism Client Client What if the primary is faulty? Agreeing on sequence number Agreeing on changing the primary (view change) 1: Request A 2: Request B View 0

  9. Agreement • Certificate: set of messages from a quorum • Algorithm steps are justified by certificates Quorums have at least 2f + 1 replicas Quorum A Quorum B Quorums intersect in at least one correct replica
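The quorum arithmetic on this slide can be sanity-checked in a few lines of Python (an illustrative sketch; the function names are mine, not the paper's):

```python
def quorum_size(f):
    """PBFT quorums contain at least 2f + 1 of the n = 3f + 1 replicas."""
    return 2 * f + 1

def min_correct_in_overlap(f):
    """Two quorums of size 2f + 1 drawn from n = 3f + 1 replicas overlap
    in at least (2f + 1) + (2f + 1) - (3f + 1) = f + 1 replicas.  At most
    f of those can be faulty, so at least one is correct."""
    n = 3 * f + 1
    overlap = 2 * quorum_size(f) - n   # = f + 1
    return overlap - f                 # = 1 for every f

# The intersection always contains a correct replica, for any f.
assert all(min_correct_in_overlap(f) >= 1 for f in range(1, 100))
```

This one correct replica in every quorum intersection is what carries ordering information safely across certificates.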

  10. Algorithm Components • Normal case operation • View changes • Garbage collection • Recovery All have to be designed to work together

  11. Normal Case Operation • Three phase algorithm: • PRE-PREPARE picks order of requests • PREPARE ensures order within views • COMMIT ensures order across views • Replicas remember messages in log • Messages are authenticated • {.}σk denotes a message sent by k Quadratic message exchange
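The "quadratic message exchange" remark can be made concrete with a rough count for one request (a sketch under my own simplifying assumptions: it ignores the client's request and the replies, and counts each multicast as point-to-point sends):

```python
def messages_per_request(f):
    """Rough normal-case message count for one request with n = 3f + 1
    replicas: one PRE-PREPARE multicast from the primary, then PREPARE
    and COMMIT multicasts from the replicas -- quadratic in n."""
    n = 3 * f + 1
    pre_prepare = n - 1              # primary -> 3f backups
    prepare = (n - 1) * (n - 1)      # each backup -> all other replicas
    commit = n * (n - 1)             # every replica -> all other replicas
    return pre_prepare + prepare + commit
```

For f = 1 (n = 4) this already gives 3 + 9 + 12 = 24 messages per request, which is why batching (slide 29) matters.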

  12. Pre-prepare Phase Request: m Primary: Replica 0 {PRE-PREPARE, v, n, m}σ0 Replica 1 Replica 2 Replica 3 Fail

  13. Prepare Phase Request: m PRE-PREPARE Primary: Replica 0 Replica 1 Replica 2 Fail Replica 3 Accepted PRE-PREPARE

  14. Prepare Phase Request: m PRE-PREPARE Primary: Replica 0 {PREPARE, v, n, D(m), 1}σ1 Replica 1 Replica 2 Fail Replica 3 Accepted PRE-PREPARE

  15. Prepare Phase Request: m Collect PRE-PREPARE + 2f matching PREPARE PRE-PREPARE Primary: Replica 0 {PREPARE, v, n, D(m), 1}σ1 Replica 1 Replica 2 Fail Replica 3 Accepted PRE-PREPARE
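The "collect PRE-PREPARE + 2f matching PREPARE" condition is the paper's *prepared* predicate; a minimal sketch, with messages modeled as plain dicts and authentication ignored:

```python
def prepared(log, v, n, d, f):
    """A replica regards the request with digest d as prepared at (v, n)
    once its log holds the matching PRE-PREPARE plus PREPAREs for the
    same (v, n, d) from 2f distinct replicas."""
    has_pre_prepare = any(
        m["type"] == "PRE-PREPARE" and (m["v"], m["n"], m["d"]) == (v, n, d)
        for m in log)
    prepare_senders = {
        m["i"] for m in log
        if m["type"] == "PREPARE" and (m["v"], m["n"], m["d"]) == (v, n, d)}
    return has_pre_prepare and len(prepare_senders) >= 2 * f
```

Counting *distinct* senders matters: 2f copies of the same replica's PREPARE must not form a certificate.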

  16. Commit Phase Request: m PRE-PREPARE PREPARE Primary: Replica 0 Replica 1 {COMMIT, v, n, D(m)}σ2 Replica 2 Fail Replica 3

  17. Commit Phase (2) Request: m Collect 2f+1 matching COMMIT: execute and reply PRE-PREPARE PREPARE COMMIT Primary: Replica 0 Replica 1 Replica 2 Fail Replica 3
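The execute-and-reply condition is the *committed-local* predicate; again a dict-based sketch, assuming the caller already knows whether the request is prepared at this replica:

```python
def committed_local(log, v, n, d, f, is_prepared):
    """committed-local: the request is prepared at this replica and the
    log holds matching COMMITs for (v, n, d) from 2f + 1 distinct
    replicas; only then may the replica execute and reply."""
    commit_senders = {
        m["i"] for m in log
        if m["type"] == "COMMIT" and (m["v"], m["n"], m["d"]) == (v, n, d)}
    return is_prepared and len(commit_senders) >= 2 * f + 1
```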

  18. View Change • Provide liveness when primary fails • Timeouts trigger view changes • Select new primary (= view number mod 3f+1) • Brief protocol • Replicas send VIEW-CHANGE message along with the requests they prepared so far • New primary collects 2f+1 VIEW-CHANGE messages • Constructs information about committed requests in previous views
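The primary-selection rule on this slide is just modular rotation over the replica set (a one-line sketch):

```python
def primary_of_view(v, f):
    """The primary for view v is replica v mod n, where n = 3f + 1 is
    the number of replicas; each view change advances v by one, so a
    faulty primary is rotated out after at most one timeout per view."""
    n = 3 * f + 1
    return v % n
```

With f = 1 (four replicas), views 0..3 are led by replicas 0..3 and view 4 wraps back to replica 0.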

  19. View Change Safety • Goal: no two different committed requests with the same sequence number across views • The quorum for the committed certificate (m, v, n) and any view-change quorum intersect in at least one correct replica, which has the prepared certificate (m, v, n)

  20. Recovery • Corrective measure for faulty replicas • Proactive and frequent recovery • Any number of replicas can fail over time, provided at most f fail within a window of vulnerability • System administrator performs recovery, or • Automatic recovery from network attacks • Secure co-processor • Read-only memory • Watchdog timer Clients will not get a reply if more than f replicas are recovering

  21. Sketch of Recovery Protocol • Save state • Reboot with correct code and restore state • Replica has correct code without losing state • Change keys for incoming messages • Prevent attacker from impersonating others • Send recovery request r • Others change incoming keys when r executes • Check state and fetch out-of-date or corrupt items • Replica has correct up-to-date state

  22. Performance • Andrew benchmark • Andrew100 and Andrew500 • 4 machines: 600 MHz, Pentium III • 3 Systems • BFS: based on BFT • NO-REP: BFS without replication • NFS: NFS-V2 implementation in Linux No experiment with faulty replicas Scalability issue: only 4 & 7 replicas

  23. Benchmark Results (w/o PR) Without view change and faulty replica!

  24. Benchmark Results (with PR) Recovery Period Recovery is staggered!

  25. Related Works

  26. Today’s Talk Replication in Wide Area Network Prof. Keith Marzullo Distinguished Lectureship Series Today @ 4PM – 1404 SC

  27. Questions?

  28. Backup Slides • Optimization • Proof of 3f+1 optimality • Garbage Collection • View Change with MAC • Detailed Recovery Protocol • State Transfer

  29. Optimization • Using MAC instead of digital signatures • Replying with digest • Tentative execution of read-write requests • Tentative execution of read-only requests • Piggyback COMMIT with next PRE-PREPARE • Request batching

  30. Proof of 3f+1 Optimality • f faulty replicas • Should reply after n-f replicas respond to ensure liveness • Two n-f quorums intersect in at least n-2f replicas • Worst case scenario: out of (n-2f), f are faulty • Two quorums should have at least 1 non-faulty replica in common • Therefore, n-3f >= 1, i.e., n >= 3f+1
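The optimality argument above reduces to one inequality, which can be checked directly (sketch; function name is mine):

```python
def safe_with(n, f):
    """Replying after n - f responses, two such quorums intersect in
    n - 2f replicas; in the worst case f of those are faulty, so safety
    needs n - 2f - f >= 1 correct replicas in the intersection."""
    return n - 3 * f >= 1

# n = 3f + 1 is the smallest replica count that satisfies the bound.
assert safe_with(4, 1) and not safe_with(3, 1)
```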

  31. Garbage Collection • Replicas should remove from their logs information about executed operations. • Even if i has executed a request, it may not be safe to remove that request's information because of a subsequent view change. • Use periodic checkpointing (after K requests).

  32. Garbage Collection • When replica i produces a checkpoint it multicasts {CHECKPOINT, n, d, i}αi. • n is the highest sequence number of the requests i has executed. • d is a digest of the current state. • Each replica collects such messages until it has a quorum certificate with 2f + 1 authenticated CHECKPOINT messages with the same n and d from distinct i. • Called the stable certificate: all other replicas will be able to obtain a weak certificate proving that its stable checkpoint is correct if they need to fetch it.

  33. Garbage Collection • Once a replica has a stable certificate for a checkpoint with sequence number n, then that checkpoint is stable. • The replica can discard all entries in the log with sequence numbers less than n and all checkpointed states earlier than n. • The checkpoint can be used to define high and low water marks for sequence numbers: • L = n • H = n + LOG for a log size LOG.
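The stable-certificate collection of slide 32 and the water marks above can be sketched together (dict-based messages, authentication ignored; names are illustrative):

```python
from collections import defaultdict

def stable_checkpoints(msgs, f):
    """Return the (n, d) pairs backed by CHECKPOINT messages from 2f + 1
    distinct replicas -- the stable certificate of slide 32."""
    senders = defaultdict(set)
    for m in msgs:
        senders[(m["n"], m["d"])].add(m["i"])
    return {nd for nd, s in senders.items() if len(s) >= 2 * f + 1}

def water_marks(n, log_size):
    """Once the checkpoint at n is stable: low mark L = n and high mark
    H = n + LOG bound the sequence numbers a replica will accept."""
    return n, n + log_size
```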

  34. View Change with MAC • Replica has logged per message m: • m = {REQUEST, o, t, c}αc • {PRE-PREPARE, v, n, D(m)}αp • m is pre-prepared at this replica • {PREPARE, v, n, D(m), i}αi from 2f replicas • m is prepared at this replica • {COMMIT, v, n, i}αi from 2f+1 replicas • m is committed at this replica

  35. View Change with MAC • At a high level: • The primary of view v + 1 reads information about stable and prepared certificates from a quorum • It then computes the stable checkpoint and fills in the command sequence • It sends this information to the backups. • The backups verify this information.

  36. View Change with MAC • Each replica i maintains two sets P and Q: • P contains tuples {n, d, v} such that i has collected a prepared certificate for the request with digest d for view v, sequence n, and there is no such request with sequence n for a later view. • Q contains tuples {n, d, v} such that i has pre-prepared the request with digest d for view v, sequence n, and there is no such request with sequence n for a later view.
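The "no such request for a later view" invariant on P and Q amounts to keeping, per sequence number, only the latest-view entry; a minimal sketch with P (or Q) as a dict keyed by sequence number:

```python
def record_latest(entries, n, d, v):
    """Sketch of maintaining P (or Q): keep, per sequence number n, only
    the (digest, view) tuple for the latest view in which the request
    was prepared (resp. pre-prepared) at this replica."""
    if n not in entries or entries[n][1] < v:
        entries[n] = (d, v)
    return entries
```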

  37. View Change with MAC • When i suspects the primary for view v has failed: • Enter view v + 1 • Multicast {VIEW-CHANGE, v+1, h, C, P, Q, i}αi • h is the sequence number of the stable checkpoint of i. • C is a set of checkpoints at i with their sequence number.

  38. View Change with MAC • Replicas collect VIEW-CHANGE messages and acknowledge them to primary p of view v + 1. • A replica acknowledges a VIEW-CHANGE message only if the view numbers in its P and Q components are earlier than the new view number. • {VIEW-CHANGE-ACK, v+1, i, j, d}μi • i is sender of ack. • j is source of the VIEW-CHANGE message. • d is the digest of the VIEW-CHANGE message

  39. View Change with MAC • The primary p: • Stores the VIEW-CHANGE message from i in S[i] once it has 2f − 1 corresponding VIEW-CHANGE-ACKs • Together these form a view-change certificate • Chooses from S a checkpoint and sets of requests • Tries to do this whenever S is updated

  40. View Change with MAC • Select checkpoint for starting state for processing requests in the new view. • Picks checkpoint with the highest number h from the set of checkpoints that are known to be correct (at least f + 1 of them) and that have numbers higher than the low water mark of at least f +1 non-faulty replicas (so ordering information is available).

  41. View Change with MAC • Select a request to pre-prepare in the new view for each n between h and h + LOG. • If m committed in an earlier view, then choose m. • If there is a quorum of replicas that did not prepare any request for n, then p pre-prepares a null request. • Multicast decision to backups. • Each can verify by running the same decision procedure.
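The per-sequence-number decision above can be sketched roughly as follows (heavily simplified: certificate verification is omitted, and the data shapes are my own assumptions, not the paper's):

```python
def pick_request(n, prepared_certs, null_quorum, f):
    """Sketch of the new primary's choice at sequence number n: if some
    VIEW-CHANGE carries a prepared certificate for (n, d), re-propose
    the request with digest d; if 2f + 1 replicas report preparing
    nothing at n, pre-prepare a null request; otherwise wait."""
    for cert in prepared_certs:
        if cert["n"] == n:
            return cert["d"]           # re-propose the prepared request
    if len(null_quorum) >= 2 * f + 1:
        return "NULL"                  # fill the gap with a no-op
    return None                        # not enough information yet
```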

  42. Recovery Protocol • Save state • Reboot with correct code and restore state • Replica has correct code without losing state • Change keys for incoming messages • Prevent attacker from impersonating others • Estimate high-water mark in a correct log (H) • Bound sequence number of bad information by H

  43. Recovery Protocol • Send recovery request r • Others change incoming keys when r executes • Bound sequence number of forged messages by Hr (high water mark in log when r executes) • Participate in algorithm • May be needed to complete recovery if not faulty • Check state and fetch out-of-date or corrupt items • End: checkpoint max(H, Hr) is stable • Replica has correct up-to-date state
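The termination condition of the recovery protocol is a single comparison against the two high-water-mark bounds (sketch; function name is mine):

```python
def recovery_done(stable_n, H, Hr):
    """Recovery ends once the checkpoint at max(H, Hr) is stable: H
    bounds the sequence numbers of bad information in the replica's own
    log, Hr bounds those of forged messages, so once a checkpoint at or
    above both is stable, everything below it is known correct."""
    return stable_n >= max(H, Hr)
```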

  44. State Transfer
