
Consistency and Replication


  1. Consistency and Replication Chapter 6

  2. Topics • Reasons for Replication • Models of Consistency • Data-centric consistency models • Client-centric consistency models • Protocols for Achieving Consistency

  3. Replication • Reasons: • Reliability: increase availability when servers crash • Performance: load balancing; scale with size of geographical region • Availability: local server likely to be available • When one copy is modified, all replicas have to be updated • Problem: how to keep the replicas consistent

  4. Object Replication • Approach 1: application is responsible for replication • Application needs to handle consistency issues • Approach 2: system (middleware) handles replication • Consistency handled by the middleware: Simplifies application development but makes object-specific solutions harder

  5. Replication and Scaling • Replication and caching used for system scalability • Multiple copies: • Improves performance by reducing access latency • But higher network overheads of maintaining consistency • Example: object is replicated N times • Read frequency R, write frequency W • If R <= W, high consistency overhead and wasted messages • Consistency maintenance is itself an issue • What semantics to provide? • Tight consistency requires globally synchronized clocks! • Solution: loosen consistency requirements • Variety of consistency semantics possible
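To make the read/write trade-off concrete, here is a back-of-envelope sketch in Python (my own illustration; the replica count and rates are made-up parameters, not from the slides) of how push-based update propagation scales with the write rate:

    def update_messages_per_second(n_replicas, read_rate, write_rate):
        # With push-based propagation, every write must be sent to the other
        # N-1 replicas, while reads are served by the local copy.
        return write_rate * (n_replicas - 1)

    # 10 replicas, 5 reads/s, 20 writes/s -> 180 update messages/s, most of them
    # propagating values that are overwritten before anyone reads them (R <= W).
    print(update_messages_per_second(10, read_rate=5, write_rate=20))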

  6. Data-Centric Consistency Models Consistency Model: contract between processes and the data store. If processes follow contract, the data store works correctly.

  7. Strict Consistency Behavior of two processes, operating on the same data item. (a) A strictly consistent store. (b) A store that is not strictly consistent. Def.: Any read on a data item x returns a value corresponding to the result of the most recent write on x (regardless of which copy was written to).

  8. Sequential Consistency (1) Def.: The result of any execution is the same as if the operations by all processes on the data store were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program. • A sequentially consistent data store. • A data store that is not sequentially consistent. • Sequential consistency is weaker than strict consistency • All processes see the same interleaving of operations

  9. Sequential Consistency (2) • Any valid interleaving is allowed • All agree on the same interleaving • Each process preserves its program order • Nothing is said about “most recent write” Sequential consistency comparable to serializability of transactions

  10. Sequential Consistency (3) Four valid execution sequences for the processes of the previous slide. The vertical axis is time.
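The definition can be checked mechanically for small histories. The following brute-force sketch (illustrative only; it assumes single-value reads and writes as on the slides) searches for one interleaving that preserves every process's program order and in which each read returns the most recently written value:

    from itertools import permutations

    def sequentially_consistent(programs):
        # programs[p] is process p's ordered list of operations,
        # e.g. ("W", "x", "a") or ("R", "x", "a").
        ops = [(p, i) for p, prog in enumerate(programs) for i in range(len(prog))]
        for order in permutations(ops):
            # keep only interleavings that preserve each process's program order
            issued = [0] * len(programs)
            in_order = True
            for p, i in order:
                if i != issued[p]:
                    in_order = False
                    break
                issued[p] += 1
            if not in_order:
                continue
            # replay: every read must return the most recent write in this order
            store, legal = {}, True
            for p, i in order:
                kind, var, val = programs[p][i]
                if kind == "W":
                    store[var] = val
                elif store.get(var) != val:
                    legal = False
                    break
            if legal:
                return True
        return False

    # Classic four-process example: P3 sees b before a while P4 sees a before b,
    # so no single interleaving satisfies both readers.
    print(sequentially_consistent([
        [("W", "x", "a")],
        [("W", "x", "b")],
        [("R", "x", "b"), ("R", "x", "a")],
        [("R", "x", "a"), ("R", "x", "b")],
    ]))  # False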

  11. Linearizability Assumption: Operations are timestamped (e.g., Lamport TS) Def.: The result of any execution is the same as if the operations by all processes on the data store were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program. In addition, if ts(OP1(x)) < ts(OP2(y)), then OP1(x) should precede OP2(y) in this sequence. • A linearizable data store is also sequentially consistent • Linearizability is weaker than strict consistency, but stronger than sequential consistency - it adds a global timestamp requirement to sequential consistency.

  12. Causal Consistency (1) • Writes that are potentially causally related must be seen by all processes in the same order. • Concurrent writes may be seen in a different order on different machines. • Causal consistency is weaker than sequential consistency

  13. Causal Consistency (2) This sequence is allowed with a causally-consistent store, but not with a sequentially or strictly consistent store. • W2(x)b may depend on R2(x)a and therefore on W1(x)a ⇒ a must be seen before b at other processes • W2(x)b and W1(x)c are concurrent

  14. Causal Consistency (3) • A violation of a causally-consistent store: W2(x)b depends on W1(x)a. • A correct sequence of events in a causally-consistent store.
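One common way to implement causal consistency is with vector clocks: a replica applies a remote write only after every write it causally depends on has already been applied locally. A minimal sketch (illustrative; the class and message format are my own, not prescribed by the slides):

    class CausalReplica:
        def __init__(self, pid, n_replicas):
            self.pid = pid
            self.vc = [0] * n_replicas     # writes delivered from each process
            self.store, self.pending = {}, []

        def local_write(self, var, val):
            self.vc[self.pid] += 1
            self.store[var] = val
            # the returned message would be broadcast to the other replicas
            return (var, val, self.pid, list(self.vc))

        def receive(self, msg):
            self.pending.append(msg)
            delivered = True
            while delivered:
                delivered = False
                for m in list(self.pending):
                    var, val, sender, ts = m
                    # deliverable iff it is the next write from `sender` and all
                    # writes it causally depends on have already been delivered
                    ready = ts[sender] == self.vc[sender] + 1 and all(
                        ts[k] <= self.vc[k] for k in range(len(ts)) if k != sender)
                    if ready:
                        self.vc[sender] = ts[sender]
                        self.store[var] = val
                        self.pending.remove(m)
                        delivered = True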

  15. FIFO Consistency (1) • Writes done by a single process are seen by all other processes in the order in which they were issued. • Writes from different processes may be seen in a different order by different processes. • FIFO consistency is weaker than causal consistency. • Simple implementation: tag each write by (Proc ID, seq #)
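A sketch of that (Proc ID, seq #) tagging (illustrative names): each replica applies writes from any one process strictly in sequence-number order, while writes from different processes may interleave arbitrarily.

    class FifoReplica:
        def __init__(self, n_procs):
            self.next_seq = [1] * n_procs   # next sequence number expected per process
            self.buffered = {}              # (pid, seq) -> (var, val), out-of-order writes
            self.store = {}

        def receive(self, pid, seq, var, val):
            self.buffered[(pid, seq)] = (var, val)
            # apply every buffered write that is now next in line for its process
            while (pid, self.next_seq[pid]) in self.buffered:
                v, value = self.buffered.pop((pid, self.next_seq[pid]))
                self.store[v] = value
                self.next_seq[pid] += 1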

  16. FIFO Consistency (2) A valid sequence of events for FIFO consistency

  17. FIFO Consistency (3) Statement execution as seen by the three processes. The statements in bold are the write-updates originating from the other processes. Signature 001001 is not possible with sequential consistency. In sequential consistency, all processes have the same view. Signature: 001001

  18. FIFO Consistency (4) • Sequential consistency: 6 statement orderings; none of them kills both processes • FIFO consistency: both processes can get killed

  19. Models Based on a Sync Operation • No consistency is enforced until a synchronization operation is performed. This operation can be done after local reads and writes to propagate the changes throughout the system. • Weak Consistency • Release Consistency • Entry Consistency

  20. Weak Consistency (1) • Often not necessary to see all writes done by a process • Weak consistency enforces consistency on a group of operations; not individual read/write statements • Synchronization point: • Propagate changes made to local data store to remote data stores • Changes made by remote data stores are imported • Weak consistency is weaker than FIFO consistency

  21. Weak Consistency (2) Properties: • Accesses to synchronization variables associated with a data store are sequentially consistent (i.e., all processes see all operations on synchronization variables in the same order) • No operation on a synchronization variable is allowed to be performed until all previous writes have been completed everywhere (i.e., guarantees all writes have propagated) • No read or write operation on data items is allowed to be performed until all previous operations to synchronization variables have been performed (i.e., when accessing data items, all previous synchronizations have completed)
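A rough sketch of how a synchronization operation can realize this behavior (illustrative; a single class-level dict stands in for "everywhere else"):

    class WeaklyConsistentCopy:
        propagated = {}        # stand-in for the state visible at the other copies

        def __init__(self):
            self.local = {}    # this copy; may be stale between synchronizations
            self.changes = {}  # local writes not yet propagated

        def write(self, var, val):
            self.local[var] = val
            self.changes[var] = val

        def read(self, var):
            return self.local.get(var)

        def synchronize(self):
            # push local changes out, then import changes made by remote copies
            WeaklyConsistentCopy.propagated.update(self.changes)
            self.local.update(WeaklyConsistentCopy.propagated)
            self.changes = {}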

  22. Weak Consistency (3) • A valid sequence of events for weak consistency. • An invalid sequence for weak consistency.

  23. Release Consistency (1) • More efficient implementation than weak consistency by identifying critical regions • Acquire: ensure that all local copies of the data are brought up to date to be consistent with (released) remote ones • Release: data that has been changed is propagated out to remote data stores • Acquire does not guarantee that locally made changes will be sent to other copies immediately • Release does not necessarily import changes from other copies

  24. Release Consistency (2) A valid event sequence for release consistency.

  25. Release Consistency (3) Rules: • Before a read or write operation on shared data is performed, all previous acquires done by the process must have completed successfully. • Before a release is allowed to be performed, all previous reads and writes by the process must have completed • Accesses to synchronization variables are FIFO consistent (sequential consistency is not required).

  26. Release Consistency (4) Different implementations: • Eager release consistency: process doing the release pushes out all the modified data to all other processes. • Lazy release consistency: no update messages are sent at time of release. When another process does an acquire, it has to obtain the most recent version.
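A sketch contrasting the two variants (the method names and the peer list are my own illustration): with eager release the releaser pushes its modified data to the other copies; with lazy release nothing is sent at release time and the next acquire pulls the latest released state.

    class ReleaseConsistentCopy:
        def __init__(self, eager=True):
            self.store, self.dirty, self.released = {}, {}, {}
            self.peers, self.eager = [], eager

        def acquire(self):
            # bring the local copy up to date with what the peers have released
            for peer in self.peers:
                self.store.update(peer.released)

        def write(self, var, val):        # performed inside the critical region
            self.store[var] = val
            self.dirty[var] = val

        def release(self):
            self.released.update(self.dirty)
            if self.eager:
                # eager variant: push modified data to all other copies right away
                for peer in self.peers:
                    peer.store.update(self.dirty)
            self.dirty = {}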

  27. Entry Consistency (1) • Every data item is associated with a synchronization variable. • Each data item can be acquired and released as in release consistency. • Acquire (entry) gets the most recent value. • Advantage: increased parallelism • Disadvantage: increased overhead
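A per-item sketch (illustrative): each data item carries its own synchronization variable, and acquiring it pulls only that item's most recent value, which is what allows unrelated items to be updated in parallel.

    class EntryConsistentItem:
        def __init__(self, value=None):
            self.last_released = value   # value published by the last releaser
            self.local = None
            self.held = False

        def acquire(self):
            self.local = self.last_released   # entry: fetch the most recent value
            self.held = True

        def write(self, value):
            assert self.held, "must hold this item's synchronization variable"
            self.local = value

        def release(self):
            self.last_released = self.local   # publish for the next acquirer
            self.held = False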

  28. Entry Consistency (2) A valid event sequence for entry consistency.

  29. Summary of Consistency Models • Consistency models not using synchronization operations. • Models with synchronization operations.

  30. Client Centric Consistency (1) • Strong consistency for the data store is often not necessary • Consistency guarantees from a client's perspective • Clients often tolerate inconsistencies (e.g., out-of-date web pages) • Assumptions: • Mobile clients • Eventually consistent data store: total propagation and consistent ordering • Trade-off: consistency vs. availability

  31. Client Centric Consistency (2) • The principle of a mobile user accessing different replicas of a distributed database.

  32. Data Storage Model (1) • Client uses Server that operates on a Database • Database holds complete copy of replicated data store • Server executes Read and Write operations • Every Write operation has a globally unique write-ID (WID) • A Session is the context in which reads and writes occur

  33. Data Storage Model (2) Figure: Client1, Client2, and Client3 each access one of Server1, Server2, Server3, and each server holds its own DB copy of the eventually consistent replicated data store. Writes are propagated in a lazy fashion among servers. To ensure that the session guarantees are met, the servers at which an operation can be performed must be restricted to a subset of available servers that are sufficiently up-to-date.

  34. Data Storage Model (3) • DB(S,t) ::= ordered sequence of Writes that have been received by server S up to time t • WriteOrder(W1,W2) ::= Write W1 should be executed before Write W2 • Write set WS: set of WIDs • Write set WS is complete for Read R and DB(S,t), iff WS ⊆ DB(S,t) and for all WS2 with WS ⊆ WS2 ⊆ DB(S,t): the result of R applied to WS2 is the same as the result of R applied to DB(S,t)

  35. Data Storage Model (4) • RelevantWrites(S,t,R) ::= function that returns the smallest set of Writes that is complete for Read R and DB(S,t) • Note: such a set exists, since DB(S,t) is itself complete for any Read
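The model above can be written down almost literally. A sketch (illustrative Python; WIDs come from a global counter here, but any globally unique scheme works), with DB(S,t) as the ordered list of writes a server has received and RelevantWrites reduced to "the last write to the variable being read" for a simple key-value store:

    import itertools

    _wid_counter = itertools.count(1)

    def new_wid():
        return next(_wid_counter)        # globally unique write-ID (WID)

    class Server:
        def __init__(self, name):
            self.name = name
            self.db = []                 # DB(S, now): (wid, var, val) in received order

        def write(self, wid, var, val):
            self.db.append((wid, var, val))

        def read(self, var):
            val = None
            for _, v, value in self.db:  # the last write to `var` wins
                if v == var:
                    val = value
            return val

        def wids(self):
            return {wid for wid, _, _ in self.db}

        def relevant_writes(self, var):
            # smallest write set that is complete for a read of `var`
            for wid, v, _ in reversed(self.db):
                if v == var:
                    return {wid}
            return set()

        def anti_entropy(self, other):
            # lazy propagation between servers: hand over writes `other` is missing
            have = other.wids()
            other.db.extend(w for w in self.db if w[0] not in have)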

  36. Monotonic Reads (1) • Definition: If Read R1 occurs before R2 in a session and R1 accesses server S1 at time t1 and R2 accesses server S2 at time t2, then RelevantWrites(S1,t1,R1) ⊆ DB(S2,t2) • That is, R2 sees the same value as R1 or a more recent one. • Example: Calendar updates

  37. Monotonic Reads (2) Valid. Invalid: R2 doesn't see W1. Assumption: W1 ∈ RelevantWrites(S1,t,R1)

  38. Monotonic Writes (1) • Definition: If Write W1 precedes Write W2 in a session, then, for any server S2, if W2 ∈ DB(S2,t) then W1 ∈ DB(S2,t) and WriteOrder(W1,W2) • Like monotonic reads, except that here it is the writes that force consistency. • Example: Software Update

  39. Monotonic Writes (2) Valid Invalid

  40. Read Your Writes (1) • Definition: If Read R follows Write W in a session and R is performed at server S at time t, then W ∈ DB(S,t) • Example: Password update propagation

  41. Read Your Writes (2) Valid Invalid

  42. Write Follows Read (1) • Definition: If Read R1 precedes Write W2 in a session and R1 is performed at server S1 at time t, then, for any server S2, if W2 ∈ DB(S2,t) then any W1 ∈ RelevantWrites(S1,t,R1) implies W1 ∈ DB(S2,t) and WriteOrder(W1,W2) • Example: Newsgroup - my message W2 is a response to my reading (R1) message W1, so W1 should precede W2 at all servers.

  43. Write Follows Read (2) Valid. Invalid. Assumption: W1 ∈ RelevantWrites(S1,t,R)

  44. Implementation Summary
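A common implementation of all four guarantees keeps, per session, a read set and a write set of WIDs and restricts which servers may serve the next operation. A conservative sketch (illustrative; it reuses the Server sketch from the data storage model above and checks the union of both sets for every operation):

    class Session:
        def __init__(self):
            self.read_set = set()    # WIDs the session's reads have depended on
            self.write_set = set()   # WIDs of writes issued in this session

        def _up_to_date(self, server):
            # the server must already hold every write this session depends on
            return (self.read_set | self.write_set) <= server.wids()

        def read(self, server, var):
            # monotonic reads + read your writes
            assert self._up_to_date(server), "pick a more up-to-date server"
            val = server.read(var)
            self.read_set |= server.relevant_writes(var)
            return val

        def write(self, server, var, val):
            # monotonic writes + writes follow reads
            assert self._up_to_date(server), "pick a more up-to-date server"
            wid = new_wid()
            server.write(wid, var, val)
            self.write_set.add(wid)
            return wid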

  45. Replica Placement The logical organization of different kinds of copies of a data store into three concentric rings.

  46. State vs. Operations • Design choices of update propagation: • Propagate only a notification of an update (e.g., invalidation protocols) • Transfer data from one copy to another • Propagate the update operation to other copies (a.k.a. active replication)

  47. Pull vs. Push Protocols • A comparison between push-based and pull-based protocols in the case of multiple-client, single-server systems.

  48. Epidemic Protocols • Useful for eventual consistency • Propagating updates to all replicas in as few messages as possible • Update propagation model: • Infective: node holds update and is willing to spread • Susceptible: node willing to accept update • Removed: updated node not willing to spread • Anti-entropy: pick nodes at random • Exchanging updates between nodes P and Q: • P only pushes its own updates to Q • P only pulls in new updates from Q • P and Q send updates to each other
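A sketch of one anti-entropy round (illustrative; it reuses Server.anti_entropy from the data storage model sketch above). The three exchange styles from the slide map onto the direction in which writes flow:

    import random

    def anti_entropy_round(servers, mode="push-pull"):
        for p in servers:
            q = random.choice([s for s in servers if s is not p])   # random partner
            if mode in ("push", "push-pull"):
                p.anti_entropy(q)     # P pushes the updates it holds to Q
            if mode in ("pull", "push-pull"):
                q.anti_entropy(p)     # P pulls in new updates from Q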

  49. Consistency Protocol Implementations • Primary based (each data item has an associated primary): • Remote write • Local write • Replicated write (write operations carried out at multiple replicas, update anywhere): • Active replication • Quorum based protocols
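For the quorum-based entry, a sketch in the style of voting protocols (illustrative; the read and write quorums are taken as fixed replica prefixes here, but any quorums of the right sizes would do):

    class QuorumStore:
        def __init__(self, n, n_read, n_write):
            # overlap rules: every read quorum intersects every write quorum,
            # and any two write quorums intersect
            assert n_read + n_write > n and 2 * n_write > n
            self.replicas = [dict() for _ in range(n)]   # var -> (version, value)
            self.n_read, self.n_write = n_read, n_write

        def write(self, var, val):
            # learn the highest version from a read quorum, then install
            # version + 1 at a write quorum
            version = max(r.get(var, (0, None))[0] for r in self.replicas[:self.n_read])
            for r in self.replicas[:self.n_write]:
                r[var] = (version + 1, val)

        def read(self, var):
            # the read quorum overlaps the last write quorum, so the highest
            # version seen is the most recently written value
            best = max((r.get(var, (0, None)) for r in self.replicas[:self.n_read]),
                       key=lambda t: t[0])
            return best[1]

    store = QuorumStore(12, n_read=3, n_write=10)   # e.g. N=12, NR=3, NW=10
    store.write("x", "a")
    print(store.read("x"))                          # 'a'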

  50. Remote-Write Protocols (1) Primary-based remote-write protocol with a fixed server to which all read and write operations are forwarded.
