
Consistent and Efficient Database Replication based on Group Communication



  1. Consistent and Efficient Database Replication based on Group Communication Bettina Kemme School of Computer Science McGill University, Montreal

  2. Outline • State of the Art in Database Replication • Key Points of our Approach • Overview of the Algorithms • Implementation Issues • Performance Results • Ongoing Work

  3. Replication -- Why? • Fault-Tolerance: take-over • Scale-Up: a cluster instead of a bigger mainframe

  4. Replica Control: Research and Reality • Classification: UPDATE WHERE (update everywhere vs. primary copy) x UPDATE WHEN (eager vs. lazy) • Eager, update everywhere: globally correct, but too expensive; deadlocks (Quorum/ROWA, Oracle Synchr. Repl.) • Eager, primary copy: feasible in some applications (placement strategies) • Lazy, update everywhere: inconsistent reads; reconciliation; weak consistency (Sybase/IBM/Oracle) • Lazy, primary copy: feasible, but inconsistent reads; configuration restrictions (placement strategies, Sybase/IBM/Oracle)

  5. Requirements • Develop and apply appropriate techniques to avoid the current limitations of eager update everywhere approaches • Keep the flexibility of update everywhere • no restrictions on the type of transactions and where to execute them • Consistency and fault-tolerance of eager replication • Good performance • response time + throughput • Straightforward, implementable solution • Easy integration into existing systems

  6. Response Time and Message Overhead • Goal: Reduce the number of messages per transaction • reduce response time • reduce message overhead • Solution • local execution of the transaction • bundle the writes and send them in a single message at the end of the transaction (as done in lazy schemes) [Figure: message patterns of a central transaction, a transaction with individual remote writes, and a transaction with local execution and write-set propagation at the end]
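To make the write-set bundling concrete, here is a minimal Python sketch. All names (the Transaction class, the multicast handle, the message format) are illustrative assumptions, not Postgres-R interfaces: the transaction executes against the local replica only, buffers its writes, and sends them in one message at commit time.

```python
# Minimal sketch of write-set bundling; the Transaction class, the
# multicast object, and the message format are illustrative assumptions.

class Transaction:
    def __init__(self, txn_id, db, multicast):
        self.txn_id = txn_id
        self.db = db                    # local replica (a plain dict here)
        self.multicast = multicast      # assumed total-order multicast primitive
        self.write_set = []             # buffered (key, new_value) pairs

    def read(self, key):
        return self.db.get(key)        # no remote communication for reads

    def write(self, key, value):
        self.db[key] = value                    # execute locally right away
        self.write_set.append((key, value))     # remember for propagation

    def commit(self):
        # One message per transaction, sent at the end (as in lazy schemes),
        # instead of one round trip per individual write.
        self.multicast.send(("WRITE_SET", self.txn_id, self.write_set))
```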

  7. Ordering Transactions • Before: uncoordinated message delivery; danger of deadlocks • Now: pre-order transactions using the total order multicast of group communication • Before: 2-phase commit • Now: independent execution at the different sites

  8. Group Communication Systems • Group Communication: • Multicast • Delivery order (FIFO, causal, total, etc.) • Reliable delivery: on all nodes vs. on all available nodes • Membership control • ISIS, Totem, Transis, Phoenix, Horus, Ensemble, ... • Goal: Exploit the rich semantics of group communication at a low level of the software hierarchy
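As a rough illustration of the total-order guarantee these systems provide, the following toy sequencer-based multicast is a pedagogical sketch only (ISIS, Totem, Ensemble, etc. use far more sophisticated protocols): a central sequencer numbers every message so that all sites deliver all messages in the same global order.

```python
import itertools

class Sequencer:
    """Toy total-order multicast: a central sequencer numbers every message."""
    def __init__(self, sites):
        self.sites = sites
        self.counter = itertools.count()

    def multicast(self, msg):
        seqno = next(self.counter)
        for site in self.sites:
            site.enqueue(seqno, msg)    # in reality: sent over the network

class Site:
    """Each site delivers messages strictly in sequence-number order."""
    def __init__(self):
        self.pending = {}               # seqno -> message, buffered until gaps close
        self.next_expected = 0
        self.delivered = []             # this site's delivery order

    def enqueue(self, seqno, msg):
        self.pending[seqno] = msg
        while self.next_expected in self.pending:
            self.delivered.append(self.pending.pop(self.next_expected))
            self.next_expected += 1
```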

  9. Replica Control based on 2-Phase Locking [Figure: execution phases per node. The local transaction on Node 1 passes through a local phase, a send phase (multicast of the write set WS), and a commit phase; the remote transactions on Nodes 2 and 3 pass through a lock phase and a write phase before committing.] • A transaction is first performed locally at a single site. • Writes are sent in one message to all sites at the end of the transaction. • Write-set messages are totally ordered. • The serialization order obeys the total order.

  10. Concurrency Control • One possible solution: given a transaction T • Local Phase: T acquires standard local read and write locks • Send Phase: send the write set using total order multicast • Upon reception of the write set of T on the local node • Commit Phase: multicast a commit message • Upon reception of the write set of T on a remote node • Lock Phase: request all write locks in a single step; if there is a local transaction T' with a conflicting lock and T' is still in its local or send phase, abort T'. If T' is in its send phase, multicast an abort message. • Write Phase: apply the updates of T • Upon reception of the commit/abort message of T on a remote node, terminate T accordingly • For two transactions of the same node: 2-phase locking. • For concurrent transactions of different nodes: an optimistic scheme with early conflict detection: the conflict is detected when the write set of one of the transactions is delivered. • Adjustment to other concurrency control schemes is possible
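The critical step is the lock phase with its early conflict detection. A Python sketch of that step follows; the lock table, the write-set structure, and the abort helper are assumptions for illustration, not the actual Postgres-R implementation.

```python
# Sketch of the lock phase on a remote node. The phase names match the
# slide; lock_table, write_set, and abort are assumed helper structures.

LOCAL, SEND = "local", "send"   # phases in which a local txn is not yet ordered

def lock_phase(write_set, lock_table, multicast, abort):
    """Request all write locks for a delivered write set in a single step."""
    for key in write_set.keys:
        holder = lock_table.get(key)    # local txn holding a conflicting lock
        if holder is not None and holder.phase in (LOCAL, SEND):
            # Early conflict detection: the delivered write set was totally
            # ordered first, so the not-yet-ordered local transaction loses.
            if holder.phase == SEND:
                # Its own write set is already in flight; tell all sites
                # to discard it when it is delivered.
                multicast.send(("ABORT", holder.txn_id))
            abort(holder)
            holder = None
        if holder is None:
            lock_table[key] = write_set.txn     # grant the write lock
        # else: the holder was ordered earlier; this write set queues
        # behind it (lock queueing is omitted from the sketch).
```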

  11. Message Delivery Guarantees • Uniform-Reliable Delivery • If a site delivers a message, all non-faulty sites deliver the message • Correctness on all sites (faulty or non-faulty): if a transaction commits at any site, it commits at all non-faulty sites • High message delay • Reliable Delivery • If a non-faulty site (non-faulty for a sufficiently long time) delivers a message, it is delivered at all non-faulty sites • Correctness in the failure-free case • In case of failures: • all non-faulty sites commit the same set of transactions • a transaction might be committed at a faulty site (shortly before the failure) but not at the other sites • Low message delay

  12. A Suite of Replication Protocols • One protocol per correctness criterion and delivery guarantee (uniform reliable / reliable): • Serializability: SER-UR / SER-R • Cursor Stability: CS-UR / CS-R • Snapshot Isolation: SI-UR / SI-R • Hybrid: HYB-UR / HYB-R • The solutions provide • flexibility • accepted correctness criteria

  13. Implementation • Integration of our replica control approach into the database system PostgreSQL • Purpose - We wanted to answer the following questions • Can the abstract protocols really be mapped to concrete algorithms in a relational database? • How difficult is it to integrate the replication tool into a real database system? • What is the performance in a real cluster environment?

  14. Architecture of Postgres-R [Figure: clients connect via the Postmaster to server backends of the original PostgreSQL system, which execute the local transactions; a Replication Mgr links the servers to the group communication system (Ensemble); dedicated server backends execute the remote transactions.]
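A rough sketch of how such a replication manager might dispatch totally ordered messages between the group communication layer and the server backends; every interface name here is an assumption for illustration, and the real Postgres-R components differ.

```python
def replication_manager_loop(group, local_txns, remote_txns):
    """Bridge between group communication and the server backends.

    `group` is an assumed handle to the totally ordered channel;
    `local_txns` and `remote_txns` are assumed backend registries.
    """
    while True:
        msg = group.receive()                  # delivered in total order
        if msg.kind == "WRITE_SET":
            if msg.origin == group.my_site:
                # Our own write set came back: the local transaction is
                # now globally ordered and may enter its commit phase.
                local_txns[msg.txn_id].enter_commit_phase()
            else:
                # A remote transaction: run its lock phase and write phase.
                remote_txns.start(msg.txn_id, msg.write_set)
        elif msg.kind in ("COMMIT", "ABORT"):
            remote_txns.terminate(msg.txn_id, msg.kind)
```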

  15. Write Set Messages • Option 1: send the SQL statements and re-execute them at all sites • Simple • Small message size • High execution overhead on all sites • Problem with locks for implicit reads in the statement • Option 2: send the physical updates and only apply the changes • Opposite characteristics to sending statements • Example: the statement UPDATE employee SET salary = salary+100 WHERE salary < 2000 passes through the parser/optimizer and executor, which produce the set of physical records
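The trade-off can be pictured as two message formats; these dataclasses are illustrative only, not the actual Postgres-R wire format.

```python
from dataclasses import dataclass

@dataclass
class StatementWriteSet:
    """Ship the SQL text: small message, but every site re-runs the
    parser, optimizer, and executor (and the statement's implicit reads)."""
    txn_id: int
    statements: list  # e.g. ["UPDATE employee SET salary = salary+100 WHERE salary < 2000"]

@dataclass
class PhysicalWriteSet:
    """Ship the resulting records: potentially much larger message, but
    receiving sites only apply the changes, without re-executing."""
    txn_id: int
    records: list     # (relation, tuple_id, new_attribute_values) triples
```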

  16. Gain of sending and applying physical changes

  17. Comparison with standard distributed locking • For all experiments: • Database: 10 relations of 1,000 tuples each • Transactions: 10 updates per transaction • Workload: 10 transactions per second • 5 concurrent clients (each submitting a transaction every 500 milliseconds)

  18. Response Time vs. Throughput

  19. Scalability with fixed workload • Workload: • 1 update transaction per second per server • 14 queries per second per server • 3 clients per server

  20. Differently loaded Nodes • Nodes: 10 nodes • 15 clients in total

  21. Conclusions • Eager, update everywhere replication is feasible (at least in clusters) by using adequate techniques • As few messages as possible within transaction boundaries • As few synchronization points as possible • Complete transaction execution only at one site • Simple to adjust to existing concurrency control mechanisms

  22. Current Work • Recovery under various failure models • Development of a middleware replication tool • Development of group communication protocols that better support the needs of the database system • Ordering semantics • Failure models • Building a complete system • Adding other replica control protocols • System administration • Partial replication functionality
