III. Current Trends

Presentation Transcript


  1. 3C13/D6 III. Current Trends Distributed DBMSs: Advanced Concepts

  2. 13.0 Content 13.1 Objectives 13.2 Distributed Transaction Management 13.3 Distributed Concurrency Control - Objectives - Locking Protocols - Timestamp Protocols 13.4 Distributed Deadlock Management 13.5 Distributed Database Recovery - Failures in a distributed environment - How failure affects recovery - Two-Phase Commit (2PC) - Three-Phase Commit (3PC) 13.6 Replication Servers - Data Replication Concepts 13.7 Mobile Databases 13.8 Summary

  3. 13.1 Objectives In this lecture you will learn about: • Distributed transaction management. • Distributed concurrency control. • Distributed deadlock detection. • Distributed recovery control. • Distributed integrity control. • The X/OPEN DTP standard. • Replication servers as an alternative to a distributed DBMS. • How DBMSs can support the mobile worker.

  4. 13.2 Distributed Transaction Management The objectives of distributed transaction processing are the same as for centralized processing, but more complex: atomicity must be ensured for the global transaction and for each subtransaction. Previously we saw four CENTRALIZED database modules: • Transaction manager • Scheduler (or lock manager) • Recovery manager • Buffer manager A DDBMS has these in each local DBMS and, in addition, a global transaction manager (or transaction coordinator, TC) at each site. • The TC coordinates local and global transactions initiated at that site. • Inter-site communication is still handled through the data communications component.
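
As a rough illustration of the module layout just described, here is a minimal Python sketch (all class and method names are hypothetical) in which the per-site transaction coordinator hands the pieces of a global transaction to local transaction managers; the local manager stands in for the transaction manager, scheduler, recovery manager and buffer manager.

```python
# Hypothetical sketch of the per-site module layout described above.

class LocalTransactionManager:
    """Stands in for the local transaction manager, scheduler,
    recovery manager and buffer manager at one site."""
    def __init__(self, site_id):
        self.site_id = site_id

    def execute_subtransaction(self, work):
        # A real DBMS would route this through the scheduler (locking),
        # recovery manager (logging) and buffer manager.
        print(f"site {self.site_id}: executing {work!r}")
        return "ok"


class TransactionCoordinator:
    """Global transaction manager (TC) at one site."""
    def __init__(self, local_managers):
        self.local_managers = local_managers      # site_id -> LocalTransactionManager

    def run_global_transaction(self, subtransactions):
        # subtransactions maps each site to the work it must perform there.
        return {site: self.local_managers[site].execute_subtransaction(work)
                for site, work in subtransactions.items()}


if __name__ == "__main__":
    tms = {s: LocalTransactionManager(s) for s in ("S1", "S2")}
    tc = TransactionCoordinator(tms)
    print(tc.run_global_transaction({"S1": "debit account", "S2": "credit account"}))
```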

  8. 13.3 Distributed Concurrency Control Objectives: given no failures, every concurrency control (CC) mechanism must ensure that • the consistency of data items is preserved, and • each atomic action completes in finite time. Multiple-copy consistency problem: if the original is updated but the copies stored at other locations are not, the database becomes inconsistent. Assume for now that updates of copies are synchronous. In addition, a good CC mechanism should: • be resilient to site and communication failures • permit parallelism • have modest computational and storage overheads • perform acceptably in a network subject to communication delays • place few constraints on the structure of atomic actions.

  11. 13.3 Distributed Concurrency Control Locking Protocols One of the following four protocols (all based on 2PL) can be employed to ensure serializability in a DDBMS. 1. Centralized 2PL: a single site maintains all locking information. • Local transaction managers involved in a global transaction request and release locks from this lock manager. • Alternatively, the transaction coordinator can make all locking requests on behalf of the local transaction managers. • Advantage: easy to implement. • Disadvantages: bottlenecks and lower reliability. 2. Primary Copy 2PL: lock managers are distributed to a number of sites. • Each lock manager is responsible for managing the locks for a set of data items. • For replicated data, one copy is chosen as the primary copy and the others are slave copies. • Only the primary copy of a data item that is to be updated needs to be write-locked; the change can then be propagated to the slaves. • Advantages: lower communication costs and faster than centralized 2PL. • Disadvantages: deadlock handling is more complex; still rather centralized. A sketch of a centralized 2PL lock manager follows.
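
A toy sketch of centralized 2PL (protocol 1), assuming a single site that holds all lock tables; conflicting requests are simply refused rather than queued, and the names are illustrative rather than any product's API.

```python
# Toy centralized 2PL lock manager: one site keeps every lock table.
from collections import defaultdict

class CentralLockManager:
    def __init__(self):
        self.read_locks = defaultdict(set)   # item -> transaction ids holding read locks
        self.write_locks = {}                # item -> transaction id holding the write lock
        self.shrinking = set()               # transactions that have started releasing

    def acquire(self, tx, item, mode):
        if tx in self.shrinking:
            raise RuntimeError("2PL violation: no new locks after the first release")
        if mode == "read":
            holder = self.write_locks.get(item)
            if holder is not None and holder != tx:
                return False                 # another transaction is writing the item
            self.read_locks[item].add(tx)
            return True
        if mode == "write":
            holder = self.write_locks.get(item)
            other_readers = self.read_locks[item] - {tx}
            if (holder is not None and holder != tx) or other_readers:
                return False                 # conflicts with another reader or writer
            self.write_locks[item] = tx
            return True
        raise ValueError("mode must be 'read' or 'write'")

    def release_all(self, tx):
        # Shrinking phase: give up every lock at once (strict 2PL style).
        self.shrinking.add(tx)
        self.write_locks = {i: t for i, t in self.write_locks.items() if t != tx}
        for readers in self.read_locks.values():
            readers.discard(tx)
```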

  13. 13.3 Distributed Concurrency Control Locking Protocols 3. Distributed 2PL: lock managers are distributed to every site. • Each lock manager is responsible for the locks on the data at its own site. • If data are not replicated, this is equivalent to primary copy 2PL. • Otherwise, it implements a Read-One-Write-All (ROWA) replica control protocol: any copy of a replicated item can be used for a read, but all copies must be write-locked before the item can be updated. • Disadvantages: deadlock handling is more complex; communication costs are higher. 4. Majority Locking: an extension of distributed 2PL. • To read or write a data item replicated at n sites, a transaction sends a lock request to more than half of the n sites, as sketched below. • The transaction cannot proceed until it has obtained a majority of the locks. • Overly strong in the case of read locks.
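
A minimal sketch of the majority check in protocol 4, where request_lock is a stand-in for the real inter-site lock message; releasing the locks that were obtained when no majority is reached is left to the caller.

```python
# Majority locking: a lock request succeeds only if more than half of the
# n replica sites grant it. request_lock(site, item) stands in for messaging.

def majority_lock(item, replica_sites, request_lock):
    """Return the set of granting sites if a majority was obtained, else None."""
    granted = {site for site in replica_sites if request_lock(site, item)}
    if len(granted) > len(replica_sites) // 2:
        return granted
    return None     # no majority: caller should release what it got and retry or abort

# Example: 5 replicas, 3 grants -> majority reached.
sites = ["S1", "S2", "S3", "S4", "S5"]
grants = {"S1": True, "S2": True, "S3": False, "S4": True, "S5": False}
print(majority_lock("account:42", sites, lambda site, item: grants[site]))
```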

  17. 13.3 Distributed Concurrency Control Timestamp Protocols Objective: order transactions globally so that older transactions (those with smaller timestamps) get priority in the event of conflict. In a distributed DBMS: • Timestamps must be unique both locally and globally. • A system clock or event counter at each site alone is unsuitable. • Instead, concatenate the local timestamp with a unique site identifier: <local timestamp, site identifier>. The site identifier is placed in the least significant position, which ensures that events are ordered according to their occurrence rather than their location. • To prevent a busy site from generating larger timestamps than slower sites: • each site includes its timestamp in the messages it sends; • a site compares its own timestamp with the message timestamp and, if its own is smaller, sets it to some value greater than the message timestamp.
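
A small Python sketch of the <local timestamp, site identifier> scheme: tuples compare with the counter as the most significant part and the site identifier only as a tie-breaker, and the message rule above keeps a slow site from issuing ever-smaller timestamps. The class and its method names are assumptions for illustration.

```python
# Globally unique timestamps as <local counter, site id> tuples.

class SiteClock:
    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = 0

    def next_timestamp(self):
        self.counter += 1
        return (self.counter, self.site_id)    # tuples compare lexicographically

    def on_message(self, sender_timestamp):
        # If the sender's counter is ahead, jump up to it so this site's
        # next timestamp is greater than the one carried in the message.
        sender_counter, _ = sender_timestamp
        if self.counter < sender_counter:
            self.counter = sender_counter

a, b = SiteClock("A"), SiteClock("B")
t1 = a.next_timestamp()      # (1, 'A')
b.on_message(t1)
t2 = b.next_timestamp()      # (2, 'B') -- ordered after t1
print(t1 < t2)               # True
```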

  21. 13.4 Distributed Deadlock Management Any locking-based or timestamp-based concurrency control mechanism may result in deadlock, which is more complicated to detect if the lock manager is not centralized. 1. Centralized deadlock detection: a single site is appointed deadlock detection coordinator (DDC). - The DDC is responsible for constructing and maintaining the global Wait-For Graph (WFG). - If one or more cycles exist, the DDC breaks each cycle by selecting transactions to be rolled back and restarted. 2. Hierarchical deadlock detection: sites are organized into a hierarchy. - Each site sends its local WFG to the detection site above it in the hierarchy. - This reduces dependence on a single centralized detection site. 3. Distributed deadlock detection: the best-known method was developed by Obermarck (1982). - An external node, Text, is added to each local WFG to indicate a remote agent. - If a local WFG contains a cycle that does not involve Text, the site and the DDBMS are in deadlock. If it contains a cycle that does involve Text, there may be a deadlock. A sketch of the centralized scheme follows.
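
A sketch of the centralized scheme, assuming each site can ship its local WFG to the DDC as a plain adjacency mapping {waiting transaction: transactions it waits for}; the DDC merges the graphs and searches for a cycle, and victim selection is left out.

```python
# Sketch of centralized deadlock detection: merge local WFGs, find a cycle.

def merge_wfgs(local_wfgs):
    global_wfg = {}
    for wfg in local_wfgs:
        for tx, waits_for in wfg.items():
            global_wfg.setdefault(tx, set()).update(waits_for)
    return global_wfg

def find_cycle(wfg):
    """Return one deadlock cycle as a list of transactions, or None."""
    visiting, done = set(), set()

    def dfs(tx, path):
        visiting.add(tx)
        path.append(tx)
        for nxt in wfg.get(tx, ()):
            if nxt in visiting:                 # back edge -> cycle found
                return path[path.index(nxt):]
            if nxt not in done:
                cycle = dfs(nxt, path)
                if cycle:
                    return cycle
        visiting.discard(tx)
        done.add(tx)
        path.pop()
        return None

    for tx in list(wfg):
        if tx not in done:
            cycle = dfs(tx, [])
            if cycle:
                return cycle
    return None

# Site 1 reports T1 -> T2, site 2 reports T2 -> T1: a global deadlock.
print(find_cycle(merge_wfgs([{"T1": {"T2"}}, {"T2": {"T1"}}])))
```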

  25. 13.5 Distributed Database Recovery Four types of failure are particular to distributed DBMSs: • loss of a message • failure of a communication link • failure of a site • network partitioning. • A DDBMS is highly dependent on the ability of all sites to communicate reliably with one another. • Communication failures can result in the network becoming split into two or more partitions. • It may be difficult to distinguish whether a communication link or a site has failed.

  28. 13.5 Distributed Database Recovery Two-Phase Commit (2PC) Two phases: a voting phase and a decision phase. • The coordinator asks all participants whether they are prepared to commit the transaction. • If any participant votes abort, or fails to respond within a timeout period, the coordinator instructs all participants to abort the transaction. • If all vote commit, the coordinator instructs all participants to commit. All participants must adopt the global decision. - The protocol assumes that each site has its own local log and can roll back or commit a transaction reliably. - If a participant fails to vote, abort is assumed. - If a participant receives no vote instruction from the coordinator, it can abort. [Figure: state transitions of 2PC, (a) coordinator and (b) participant, not reproduced in this transcript.]
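
A condensed sketch of the two phases, assuming participant objects that expose prepare()/commit()/abort(); a timeout is collapsed into an exception, and logging, acknowledgements and the termination protocol are omitted.

```python
# Toy 2PC coordinator: a voting phase followed by a decision phase.

def two_phase_commit(participants):
    votes = []
    # Phase 1 (voting): a failed or missing response counts as an abort vote.
    for p in participants:
        try:
            votes.append(p.prepare())        # True = "commit", False = "abort"
        except Exception:
            votes.append(False)
    # Phase 2 (decision): commit only if every participant voted commit.
    decision = all(votes)
    for p in participants:
        p.commit() if decision else p.abort()
    return "commit" if decision else "abort"

class DemoParticipant:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit = name, will_commit
    def prepare(self):
        return self.will_commit
    def commit(self):
        print(f"{self.name}: committed")
    def abort(self):
        print(f"{self.name}: aborted")

print(two_phase_commit([DemoParticipant("S1"), DemoParticipant("S2", will_commit=False)]))
```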

  29. 13.5 Distributed Database Recovery Two-Phase Commit (2PC) Termination protocols: invoked whenever a coordinator or participant fails to receive an expected message and times out. Coordinator: • Timeout in the WAITING state - globally abort the transaction. • Timeout in the DECIDED state - send the global decision again to the sites that have not acknowledged it. Participant: the simplest termination protocol is to leave the participant blocked until communication with the coordinator is re-established. Alternatively: • Timeout in the INITIAL state - unilaterally abort the transaction. • Timeout in the PREPARED state - without more information the participant is blocked; it could obtain the decision from another participant.

  30. 13.5 Distributed Database Recovery Two-Phase Commit (2PC) Recovery protocols: the action to be taken by an operational site in the event of failure depends on the stage the coordinator or participant had reached. Coordinator failure: • Failure in the INITIAL state - recovery starts the commit procedure. • Failure in the WAITING state - recovery restarts the commit procedure. • Failure in the DECIDED state - on restart, if the coordinator has received all acknowledgements it completes successfully; otherwise it has to initiate the termination protocol. Participant failure: the objective is to ensure that a participant on restart performs the same action as all other participants, and that this restart can be performed independently. • Failure in the INITIAL state - unilaterally abort the transaction. • Failure in the PREPARED state - recover via the termination protocol. • Failure in the ABORTED/COMMITTED states - on restart, no further action is required.

  33. 13.5 Distributed Database Recovery Three-Phase Commit (3PC) 2PC is not a non-blocking protocol. For example, a process that times out after voting commit, but before receiving the global instruction, is blocked if it can communicate only with sites that do not know the global decision. • The probability of blocking occurring in practice is sufficiently rare that most existing systems use 2PC. • 3PC introduces a third phase, pre-commit, between voting and the global decision. • On receiving all votes from the participants, the coordinator sends a global pre-commit message. • A participant that receives the global pre-commit knows that all other participants have voted commit and that, in time, the participant itself will definitely commit. [Figure: 3PC state transitions for the coordinator and the participant, not reproduced in this transcript.]
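
A sketch of the extra pre-commit round only, assuming the same kind of participant objects as in the 2PC sketch plus a pre_commit() method; the termination and recovery rules that actually make 3PC non-blocking are not modelled here.

```python
# Toy 3PC coordinator: voting, then pre-commit, then global commit.

def _safe(call):
    """Treat a failure or timeout as a negative response."""
    try:
        return bool(call())
    except Exception:
        return False

def three_phase_commit(participants):
    # Phase 1 (voting), as in 2PC: any "abort" vote or failure aborts globally.
    if not all(_safe(p.prepare) for p in participants):
        for p in participants:
            p.abort()
        return "abort"
    # Phase 2 (pre-commit): every participant now knows all votes were "commit".
    for p in participants:
        p.pre_commit()
    # Phase 3 (decision): global commit.
    for p in participants:
        p.commit()
    return "commit"

class DemoParticipant3PC:
    def __init__(self, name):
        self.name = name
    def prepare(self):
        return True
    def pre_commit(self):
        print(f"{self.name}: pre-committed")
    def commit(self):
        print(f"{self.name}: committed")
    def abort(self):
        print(f"{self.name}: aborted")

print(three_phase_commit([DemoParticipant3PC("S1"), DemoParticipant3PC("S2")]))
```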

  36. 13.7 Replication Servers General-purpose DDBMSs have not been widely accepted. Instead, database replication, the copying and maintenance of data on multiple servers, may be the preferred solution. • Synchronous: updates to replicated data are part of the enclosing transaction. - If one or more sites that hold replicas are unavailable, the transaction cannot complete. - A large number of messages is required to coordinate the synchronization. • Asynchronous: the target database is updated after the source database has been modified (see the sketch below). - The delay in regaining consistency may range from a few seconds to several days. Functionality: a replication server has to be able to copy data from one database to another and provide • Scalability. • Mapping and transformation. • Object replication. • Specification of the replication schema. • A subscription mechanism. • An initialization mechanism.
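
A minimal sketch of the asynchronous case: the source commit returns at once and the change is applied to the target later, so the replica lags until the pending changes are applied. The queue-based change log and the function names are illustrative assumptions, not any product's replication mechanism.

```python
# Asynchronous replication sketch: commit locally now, propagate later.
import queue

change_log = queue.Queue()          # stands in for a replication log / change stream

def commit_at_source(source_db, key, value):
    source_db[key] = value          # the local commit completes immediately
    change_log.put((key, value))    # propagation happens outside the transaction

def apply_pending_changes(target_db):
    # Run periodically by the replication server.
    while not change_log.empty():
        key, value = change_log.get()
        target_db[key] = value

source, replica = {}, {}
commit_at_source(source, "balance:42", 100)
print(replica)                      # {}  -- the replica is temporarily inconsistent
apply_pending_changes(replica)
print(replica)                      # {'balance:42': 100}
```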

  39. 13.7 Replication Servers Data Replication Concepts Data ownership: ownership relates to which site has the privilege to update the data. There are three main types of ownership: 1. Master/slave (or asymmetric replication) ownership. • Asynchronously replicated data are owned by one (master) site and can be updated only by that site. • Using the ‘publish-and-subscribe’ metaphor, the master site is the ‘publisher’. • Other sites ‘subscribe’ to the data and receive read-only copies, as sketched below. • Potentially, each site can be the master for non-overlapping data sets, but update conflicts must then be avoided. Example: mobile computing. Replication is one method of providing data to a mobile workforce, which downloads and uploads data on demand from a local workgroup server.
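
A sketch of the publish-and-subscribe metaphor under master/slave ownership, assuming a simple push-style refresh of read-only copies; class and method names are illustrative.

```python
# Master/slave (asymmetric) replication: only the master may update.

class MasterSite:
    def __init__(self):
        self.data = {}
        self.subscribers = []

    def subscribe(self, slave):
        self.subscribers.append(slave)
        slave.copy = dict(self.data)            # initial snapshot for the subscriber

    def update(self, key, value):
        self.data[key] = value                  # only the master writes the data
        for slave in self.subscribers:          # push the refresh to subscribers
            slave.copy[key] = value

class SlaveSite:
    def __init__(self):
        self.copy = {}
    def read(self, key):
        return self.copy.get(key)
    def update(self, key, value):
        raise PermissionError("replica is read-only; updates belong to the master")

master, branch = MasterSite(), SlaveSite()
master.subscribe(branch)
master.update("price:widget", 9.99)
print(branch.read("price:widget"))              # 9.99
```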

  42. 13.7 Replication Servers Data Replication Concepts 2. Workflow ownership. • Avoids update conflicts while providing a more dynamic ownership model. • Allows the right to update replicated data to move from site to site. • However, at any one moment there is only ever one site that may update that particular data set. Example: an order-processing system that follows a series of steps such as order entry, credit approval, invoicing, shipping, and so on. 3. Update-anywhere (symmetric replication) ownership. • Creates a peer-to-peer environment in which multiple sites have equal rights to update the replicated data. • Allows local sites to function autonomously. • Shared ownership can lead to conflict scenarios, so a methodology for conflict detection and resolution has to be employed; a sketch follows.
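
A sketch of one simple detection-and-resolution policy for update-anywhere ownership, latest timestamp wins; real replication servers offer a range of such policies, and the tuple layout used here is an assumption for illustration.

```python
# Conflict detection and a "latest timestamp wins" resolution rule.

def resolve(updates):
    """updates: list of (site, item, value, timestamp) tuples for the same item."""
    items = {u[1] for u in updates}
    assert len(items) == 1, "resolve one item at a time"
    conflicting = len(updates) > 1                   # two or more sites wrote the item
    winner = max(updates, key=lambda u: u[3])        # latest timestamp wins
    return conflicting, winner

updates = [
    ("S1", "customer:7:phone", "555-1234", 1001),
    ("S2", "customer:7:phone", "555-9999", 1005),
]
conflict, winner = resolve(updates)
print(conflict, winner)    # True ('S2', 'customer:7:phone', '555-9999', 1005)
```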

  45. 13.8 Mobile Databases Mobile workers operate as if in the office while in reality working from remote locations; the ‘office’ may accompany the remote worker in the form of a laptop or PDA. A mobile database is portable and physically separate from a centralized database server, but is capable of communicating with that server from remote sites, allowing corporate data to be shared. Functionality of mobile DBMSs: • communicate with the centralized database server via wireless or Internet access • replicate data on the centralized database server and the mobile device • synchronize data on the centralized database server and the mobile device (see the sketch below) • capture data from various sources • create customized mobile applications.
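
A sketch of the replicate/synchronize functionality in the list above, assuming a simple upload-then-refresh exchange once connectivity is available; conflict handling and capture from other data sources are omitted, and the names are illustrative rather than any mobile DBMS's API.

```python
# Mobile database sketch: queue offline changes, synchronize when connected.

class MobileDatabase:
    def __init__(self):
        self.local = {}
        self.pending = []               # changes made while disconnected

    def write(self, key, value):
        self.local[key] = value
        self.pending.append((key, value))

    def synchronize(self, server_db):
        # Upload the changes captured while offline...
        for key, value in self.pending:
            server_db[key] = value
        self.pending.clear()
        # ...then refresh the local copy from the central server.
        self.local.update(server_db)

central = {"catalog:item1": "bolt"}
device = MobileDatabase()
device.write("order:17", "10 x bolt")   # entered in the field, offline
device.synchronize(central)
print(central["order:17"], device.local["catalog:item1"])
```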
