
Transaction Management



  1. Transaction Management

  2. Transaction Management
  • Atomicity: either all or none of the transaction’s operations are performed. Atomicity requires that if a transaction is interrupted by a failure, its partial results are undone.
  • Reasons a transaction may fail to complete: the transaction aborts, or the system crashes.
  • Commitment: the successful completion of a transaction.
  • Transaction primitives: BEGIN, COMMIT, ABORT.
  • Goal of transaction management: efficient, reliable, and concurrent execution of transactions.
  • Agent: a local process which performs some actions on behalf of an application.
  • Root agent: issues the begin-transaction, commit, and abort primitives, and creates new agents.
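
To make the atomicity guarantee concrete, here is a minimal sketch of the BEGIN/COMMIT/ABORT primitives in Python. The buffered-write design and the dict-based store are illustrative assumptions, not the deck's actual implementation:

```python
# Hypothetical sketch of BEGIN/COMMIT/ABORT: updates are buffered until
# COMMIT, and ABORT discards partial results (atomicity).
class Transaction:
    def __init__(self, db):
        self.db = db          # the shared data store (a dict here)
        self.writes = {}      # buffered updates, invisible until commit
        self.active = False

    def begin(self):
        self.active = True

    def write(self, key, value):
        assert self.active, "BEGIN must precede any operation"
        self.writes[key] = value

    def commit(self):
        # All buffered updates are installed together: "all" of the
        # transaction's operations take effect.
        self.db.update(self.writes)
        self.active = False

    def abort(self):
        # Partial results are simply discarded: "none" take effect.
        self.writes.clear()
        self.active = False

db = {"x": 1}
t = Transaction(db)
t.begin()
t.write("x", 2)
t.abort()
assert db["x"] == 1   # the interrupted update left no trace
```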

  3. Types of Transaction Termination
  [Diagram: three transaction timelines, each starting with Begin_Transaction; one ends with Commit, one with an Abort issued by the transaction, and one is cut off (X) when the system forces an abort.]

  4. Failure Recovery
  Basic technique: the LOG. A log contains the information needed for undoing or redoing all actions performed by transactions.
  • Undo: reconstruct the database state as it was prior to the transaction's execution (e.g., after an abort).
  • Redo: perform the transaction's actions again (e.g., after volatile storage fails before the updates of an already committed transaction reach stable storage).
  • Undo and redo must be idempotent: performing them several times must be equivalent to performing them once.

  5. Failure Recovery (cont’d)
  A log record contains:
  1. Transaction ID
  2. Record ID
  3. Type of action (insert, delete, modify)
  4. The old record value (required for undo)
  5. The new record value (required for redo)
  6. Information for recovery (e.g., a pointer to the previous log record of the same transaction)
  7. Transaction status (begin, abort, commit)
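
As a rough illustration, the sketch below models a log record with the fields above and shows why installing old/new values by assignment makes undo and redo idempotent. The dataclass layout and the dict database are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical log record mirroring the fields listed above.
@dataclass
class LogRecord:
    tid: int                  # transaction ID
    rid: str                  # record ID
    action: str               # 'insert' | 'delete' | 'modify'
    old_value: Optional[str]  # required for undo
    new_value: Optional[str]  # required for redo
    prev: Optional[int]       # previous log record of the same transaction

def undo(db: dict, rec: LogRecord) -> None:
    if rec.old_value is None:
        db.pop(rec.rid, None)        # undo an insert
    else:
        db[rec.rid] = rec.old_value  # reinstall the old value

def redo(db: dict, rec: LogRecord) -> None:
    if rec.new_value is None:
        db.pop(rec.rid, None)        # redo a delete
    else:
        db[rec.rid] = rec.new_value  # reinstall the new value

db = {"r1": "a"}
rec = LogRecord(tid=1, rid="r1", action="modify",
                old_value="a", new_value="b", prev=None)
redo(db, rec); redo(db, rec)         # idempotent: same as applying once
assert db["r1"] == "b"
```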

  6. Failure Recovery (cont’d)
  • Log write-ahead protocol:
    • Before performing a database update, the corresponding log record must be recorded on stable storage.
    • Before committing a transaction, all log records of the transaction must have been recorded on stable storage.
  • Recovery via checkpoints: checkpoints are operations performed periodically (e.g., every few minutes) that write the following to stable storage:
    • All log records and all database updates which are still in volatile storage.
    • A checkpoint record containing the identities of the transactions active at the time the checkpoint is taken.
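
A minimal sketch of the write-ahead ordering, assuming hypothetical in-memory stand-ins for stable and volatile storage (a real system would force the log to disk):

```python
# Invariant: a log record reaches stable storage before the update it
# describes, and all of a transaction's records precede its commit record.
stable_log, stable_db, volatile_db = [], {}, {}

def update(tid, key, new):
    old = volatile_db.get(key)
    stable_log.append((tid, key, old, new))  # 1. log record first ...
    volatile_db[key] = new                   # 2. ... then the update

def commit(tid):
    # All of tid's records are already on stable storage (appended in
    # update()), so the commit record may now be forced as well.
    stable_log.append((tid, "COMMIT"))
    stable_db.update(volatile_db)            # data pages may follow later

update(1, "x", 42)
commit(1)
assert stable_db["x"] == 42 and stable_log[0] == (1, "x", None, 42)
```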

  7. Transaction Manager
  • Local Transaction Manager (LTM): provides transaction management at a local site, e.g., local begin, local commit, local abort, and execution of a sub-transaction (local transaction).
  • Distributed Transaction Manager (DTM): provides global transaction management.
  • The LTM must be able to:
    • ensure the atomicity of a sub-transaction;
    • write records on stable storage on behalf of the DTM.
  Atomicity at the LTM level is not sufficient for atomicity at the DTM level (a sub-transaction is atomic at a single site, whereas the global transaction must be atomic across all sites).

  8. Two-Phase Commit Protocol
  • Coordinator: makes the final commit or abort decision (e.g., the DTM).
  • Participants: responsible for the local sub-transactions (e.g., the LTMs).
  • Basic idea: reach a single common decision for all participants on whether to commit or abort all the local sub-transactions.
  • 1st phase: reach a common decision.
  • 2nd phase: global commit or global abort (recording the decision on stable storage).

  9. Phase 1
  • The coordinator asks all the participants to prepare for commitment.
  • Each participant answers READY if it is ready and willing to commit. Each participant records on stable storage:
    • all information required for locally committing its sub-transaction;
    • a “ready” log record.
  • The coordinator records a “prepare” log record on stable storage, which contains the identities of all the participants, and activates a time-out mechanism.

  10. Phase 2
  • The coordinator records its decision, “global commit” or “global abort”, on stable storage.
  • The coordinator informs all the participants of its decision.
  • All participants write a commit or abort record in the log (ensuring the local sub-transaction's outcome will not be lost).
  • All participants send a final acknowledgment message to the coordinator and perform the actions required for committing or aborting the sub-transaction.
  • The coordinator writes a “complete” record on stable storage.

  11. Basic 2-Phase-Commit Protocol
  Coordinator:
    Write a “prepare” record in the log;
    Send PREPARE message and activate time-out.
  Participant:
    Wait for PREPARE message;
    If the participant is willing to commit then
    begin
      Write sub-transaction’s records in the log;
      Write “ready” record in the log;
      Send READY answer message to coordinator
    end
    else begin
      Write “abort” record in the log;
      Send ABORT answer message to coordinator
    end

  12. Basic 2-Phase-Commit Protocol (cont’d)
  Coordinator:
    Wait for ANSWER message (READY or ABORT) from all participants, or time-out;
    If time-out expired or some answer message is ABORT then
    begin
      Write “global_abort” record in the log;
      Send ABORT command message to all participants
    end
    else (* all answers arrived and were READY *)
    begin
      Write “global_commit” record in the log;
      Send COMMIT command message to all participants
    end

  13. Basic 2-Phase-Commit Protocol (cont’d)
  Participant:
    Wait for command message;
    Write “abort” or “commit” record in the log;
    Send the ACK message to coordinator;
    Execute command.
  Coordinator:
    Wait for ACK messages from all participants;
    Write “complete” record in the log.

  14. Elimination of the PREPARE Message: 2-Phase Commit
  Coordinator:
    Write prepare record in the log;
    Send operation requests to the participants, and activate time-out;
    Wait for completion of participants (READY messages) or time-out expiry;
    Write global_commit or global_abort record in the log;
    Send command message to all participants.
  Participant:
    Receive request for operation;
    Perform local processing and write log records;
    Send READY message and write ready record in the log;
    Wait for command message;
    Write commit or abort record in the log;
    Execute command.
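
The following condensed Python sketch simulates the decision logic of the basic protocol (slides 11-13); message passing, time-outs, and stable storage are only mimicked (lists stand in for logs), and all names are illustrative:

```python
def participant(willing, log):
    if willing:
        log.append("ready")     # forced to stable storage before answering
        return "READY"
    log.append("abort")
    return "ABORT"

def coordinator(participants, log):
    log.append("prepare")                  # logged before PREPARE is sent
    answers = [p() for p in participants]  # broadcast PREPARE, collect answers
    # A time-out while waiting would be treated like an ABORT answer.
    if all(a == "READY" for a in answers):
        log.append("global_commit")        # phase 2: record, then broadcast
        return "COMMIT"
    log.append("global_abort")
    return "ABORT"

logs = {"coord": [], "p1": [], "p2": []}
decision = coordinator(
    [lambda: participant(True, logs["p1"]),
     lambda: participant(True, logs["p2"])],
    logs["coord"])
assert decision == "COMMIT" and logs["coord"][-1] == "global_commit"
```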

  15. The Consistency Problem in a Distributed Database System
  • Multiple copies of the same data at different sites improve:
    • availability;
    • response time.
  • Every update results in a local execution and a sequence of update messages sent to the various sites that hold a copy of the database.

  16. Concurrency Control
  • Purpose: to give each user the illusion of executing alone on a dedicated system when, in fact, many users are executing simultaneously on a shared system.
  • Goals:
    • mutual consistency;
    • internal consistency.
  • Problems:
    • data stored at multiple sites;
    • communication delays.
  Concurrency control is a component of a distributed database management system.

  17. Criteria for Consistency
  • Mutual consistency among the redundant copies.
  • Internal consistency of each copy.
  • Any alteration of a data item must be performed in all the copies.
  • Two alterations to a data item must be performed in the same order in all copies.

  18. The Problem
  Site A and Site B each hold a copy: Part # 102116, Price $10.00.
  Two simultaneous transactions are issued: one sets Price ← $15.00, the other sets Price ← $12.00.
  Possible result: Site A ends with Price $12.00 while Site B ends with Price $15.00, because the two updates were applied in different orders at the two sites.
  Mutual consistency is not preserved.

  19. The Solution
  Mutual consistency can be ensured by the use of timestamps. Suppose the update message carries timestamp TS 87,6,1,12.01 and values: ID 102116 → PRICE $4.00; ID 103522 → PRICE $7.50.
  The DB copy was:
    102116  $2.50  TS 87,5,15,9.12
    103522  $7.90  TS 87,6,1,12.15
  After the update:
    102116  $4.00  TS 87,6,1,12.01
    103522  $7.90  TS 87,6,1,12.15
  Record 102116 is updated because the message's timestamp is newer than the copy's; record 103522 is left unchanged because the copy's timestamp is newer than the message's.
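
A sketch of the timestamp rule applied above, assuming timestamps are tuples that compare lexicographically; the data mirror slide 19:

```python
# An update is installed only if its timestamp is newer than the one stored
# with the copy, so all copies converge regardless of arrival order.
def apply_update(copy: dict, rid, value, ts) -> bool:
    stored_value, stored_ts = copy.get(rid, (None, ()))
    if ts > stored_ts:            # tuples compare lexicographically
        copy[rid] = (value, ts)
        return True
    return False                  # stale update: discard

copy = {102116: ("$2.50", (87, 5, 15, 9.12)),
        103522: ("$7.90", (87, 6, 1, 12.15))}
apply_update(copy, 102116, "$4.00", (87, 6, 1, 12.01))   # applied
apply_update(copy, 103522, "$7.50", (87, 6, 1, 12.01))   # discarded (older)
assert copy[102116][0] == "$4.00" and copy[103522][0] == "$7.90"
```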

  20. X, Y, and Z are three data fields subject to the internal consistency constraint X + Y + Z = 3.
  Initially, Site 1 and Site 2 each hold X = 1, Y = 1, Z = 1.
  Suppose Site 1 executes X ← -1, Y ← 3 and Site 2 executes Y ← -1, Z ← 3.
  Possible result at both sites: X = -1, Y = -1, Z = 3, so X + Y + Z = 1 ≠ 3.
  Mutual consistency was preserved but internal consistency was not.

  21. A Good Solution Must Be
  • Deadlock free
  • Speed independent
  • Partially operable

  22. Concurrency Control
  Correctness => serializable executions. A serializable execution is equivalent to some serial execution; a serial execution has no concurrency.
  • Two operations are said to conflict if they operate on the same data item and at least one is a write.
  • Two types of conflicts:
    • Write-Write (WW)
    • Read-Write (RW)
  • Bernstein and Goodman: separate techniques may be used to ensure RW and WW synchronization. The two techniques can be “glued” together via an interface which assures one serial order consistent with both.

  23. Definitions of Concurrency Control
  • A schedule (history or log) is a sequence of operations performed by transactions, e.g.:
    S1: Ri(x) Rj(x) Wi(y) Rk(y) Wj(x)
  • Two transactions Ti and Tj execute serially in a schedule S if the last operation of Ti precedes the first operation of Tj in S; otherwise they execute concurrently in it.
  • A schedule is serial if no transactions execute concurrently in it. For example:
    S2: Ri(x) Wi(x) Rj(x) Wj(y) Rk(y) Wk(x) = Ti Tj Tk
  • Given a schedule S, operation Oi precedes Oj (Oi < Oj) if Oi appears to the left of Oj in S.
  • A schedule is correct if it is serializable, i.e., it is computationally equivalent to a serial schedule.
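
One standard way to test serializability is to build a precedence graph over conflicting operations and check it for cycles. The checker below is a hypothetical helper, not part of the original material:

```python
def serializable(schedule):
    # schedule: list of (transaction, operation, item) triples
    edges = {}
    for n, (ti, opi, xi) in enumerate(schedule):
        for tj, opj, xj in schedule[n + 1:]:
            # Conflict: same item, different transactions, >= one write.
            if ti != tj and xi == xj and "W" in (opi, opj):
                edges.setdefault(ti, set()).add(tj)   # Ti must precede Tj

    def has_cycle(node, path):
        if node in path:
            return True
        return any(has_cycle(nxt, path | {node})
                   for nxt in edges.get(node, ()))

    # The schedule is serializable iff the precedence graph is acyclic.
    return not any(has_cycle(t, set()) for t in edges)

s1 = [("i", "R", "x"), ("j", "R", "x"), ("i", "W", "y"),
      ("k", "R", "y"), ("j", "W", "x")]
assert serializable(s1)   # precedence edges i->j and i->k: acyclic
```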

  24. Serializability in a Distributed Database
  Serializability of the local schedules is not sufficient to ensure the correctness of the execution of a set of distributed transactions. For example, both local schedules may be serializable and yet order two transactions differently:
    S1: Ti < Tj    S2: Tj < Ti
  Thus, the execution of T1, ..., Tn is correct if:
  1) Each local schedule Sk is serializable.
  2) There exists a total order of T1, ..., Tn such that if Ti < Tj in this total order, then there is a serial schedule Sk’ such that Sk is equivalent to Sk’ and Ti < Tj in Sk’, for every site k.

  25. Consistency Control Techniques
  • Timestamps
  • Locking
  • Primary site locking
  • Exclusive-writer
  • Exclusive-writer using sequence numbers
  • Exclusive-writer using sequence numbers with lock option

  26. Two-Phase Locking (2PL)
  • Read and write locks.
  • Locks may be obtained only if they do not conflict with a lock owned by another transaction.
  • Conflicts occur only if the locks refer to the same data item and:
    • RW: one is a read lock and the other is a write lock;
    • WW: both are write locks.
  • “Two-phased-ness”:
    • growing phase;
    • locked-point;
    • shrinking phase.
  • Once a lock is released, no new locks may be obtained.
  • The locked-point determines the serialization order.
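
A minimal sketch of a 2PL transaction object enforcing the growing/shrinking discipline; the global lock table and the conflict test follow the RW/WW rules above, but the structure is an illustrative assumption:

```python
lock_table = {}   # item -> (mode, set of owner ids); mode 'R' or 'W'

class TwoPhaseTxn:
    def __init__(self, tid):
        self.tid, self.shrinking = tid, False

    def lock(self, item, mode):
        # Growing phase only: acquiring after any release violates 2PL.
        assert not self.shrinking, "2PL violated: lock after unlock"
        held_mode, owners = lock_table.get(item, ("R", set()))
        # Conflict: another owner exists and at least one lock is a write.
        if owners - {self.tid} and "W" in (mode, held_mode):
            return False          # caller must wait (or abort)
        lock_table[item] = (max(mode, held_mode), owners | {self.tid})
        return True

    def unlock(self, item):
        self.shrinking = True     # locked-point passed; shrinking phase
        mode, owners = lock_table[item]
        lock_table[item] = (mode, owners - {self.tid})

t1, t2 = TwoPhaseTxn(1), TwoPhaseTxn(2)
assert t1.lock("a", "R") and t2.lock("a", "R")   # shared reads coexist
assert not t2.lock("a", "W")                     # RW conflict with t1
```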

  27. Centralized Locking Algorithm
  • All requests for resources must be sent to one site, called the lock controller.
  • The lock controller maintains a lock table for all the resources in the system.
  • Before a site can execute a transaction, it must obtain the required locks from the lock controller.
  • Advantage:
    • fewer messages required for setting locks than in distributed locking algorithms.
  • Disadvantages:
    • poor reliability; a backup system is required;
    • the lock controller must handle a large traffic volume.

  28. Distributed Locking Algorithm
  1. Wait for a transaction request from the user.
  2. Send n lock-request messages.
  3. In case of any lock reject, send lock releases and go to 2 to retry after a random interval of time.
  4. Perform the local transaction and send n update messages.
  5. Wait for the update ACK messages.
  6. Send n lock releases, notify the user that the transaction is done, and go to 1.
  • Requires 5n computer-to-computer messages per transaction.
  • Time consuming; long delays.

  29. A Solution with 2n Transmissions
  Nodes are organized in a ring structure.
  • Requests propagate around the loop and are accepted when they return to the sender.
  • Update messages (propagated in the same manner) serve as their own completion ACKs.
  • Priorities are used to resolve simultaneous requests.
  • Serial propagation increases delay; suitable for small networks.
  [Diagram: nodes arranged in a ring.]

  30. Voting Algorithm
  • The database manager process (DBMP) sends an update request to the other DBMPs.
  • The request contains the variables that participate in the query, with their timestamps, and the new values for the updated variables.
  • Each DBMP votes OK, REJ, or PASS, or defers voting.
  • The update is rejected if any DBMP rejects it.
  • The transaction is accepted if a majority of the DBMPs voted OK. If two requests are both accepted, then at least one DBMP voted OK for both of them.
  Message cost: broadcast, 2.5n transmissions; daisy chain, 1.5n transmissions.

  31. Primary Site Locking (PSL)
  [Timing diagram: Site B sends a Lock-Request to the primary site (PS) while Site A's update is in progress; the Lock-Grant for B is delayed until Update A has been applied everywhere, after which Update B is performed and propagated. Legend: task execution along each site's timeline; D = inter-computer synchronization delay.]

  32. Characteristics of Primary Site Locking
  • Serializability
  • Mutual consistency
  • Moderate to high complexity
  • Can cause deadlocks
  • Inter-computer synchronization delays

  33. Variable Level of Synchronization
  • A global database lock is not required by most transactions.
  • Different types of transactions need different levels of synchronization.
  • The level of synchronization can be represented by algorithms (protocols) which are executed when a transaction is requested.
  Goal: each transaction should run under the protocol that gives the least delay without compromising consistency. In SDD-1, four protocols are available; different levels of synchronization yield different delays.
  Disadvantage: high overhead costs.

  34. The Exclusive-Writer Approach
  [Diagram: transactions TM1, ..., TMM send update-request messages to the exclusive writer (EW), which alone reads and updates the shared file F; all other sites read only.]

  35. The Exclusive-Writer Protocol
  • Implementation requirement: each copy of a file has an update sequence number (SN).
  • Operation:
    • Only the exclusive writer (EW) distributes file updates.
    • Updating tasks send update-request messages (update and SN) to the EW's site.
    • Before the EW distributes a file update, it increments the file's SN.
    • The EW detects a data conflict when the SN in the update-request is less than the SN of the corresponding file at the EW's site.
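
A sketch of the EW's sequence-number conflict test described above; the class and method names are hypothetical:

```python
# An update-request built against sequence number SN is stale if the EW's
# copy has already moved past SN; stale requests are discarded.
class ExclusiveWriter:
    def __init__(self):
        self.sn = 0            # update sequence number of the EW's copy
        self.value = None

    def update_request(self, based_on_sn, new_value):
        if based_on_sn < self.sn:
            return None        # conflict: request was based on an old copy
        self.sn += 1           # increment SN, then distribute the update
        self.value = new_value
        return (self.sn, new_value)   # broadcast to all copies

ew = ExclusiveWriter()
assert ew.update_request(0, "A") == (1, "A")   # applied and distributed
assert ew.update_request(0, "B") is None       # discarded: SN 0 < 1
```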

  36. Timing Diagram for the Exclusive-Writer Protocol (EWP)
  Transactions TMA and TMB only access file FK; FK is replicated at all sites. SNK,I denotes the update sequence number for the copy of FK at site I.
  [Diagram: both copies start at SN = N. TMA's update-request, built against SN = N, reaches the EW site first; the EW increments the sequence number to N+1 and distributes Update A to all sites. TMB's update-request also carries SN = N, which is now less than the EW's sequence number, so it is discarded, with an optional notification of discard sent back. Legend: TE = transaction execution response time; TU = update confirmation response time; markers show where updates are written and where the update-request is discarded.]

  37. A Distributed Processing System Using the EWP
  [Diagram: tasks T1, ..., TI, TJ, TK and exclusive writers EW1, EW2, ..., EWJ attached to an interconnection network; copies of files F1, F2, F3 are replicated across sites. TJ = task J; EWJ = exclusive writer for file FJ; FJ = file J.]

  38. Protocol Diagram for the EWL (Exclusive Writer with Lock Option)
  [Timing diagram: TMA's update-request (SN = N) wins; the EW distributes Update A with SN = N+1. TMB's update-request, also based on SN = N, is stale, but instead of being discarded it is queued: the EW sends Lock-Grant B (the file is locked), TMB is re-executed against the fresh copy, and Update B is distributed with SN = N+2, after which the file is unlocked. Legend: TM execution; file is locked/unlocked; D = inter-computer synchronization delay.]

  39. Comparison of PSL and the EWP
  • PSL:
    • no discarded updates;
    • inter-computer synchronization delays;
    • can cause deadlocks.
  • EWP:
    • conflicting updates are discarded (EWP without lock option);
    • no inter-computer synchronization delays;
    • lower message volume than PSL.
  • Design issues:
    • selection of the primary site or exclusive-writer site;
    • limited replication of shared files.
  • Performance analysis:
    • message volume;
    • response time;
    • processing overhead.

  40. Timing Diagram for Basic Timestamp
  Transactions TMA and TMB only access file FK; FK is replicated at all sites; TS(B) < TS(A).
  [Diagram: Site I executes TMA and distributes Update A with timestamp TS(A); Site J executes TMB and distributes Update B with timestamp TS’(B). Update A arrives late with respect to the timestamp order: it is rejected and the database is rolled back; Update B is accepted; TMA is then re-executed, its update redistributed, and finally accepted. Legend: TE = transaction execution response time; TU = update confirmation response time; TS = timestamp; markers show where updates are written and where the database is rolled back.]

  41. Escrow Locking (Pat O’Neil)
  For updating numerical values, e.g.:
  • money
  • disk space
  Uses a primary site. Lock in advance only the required amount (the estimate may err); release excess resources after execution.
  Advantages:
  • Less lock conflict and therefore more data availability.
  • Less concurrency-control overhead per update; good for long transactions.
  Disadvantages:
  • Weak data consistency.
  • Data are usually inconsistent, but within a certain bound.
  EXAMPLE: A bank account holds $50, and a check for up to $30 must be cleared. Escrow-lock $30; the other $20 is still available. If the check is only for $25, the remaining $5 is returned.
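
A sketch of escrow locking on the bank-account example, under the simplifying assumption of a single in-memory account object:

```python
# Only the escrowed quantity is reserved, so the remainder stays available
# to concurrent transactions; unused escrow is returned on commit.
class EscrowAccount:
    def __init__(self, balance):
        self.balance, self.escrowed = balance, {}

    def escrow(self, tid, amount):
        if amount > self.balance - sum(self.escrowed.values()):
            return False                   # not enough unreserved funds
        self.escrowed[tid] = amount
        return True

    def commit(self, tid, actual):
        reserved = self.escrowed.pop(tid)
        assert actual <= reserved
        self.balance -= actual             # excess is implicitly released

acct = EscrowAccount(50)
acct.escrow("check", 30)                   # clear a check for up to $30
assert not acct.escrow("other", 25)        # only $20 remains available
acct.commit("check", 25)                   # check was only $25; $5 returned
assert acct.balance == 25
```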

  42. Escrow Locking Under Partitioning
  • Similar to PSL: only the primary-site partition can update.
  • The primary site may be isolated in a small partition.
  • Further, escrows may be outstanding when partitioning occurs.
  Solution: grant an escrow amount to each partition,
  • based on user profiles/history, or
  • based on the size/importance of the partitions.

  43. Escrow Locking Under Partitioning (cont’d)
  EXAMPLE:
  • Escrow amount = total amount / number of partitions. For a bank account with $50, if two partitions occur, escrow $25 in each partition (for that partition to use). If some update requires $30, that update will be blocked.
  • Alternatively, based on historical information, give different escrow portions to different partitions, e.g., escrow for partition A = $35, escrow for partition B = $15.
  • Use normal escrow locking within each partition.
  • Reconcile the database afterwards.

  44. Quasi Copies for Federated Databases (Hector Garcia-Molina, IEEE DE 1989)
  • Every database has a single controlling site.
  • Updates are propagated to the other (read-only) sites:
    • if the value changes by percentage p > w;
    • if the value changes by absolute amount a > x;
    • after a timeout period t > y;
    • after a fixed number of updates u > z;
    • or some Boolean combination, e.g., (t > y) AND (p > w).
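
A sketch of these propagation conditions as a staleness predicate; the thresholds w, x, y, z and the function names are illustrative assumptions:

```python
# The controlling site pushes a new value to the read-only copies only
# when the staleness predicate fires.
def must_propagate(old, new, elapsed, updates_seen,
                   w=0.10, x=5.0, y=3600, z=10):
    pct_change = abs(new - old) / abs(old) if old else float("inf")
    return (pct_change > w            # changed by more than w percent
            or abs(new - old) > x     # ... or by more than x absolutely
            or elapsed > y            # ... or copy older than y seconds
            or updates_seen > z)      # ... or z updates have accumulated

# Boolean combinations are also allowed, e.g. (t > y) AND (p > w):
def combined(old, new, elapsed, w=0.10, y=3600):
    return elapsed > y and abs(new - old) / abs(old) > w

assert must_propagate(old=100, new=112, elapsed=60, updates_seen=2)  # 12% > w
```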

  45. Quasi Copies for Federated Databases (cont’d)
  • Advantage:
    • reduced update overhead.
  • Disadvantage:
    • weak concurrency control; remote reads may return out-of-date information, but the staleness is guaranteed to stay within a certain tolerance.
  • Examples:
    • Catalog: prices are guaranteed for one year.
    • Government: old census data might be used to determine current representation.

  46. A Simple Deadlock
  Process 1 holds A and needs B; Process 2 holds B and needs A.
  [Diagram: P1 holds A and waits for B, while P2 holds B and waits for A.]

  47.
  • Deadlock prevention mechanisms
  • Deadlock detection mechanisms

  48. Deadlock Prevention
  • Based on priority.
  • Timestamps:
    • A transaction's timestamp is the time at which it begins execution.
    • Older transactions have higher priority than younger ones.

  49. Timestamp Deadlock Prevention Schemes
  • Assume an older transaction has higher priority than a younger one.
  • Wait-Die: a non-preemptive technique. If Ti requests a lock on a data item which is already locked by Tj, and Ti has higher priority than Tj (i.e., Ti is older than Tj), then Ti is permitted to wait. If Ti is younger than Tj, then Ti is aborted (“dies”) and restarts with the same timestamp.
  “It is better always to restart the younger transaction.”

  50. Timestamp Deadlock Prevention Schemes (cont’d)
  • Wound-Wait: the preemptive counterpart to Wait-Die. Assume Ti requests a lock on a data item which is already locked by Tj. If Ti is younger than Tj, then Ti is permitted to wait. If Ti is older than Tj, then Tj is aborted (“wounded”) and the lock is granted to Ti.
  “Allow older transactions to preempt younger ones, so that only younger transactions wait for older ones.”
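
Both schemes reduce to a comparison of timestamps; the sketch below encodes the two decision rules (smaller timestamp = older = higher priority). Function and return-value names are illustrative:

```python
# Requester Ti finds the item locked by Tj; each rule returns what
# should happen next.
def wait_die(ts_i, ts_j):
    # Non-preemptive: only an older requester may wait.
    return "wait" if ts_i < ts_j else "abort_self"     # younger Ti "dies"

def wound_wait(ts_i, ts_j):
    # Preemptive: an older requester wounds (aborts) the younger holder.
    return "abort_holder" if ts_i < ts_j else "wait"   # younger Ti waits

assert wait_die(1, 2) == "wait"            # old Ti waits for young Tj
assert wait_die(2, 1) == "abort_self"      # young Ti dies, restarts later
assert wound_wait(1, 2) == "abort_holder"  # old Ti preempts young Tj
assert wound_wait(2, 1) == "wait"          # young Ti waits for old Tj
```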
