Replication Management

Motivations for Replication

  • Performance enhancement

  • Increased availability

  • Fault tolerance


General Requirements

  • Replication transparency

  • Consistency


An Architecture for Replication Management

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Phases of Request Processing

  • Issuance: unicast or multicast (from the front end to replica managers)

  • Coordination

  • Execution

  • Agreement

  • Response

    * The ordering varies for different systems.
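
The phases can be pictured as steps in a single request handler. Below is a minimal Python sketch; the class and method names (Request, ReplicaManager, coordinate, agree) are hypothetical and not from the slides or the textbook, and real systems interleave or reorder these phases.

  from dataclasses import dataclass
  from typing import Any, Callable, Dict

  @dataclass
  class Request:
      op: Callable[[Dict[str, Any]], Any]   # operation to apply to the replicated state

  class ReplicaManager:
      def __init__(self, peers):
          self.peers = peers                 # the other replica managers
          self.state: Dict[str, Any] = {}    # this replica's copy of the data

      def handle(self, request: Request):
          # Issuance: the front end has already unicast or multicast `request` to us.
          # Coordination: agree with peers on whether and in what order to apply it.
          order = self.coordinate(request)
          # Execution: apply the operation (perhaps only tentatively).
          result = request.op(self.state)
          # Agreement: reach consensus that the effect can become permanent.
          self.agree(request, order)
          # Response: reply to the front end.
          return result

      def coordinate(self, request): ...     # placeholder: ordering protocol goes here
      def agree(self, request, order): ...   # placeholder: commitment protocol goes here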


Services for Process Groups

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


View-Synchronous Group Communications

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Sequential Consistency

  • The one-copy semantics of the replicated objects is respected.

  • The order of operations is preserved for each client.
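
As a toy illustration (mine, not the slides'), the second requirement can be checked mechanically: an interleaving of all clients' operations on the single logical copy is admissible only if it keeps each client's operations in program order. The function below checks just that ordering condition; full sequential consistency additionally requires each read to return the value written by the latest preceding write in the chosen interleaving.

  def respects_program_order(interleaving, programs):
      """interleaving: list of (client, op); programs: dict client -> list of ops."""
      seen = {c: 0 for c in programs}
      for client, op in interleaving:
          if op != programs[client][seen[client]]:
              return False                   # this client's ops appear out of program order
          seen[client] += 1
      return all(seen[c] == len(ops) for c, ops in programs.items())

  programs = {"T": ["write(x,1)", "read(y)"], "U": ["write(y,2)", "read(x)"]}
  ok  = [("T", "write(x,1)"), ("U", "write(y,2)"), ("U", "read(x)"), ("T", "read(y)")]
  bad = [("T", "read(y)"), ("T", "write(x,1)"), ("U", "write(y,2)"), ("U", "read(x)")]
  print(respects_program_order(ok, programs), respects_program_order(bad, programs))  # True False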


The Primary-Backup Model

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Active Replication

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


The Gossip Architecture

  • A framework for providing high availability of service through lazy replication

  • A request is normally executed at just one replica.

  • Replicas are updated by the lazy exchange of gossip messages (containing the most recent updates).


Operations in a Gossip Service

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Timestamps

  • Each front end keeps a vector timestamp reflecting the latest version accessed.

  • The timestamp is attached to every request sent to a replica.

  • Two front ends may exchange messages directly; these messages also carry timestamps.

  • Timestamps are merged in the usual way, by taking the componentwise maximum (see the sketch below).
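
A minimal sketch of these vector timestamps in Python (the function names are mine): merging takes the componentwise maximum, and one timestamp is covered by another when every component is less than or equal to the corresponding component.

  def merge(t1, t2):
      """Componentwise maximum of two vector timestamps of equal length."""
      return [max(a, b) for a, b in zip(t1, t2)]

  def leq(t1, t2):
      """True if t1 <= t2 componentwise, i.e. t2 reflects everything t1 does."""
      return all(a <= b for a, b in zip(t1, t2))

  prev = [2, 4, 0]          # latest versions (per replica) the front end has seen
  new = [2, 5, 1]           # timestamp carried by a reply
  prev = merge(prev, new)   # the front end advances its view to [2, 5, 1]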


Timestamps (cont’d)

  • Each replica keeps a replica timestamp representing those updates it has received.

  • It also keeps a value timestamp, reflecting the updates in the replicated value.

  • The replica timestamp is attached to the reply to an update, while the value timestamp is attached to the reply to a query.


Timestamp Propagations

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


The Update Log

  • Every update, when received by a replica, is recorded in the update log of the replica.

  • Two reasons for keeping a log:

    * The update cannot be applied yet; it is held back.

    * It is uncertain if the update has been received by all replicas.

  • The entries are sorted by timestamps.


The Executed Operation Table

  • The same update may arrive at a replica from a front end and in a gossip message from another replica.

  • To prevent an update from being applied twice, the replica keeps a list of identifiers of the updates that have been applied so far.
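
The pieces described on the last three slides (the two timestamps, the update log, and the executed operation table) can be collected into one per-replica record. This is only a sketch with field names of my choosing (value_ts, replica_ts, log, executed_ops); it is not the textbook's code.

  from dataclasses import dataclass, field
  from typing import Any, Callable, Dict, List, Set

  @dataclass
  class LogRecord:
      origin: int                  # index of the replica that accepted the update
      ts: List[int]                # unique timestamp assigned to the update
      op: Callable[[Dict], None]   # the update operation itself
      prev: List[int]              # u.prev, the front end's timestamp
      uid: Any                     # update identifier supplied by the front end

  @dataclass
  class GossipReplica:
      value: Dict = field(default_factory=dict)             # the replicated value
      value_ts: List[int] = field(default_factory=list)     # updates reflected in the value
      replica_ts: List[int] = field(default_factory=list)   # updates received so far
      log: List[LogRecord] = field(default_factory=list)    # held-back / unconfirmed updates
      executed_ops: Set[Any] = field(default_factory=set)   # uids already applied (duplicate filter)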


A Gossip Replica Manager

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Processing Query Requests

  • A query request q carries a timestamp q.prev, reflecting the latest version of the value that the front end has seen.

  • Request q can be applied (i.e., it is stable) once q.prev ≤ valueTS, where valueTS is the value timestamp of the replica that received q.

  • Once q is applied, the replica returns the current valueTS along with the reply.
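
A sketch of the query path, reusing the GossipReplica record from the earlier sketch; in a real replica manager a query that is not yet stable would be queued until gossip catches the value up, rather than rejected as here.

  def process_query(r: "GossipReplica", q_prev, query_op):
      # Stability condition: the replica's value must reflect everything the
      # front end has already seen, i.e. q.prev <= valueTS componentwise.
      if not all(a <= b for a, b in zip(q_prev, r.value_ts)):
          return None                     # not stable yet: hold the query back
      result = query_op(r.value)          # apply q to the current value
      return result, list(r.value_ts)     # the reply carries valueTS as the new timestamp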


Processing Update Requests

  • For an update u (not a duplicate), replica i

    * increments the i-th element of its replica timestamp replicaTS by one,

    * adds an entry to the log with a timestamp ts derived from u.prev by replacing the i-th element with that of replicaTS, and

    * returns ts to the front end immediately.

  • When the stability condition u.prev ≤ valueTS holds, update u is applied and its timestamp ts is merged into valueTS (see the sketch below).
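
The same steps in Python, again reusing the GossipReplica and LogRecord sketches (hypothetical names); a fuller implementation would also check the log itself for duplicates and would keep applied records until they are known everywhere.

  def process_update(r: "GossipReplica", i: int, u_prev, u_op, u_uid):
      if u_uid in r.executed_ops:          # duplicate: already applied, ignore
          return None
      r.replica_ts[i] += 1                 # step 1: one more update accepted by replica i
      ts = list(u_prev)
      ts[i] = r.replica_ts[i]              # step 2: derive the unique timestamp ts from u.prev
      r.log.append(LogRecord(i, ts, u_op, list(u_prev), u_uid))
      # step 3: ts goes back to the front end immediately; the update is applied
      # only once the stability condition u.prev <= valueTS holds.
      if all(a <= b for a, b in zip(u_prev, r.value_ts)):
          u_op(r.value)                                                # apply u to the value
          r.value_ts = [max(a, b) for a, b in zip(r.value_ts, ts)]     # merge ts into valueTS
          r.executed_ops.add(u_uid)
      return ts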


Processing Gossip Messages

  • For every gossip message received, a replica does the following:

    * Merge the arriving log with its own; duplicate updates are discarded.

    * Apply updates that have become stable.

  • A gossip message need not contain the entire log if the sender can be certain that some of its updates have already been seen by the receiving replica.
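
A sketch of gossip handling in the same style as the earlier sketches; it omits details such as ordering newly stable updates consistently with their u.prev timestamps and discarding log records that every replica is known to hold.

  def process_gossip(r: "GossipReplica", incoming_log, incoming_replica_ts):
      # Merge the arriving log with our own, discarding duplicates.
      known = {rec.uid for rec in r.log} | r.executed_ops
      for rec in incoming_log:
          if rec.uid not in known:
              r.log.append(rec)
      # The replica now knows about everything the sender knew about.
      r.replica_ts = [max(a, b) for a, b in zip(r.replica_ts, incoming_replica_ts)]
      # Keep applying log records whose stability condition has become true.
      progress = True
      while progress:
          progress = False
          for rec in r.log:
              if rec.uid not in r.executed_ops and all(
                      a <= b for a, b in zip(rec.prev, r.value_ts)):
                  rec.op(r.value)
                  r.value_ts = [max(a, b) for a, b in zip(r.value_ts, rec.ts)]
                  r.executed_ops.add(rec.uid)
                  progress = True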


Updates in Bayou

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


About Bayou

  • Consistency guarantees

  • Merging of updates

  • Dependency checks

  • Merge procedures


Coda vs. AFS

  • More general replication

  • Greater tolerance toward server crashes

  • Support for disconnected operation


Transactions with Replicated Data

  • A replicated transactional service should appear the same as one without replicated data.

  • The effects of transactions performed by various clients on replicated data are the same as if they had been performed one at a time on single data items; this property is called one-copy serializability.


Transactions with Replicated Data (cont’d)

  • Failures should be serialized with respect to transactions.

  • Any failure observed by a transaction must appear to have happened before the transaction started.


Schemes for One-Copy Serializability

  • Read one/write all

  • Available copies replication

  • Schemes that also tolerate network partitioning:

    * available copies with validation

    * quorum consensus

    * virtual partition


Transactions on Replicated Data

[Figure: clients T and U, each behind a front end, perform getBalance(A) and deposit(B,3) operations; groups of replica managers hold copies of the data items A and B.]

Source: Instructor’s guide for G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Available Copies Replication

  • A client's read request on a logical data item may be performed by any available replica, but a client's update request must be performed by all available replicas.

  • A local validation procedure is required to ensure that any failure or recovery does not appear to happen during the progress of a transaction.
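
A minimal read-one / write-all-available sketch in Python (the Replica class and its alive flag are hypothetical; the local validation performed at commit time is not shown):

  from dataclasses import dataclass, field
  from typing import Any, Dict

  @dataclass
  class Replica:
      alive: bool = True                                 # reachable and not crashed
      store: Dict[str, Any] = field(default_factory=dict)

  class NoAvailableReplica(Exception):
      pass

  def read(replicas, item):
      for r in replicas:                                 # a read may go to any available replica
          if r.alive:
              return r.store.get(item)
      raise NoAvailableReplica(item)

  def write(replicas, item, value):
      available = [r for r in replicas if r.alive]
      if not available:
          raise NoAvailableReplica(item)
      for r in available:                                # an update must reach all available replicas
          r.store[item] = value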


Available Copies Replication (cont’d)

[Figure: transaction T performs getBalance(A) and deposit(B,3), and transaction U performs getBalance(B) and deposit(A,3), each through a front end; copies of A are held by replica managers X and Y, and copies of B by replica managers M, N and P.]

Source: Instructor’s guide for G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Network Partition

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Available Copies with Validation

  • The available copies algorithm is applied within each partition.

  • When a partition is repaired, the possibly conflicting transactions that took place in the separate partitions are validated.

  • If the validation fails, some of the transactions have to be aborted.


Quorum Consensus Methods

  • One way to prevent inconsistency between partitions is to adopt the rule that an operation can be carried out in only one of the partitions.

  • A quorum is a subgroup of replicas whose size gives it the right to execute operations.

  • Version numbers or timestamps may be used to determine whether copies of the data item are up to date.
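
A toy version-number-based quorum sketch (the Copy class and the prefix choice of quorum members are mine; Gifford-style weighted voting requires R + W > N so that every read quorum overlaps every write quorum, and W > N/2 so that two writes cannot proceed in disjoint partitions):

  from dataclasses import dataclass
  from typing import Any

  @dataclass
  class Copy:
      version: int = 0
      value: Any = None

  def quorum_read(copies, R):
      quorum = copies[:R]                              # any R reachable copies
      newest = max(quorum, key=lambda c: c.version)    # version numbers identify the current copy
      return newest.value, newest.version

  def quorum_write(copies, W, value):
      quorum = copies[:W]                              # any W reachable copies
      new_version = max(c.version for c in quorum) + 1
      for c in quorum:
          c.version, c.value = new_version, value

  copies = [Copy() for _ in range(5)]                  # N = 5 copies, one vote each
  R, W = 2, 4                                          # R + W > N and W > N / 2
  quorum_write(copies, W, "balance = 42")
  print(quorum_read(copies, R))                        # ('balance = 42', 1)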


An Example for Quorum Consensus

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Two Network Partitions

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Virtual Partition

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Overlapping Virtual Partitions

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.


Creating Virtual Partitions

Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.

