
Dynamo: Amazon's Highly Available Key-value Store


Presentation Transcript


  1. Dynamo: Amazon's Highly Available Key-value Store
  Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swami Sivasubramanian, Peter Vosshall, and Werner Vogels

  2. Cloud Data Services & Relational Database Systems go hand in hand
  • Oracle, Microsoft SQL Server, and even MySQL have traditionally powered enterprise and online data clouds
  • Clustered – traditional enterprise RDBMS provide the ability to cluster and replicate data over multiple servers, providing reliability
  • Highly Available – synchronization ("always consistent"), load-balancing, and high-availability features provide nearly 100% service uptime
  • Structured Querying – complex data models and structured querying make it possible to off-load much of the data processing and manipulation to the back-end database

  3. Traditional RDBMS clouds are EXPENSIVE! – to maintain, license, and store large amounts of data
  • The service guarantees of traditional enterprise relational databases like Oracle put high overheads on the cloud
  • Complex data models make the cloud more expensive to maintain, update, and keep synchronized
  • Load distribution often requires expensive networking equipment
  • Maintaining the "elasticity" of the cloud often requires expensive upgrades to the network

  4. The Solution
  • Downgrade some of the service guarantees of traditional RDBMS
  • Replace the highly complex data models Oracle and SQL Server offer with a simpler one – this means classifying services based on the complexity of the data model they require
  • Replace the "always consistent" synchronization guarantee with an "eventually consistent" model – this means classifying services based on how up to date their data set must be
  Redesign or distinguish between services that require a simpler data model and lower expectations on consistency. We could then offer something different from traditional RDBMS!

  5. The Solution
  • Amazon's Dynamo – used by Amazon's EC2 cloud hosting service; powers their storage service S3 as well as their e-commerce platform. Offers a simple primary-key based data model and stores vast amounts of information on distributed, low-cost virtualized nodes
  • Google's BigTable – Google's principal data cloud for their services. Uses a more complex column-family data model than Dynamo, yet much simpler than a traditional RDBMS; Google's underlying file system provides the distributed architecture on low-cost nodes
  • Facebook's Cassandra – Facebook's principal data cloud for their services; this project was recently open-sourced. Provides a data model similar to Google's BigTable, but with the distributed characteristics of Amazon's Dynamo

  6. What is Dynamo?
  • Dynamo is a highly available distributed key-value storage system
  • put()/get() interface
  • Sacrifices consistency for availability
  • Provides storage for some of Amazon's key products (e.g., shopping carts, best-seller lists)
  • Uses a "synthesis of well known techniques to achieve scalability and availability": consistent hashing, object versioning, conflict resolution, etc.

  7. Scale
  • Amazon is busy during the holidays
  • Shopping cart: tens of millions of requests for 3 million checkouts in a single day
  • Session state system: hundreds of thousands of concurrently active sessions
  • Failure is common: a small but significant number of server and network failures at all times
  • "Customers should be able to view and add items to their shopping cart even if disks are failing, network routes are flapping, or data centers are being destroyed by tornados."

  8. Flexibility
  • Minimal need for manual administration: nodes can be added or removed without manual partitioning or redistribution
  • Apps can control availability, consistency, cost-effectiveness, and performance
  • Can developers know this up front? Can it be changed over time?

  9. System Assumptions & Requirements
  • Query model: simple read and write operations to a data item that is uniquely identified by a key; values are small (<1 MB) binary objects
  • No ACID properties (atomicity, consistency, isolation, durability)
  • Efficiency: latency requirements are in general measured at the 99.9th percentile of the distribution
  • Other assumptions: the operating environment is assumed to be non-hostile, and there are no security-related requirements such as authentication and authorization

  10. Service Level Agreements
  • SLAs are used widely at Amazon; sub-services must meet strict SLAs (e.g., 300 ms response time for 99.9% of requests at a peak load of 500 requests/s)
  • Average-case SLAs are not good enough; a cost-benefit analysis showed that 99.9% is the right number
  • Rendering a single page can make requests to 150 services
  [Figure: service-oriented architecture of Amazon's platform]
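Why the 99.9th percentile rather than the mean? A minimal sketch (the latency values are invented for illustration) showing how an average can hide the tail that SLAs at Amazon are written against:

```python
# Hypothetical latencies in ms: almost all fast, a few slow outliers.
latencies = [10] * 997 + [900, 950, 1000]

def percentile(samples, p):
    """Return the p-th percentile using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, int(round(p / 100.0 * len(ordered))))
    return ordered[rank - 1]

avg = sum(latencies) / len(latencies)
p999 = percentile(latencies, 99.9)
# The mean looks healthy; the 99.9th percentile exposes the slow tail
# that 1-in-1000 customers would actually experience.
print(avg, p999)
```

An SLA stated as "300 ms at the 99.9th percentile" would fail here even though the average is far below 300 ms.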

  11. Sacrifice strong consistency for availability
  • Eventual consistency: "always writable" – you can always write to the shopping cart
  • Pushes conflict resolution to reads
  • Application-driven conflict resolution (e.g., merge conflicting shopping carts), or Dynamo enforces last-writer-wins
  • How often does this work?
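A minimal sketch of application-driven conflict resolution for the shopping-cart case (a hypothetical resolver, not Amazon's code): union the divergent versions so that "add to cart" operations are never lost, which is exactly why deleted items can resurface.

```python
def merge_carts(versions):
    """Merge conflicting cart versions by taking, per item, the maximum
    quantity seen on any branch. Adds are never lost; the trade-off is
    that an item deleted on one branch can resurface after the merge."""
    merged = {}
    for cart in versions:
        for item, qty in cart.items():
            merged[item] = max(merged.get(item, 0), qty)
    return merged

# Two divergent replicas of the same cart:
a = {"book": 1, "toaster": 1}
b = {"book": 2}             # the toaster was deleted on this branch
print(merge_carts([a, b]))  # {'book': 2, 'toaster': 1} — the delete resurfaces
```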

  12. Other Design Principles
  • Incremental scalability: minimal management overhead
  • Symmetry: no master/slave nodes
  • Decentralized: centralized control leads to too many failures
  • Heterogeneity: exploit the capabilities of different nodes

  13. Summary of techniques used in Dynamo and their advantages

  14. Interface
  • get(key) returns the object replica(s) for key, plus a context object
  • The context encodes metadata and is opaque to the caller
  • put(key, context, object) stores object
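The two-call interface can be sketched as follows (a toy in-process stand-in with hypothetical names, not Dynamo's actual API): the point is the shape of the calls, with the context handed back to the caller and passed in again unmodified.

```python
class DynamoClient:
    """Sketch of the get/put interface described on the slide above."""

    def __init__(self):
        self._store = {}  # key -> (context, list of object versions)

    def get(self, key):
        """Return (object replicas for key, opaque context object)."""
        context, versions = self._store.get(key, (None, []))
        return versions, context

    def put(self, key, context, obj):
        """Store obj under key; the caller passes the context from get()
        so the system can track version metadata opaquely."""
        self._store[key] = (context, [obj])

client = DynamoClient()
client.put("cart:42", None, {"book": 1})
versions, ctx = client.get("cart:42")
print(versions)  # [{'book': 1}]
```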

  15. Variant of consistent hashing
  [Ring diagram: nodes A–G with key K placed on the ring]
  • Each node is assigned to multiple points in the ring (e.g., B, C, D store key range (A, B])
  • The number of points can be assigned based on a node's capacity
  • If a node becomes unavailable, its load is distributed to the others
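The virtual-node variant can be sketched in a few lines (node names and token counts are illustrative; the paper does use MD5 to place keys on the ring): each physical node owns several points, and a key belongs to the first point clockwise from its hash.

```python
import hashlib
from bisect import bisect_right

def ring_pos(s):
    """128-bit position on the ring (MD5, as in the Dynamo paper)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Consistent hashing with virtual nodes ('tokens') per physical node."""

    def __init__(self, nodes, vnodes=8):
        # Each physical node owns vnodes points on the ring; a more capable
        # node could simply be given more points (uniform here).
        self.points = sorted(
            (ring_pos(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )

    def owner(self, key):
        """The first virtual node clockwise from the key's position."""
        positions = [p for p, _ in self.points]
        i = bisect_right(positions, ring_pos(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["A", "B", "C", "D"])
print(ring.owner("cart:42"))
```

Because a failed node's tokens are scattered around the ring, its keys are redistributed across many survivors rather than dumped on one neighbor.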

  16. Replication
  [Ring diagram: B is the coordinator for key K; D stores ranges (A, B], (B, C], (C, D]]
  • The coordinator maintains a preference list for each data item specifying the nodes storing that item
  • The preference list skips virtual nodes in favor of distinct physical nodes
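Building the preference list can be sketched as a clockwise walk from the key's position that skips repeated physical nodes (the ring below is a hypothetical hand-built one, with small integer positions for readability):

```python
from bisect import bisect_right

def preference_list(ring_points, key_pos, n):
    """Walk clockwise from key_pos over (position, physical node) pairs,
    skipping virtual nodes whose physical node was already chosen, until
    n distinct physical nodes are collected."""
    positions = [p for p, _ in ring_points]
    i = bisect_right(positions, key_pos)
    chosen = []
    while len(chosen) < n:
        node = ring_points[i % len(ring_points)][1]
        if node not in chosen:  # skip extra virtual nodes of the same host
            chosen.append(node)
        i += 1
    return chosen

# Hypothetical ring: A and B each own two points, C and D one each.
ring = [(10, "A"), (20, "B"), (30, "A"), (40, "C"), (50, "B"), (60, "D")]
print(preference_list(ring, 15, 3))  # ['B', 'A', 'C']
```

Note how A's second token at position 30 is skipped: replicas land on three distinct physical machines, not three tokens.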

  17. Data versioning
  • put() can return before the update is applied to all replicas, so subsequent get()s can return older versions – this is okay for shopping carts
  • Branched versions are collapsed; deleted items can resurface
  • A vector clock is associated with each object version
  • Comparing vector clocks can determine whether two versions are parallel branches or causally ordered
  • Vector clocks are passed via the context object in get()/put() – must the application maintain this metadata?

  18. Vector Clock
  • A vector clock is a list of (node, counter) pairs
  • Every version of every object is associated with one vector clock
  • If every counter in the first object's clock is less than or equal to the corresponding counter in the second object's clock, then the first version is an ancestor of the second and can be forgotten
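The ancestor test on the slide is a few lines of code. A minimal sketch, representing a vector clock as a dict of node → counter (node names like "Sx" are illustrative):

```python
def descends(a, b):
    """True if clock a is an ancestor of (or equal to) clock b: every
    counter in a is <= the corresponding counter in b (missing = 0)."""
    return all(count <= b.get(node, 0) for node, count in a.items())

def concurrent(a, b):
    """Neither descends from the other: parallel branches that must be
    reconciled (by the application or by last-writer-wins)."""
    return not descends(a, b) and not descends(b, a)

v1 = {"Sx": 1}
v2 = {"Sx": 2}            # Sx wrote again: v1 is an ancestor, can be forgotten
v3 = {"Sx": 1, "Sy": 1}   # a write handled by a different node: divergence
print(descends(v1, v2), concurrent(v2, v3))  # True True
```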

  19. Vector clock example

  20. "Quorum-likeness"
  • get() & put() are driven by two parameters: R, the minimum number of replicas to read, and W, the minimum number of replicas to write
  • R + W > N yields a "quorum-like" system
  • Latency is dictated by the slowest of the R (or W) replicas
  • Sloppy quorum to tolerate failures: replicas can be stored on healthy nodes downstream in the ring, with metadata specifying that the replica should be sent to the intended recipient later
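The sloppy-quorum write path can be sketched as follows (a hypothetical helper, not Dynamo's actual code): walk the preference list, substitute healthy downstream nodes for failed ones, and tag each substitute with metadata naming the node the replica was meant for, so it can be handed off later.

```python
def sloppy_quorum_targets(preference, healthy, n):
    """Pick the first n healthy nodes walking the preference list.
    A replica stored on a stand-in node carries a hint naming the
    failed node it should be handed off to when that node recovers."""
    intended = preference[:n]
    down = iter(node for node in intended if node not in healthy)
    targets, hints = [], {}
    for node in preference:
        if node in healthy and node not in targets:
            targets.append(node)
            if node not in intended:
                hints[node] = next(down)  # hinted-handoff destination
        if len(targets) == n:
            break
    return targets, hints

pref = ["A", "B", "C", "D", "E"]
print(sloppy_quorum_targets(pref, healthy={"A", "C", "D"}, n=3))
# (['A', 'C', 'D'], {'D': 'B'}) — D holds B's replica plus a hint
```

With N=3, R=2, W=2 (so R + W > N), any read quorum overlaps any write quorum in at least one replica, which is what makes the system "quorum-like".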

  21. Adding and removing nodes
  • Explicit commands are issued via a CLI or browser
  • A gossip-style protocol propagates membership changes among nodes
  • A new node chooses its virtual nodes in the hash space

  22. Implementation
  • Persistent store: Berkeley DB Transactional Data Store, BDB Java Edition, MySQL, or an in-memory buffer with a persistent backend
  • All in Java!
  • A common (N, R, W) setting is (3, 2, 2)
  • The reported results are from several hundred nodes configured as (3, 2, 2); it is not clear whether they run in a single datacenter

  23. [Latency graph; one tick = 12 hours]

  24. [Latency graph; one tick = 1 hour]

  25. [Load graph; one tick = 30 minutes]
  • During periods of high load, popular objects dominate
  • During periods of low load, fewer popular objects are accessed

  26. Quantifying divergent versions
  • In a 24-hour trace: 99.94% of requests saw exactly one version; 0.00057% received 2 versions; 0.00047% received 3 versions; 0.00009% received 4 versions
  • Experience showed that divergence usually came from concurrent writes by automated client programs (robots), not humans

  27. Conclusions
  • Scalable: easy to shovel in more capacity at Christmas
  • Simple: get()/put() maps well to Amazon's workload
  • Flexible: apps can set N, R, W to match their needs
  • Inflexible: apps have to set N, R, W to match their needs; apps may have to do their own conflict resolution
  • They claim it's easy to set these – does this mean there aren't many interesting points?
  • Interesting?
