
Presentation Transcript


  1. Lecture #8 Giant-Scale Services CS492 Special Topics in Computer Science: Distributed Algorithms and Systems

  2. Lessons from Giant-Scale Services Eric Brewer UC Berkeley and Inktomi IEEE Internet Computing July/August 2001

  3. Trade-Offs Memory CPU speed Hard Disk Operating Systems

  4. “Giant-Scale” Services • Key real-world challenges • High availability • Evolution • Growth

  5. Advantages of Giant-Scale Services Access anywhere, anytime Availability via multiple devices Groupware support Lower overall cost Simplified service updates

  6. Basic Model for Giant-Scale Services

  7. Assumptions Service provider has limited control over clients/network Queries drive the service Read-only queries greatly outnumber updates

  8. Google Data Centers [photos of Google data centers; credits: Craig Mitchelldyer/Getty Images, Brian Nettles/The Post and Courier]

  9. Load Management Round-Robin DNS “Layer 4” switch “Layer 7” switch
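
To make the first option concrete, here is a minimal round-robin rotation sketch in Python; the class name and server addresses are invented for illustration. Real round-robin DNS rotates address records and cannot see dead nodes, which is what layer-4 and layer-7 switches improve on.

```python
# A minimal sketch of round-robin load balancing, the simplest of the
# three options above. Names and addresses are illustrative only.
from itertools import cycle

class RoundRobinBalancer:
    """Hands out server addresses in rotation, like round-robin DNS."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        return next(self._servers)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(5):
    print(lb.next_server())   # 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.1, ...
```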

  10. Comparison

  11. Availability Metrics • Uptime • Uptime = (MTBF – MTTR) / MTBF • Mean-time-between-failures (MTBF) • Mean-time-to-repair (MTTR) • Yield = queries completed / queries offered • Harvest = data available / complete data
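
A small worked example of the three formulas above, with hypothetical numbers:

```python
# The metrics from the slide; the inputs are invented for illustration.

def uptime(mtbf, mttr):
    return (mtbf - mttr) / mtbf

def yield_(completed, offered):
    return completed / offered

def harvest(available, complete):
    return available / complete

# A node that fails once a week (MTBF = 168 h) and takes 1 h to repair:
print(uptime(168, 1))               # ~0.994 (99.4% uptime)
# 999,000 of 1,000,000 offered queries completed:
print(yield_(999_000, 1_000_000))   # 0.999
# Half the partitions are down, so only half the data is reachable:
print(harvest(0.5, 1.0))            # 0.5
```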

  12. DQ Principle • Data per query × queries per second → constant • The amount of data that has to be moved per second is bounded by the system’s total I/O bandwidth
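
A back-of-the-envelope illustration of how a fixed DQ budget trades data per query against query rate; the 1,000 MB/s figure is invented:

```python
# Hypothetical DQ budget for one cluster: at saturation, the product of
# data moved per query and queries per second cannot exceed it.
DQ = 1_000                            # MB/s the system can move (invented)

data_per_query_mb = 0.5
print(DQ / data_per_query_mb)         # 2000.0 queries/s at full harvest

# Cutting data per query in half (e.g., reduced harvest under load)
# doubles the sustainable query rate:
print(DQ / (data_per_query_mb / 2))   # 4000.0 queries/s
```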

  13. Yield vs Harvest • Replicated • Map faults to reduced capacity • Yield drops 50% when half dies • Partitioned • Map faults to reduced harvest • Yield remains, but harvest drops 50% • But both halve in DQ • Replicated: Q • Partitioned: D
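
A toy model of the two strategies, assuming uniform load and uniform data placement:

```python
# Sketch: what happens to yield and harvest when k of n nodes fail,
# under the two strategies above (assumptions: uniform load and data).

def replicated(n, k):
    # full data everywhere; lost nodes -> lost capacity (yield)
    return {"yield": (n - k) / n, "harvest": 1.0}

def partitioned(n, k):
    # each node holds 1/n of the data; lost nodes -> lost data (harvest)
    return {"yield": 1.0, "harvest": (n - k) / n}

print(replicated(2, 1))    # {'yield': 0.5, 'harvest': 1.0}
print(partitioned(2, 1))   # {'yield': 1.0, 'harvest': 0.5}
# Either way DQ halves: Q drops (replicated) or D drops (partitioned).
```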

  14. Bottom Line Is … When you double capacity, make sure your DQ doubles

  15. Graceful Degradation • Peak-to-avg ratio = 1.6:1 to 6:1 • Single-event bursts can generate far above-average traffic • Some faults are not independent • Explicit process of managing the effect of saturation • Cost-based access control • Priority or value-based access control • Reduced data freshness
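
One way to picture cost-based admission control is a sketch like the following; the thresholds and query costs are invented, not from the paper:

```python
# Sketch of cost-based admission control under saturation, one of the
# degradation levers listed above. All numbers are hypothetical.

def admit(query_cost_mb, current_load, capacity):
    """Shed expensive queries first as the system approaches saturation."""
    utilization = current_load / capacity
    if utilization < 0.8:
        return True                      # plenty of headroom: admit all
    if utilization < 0.95:
        return query_cost_mb <= 1.0      # shed only expensive queries
    return query_cost_mb <= 0.1          # near saturation: cheap queries only

print(admit(5.0, 700, 1000))    # True  (70% load)
print(admit(5.0, 900, 1000))    # False (90% load, query too expensive)
print(admit(0.05, 990, 1000))   # True  (cheap enough even at 99%)
```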

  16. Online Evolution and Growth • “Internet time” - frequent product releases • Maintenance and upgrades = controlled failures = “online evolution” • Fast reboot • Rolling upgrade • Big flip
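
A rough model of the DQ cost of the three strategies: per the paper, the total DQ loss is about the same for all three, and they differ only in how that loss is spread over time. Node count and per-node upgrade time here are hypothetical:

```python
# Back-of-the-envelope DQ loss (node-seconds of lost capacity) for the
# three upgrade strategies, assuming n nodes and u seconds per node.

def fast_reboot(n, u):   # take everything down at once
    return n * u         # 100% capacity lost for u seconds

def rolling(n, u):       # upgrade one node at a time
    return n * u         # 1/n capacity lost for n*u seconds

def big_flip(n, u):      # upgrade half, flip traffic, upgrade the rest
    return n * u         # 50% capacity lost for 2*u seconds

n, u = 100, 300          # 100 nodes, 5 minutes per node (hypothetical)
print(fast_reboot(n, u), rolling(n, u), big_flip(n, u))   # 30000 x3
```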

  17. Lessons Get the basics right Decide on your availability metrics Focus on MTTR at least as much as MTBF Understand load redirection during faults Graceful degradation Use DQ analysis on all upgrades Automate upgrades as much as possible

  18. Questions from last class How many copies of a chunk? How large is write delay?

  19. The Google File System Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung SOSP 2003

  20. What do you remember about FS?

  21. Distributed File System? How is it different from a local file system?

  22. Motivation Google needed a good distributed file system Redundant storage of massive amounts of data on cheap and unreliable computers Why not use an existing file system? Google’s problems are different from anyone else’s Different workload and design priorities GFS is designed for Google apps and workloads Google apps are designed for GFS

  23. Assumptions • High component failure rates • Inexpensive commodity components fail all the time • “Modest” number of HUGE files • Just a few million • Each is 100MB or larger; multi-GB files typical • Files are write-once, mostly appended to • Perhaps concurrently • Large streaming reads • High sustained throughput favored over low latency

  24. GFS Design Decisions • Files stored as chunks • Fixed size (64MB) • Reliability through replication • Each chunk replicated across 3+ chunkservers • Single master to coordinate access, keep metadata • Simple centralized management • No data caching • Little benefit due to large data sets, streaming reads • Familiar interface, but customize the API • Simplify the problem; focus on Google apps • Add snapshot and record append operations
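
A sketch of the client-side arithmetic implied by fixed-size chunks: translating a byte offset into the (chunk index, offset) pair the client uses when asking the master for chunk locations. Only the 64 MB constant comes from the paper:

```python
# Sketch: mapping a (file, byte offset) read onto a chunk. The client
# sends (filename, chunk index) to the master, which returns the chunk
# handle and replica locations.

CHUNK_SIZE = 64 * 1024 * 1024    # 64 MB fixed-size chunks (from the paper)

def chunk_index(byte_offset):
    return byte_offset // CHUNK_SIZE

def offset_within_chunk(byte_offset):
    return byte_offset % CHUNK_SIZE

pos = 200 * 1024 * 1024          # read at byte 200 MB of a file
print(chunk_index(pos))              # 3 -> the fourth chunk
print(offset_within_chunk(pos))      # 8 MB into that chunk
```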

  25. GFS Architecture Single master Multiple chunkservers …Can anyone see a potential weakness in this design?

  26. Single master • From distributed systems we know this is a: • Single point of failure • Scalability bottleneck • GFS solutions: • Shadow masters • Minimize master involvement • never move data through it, use only for metadata • and cache metadata at clients • large chunk size • master delegates authority to primary replicas in data mutations (chunk leases) • Simple, and good enough!

  27. Metadata (1/2) • Global metadata is stored on the master • File and chunk namespaces • Mapping from files to chunks • Locations of each chunk’s replicas • All in memory (64 bytes / chunk) • Fast • Easily accessible
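
Rough arithmetic showing why keeping all metadata in memory is feasible; the 1 PB figure is hypothetical, while the ~64 bytes/chunk figure is from the slide:

```python
# Why "all in memory" works: metadata per chunk is tiny relative to
# the chunk itself.

CHUNK_SIZE = 64 * 1024**2        # 64 MB chunks
META_PER_CHUNK = 64              # ~64 bytes of master metadata per chunk

stored = 1 * 1024**5             # 1 PB of file data (hypothetical)
chunks = stored // CHUNK_SIZE
print(chunks)                               # 16,777,216 chunks
print(chunks * META_PER_CHUNK / 1024**3)    # 1.0 -> ~1 GB of master RAM
```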

  28. Metadata (2/2) • Master has an operation log for persistent logging of critical metadata updates • persistent on local disk • replicated • checkpoints for faster recovery
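
A minimal sketch of the recovery idea, checkpoint plus log replay; the record format and state shape are invented for illustration:

```python
# Recover master state by loading the last checkpoint, then replaying
# only the log records written after it. The dict is a toy stand-in
# for the master's metadata.

def recover(checkpoint_state, checkpoint_seq, log):
    state = dict(checkpoint_state)      # start from the checkpoint
    for seq, op, key, value in log:
        if seq <= checkpoint_seq:
            continue                    # already reflected in checkpoint
        if op == "set":
            state[key] = value
        elif op == "del":
            state.pop(key, None)
    return state

log = [(1, "set", "/a", "chunk-1"), (2, "set", "/b", "chunk-2"),
       (3, "del", "/a", None)]
print(recover({"/a": "chunk-1"}, 1, log))   # {'/b': 'chunk-2'}
```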

  29. Mutations • Mutation = write or append • must be done for all replicas • Goal: minimize master involvement • Lease mechanism: • master picks one replica as primary; gives it a “lease” for mutations • primary defines a serial order of mutations • all replicas follow this order • Data flow decoupled from control flow
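
A toy sketch of the key idea: the primary assigns one serial order and every replica applies mutations in that order. Class and method names are illustrative, not GFS interfaces:

```python
# The primary replica (lease holder) serializes mutations; secondaries
# apply them in the assigned order, so all replicas converge.

class PrimaryReplica:
    def __init__(self):
        self.next_serial = 0

    def order(self, mutation):
        """Assign a global serial number to an incoming mutation."""
        serial = self.next_serial
        self.next_serial += 1
        return (serial, mutation)

class Replica:
    def __init__(self):
        self.log = []

    def apply(self, ordered_mutation):
        self.log.append(ordered_mutation)   # same order on every replica

primary, secondary = PrimaryReplica(), Replica()
for m in ["write A", "append B"]:
    secondary.apply(primary.order(m))
print(secondary.log)   # [(0, 'write A'), (1, 'append B')]
```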

  30. Atomic record append • Client specifies data • GFS appends it to the file atomically at least once • GFS picks the offset • works for concurrent writers • Used heavily by Google apps • e.g., for files that serve as multiple-producer/single-consumer queues
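
A toy model of the contract: GFS, not the client, picks the offset, and retries after failures can leave duplicates, i.e. at-least-once semantics. This is an illustration, not the real client library:

```python
# Toy record append: the file chooses the offset, and a failed attempt
# is simply retried, which can leave a duplicate copy of the record.
import random

class ToyGFSFile:
    def __init__(self):
        self.records = []

    def record_append(self, data):
        """Append atomically; retry on failure -> at-least-once."""
        while True:
            offset = len(self.records)   # the file picks the offset
            self.records.append(data)
            if random.random() > 0.3:    # simulated success
                return offset
            # simulated failure after a partial write: retry,
            # leaving the earlier copy behind as a duplicate

f = ToyGFSFile()
print(f.record_append(b"event-1"))   # some offset; duplicates possible
```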

  31. Relaxed consistency model (1/2) • “Consistent” = all replicas have the same value • “Defined” = replica reflects the mutation, consistent • Some properties: • concurrent writes leave region consistent, but possibly undefined • failed writes leave the region inconsistent • Some work has moved into the applications: • e.g., self-validating, self-identifying records
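
A sketch of what a self-validating, self-identifying record might look like on the application side; the JSON layout, CRC choice, and helper names are assumptions, not from the paper:

```python
# A checksum lets readers detect padding/corruption; a unique ID lets
# them skip the duplicates that record append can leave behind.
import json, uuid, zlib

def make_record(payload: bytes) -> bytes:
    rec = {"id": str(uuid.uuid4()),          # identifies duplicates
           "crc": zlib.crc32(payload),       # validates the payload
           "data": payload.decode()}
    return json.dumps(rec).encode()

def read_records(blobs):
    seen = set()
    for blob in blobs:
        try:
            rec = json.loads(blob)
        except ValueError:
            continue                                        # garbage/padding
        if zlib.crc32(rec["data"].encode()) != rec["crc"]:
            continue                                        # corrupt
        if rec["id"] in seen:
            continue                                        # duplicate
        seen.add(rec["id"])
        yield rec["data"]

blobs = [make_record(b"hello")] * 2 + [b"\x00garbage"]
print(list(read_records(blobs)))   # ['hello'] exactly once
```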

  32. Relaxed consistency model (2/2) • Simple, efficient • Google apps can live with it • what about other apps? • Namespace updates atomic and serializable

  33. Master’s Responsibilities (1/2) • Metadata storage • Namespace management/locking • Periodic communication with chunkservers • give instructions, collect state, track cluster health • Chunk creation, re-replication, rebalancing • balance space utilization and access speed • spread replicas across racks to reduce correlated failures • re-replicate data if redundancy falls below threshold • rebalance data to smooth out storage and request load

  34. Master’s responsibilities (2/2) • Garbage Collection • simpler, more reliable than traditional file delete • master logs the deletion, renames the file to a hidden name • lazily garbage collects hidden files • Stale replica deletion • detect “stale” replicas using chunk version numbers
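
A tiny sketch of stale-replica detection with version numbers; the chunk and chunkserver names are invented:

```python
# The master bumps a chunk's version when granting a new lease; any
# replica that missed the bump (e.g., it was down) is detectably stale.

master_version = {"chunk-42": 7}
replica_versions = {"cs1": 7, "cs2": 7, "cs3": 6}   # cs3 missed a mutation

stale = [cs for cs, v in replica_versions.items()
         if v < master_version["chunk-42"]]
print(stale)   # ['cs3'] -> this replica gets garbage-collected
```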

  35. Fault Tolerance • High availability • fast recovery • master and chunkservers restartable in a few seconds • chunk replication • default: 3 replicas. • shadow masters • Data integrity • checksum every 64KB block in each chunk
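
A sketch of per-block checksumming on read; zlib.crc32 stands in for whatever 32-bit checksum a chunkserver actually uses, and only the 64 KB block size comes from the paper:

```python
# Checksum each 64 KB block of a chunk; verify a block before serving
# it, and fall back to another replica on a mismatch.
import zlib

BLOCK = 64 * 1024   # 64 KB blocks, as in the paper

def checksum_chunk(data: bytes):
    return [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def verify_block(data: bytes, sums, block_no):
    block = data[block_no * BLOCK:(block_no + 1) * BLOCK]
    return zlib.crc32(block) == sums[block_no]

chunk = bytes(200 * 1024)                 # a 200 KB chunk -> 4 blocks
sums = checksum_chunk(chunk)
print(verify_block(chunk, sums, 2))       # True
corrupted = chunk[:BLOCK] + b"\xff" + chunk[BLOCK + 1:]
print(verify_block(corrupted, sums, 1))   # False -> read another replica
```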

  36. Performance

  37. Deployment in Google Many GFS clusters, with hundreds/thousands of storage nodes each, managing petabytes of data GFS is the storage layer under BigTable, etc.

  38. Conclusion • GFS demonstrates how to support large-scale processing workloads on commodity hardware • design to tolerate frequent component failures • optimize for huge files that are mostly appended and read • feel free to relax and extend FS interface as required • go for simple solutions (e.g., single master) • GFS has met Google’s storage needs… it must be good!

  39. Discussion • How many sys-admins does it take to run a system like this? • much of management is built in • Is GFS useful as a general-purpose commercial product? • small write performance not good enough? • relaxed consistency model
