Presentation Transcript


  1. Question • Scalability vs Elasticity • What is the difference?

  2. Homework 1 • Installing the open source cloud Eucalyptus in SEC3429 • Individual assignment • Will need two machines – one machine to help with the installation and one machine on which to install the cloud, so BRING YOUR LAPTOP • A step-by-step guide is provided to help you, but you will also need to use the Eucalyptus Installation Guide • When you are done you will have a cloud with a VM instance running on it • You can use it for future work; if not, you can at least say you have installed a cloud and a VM image

  3. Components of Eucalyptus • CLC – cloud controller • Walrus – Eucalyptus' equivalent of Amazon's S3, used for storing VM images • SC – storage controllers • CC – cluster controllers • NC – node controllers

  4. Cloud Controller - CLC • Java program (EC2 compatible interface) and web interface • Administrative interface for cloud management • Resource scheduling • Authentication, accounting, reporting • Only one CLC per cloud

  5. Walrus • Written in Java (equivalent to AWS Simple Storage Service, S3) • Persistent storage accessible to all VMs, holding: • VM images • Application data • Volume snapshots (point-in-time copies) • Can be used as put/get storage as a service • Only one Walrus per cloud • Why is it called Walrus – WS3?
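
Because Walrus exposes an S3-compatible put/get interface, it can be exercised with any S3 client. A minimal sketch in Python using boto3; the endpoint URL, bucket name, and credentials below are placeholders, not values from these slides:

    # Minimal put/get against an S3-compatible endpoint such as Walrus.
    # The endpoint URL, bucket name, and credentials are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://walrus.example.com:8773/services/objectstorage",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello walrus")
    obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
    print(obj["Body"].read())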

  6. Cluster Controller - CC • Written in C • Front-end for a cluster within the cloud • Communicates with Storage Controller and Node Controller • Manages VM instance execution and SLAs per cluster

  7. Availability Zones • A cloud can span multiple geographic regions • Each region contains multiple isolated locations called availability zones • Availability zones within a region are connected through low-latency links • In Eucalyptus, each cluster exists in an availability zone

  8. Storage Controller - SC • Written in Java (equivalent to AWS Elastic Block Store, EBS) • Communicates with CC and NC • Manages Eucalyptus block volumes and snapshots of instances within a cluster • Used when an application needs more persistent block storage than an instance provides

  9. Node Controller - NC • Written in C • Hosts VM instances – where they run • Manages virtual network endpoints • Downloads, caches images from Walrus • Creates and caches instances • Many NCs per cluster
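
One way to see how these components fit together: one CLC and one Walrus per cloud, and a CC, SC, and several NCs per cluster. A rough sketch of that topology as a Python data structure; all host names are invented for illustration:

    # Illustrative Eucalyptus topology: one CLC and one Walrus per cloud,
    # one CC and one SC per cluster, many NCs per cluster.
    # All host names below are invented for the example.
    cloud = {
        "clc": "clc.example.edu",        # cloud controller (one per cloud)
        "walrus": "walrus.example.edu",  # S3-equivalent object store (one per cloud)
        "clusters": {
            "cluster-a": {
                "cc": "cc-a.example.edu",   # cluster controller (front-end for the cluster)
                "sc": "sc-a.example.edu",   # storage controller (EBS-equivalent)
                "ncs": ["nc-a1.example.edu", "nc-a2.example.edu"],  # node controllers host the VMs
            },
        },
    }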

  10. Interesting Info on Clouds • What Americans think a compute cloud is • http://www.citrix.com/lang/English/lp/lp_2328330.asp

  11. Send me interesting links about clouds

  12. Homework • Read the paper on GFS: Evolution on Fast-Forward • Also a link to a longer paper on GFS – original paper from 2003 • I assume you are reading papers as specified in the class schedule

  13. The Original Google File System (GFS) • Some slides from Michael Raines

  14. During the lecture, you should point out problems with GFS design decisions

  15. Common Goals of GFS and most Distributed File Systems • Performance • Reliability • Scalability • Availability

  16. GFS Design Considerations • Component failures are the norm rather than the exception. • File System consists of hundreds or even thousands of storage machines built from inexpensive commodity parts. • Files are Huge. Multi-GB Files are common. • Each file typically contains many application objects such as web documents. • Append, Append, Append. • Most files are mutated by appending new data rather than overwriting existing data. • Co-Designing • Co-designing applications and file system API benefits overall system by increasing flexibility

  17. GFS • Why assume hardware failure is the norm? • The number of layers in a distributed system (network, disk, memory, physical connections, power, OS, application) means that a failure in any of them can contribute to data corruption. • It is cheaper to assume that failures of inexpensive hardware are common and account for them, rather than invest in expensive hardware and still experience occasional failures.

  18. Initial Assumptions • System built from inexpensive commodity components that fail • Modest number of files – expect a few million, each 100 MB or larger; not optimized for smaller files • Two kinds of reads – large streaming reads (~1 MB), small random reads (often batched and sorted) • Well-defined semantics for concurrent access: • Master/slave, producer/consumer, and many-way merge workloads, often with one producer per machine appending to a file • Atomic R/W • High sustained bandwidth chosen over low latency (what's the difference?)

  19. High bandwidth versus low latency • Example: • An airplane flying across the country filled with backup tapes has very high bandwidth, because it delivers all of the data to the destination faster than any existing network could • However, each individual piece of data has high latency
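
A rough back-of-the-envelope version of the airplane example, with invented numbers just to show the bandwidth/latency distinction:

    # Back-of-the-envelope: a plane full of tapes has enormous bandwidth but terrible latency.
    # All numbers are invented for illustration.
    tapes = 10_000
    tb_per_tape = 1                      # terabytes per tape
    flight_hours = 5

    payload_tb = tapes * tb_per_tape     # 10,000 TB delivered in one flight
    bandwidth_gbps = payload_tb * 1_000 * 8 / (flight_hours * 3600)   # gigabits per second
    print(f"Effective bandwidth: ~{bandwidth_gbps:,.0f} Gb/s")        # ~4,444 Gb/s
    print(f"Latency for the first byte: {flight_hours} hours")        # every byte waits for the flight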

  20. Interface • GFS – familiar file system interface • Files organized hierarchically in directories, path names • Create, delete, open, close, read, write • Snapshot and record append (allows multiple clients to append simultaneously) • This means atomic read/writes – not transactions!
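
The operations listed on this slide can be summarized as a small client-facing interface. The sketch below is hypothetical (it is not the real GFS client library); it simply names the operations above as methods:

    # Hypothetical sketch of the file operations listed above; not the actual GFS client API.
    class GFSClientAPI:
        def create(self, path): ...
        def delete(self, path): ...
        def open(self, path): ...
        def close(self, handle): ...
        def read(self, handle, offset, length): ...
        def write(self, handle, offset, data): ...
        def snapshot(self, src_path, dst_path): ...   # low-cost copy of a file or directory tree
        def record_append(self, handle, data): ...    # atomic append; multiple clients may append concurrently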

  21. Master/Servers (Slaves) • Single master, multiple chunkservers • Each file divided into fixed-size chunks of 64 MB • Chunks stored by chunkservers on local disks as Linux files • Immutable and globally unique 64 bit chunk handle (name or number) assigned at creation

  22. Master/Servers • R or W chunk data specified by chunk handle and byte range • Each chunk replicated on multiple chunkservers – default is 3

  23. Master/Servers • Master maintains all file system metadata • Namespace, access control info, mapping from files to chunks, location of chunks • Controls garbage collection of chunks • Communicates with each chunkserver through HeartBeat messages • Clients interact with the master only for metadata; chunkservers do the rest, e.g. R/W on behalf of applications • No caching of file data – • For clients: working sets are too large, and this simplifies coherence • For chunkservers: chunks are already stored as local files, and the Linux buffer cache keeps frequently used data in memory
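
The three kinds of metadata map naturally onto in-memory tables. A simplified sketch of what the master might keep; the field names and example values are illustrative, not taken from the paper:

    # Simplified sketch of the master's in-memory metadata; names and values are illustrative.
    master_metadata = {
        # 1) file and chunk namespaces (flattened here to a set of full path names)
        "namespace": {"/logs/web/2003-01-01", "/crawl/part-00001"},

        # 2) mapping from each file to its ordered list of chunk handles
        "file_to_chunks": {
            "/logs/web/2003-01-01": [0x1A2B, 0x1A2C, 0x1A2D],
        },

        # 3) location of each chunk's replicas -- not persisted; learned by polling
        #    chunkservers and refreshed through HeartBeat messages
        "chunk_locations": {
            0x1A2B: ["cs-07", "cs-12", "cs-31"],   # default replication factor of 3
        },
    }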

  24. Heartbeats • What do we gain from Heartbeats? • Not only do we get the current state of a remote system, they also inform the master of failures. • Any chunkserver that fails to respond to a Heartbeat message is assumed dead. This allows the master to update its metadata accordingly. • It also cues the master to create more replicas of the lost data.
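
A toy sketch of the failure-detection side of HeartBeats: any chunkserver that has not responded recently is treated as dead, and its chunks become candidates for re-replication. The timeout value is an assumption for illustration, not a number from the slides or the paper:

    # Toy heartbeat-based failure detection; the timeout value is invented for illustration.
    import time

    HEARTBEAT_TIMEOUT_S = 30   # assumption, not a GFS constant

    last_heartbeat = {"cs-07": time.time(), "cs-12": time.time() - 120}

    def dead_chunkservers(now=None):
        now = now or time.time()
        return [cs for cs, seen in last_heartbeat.items()
                if now - seen > HEARTBEAT_TIMEOUT_S]

    for cs in dead_chunkservers():
        # the master updates its metadata and queues re-replication of the lost replicas
        print(f"{cs} missed heartbeats; scheduling re-replication of its chunks")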

  25. Client • Client translates a byte offset in the file into a chunk index within the file • Sends the master a request with the file name and chunk index • Master replies with the chunk handle and the locations of the replicas • Client caches this info using the file name and chunk index as the key • Client sends a request to one of the replicas (the closest) • Further reads of the same chunk require no master interaction • Can ask for multiple chunks in the same request
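
Putting the steps on this slide together, the read path can be sketched in Python. The master and chunkserver calls below are stand-ins for RPCs, and the 64 MB chunk size comes from the earlier slide:

    # Sketch of the GFS client read path described above; master/chunkserver calls stand in for RPCs.
    CHUNK_SIZE = 64 * 1024 * 1024          # 64 MB fixed-size chunks

    chunk_cache = {}                       # (file_name, chunk_index) -> (chunk_handle, replica_locations)

    def read(master, file_name, offset, length):
        chunk_index = offset // CHUNK_SIZE                 # translate file offset to chunk index
        key = (file_name, chunk_index)
        if key not in chunk_cache:
            # one round trip to the master for metadata; several chunks can be batched per request
            chunk_cache[key] = master.find_chunk(file_name, chunk_index)
        handle, replicas = chunk_cache[key]
        replica = replicas[0]                              # ideally the closest replica
        # data flows directly between client and chunkserver; the master is not involved
        return replica.read_chunk(handle, offset % CHUNK_SIZE, length)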

  26. Master Operations • Master executes all namespace operations • Manages chunk replicas • Makes placement decision • Creates new chunks (and replicas) • Coordinates various system-wide activities to keep chunks fully replicated • Balance load • Reclaim unused storage

  27. Do you see any problems? • Do you question any design decisions?

  28. Master - Justification • Single Master – • Simplifies design • Placement and replication decisions made with global knowledge • Doesn't handle file data R/W, so it is not a bottleneck • Client asks the master which chunkservers to contact

  29. Chunk Size - Justification • 64 MB, larger than typical • Replica stored as plain Linux file, extended as needed • Lazy space allocation • Reduces interaction of client with master • R/W on same chunk only 1 request to master • Mostly R/W large sequential files • Likely to perform many operations on given chunk (keep persistent TCP connection) • Reduces size of metadata stored on master

  30. Chunk problems • But – • If a file is small (a single chunk), that chunk may become a hot spot • Can mitigate this with a higher replication factor and by staggering batch-application start times

  31. Metadata • 3 types: • File and chunk namespaces • Mapping from files to chunks • Location of each chunk’s replicas • All metadata in memory • First two types stored in logs for persistence (on master local disk and replicated remotely)

  32. Metadata • Master does not keep a persistent record of chunk locations • Instead it polls – asking each chunkserver which replicas it holds • Master still controls all chunk placement • Avoids keeping master and chunkservers in sync as disks go bad, chunkservers fail, etc.

  33. Metadata - Justification • In memory – fast • Master periodically scans its state • Garbage collection • Re-replication on chunkserver failure • Migration to balance load • Master maintains < 64 B of metadata for each 64 MB chunk • File namespace entries are also < 64 B each

  34. Chunk size (again) - Justification • 64 MB is large – think of the typical size of an email • Why large files? • METADATA! • Every file in the system adds to the total metadata overhead that the system must store. • More individual files mean more data about the data is needed.
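
The metadata argument is easy to quantify: with less than 64 bytes of chunk metadata per 64 MB chunk, the overhead is a tiny fraction of the data, and it grows as chunks (or files) shrink. A quick worked example with a hypothetical 1 PB of file data:

    # Rough metadata arithmetic: < 64 B of chunk metadata per 64 MB chunk.
    # The 1 PB figure is hypothetical, just to show the scale of the overhead.
    data_bytes = 1_000_000_000_000_000          # 1 PB of file data
    chunk_size = 64 * 1024 * 1024               # 64 MB chunks
    bytes_per_chunk_metadata = 64               # upper bound from the slide

    chunks = data_bytes // chunk_size                        # ~14.9 million chunks
    metadata_bytes = chunks * bytes_per_chunk_metadata       # ~1 GB of chunk metadata
    print(f"{chunks:,} chunks -> ~{metadata_bytes / 1e9:.2f} GB of chunk metadata in RAM")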

  35. Operation Log • Historical record of critical metadata changes • Provides a logical timeline of concurrent ops • Log replicated on remote machines • Records are flushed to disk locally and remotely • Log kept small – checkpoint when it grows past a size threshold • Checkpoint stored in B-tree form • New checkpoint built without delaying mutations (takes about 1 min for 2 M files) • Only the latest checkpoint and subsequent logs are kept
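
The log-plus-checkpoint pattern on this slide is a standard write-ahead-log design: log the mutation, apply it in memory, and periodically fold the log into a checkpoint. A minimal sketch of the idea; the threshold, file names, and record format are invented, and real GFS uses a compact B-tree-like checkpoint rather than JSON:

    # Minimal write-ahead-log + checkpoint sketch; threshold, file names, and format are invented.
    import json

    LOG_PATH, CHECKPOINT_PATH = "oplog.json", "checkpoint.json"
    LOG_RECORD_LIMIT = 1000                    # checkpoint once the log grows past this

    def apply_mutation(state, log, mutation):
        # 1) append the record to the log (GFS flushes it locally and to remote replicas first)
        log.append(mutation)
        with open(LOG_PATH, "w") as f:
            json.dump(log, f)
        # 2) only then apply it to the in-memory metadata
        state[mutation["path"]] = mutation["chunks"]
        # 3) when the log gets large, write a checkpoint and truncate the log
        if len(log) > LOG_RECORD_LIMIT:
            with open(CHECKPOINT_PATH, "w") as f:
                json.dump(state, f)
            log.clear()                        # older log records are no longer needed
            with open(LOG_PATH, "w") as f:
                json.dump(log, f)

    state, log = {}, []
    apply_mutation(state, log, {"path": "/logs/day1", "chunks": [0x1A2B]})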

  36. Snapshot • Snapshot makes a copy of a file • Used to create checkpoints or branch copies of huge data sets • Master first revokes outstanding leases on the chunks involved • The newly created snapshot points to the same chunks as the source file • After the snapshot, a client that wants to write sends a request to the master to find the lease holder • The master grants the lease on a new copy of the chunk (copy-on-write)

  37. Shadow Master • Master Replication • Replicated for reliability • Not mirrors, so they may lag the primary slightly (fractions of a second) • A shadow master reads a replica of the operation log and applies the same sequence of changes to its data structures as the primary does

  38. Shadow Master • If the Master fails: • Start a shadow instantly • Read-only access to the file system continues even when the primary master is down • If the machine or disk fails, monitoring infrastructure outside GFS starts a new master with the replicated log • Clients use only the canonical name of the master

  39. Creation, Re-replication, Rebalancing • Master creates chunk • Place replicas on chunkservers with below-average disk utilization • Limit number of recent creates per chunkserver • New chunks may be hot • Spread replicas across racks • Re-replicate • When number of replicas falls below goal • Chunkserver unavailable, corrupted, etc. • Replicate based on priority (fewest replicas) • Master limits number of active clone ops
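
A small sketch of the priority rule on this slide: chunks that have lost the most replicas relative to the goal of 3 are cloned first, and the master caps how many clone operations run at once. The cap value and chunk names are invented for illustration:

    # Sketch of priority-based re-replication: fewest remaining replicas first.
    # MAX_ACTIVE_CLONES and the chunk names are invented illustrative values.
    REPLICATION_GOAL = 3
    MAX_ACTIVE_CLONES = 2

    live_replicas = {"chunk-A": 1, "chunk-B": 2, "chunk-C": 3}   # state after a chunkserver failure

    under_replicated = [c for c, n in live_replicas.items() if n < REPLICATION_GOAL]
    under_replicated.sort(key=lambda c: live_replicas[c])       # highest priority = fewest replicas left

    for chunk in under_replicated[:MAX_ACTIVE_CLONES]:
        print(f"clone {chunk}: {live_replicas[chunk]} -> {REPLICATION_GOAL} replicas")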

  40. Creation, Re-replication, Rebalancing • Rebalance • Periodically moves replicas for better disk space and load balancing • Gradually fills up new chunkserver • Removes replicas from chunkservers with below-average free space

  41. Leases and Mutation Order • Chunk lease • One replica chosen as primary - given lease • Primary picks serial order for all mutations to chunk • Lease expires after 60 s
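
The lease mechanism can be sketched as follows: the master hands one replica a time-limited lease, and that primary assigns the serial order that every replica applies. The class and method names below are illustrative only, not from the paper:

    # Illustrative lease sketch: one replica becomes primary and serializes mutations to its chunk.
    import time

    LEASE_DURATION_S = 60                     # lease expires after 60 s (per the slide)

    class ChunkLease:
        def __init__(self, primary_replica):
            self.primary = primary_replica
            self.expires_at = time.time() + LEASE_DURATION_S
            self.next_serial = 0              # the primary picks the mutation order

        def valid(self):
            return time.time() < self.expires_at

        def order_mutation(self, mutation):
            # the serial number is applied in the same order on every replica
            serial, self.next_serial = self.next_serial, self.next_serial + 1
            return serial, mutation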

  42. Consistency Model • Why append only? • Overwriting existing data in place is hard to make safe – data cannot be read reliably while it is being modified. • Instead, the system implements a specialized atomic record append that allows concurrent read/write, write/write, and mixed read/write/write access.

  43. Consistency Model

  44. Consistency Model • File namespace mutations (updates) are atomic • File region states after data mutations: • Consistent – all clients see the same data, whichever replica they read from • Defined – consistent, and clients see what the mutation wrote in its entirety (no interference from concurrent writes) • Consistent but undefined – after concurrent successful mutations, all clients see the same data, but it may not reflect what any single mutation wrote (mingled fragments of updates) • Inconsistent – the result of a failed mutation (clients must retry)

  45. Consistency • The relaxed consistency model can be accommodated by relying on appends instead of overwrites • Appending is more efficient and more resilient to failure than random writes • Checkpointing allows restarting incrementally and keeps readers from processing successfully written but still incomplete data

  46. Namespace Management and Locking • Master ops can take time, e.g. revoking leases • Allow multiple ops at the same time; use locks over regions of the namespace for serialization • GFS does not have a per-directory data structure listing all files • Instead, a lookup table maps full pathnames to metadata • Each name in the tree has a R/W lock • If accessing /d1/d2/…/dn/leaf: R locks on /d1, /d1/d2, …, /d1/d2/…/dn, and a W lock on /d1/d2/…/dn/leaf
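
The locking rule on this slide can be expressed directly: take read locks on every ancestor prefix of the path and a write lock on the leaf being mutated. A simplified sketch (the function name is illustrative; real GFS uses per-name read/write locks on these pathnames):

    # Simplified sketch of GFS namespace locking: read locks on ancestors, write lock on the leaf.
    def locks_for_mutation(path):
        # e.g. "/d1/d2/d3/leaf" -> read locks on /d1, /d1/d2, /d1/d2/d3; write lock on the full path
        parts = path.strip("/").split("/")
        ancestors = ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]
        return [(p, "R") for p in ancestors] + [(path, "W")]

    print(locks_for_mutation("/d1/d2/d3/leaf"))
    # [('/d1', 'R'), ('/d1/d2', 'R'), ('/d1/d2/d3', 'R'), ('/d1/d2/d3/leaf', 'W')]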
