

  1. Distributed Systems CS 15-440 Programming Models - Part V Replication and Consistency - Part I Lecture 18, Oct 29, 2014 Mohammad Hammoud

  2. Today… • Last Session: • Programming Models – Part IV: Pregel • Today’s Session: • Programming Models – Part V: GraphLab • Replication and Consistency- Part I: Motivation, Overview & Types of Consistency Models • Announcements: • Project 3 is now posted. It is due on Wednesday Nov 12, 2014 by midnight • PS4 is now posted. It is due on Saturday Nov 15, 2014 by midnight • We will practice more on MPI tomorrow in the recitation

  3. Objectives (Last 4 Sessions) Discussion on Programming Models • MapReduce, Pregel and GraphLab (Cont'd) • Message Passing Interface (MPI) • Types of Parallel Programs • Traditional Models of Parallel Programming • Parallel Computer Architectures • Why Parallelize Our Programs?

  4. Objectives Discussion on Programming Models • MapReduce, Pregel and GraphLab • Message Passing Interface (MPI) • Types of Parallel Programs • Traditional Models of Parallel Programming • Parallel Computer Architectures • Why Parallelize Our Programs?

  5. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  6. Motivation for GraphLab • There is an exponential growth in the scale of Machine Learning and Data Mining (MLDM) algorithms • Designing, implementing and testing MLDM algorithms at large scale is challenging due to: • Synchronization • Deadlocks • Scheduling • Distributed state management • Fault-tolerance • Interest in analytics engines that can execute MLDM algorithms automatically and efficiently is increasing • MapReduce is inefficient with iterative jobs (common in MLDM algorithms) • Pregel cannot run asynchronous problems (common in MLDM algorithms)

  7. What is GraphLab? • GraphLab is a large-scale graph-parallel distributed analytics engine • Some characteristics: • In-memory (unlike MapReduce, similar to Pregel) • High scalability • Automatic fault-tolerance • Flexibility in expressing arbitrary graph algorithms (more flexible than Pregel) • Shared-memory abstraction (unlike Pregel, similar to MapReduce) • Peer-to-peer architecture (unlike Pregel and MapReduce) • Asynchronous execution (unlike Pregel and MapReduce)

  8. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  9. Input, Graph Flow and Output • GraphLab assumes problems modeled as graphs • It adopts two phases: an initialization phase and an execution phase • In the initialization phase, a MapReduce-based graph builder parses and partitions the raw graph data into atom files and constructs an atom index, all stored on a distributed file system • In the execution phase, the atom collection is placed at GraphLab (GL) engine instances across the cluster, which communicate via RPC over TCP; a monitoring engine handles atom placement [Figure: graph flow from raw graph data through parsing + partitioning and index construction to atom files, then loaded by GL engines]

  10. Components of the GraphLab Engine: The Data-Graph • The GraphLab engine incorporates three main parts: • The data-graph, which represents the user program state at a cluster machine [Figure: a data-graph, with user data attached to each vertex and edge]
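A minimal Python sketch of what such a data-graph might look like, with arbitrary user data attached to vertices and edges. This is an illustration only; GraphLab's actual data-graph is a distributed C++ structure partitioned across cluster machines, and the class and method names here are hypothetical:

class DataGraph:
    # Toy data-graph: user-defined data on vertices and edges.
    def __init__(self):
        self.vertex_data = {}   # vertex id -> user data
        self.edge_data = {}     # (u, v) -> user data
        self.neighbors = {}     # vertex id -> set of adjacent vertex ids

    def add_vertex(self, v, data=None):
        self.vertex_data[v] = data
        self.neighbors.setdefault(v, set())

    def add_edge(self, u, v, data=None):
        for w in (u, v):
            self.neighbors.setdefault(w, set())
            self.vertex_data.setdefault(w, None)
        self.edge_data[(u, v)] = data
        self.neighbors[u].add(v)
        self.neighbors[v].add(u)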

  11. Components of the GraphLab Engine: The Update Function • The GraphLab engine incorporates three main parts: • The update function, which involves two main sub-functions: 2.1- Altering data within the scope of a vertex 2.2- Scheduling future update functions at neighboring vertices • The scope of a vertex v (i.e., Sv) is the data stored in v and in all of v's adjacent edges and vertices [Figure: vertex v and its scope Sv]

  12. Components of the GraphLab Engine: The Update Function • The GraphLab engine incorporates three main parts: • The update function, which involves two main sub-functions: 2.1- Altering data within the scope of a vertex 2.2- Scheduling future update functions at neighboring vertices [Figure: the update function applied at vertex v, which may schedule future updates at v's neighbors]

  13. Components of the GraphLab Engine: The Update Function • The GraphLab engine incorporates three main parts: • The update function, which involves two main sub-functions: 2.1- Altering data within the scope of a vertex 2.2- Scheduling future update functions at neighboring vertices • Scheduled vertices are dispatched to the available CPUs for execution, and the process repeats until the scheduler is empty (see the sketch below) [Figure: a scheduler holding vertices a, b, e, f, h, i, j and dispatching them to CPU 1 and CPU 2]
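The scheduler loop referenced above can be illustrated with a toy single-threaded Python sketch; the function names (run, update) are hypothetical, and real GraphLab runs many such loops in parallel across CPUs and machines:

from collections import deque

def run(graph, update, initial_vertices):
    # Pop a scheduled vertex, apply the user's update function to its
    # scope, and schedule any neighbors the update function returns.
    scheduler = deque(initial_vertices)
    pending = set(initial_vertices)
    while scheduler:                   # repeat until the scheduler is empty
        v = scheduler.popleft()
        pending.discard(v)
        for u in update(graph, v):     # update may alter data in v's scope and
            if u not in pending:       # request future updates at neighbors
                pending.add(u)
                scheduler.append(u)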

  14. Components of the GraphLab Engine: The Sync Operation • The GraphLab engine incorporates three main parts: • The sync operation, which maintains global statistics describing data stored in the data-graph • Global values maintained by the sync operation can be written by all update functions across the cluster machines • The sync operation is similar to Pregel’s aggregators • A mutual exclusion mechanism is applied by the sync operation to avoid write-write conflicts • For scalability reasons, the sync operation is not enabled by default
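As a rough illustration of the sync operation's role, the following Python sketch folds all vertex data into one global value under a lock. The class and method names are hypothetical and the sketch assumes numeric vertex data; GraphLab's real sync operation runs across the cluster machines:

import threading

class SyncOp:
    def __init__(self, graph):
        self.graph = graph
        self.lock = threading.Lock()   # mutual exclusion for the global value
        self.global_sum = 0.0

    def refresh(self):
        # Recompute the global statistic; the lock avoids write-write
        # conflicts with concurrent writers of the global value.
        with self.lock:
            self.global_sum = sum(self.graph.vertex_data.values())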

  15. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  16. The Architectural Model • GraphLab adopts a peer-to-peer architecture • All engine instances are symmetric • Engine instances communicate using the Remote Procedure Call (RPC) protocol over TCP/IP • The first triggered engine takes on the additional responsibility of being a monitoring/master engine • Advantages: • Highly scalable • Precludes centralized bottlenecks and single points of failure • Main disadvantage: • Complexity

  17. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  18. The Programming Model • GraphLab offers a shared-memory programming model • It allows scopes to overlap and vertices to read/write from/to their scopes

  19. Consistency Models in GraphLab • GraphLab guarantees sequential consistency • Provides the same result as a sequential execution of the computational steps • User-defined consistency models: • Full Consistency • Edge Consistency • Vertex Consistency [Figure: nested scopes around vertex v; full consistency covers the largest region, then edge consistency, then vertex consistency]

  20. Consistency Models in GraphLab • Full Consistency Model: an update function at vertex v may read and write all data in its scope (v, its adjacent edges and its adjacent vertices) • Edge Consistency Model: it may read and write the data on v and its adjacent edges, but only read the data on adjacent vertices • Vertex Consistency Model: it may read and write the data on v only (see the sketch below) [Figure: a chain of five vertices 1-5 with vertex data D1..D5 and edge data D1↔2..D4↔5, annotated with the read/write permissions of each model]
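A hedged Python sketch of these write sets, using the DataGraph structure from slide 10 (an illustration of the idea, not GraphLab's implementation):

def write_set(graph, v, model):
    # Return (vertices, edges) an update at v needs write access to.
    adjacent_edges = {e for e in graph.edge_data if v in e}
    if model == "vertex":
        return {v}, set()                    # write v's data only
    if model == "edge":
        return {v}, adjacent_edges           # write v and its adjacent edges
    if model == "full":
        return {v} | graph.neighbors[v], adjacent_edges   # write the whole scope
    raise ValueError("unknown model: " + model)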

  21. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  22. The Computation Model • GraphLab employs an asynchronous computation model • It suggests two asynchronous engines: • Chromatic Engine • Locking Engine • The chromatic engine executes vertices partially asynchronously • It applies vertex coloring (i.e., no two adjacent vertices share the same color) • All vertices with the same color are executed before proceeding to a different color (see the sketch below) • The locking engine executes vertices fully asynchronously • Data on vertices and edges is then susceptible to corruption • It applies a permission-based distributed mutual-exclusion mechanism to avoid read-write and write-write hazards
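The chromatic engine's idea can be sketched in Python with a greedy coloring: vertices of one color are mutually non-adjacent, so their scopes do not write-conflict under edge consistency and could run in parallel, while colors are processed one after another. The function names are hypothetical:

def greedy_coloring(graph):
    # Give each vertex the smallest color unused by its colored neighbors.
    color = {}
    for v in graph.neighbors:
        used = {color[u] for u in graph.neighbors[v] if u in color}
        color[v] = next(c for c in range(len(graph.neighbors) + 1) if c not in used)
    return color

def chromatic_sweep(graph, update):
    # Same-colored vertices could execute in parallel; colors run in order.
    color = greedy_coloring(graph)
    for c in sorted(set(color.values())):
        for v in [v for v in color if color[v] == c]:
            update(graph, v)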

  23. The GraphLab Analytics Engine • Motivation & Definition • Input, Output & Components • The Architectural Model • The Programming Model • The Computation Model • Fault-Tolerance

  24. Fault-Tolerance in GraphLab • GraphLab uses distributed checkpointing to recover from machine failures • It suggests two checkpointing mechanisms • Synchronous checkpointing (it suspends the entire execution of GraphLab) • Asynchronous checkpointing
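A toy Python sketch of the synchronous variant, assuming a single-process setting where "suspending execution" simply means not running updates while the data-graph is serialized (the helper names are hypothetical):

import pickle

def synchronous_checkpoint(graph, path):
    # Execution is suspended while the whole data-graph state is saved.
    with open(path, "wb") as f:
        pickle.dump((graph.vertex_data, graph.edge_data, graph.neighbors), f)

def restore(graph, path):
    # After a machine failure, reload the last checkpoint and resume.
    with open(path, "rb") as f:
        graph.vertex_data, graph.edge_data, graph.neighbors = pickle.load(f)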

  25. How Does GraphLab Compare to MapReduce and Pregel?

  26. GraphLab vs. Pregel vs. MapReduce

  27. Today… A New Chapter: Replication and Consistency • Motivation • Overview • Types of Consistency Models

  28. Why Replication? • Replication is the process of maintaining copies of data at multiple computers • Replication is necessary for: • Improving performance • A client can access the replicated copy of the data that is nearest to its location • Increasing the availability of services • Replication can mask failures such as server crashes and network disconnections • Enhancing the scalability of the system • Requests for the data can be distributed across many servers that hold replicated copies of the data • Securing against malicious attacks • Even if some replicas are malicious, correct data can still be delivered to the client by relying on the replicated copies at the non-compromised servers

  29. 1. Replication for Improving Performance • Example applications: • Caching webpages at the client browser • Caching IP addresses at clients and DNS name servers • Caching in Content Delivery Networks (CDNs) • Commonly accessed content, such as software and streaming media, is cached at various network locations [Figure: a main server and replicated servers]

  30. 2. Replication for High-Availability • Availability can be increased by storing the data at replicated locations (instead of storing one copy of the data at a server) • Example: the Google File System replicates data at computers across different racks, clusters and data-centers • If a computer, a rack or a cluster crashes, the data can still be accessed from another source

  31. 3. Replication for Enhancing Scalability • Distributing the data across replicated servers helps avoid bottlenecks at the main server • It balances the load between the main and the replicated servers • Example: Content Delivery Networks decrease the load on the main servers of a website [Figure: a main server and replicated servers]

  32. 4. Replication for Securing Against Malicious Attacks • If a minority of the servers that hold the data are malicious, the non-malicious servers can outvote the malicious servers, thus providing security • The technique can also be used to provide fault-tolerance against non-malicious but faulty servers • Example: In a peer-to-peer system, peers can coordinate to prevent delivering faulty data to the requester [Figure: servers with correct data outvote the servers with faulty data; some servers do not have the requested data at all]

  33. Why Consistency? • In a DS with replicated data, one of the main problems is keeping the data consistent • An example: • In an e-commerce application, the bank database has been replicated across two servers • Both replicas start with a balance of $1000; Event 1 adds $1000 while Event 2 adds interest of 5% • If one replica applies Event 1 then Event 2 (1000 → 2000 → 2100) while the other applies Event 2 then Event 1 (1000 → 1050 → 2050), the replicas diverge, as the simulation below shows • Maintaining consistency of replicated data is a challenge [Figure: the two events arriving at the replicated database in different orders]
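The divergence in the slide's numbers can be reproduced with a tiny Python simulation: both replicas start at $1000, one applies the deposit before the interest, the other the reverse:

deposit = lambda bal: bal + 1000        # Event 1: add $1000
interest = lambda bal: bal * 1.05       # Event 2: add 5% interest

replica_1 = interest(deposit(1000))     # (1000 + 1000) * 1.05 = 2100.0
replica_2 = deposit(interest(1000))     # 1000 * 1.05 + 1000  = 2050.0
print(replica_1, replica_2)             # 2100.0 2050.0: the replicas diverge

Because the two operations do not commute, the replicas end up $50 apart unless the system enforces a common order on the updates.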

  34. Overview of Consistency and Replication • Consistency Models (today's lecture) • Data-Centric Consistency Models • Client-Centric Consistency Models • Replica Management (next lectures) • When, where and by whom should replicas be placed? • Which consistency model to use for keeping replicas consistent? • Consistency Protocols (next lectures) • We study various implementations of consistency models

  35. Overview • Consistency Models • Data-Centric Consistency Models • Client-Centric Consistency Models • Replica Management • Consistency Protocols

  36. Introduction to Consistency and Replication • In a distributed system, shared data is typically stored in distributed shared memory, distributed databases or distributed file systems • The storage can be distributed across multiple computers • For simplicity, we refer to such a collection of data storage units as a data-store • Multiple processes can access shared data by accessing any replica on the data-store • Processes generally perform read and write operations on the replicas [Figure: processes 1-3 each accessing a local copy of the distributed data-store]

  37. Maintaining Consistency of Replicated Data • Strict Consistency • Data is always fresh • After a write operation, the update is propagated to all the replicas • A read operation will result in reading the most recent write • If there are occasional writes and reads, this leads to large overheads • Notation: R(x)b means a read of variable x returns value b; W(x)b means value b is written to variable x; each process Pi has its own timeline [Figure: a data-store with replicas 1..n holding x, and processes P1-P3 issuing writes W(x)2 and W(x)5 and reads R(x)0, R(x)2, R(x)5]

  38. Maintaining Consistency of Replicated Data (Cont'd) • Loose Consistency • Data might be stale • A read operation may return a value that was written long ago • Replicas are generally out-of-sync • The replicas may sync at a coarse time granularity, thus reducing the overhead • The contrast with strict consistency is sketched below [Figure: same notation; replica values of x drift apart (0, 2, 3, 5) and reads at different processes return different values]
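The strict/loose contrast can be made concrete with a toy Python store; eager=True propagates every write to all replicas immediately (strict), while the loose variant defers propagation until sync() is called. All names here are illustrative:

class ReplicatedStore:
    def __init__(self, n_replicas, eager):
        self.replicas = [{"x": 0} for _ in range(n_replicas)]
        self.eager = eager
        self.pending = []                  # writes not yet propagated

    def write(self, i, value):             # W(x)value at replica i
        self.replicas[i]["x"] = value
        if self.eager:                     # strict: update all replicas now
            for r in self.replicas:
                r["x"] = value
        else:                              # loose: defer propagation
            self.pending.append(value)

    def read(self, i):                     # R(x) at replica i; may be stale
        return self.replicas[i]["x"]

    def sync(self):                        # loose: coarse-grained propagation
        if self.pending:
            last = self.pending[-1]
            for r in self.replicas:
                r["x"] = last
            self.pending.clear()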

  39. Trade-offs in Maintaining Consistency • Maintaining consistency should balance the strictness of consistency against efficiency • Good-enough consistency depends on your application • Loose consistency is easier to implement and is efficient; strict consistency is generally hard to implement and is inefficient

  40. Consistency Model • A consistency model is a contract between • the process that wants to use the data, and • the replicated data repository (or data-store) • A consistency model states the level of consistency provided by the data-store to the processes while reading and writing the data

  41. Types of Consistency Models • Consistency models can be divided into two types: • Data-Centric Consistency Models • These models define how the data updates are propagated across the replicas to keep them consistent • Client-Centric Consistency Models • These models assume that clients connect to different replicas at different times • The models ensure that whenever a client connects to a replica, the replica is brought up to date with the replica that the client accessed previously

  42. Summary • Replication is necessary for improving performance, scalability, availability, and security • Replicated data-stores should be designed after carefully evaluating the trade-offs between tolerable data inconsistency and efficiency • Consistency models describe the contract between the data-store and processes about what form of consistency to expect from the system • Consistency models can be classified into two types: • Data-Centric Consistency Models • Client-Centric Consistency Models

  43. Next Three Classes • Data-Centric Consistency Models • Sequential and Causal Consistency Models • Client-Centric Consistency Models • Eventual Consistency, Monotonic Reads, Monotonic Writes, Read Your Writes and Writes Follow Reads • Replica Management • Replica management studies: • when, where and by whom replicas should be placed • which consistency model to use for keeping replicas consistent • Consistency Protocols • We study various implementations of consistency models


  45. Back-up Slides

  46. PageRank • PageRank is a link analysis algorithm • The rank value indicates the importance of a particular web page • A hyperlink to a page counts as a vote of support • A page that is linked to by many pages with high PageRank receives a high rank itself • A PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank

  47. PageRank (Cont'd) • Iterate: R[i] = α + (1 - α) * Σj→i R[j] / L[j] • Where: • α is the random reset probability • L[j] is the number of links on page j • The sum runs over the pages j that link to page i [Figure: an example web graph of six pages, 1-6]

  48. PageRank Example in GraphLab • The PageRank algorithm is defined as a per-vertex operation working on the scope of the vertex (a runnable sketch follows below)
pagerank(i, scope) {
  // Get neighborhood data
  (R[i], W_ij, R[j]) ← scope;
  // Update the vertex data
  R[i] ← α + (1 - α) * Σj W_ji * R[j];
  // Reschedule neighbors if needed
  if R[i] changes then reschedule_neighbors_of(i);
}
• Dynamic computation: vertices are rescheduled only when their data changes
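Putting the pieces together, here is a runnable single-threaded Python sketch of dynamic PageRank in the GraphLab style, reusing the scheduler-loop idea from slide 13. It assumes every page has at least one out-link, and the function name and structure are illustrative, not GraphLab's C++ API:

from collections import deque

def pagerank(links, alpha=0.15, tol=1e-4):
    # links: dict page -> list of pages it links to (out-links).
    in_nbrs = {p: [] for p in links}            # build reverse adjacency
    for j, outs in links.items():
        for i in outs:
            in_nbrs[i].append(j)
    rank = {p: 1.0 for p in links}
    queue, queued = deque(links), set(links)    # schedule every vertex once
    while queue:                                # until the scheduler is empty
        i = queue.popleft()
        queued.discard(i)
        new = alpha + (1 - alpha) * sum(rank[j] / len(links[j]) for j in in_nbrs[i])
        changed = abs(new - rank[i]) > tol
        rank[i] = new
        if changed:                             # dynamic computation: reschedule
            for k in links[i]:                  # only pages whose rank reads R[i]
                if k not in queued:
                    queued.add(k)
                    queue.append(k)
    return rank

For example, pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}) converges after a few rescheduling rounds, touching only vertices whose inputs actually changed.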
