
Programming Models: Cloud Computing
Garth Gibson, Greg Ganger, Majd Sakr, Raja Sambasivan



  1. Programming Models Cloud Computing
  Garth Gibson, Greg Ganger, Majd Sakr, Raja Sambasivan

  2. Recall the SaaS, PaaS, IaaS Taxonomy
  • Service, Platform, or Infrastructure as a Service
  • SaaS: the service is a complete application (client-server computing)
  • Not usually a programming abstraction
  • PaaS: high-level (language) programming model for the cloud computer
  • E.g., rapid-prototyping languages
  • Turing complete, but resource management is hidden
  • IaaS: low-level (language) computing model for the cloud computer
  • E.g., assembler as a language
  • Basic hardware model with all (virtual) resources exposed
  • For PaaS and IaaS, cloud programming is needed
  • How is this different from CS 101? Scale, fault tolerance, elasticity, ...

  3. Embarrassingly parallel "killer app": web servers
  • Online retail stores (like the ice.com example)
  • Most of the computational demand is for browsing product marketing, forming and rendering web pages, and managing customer session state
  • Actual order taking and billing are less demanding and have separate specialized services (the Amazon bookseller backend)
  • One customer session needs a small fraction of one server
  • No interaction between customers (unless inventory is near exhaustion)
  • Parallelism is more cores running identical copies of the web server
  • Load balancing, perhaps in the name service, is the parallel programming
  • Elasticity needs a template service, load monitoring, and cluster allocation
  • These need not require user programming, just configuration

  4. E.g., Obama for America Elastic Load Balancer

  5. What about larger apps?
  • Parallel programming is hard – how can cloud frameworks help?
  • Collection-oriented languages (Sipelstein & Blelloch, Proc. IEEE v79, n4, 1991)
  • Also known as data-parallel
  • Specify a computation on an element; apply it to each element in a collection
  • Analogy to SIMD: single instruction on multiple data
  • Specify an operation on the collection as a whole
  • Union/intersection, permute/sort, filter/select/map
  • Reduce-reorderable (A) / reduce-ordered (B)
  • (A) E.g., ADD(1,7,2) = (1+7)+2 = (2+1)+7 = 10
  • (B) E.g., CONCAT("the ", "lazy ", "fox ") = "the lazy fox "
  • Note the link to MapReduce ... it's no accident (sketched below)
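To make the collection-oriented vocabulary concrete, here is a minimal Python sketch (not from the slides) of element-wise application, whole-collection selection, and the two flavors of reduce:

```python
from functools import reduce

data = [1, 7, 2]

# Element-wise: specify the computation for one element, apply to all.
print(list(map(lambda v: v * v, data)))            # [1, 49, 4]

# Whole-collection: filter/select.
print(list(filter(lambda v: v % 2 == 0, data)))    # [2]

# (A) Reduce-reorderable: addition is associative AND commutative, so a
# runtime may combine elements in any order (hence in parallel).
print(reduce(lambda a, b: a + b, data))            # (1+7)+2 == (2+1)+7 == 10

# (B) Reduce-ordered: concatenation is associative but NOT commutative,
# so element order must be preserved.
print(reduce(lambda a, b: a + b, ["the ", "lazy ", "fox "]))  # 'the lazy fox '
```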

  6. High Performance Computing Approach
  • HPC was almost the only home for parallel computing in the 90s
  • Physical simulation was the killer app – weather, vehicle design, explosions/collisions, etc. – replace "wet labs" with "dry labs"
  • Physics is the same everywhere, so define a mesh on a set of particles, code the physics you want to simulate at one mesh point as a property of the influence of nearby mesh points, and iterate
  • Bulk Synchronous Processing (BSP): run all updates of mesh points in parallel based on values at the last time step, form the new set of values, and repeat (sketched below)
  • Defined "weak scaling" for bigger machines – rather than making a fixed-size problem go faster (strong scaling), make a bigger problem go the same speed
  • The most demanding users set the problem size to match total available memory
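A minimal sketch of the BSP pattern in Python, with 1-D heat diffusion as stand-in physics (the mesh size, constant, and update rule are illustrative assumptions, not from the slides):

```python
import numpy as np

# Each superstep updates every mesh point from its neighbors' values at
# the PREVIOUS time step, then publishes all new values at once.
def bsp_step(mesh, alpha=0.1):
    nxt = mesh.copy()
    nxt[1:-1] = mesh[1:-1] + alpha * (mesh[:-2] - 2 * mesh[1:-1] + mesh[2:])
    return nxt                          # the swap is the barrier

mesh = np.array([0.0, 0.0, 100.0, 0.0, 0.0])
for _ in range(3):                      # run all updates "in parallel", repeat
    mesh = bsp_step(mesh)
print(mesh)                             # heat spreads outward each superstep
```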

  7. High Performance Computing Frameworks
  • Machines cost O($10-100) million, so
  • emphasis was on maximizing utilization of the machines (Congress checks)
  • low-level speed and hardware-specific optimizations (esp. the network)
  • preference for expert programmers following established best practices
  • Developed the MPI (Message Passing Interface) framework (e.g., MPICH; illustrated below)
  • Launch N threads with library routines for everything you need:
  • naming, addressing, messaging, synchronization (barriers)
  • transforms, physics modules, math libraries, etc.
  • Resource allocators and schedulers space-share jobs on the physical cluster
  • Fault tolerance by checkpoint/restart, requiring programmer save/restore
  • Proto-elasticity: kill an N-node job & reschedule a past checkpoint on M nodes
  • Very manual, deep learning curve, few commercial runaway successes
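A minimal MPI sketch via the mpi4py Python bindings (an assumption: the slides name MPICH but show no code). It shows the launch-N-threads model: naming/addressing by rank, a barrier, and a collective reduction:

```python
# Run with something like: mpiexec -n 4 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                  # naming/addressing: who am I?
size = comm.Get_size()                  # how many ranks were launched?

local = rank + 1                        # each rank computes a partial result
comm.Barrier()                          # synchronization barrier
total = comm.allreduce(local, op=MPI.SUM)   # collective message exchange
if rank == 0:
    print(f"{size} ranks, reduced sum = {total}")
```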

  8. Broadening HPC: Grid Computing
  • Grid computing started with commodity servers (predates cloud)
  • Frameworks were less specialized, easier to use (& less efficient)
  • Beowulf, PVM (Parallel Virtual Machine), Condor, Rocks, Sun Grid Engine
  • For funding reasons, grid emphasized multi-institution sharing
  • So authentication, authorization, single sign-on, parallel FTP
  • Heterogeneous workflow (run job A on machine B, then job C on machine D)
  • Basic model: jobs selected from a batch queue take over the cluster
  • Simplified "pile of work": when a core comes free, take a task from the run queue and run it to completion (see the sketch below)
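A sketch of the "pile of work" model in Python (task granularity and worker count are illustrative): whenever a core comes free, it takes the next task from the run queue and runs it to completion:

```python
import queue, threading

tasks = queue.Queue()
for job in range(10):
    tasks.put(job)

def core(cid):
    while True:
        try:
            job = tasks.get_nowait()    # take a task from the run queue
        except queue.Empty:
            return                      # queue drained: core goes idle
        print(f"core {cid} finished job {job}")   # run to completion

workers = [threading.Thread(target=core, args=(c,)) for c in range(4)]
for w in workers: w.start()
for w in workers: w.join()
```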

  9. Cloud Programming: Back to the Future
  • HPC demanded too much expertise, too many details and too much tuning
  • Cloud frameworks are all about making parallel programming easier
  • Willing to sacrifice efficiency (too willing, perhaps)
  • Willing to specialize to the application (rather than the machine)
  • The canonical Big Data user has data & processing needs that require lots of computing, but doesn't have CS or HPC training & experience
  • Wants to learn the least amount of computer science needed to get results this week
  • Might later want to learn more if the same jobs become a personal bottleneck

  10. Cloud Programming Case Studies
  • MapReduce
  • Packages two Sipelstein91 operators, filter/map and reduce, as the base of a data-parallel programming model built around Java libraries
  • DryadLINQ
  • Compiles workflows of different data-processing programs into schedulable processes
  • GraphLab
  • Specializes to iterative machine learning with local update operations and dynamic rescheduling of future updates

  11. MapReduce (Majd)
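Since the MapReduce details are covered separately, here is only the canonical word-count example expressed in the MapReduce model, as a single-process Python sketch (a real framework shards the input, shuffles pairs by key, and runs these two functions across many machines):

```python
from collections import defaultdict
from itertools import chain

def map_fn(document):                   # filter/map: emit (key, value) pairs
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):            # reduce: combine values per key
    return (word, sum(counts))

docs = ["the lazy fox", "the quick fox"]
groups = defaultdict(list)              # the "shuffle": group by key
for key, val in chain.from_iterable(map(map_fn, docs)):
    groups[key].append(val)
print([reduce_fn(k, v) for k, v in groups.items()])
# [('the', 2), ('lazy', 1), ('fox', 2), ('quick', 1)]
```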

  12. DryadLINQ
  • Simplifies efficient data-parallel code
  • Compiler support for imperative and declarative (e.g., database) operations
  • Extends MapReduce to workflows that can be collectively optimized
  • Data flows on edges between processes at vertices (workflows)
  • Coding is processes at the vertices plus expressions representing the workflow
  • The interesting part of the compiler operates on the expressions
  • Inspired by traditional database query optimization – rewrite the execution plan into an equivalent plan that is expected to execute faster

  13. DryadLINQ
  • Data flows through a graph abstraction
  • Vertices are programs (possibly different at each vertex)
  • Edges are data channels (pipe-like)
  • Requires programs to have no side effects (no changes to shared state)
  • Apply functions similar to MapReduce's reduce – open-ended user code
  • The compiler operates on expressions, rewriting execution sequences
  • Exploits prior work on compiling workflows on sets (LINQ)
  • Extends traditional database query planning with less type-restrictive code
  • Unlike traditional plans, virtualizes resources (so it might spill to storage)
  • Knows how to partition sets (hash, range, and round-robin) over nodes
  • Doesn't always know what processes do, so its optimizer is less powerful than a database's – where it can't infer what is happening, it takes hints from users
  • Can auto-pipeline, remove redundant partitioning, reorder partitionings, etc. (one such rewrite is sketched below)
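DryadLINQ itself is C#/LINQ; the Python sketch below (data and names invented) only illustrates the optimizer's central idea: rewriting an execution plan into an equivalent but cheaper one, here by pushing a filter below a simulated repartitioning:

```python
records = [{"user": i, "clicks": i % 7} for i in range(100_000)]

def partition(data, n):
    """Hash-partition records across n simulated nodes."""
    parts = [[] for _ in range(n)]
    for r in data:
        parts[hash(r["user"]) % n].append(r)
    return parts

# Naive plan: repartition everything, then filter on each node.
plan_a = [[r for r in p if r["clicks"] > 5] for p in partition(records, 4)]

# Rewritten plan: filter first, repartition only the survivors.
plan_b = partition([r for r in records if r["clicks"] > 5], 4)

# Both plans keep the same records; the second ships ~1/7 of the data.
assert sum(map(len, plan_a)) == sum(map(len, plan_b))
```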

  14. Example: MapReduce (reduce-reorderable)
  • The DryadLINQ compiler can pre-reduce, partition, sort-merge, and partially aggregate
  • In MapReduce you "configure" this yourself (a combiner sketch follows)
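A sketch of that map-side pre-reduction in Python: because word count's reduce (addition) is reorderable, each shard can be partially aggregated before the shuffle, shrinking network traffic. Hadoop exposes this as the optional "combiner"; DryadLINQ's compiler can insert it automatically:

```python
from collections import Counter

shards = [["the", "lazy", "fox"], ["the", "quick", "fox"]]

partials = [Counter(shard) for shard in shards]   # combiner, map side
final = sum(partials, Counter())                  # final reduce, merge side
print(final)      # Counter({'the': 2, 'fox': 2, 'lazy': 1, 'quick': 1})
```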

  15. Background on Regression
  • Regression problem: for a given input A and observations Y, find the unknown parameter vector x
  • A: n-by-m matrix, Y: n-by-1, x: m-by-1, with Y ≈ A x
  • Sparse regression is a variation of regression that favors a small number of non-zero parameters, corresponding to the most relevant features (sketched below)
  • E.g., Alzheimer's disease data: 463 samples by 509K features; in a pair-wise study, the number of features would be inflated to ~10^11
  [Figure: the n-by-1 vector Y pictured as the n-by-m matrix A times the m-by-1 vector x, with only a few non-zero entries in x]
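A sketch of sparse regression via ISTA (iterative soft-thresholding), one standard method (my choice, not the slide's) for minimizing ||Ax - y||² + λ||x||₁; the tiny dimensions stand in for the 463-by-509K data:

```python
import numpy as np

# The L1 penalty drives most entries of x to exactly zero, keeping only
# the most relevant features.
def ista(A, y, lam=0.1, iters=500):
    n, m = A.shape
    x = np.zeros(m)
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe gradient step size
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 200))                  # n=50 samples, m=200 features
x_true = np.zeros(200)
x_true[:3] = [2.0, -1.5, 1.0]                   # only 3 relevant features
y = A @ x_true + 0.01 * rng.normal(size=50)
print(np.nonzero(ista(A, y))[0])                # mostly the true features
```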

  16. E.g., Medical Research
  • Find genetic patterns predicting disease
  • Millions to ~10^11 (pair-wise gene study) dimensions
  [Figure: rows of genome sequences (AT...CG T AAA, AT...CG G AAA, ...), one per sample (patient), highlighting the single-letter genetic variation associated with disease]

  17. HPC Style: Bulk Synchronous Parallel
  • Iterative improvement of an estimated "solution"
  • Threads synchronize (wait for each other) every iteration (sketched below)
  • Parameters are read/updated at synchronization barriers
  • Repetitive file processing: Mahout, DryadLINQ, Spark
  • Distributed shared memory: Pregel, Hama, Giraph, GraphLab
  [Figure: four threads each process five work units per iteration, in different orders, with a barrier at the end of every iteration]
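A Python sketch of the barrier-per-iteration discipline using threading.Barrier (thread and iteration counts are illustrative): no thread starts iteration i+1 until every thread has finished iteration i:

```python
import threading

N, ITERS = 4, 3
barrier = threading.Barrier(N)
params = [0] * N                        # shared state, one slot per thread

def worker(tid):
    for _ in range(ITERS):
        params[tid] += 1                # compute this iteration's update
        barrier.wait()                  # barrier: publish & wait for all
        # Here every thread sees all updates from the iteration just done;
        # one straggler at the barrier stalls the other N-1 threads.

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(params)                           # [3, 3, 3, 3]
```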

  18. Synchronization can be costly
  • Machine performance, or the work assigned, is unequal
  • So threads must wait for each other
  • And larger clusters suffer longer communication delays in barrier sync
  • If you can, do more work between syncs – but that's not always possible
  [Figure: four threads run iterations 1-3; faster threads sit idle at each barrier waiting for the slowest]

  19. Can we just run asynchronously?
  • Parameters are read/updated at any time
  • Threads proceed to the next iteration without waiting
  • Threads are not on the same iteration number
  • In most computations this leads to a wrong answer
  • In search/solve, however, it might still converge – but it might also diverge
  [Figure: four threads free-run through iterations 1-6 with no barriers, drifting apart]

  20. GraphLab: managing asynchronous ML
  • GraphLab provides a higher-level programming model
  • Data is associated with vertices and with edges between vertices; it is inherently sparse (or we'd use a matrix representation instead)
  • Program a vertex update based on the vertex's edges & neighboring vertices (sketched below)
  • Background code is used to test whether it's time to terminate updating
  • BSP can be cumbersome & inefficient
  • Iterative algorithms may do little work per iteration and may not need to move all the data each iteration
  • Uses traditional database transaction isolation for consistency
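A minimal sketch of the vertex-update model (illustrative Python, not GraphLab's actual API): user code reads one vertex and its neighbors, and vertices that changed reschedule their neighbors. The PageRank-style update rule is my choice for illustration:

```python
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}       # sparse adjacency
value = {0: 1.0, 1: 0.0, 2: 2.0}                # data on vertices

def vertex_update(v):
    """PageRank-style update of v from its neighbors; True if it moved."""
    old = value[v]
    value[v] = 0.15 + 0.85 * sum(value[u] / len(graph[u]) for u in graph[v])
    return abs(value[v] - old) > 1e-6            # per-vertex convergence test

active = set(graph)                              # dynamic schedule
while active:
    v = active.pop()
    if vertex_update(v):
        active.update(graph[v])                  # reschedule the neighbors
print(value)                                     # converges near {v: 1.0}
```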

  21. Scheduling is a green field in ML
  • GraphLab proposes that updates do their own scheduling
  • Baseline: sequential update of each vertex once per iteration
  • Sparseness allows non-overlapping updates to execute in parallel
  • Opportunity for smart schedulers to exploit more app properties
  • Prioritize specific updates over other updates because they communicate more information more quickly (sketched below)
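A sketch of prioritized scheduling with a heap (the priority metric is an illustrative assumption): vertices whose last update changed the most are refined first, since they communicate the most new information:

```python
import heapq

pq = []                                       # max-heap via negated priority

def schedule(vertex, last_change):
    heapq.heappush(pq, (-abs(last_change), vertex))

schedule("a", 0.9); schedule("b", 0.1); schedule("c", 0.5)
while pq:
    neg_p, v = heapq.heappop(pq)
    print(f"update {v} (priority {-neg_p})")  # a, then c, then b
```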

  22. One way to think about scheduling
  • "Partial" synchronicity
  • Spread network communication evenly (don't sync unless needed)
  • Threads usually shouldn't wait – but mustn't drift too far apart!
  • Straggler tolerance
  • Slow threads must somehow catch up
  [Figure: four free-running threads drift across iterations 1-6; the remedies shown are "make thread 1 catch up" and "force threads to sync up"]

  23. Stale Synchronous Parallel (SSP)
  • Allow threads to usually run at their own pace: mostly asynchronous (sketched below)
  • Fastest and slowest threads are not allowed to drift more than S iterations apart: this bounds the error
  • Threads cache (stale) versions of the parameters to reduce network traffic
  [Figure: four threads plotted against iteration count 0-9 (note: the x-axis is iteration count, not time); with a staleness threshold of 3, Thread 1 waits until Thread 2 has reached iteration 4]
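A Python sketch of the SSP rule with a condition variable (S, thread count, and iteration count are illustrative): a thread blocks only when it would get S or more iterations ahead of the slowest thread:

```python
import threading

S, N, ITERS = 3, 4, 20
clock = [0] * N                       # per-thread iteration counters
cond = threading.Condition()

def worker(tid):
    for _ in range(ITERS):
        with cond:
            # Block only if we would exceed the staleness bound; the
            # slowest thread never waits, so progress is guaranteed.
            while clock[tid] - min(clock) >= S:
                cond.wait()
            clock[tid] += 1
            cond.notify_all()
        # ... read (possibly stale) params, compute, write updates ...

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(clock)                          # every thread reaches ITERS
```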

  24. Why does SSP converge?
  • Instead of x_true, SSP sees x_stale = x_true + error
  • The error caused by staleness is bounded (see the bound sketched below)
  • Over many iterations, the average error goes to zero
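A sketch of why the bound holds, in assumed notation not taken from the slide (P threads, staleness bound S, per-update increments δ_u):

```latex
% Illustrative notation: x_t is the true state after t updates,
% \tilde{x}_t the stale view, \delta_u an individual update,
% P the thread count, S the staleness bound.
\[
  \tilde{x}_t \;=\; x_t \;-\; \sum_{u \in \mathcal{M}_t} \delta_u ,
  \qquad |\mathcal{M}_t| \;=\; O(S \cdot P),
\]
\[
  \|\tilde{x}_t - x_t\| \;\le\; |\mathcal{M}_t| \, \max_u \|\delta_u\| .
\]
% The staleness rule bounds the set M_t of missed updates, so the error
% is bounded; as step sizes decay over iterations, the \delta_u shrink
% and the average error goes to zero.
```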

  25. SSP uses networks efficiently
  • SSP balances network and compute time
  [Figure: BSP vs. SSP execution timelines]

  26. Advanced Cloud Computing Programming Models
  • Ref 1: Jeffrey Dean and Sanjay Ghemawat. "MapReduce: Simplified Data Processing on Large Clusters." OSDI '04, 2004. http://static.usenix.org/event/osdi04/tech/full_papers/dean/dean.pdf
  • Ref 2: Yuan Yu, Michael Isard, Dennis Fetterly, Mihai Budiu, Úlfar Erlingsson, Pradeep Kumar Gunda, and Jon Currey. "DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language." OSDI '08. http://research.microsoft.com/en-us/projects/dryadlinq/dryadlinq.pdf
  • Ref 3: Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M. Hellerstein. "GraphLab: A New Parallel Framework for Machine Learning." Conference on Uncertainty in Artificial Intelligence (UAI), 2010. http://www.select.cs.cmu.edu/publications/scripts/papers.cgi
