
Presentation Transcript


  1. Cloud Computing Lecture #1: Parallel and Distributed Computing Jimmy Lin The iSchool University of Maryland Monday, January 28, 2008 Material adapted from slides by Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007 (licensed under the Creative Commons Attribution 3.0 License) This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

  2. Today’s Topics • Course overview • Introduction to parallel and distributed processing

  3. What’s the course about? • Integration of research and teaching • Team leaders get help on a tough problem • Team members gain valuable experience • Criteria for success: at the end of the course • Each team will have a publishable result • Each team will have a paper suitable for submission to an appropriate conference/journal • Along the way: • Build a community of hadoopers at Maryland • Generate lots of publicity • Have lots of fun!

  4. Hadoop Zen • Don’t get frustrated (take a deep breath)… • This is bleeding edge technology • Be patient… • This is the first time I’ve taught this course • Be flexible… • Lots of uncertainty along the way • Be constructive… • Tell me how I can make everyone’s experience better

  5. Things to go over… • Course schedule • Course objectives • Assignments and deliverables • Evaluation

  6. My Role • To hack alongside everyone • To substantively contribute ideas where appropriate • To serve as a facilitator and a resource • To make sure everything runs smoothly

  7. Outline • Web-Scale Problems • Parallel vs. Distributed Computing • Flynn's Taxonomy • Programming Patterns

  8. Web-Scale Problems • Characteristics: • Lots of data • Lots of crunching (the computation itself is not necessarily complex) • Examples: • Obviously, problems involving the Web • Empirical and data-driven research (e.g., in HLT) • “Post-genomics era” in life sciences • High-quality animation • The serious hobbyist

  9. It all boils down to… • Divide-and-conquer • Throwing more hardware at the problem Simple to understand… a lifetime to master…

  10. Parallel vs. Distributed • Parallel computing generally means: • Vector processing of data • Multiple CPUs in a single computer • Distributed computing generally means: • Multiple CPUs across many computers

  11. Flynn’s Taxonomy • Architectures are classified by instruction stream and data stream:

                        Instructions
                        Single (SI)    Multiple (MI)
  Data  Single (SD)     SISD           MISD
        Multiple (MD)   SIMD           MIMD

  12. SISD • [Diagram: one processor executes a single instruction stream over a single stream of data items, one at a time]

  13. SIMD • [Diagram: one processor executes a single instruction stream, but each instruction operates on many data items (D0 through Dn) in lockstep]
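As a loose illustration (my own, not from the slides; it assumes NumPy is installed), vectorized array operations apply one logical operation across many data elements at once, which is the SIMD idea in miniature:

```python
import numpy as np

# SISD-style: one instruction handles one data element per step
def add_one_scalar(xs):
    out = []
    for x in xs:              # each iteration touches a single datum
        out.append(x + 1)
    return out

# SIMD-style: one logical operation applied to all elements at once;
# NumPy dispatches to vectorized native code under the hood
def add_one_vectorized(xs):
    return np.asarray(xs) + 1

print(add_one_scalar([1, 2, 3]))      # [2, 3, 4]
print(add_one_vectorized([1, 2, 3]))  # [2 3 4]
```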

  14. MIMD • [Diagram: multiple processors, each executing its own instruction stream over its own stream of data items]

  15. Parallel vs. Distributed • [Diagram: parallel — multiple processors attached to a shared memory; distributed — separate machines exchanging data over a network connection] • Parallel: multiple CPUs within a shared-memory machine • Distributed: multiple machines, each with its own memory, connected over a network

  16. Divide and Conquer • [Diagram: “Work” is partitioned into w1, w2, w3; each chunk goes to a “worker”, producing partial results r1, r2, r3, which are combined into the final “Result”]
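A minimal sketch of this partition/work/combine structure in Python (the word-count task and chunking scheme are my own illustration, not from the lecture):

```python
from multiprocessing import Pool

def partition(work, n):
    """Split a list of work items into n roughly equal chunks."""
    k = max(1, len(work) // n)
    return [work[i:i + k] for i in range(0, len(work), k)]

def worker(chunk):
    """Each worker produces a partial result (here: a word count)."""
    return sum(len(line.split()) for line in chunk)

def combine(partial_results):
    """Merge the partial results into the final result."""
    return sum(partial_results)

if __name__ == "__main__":
    lines = ["the quick brown fox", "jumps over", "the lazy dog"] * 1000
    with Pool(3) as pool:                      # three "workers"
        partials = pool.map(worker, partition(lines, 3))
    print(combine(partials))                   # total word count
```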

  17. Different Workers • Different threads in the same core • Different cores in the same CPU • Different CPUs in a multi-processor system • Different machines in a distributed system

  18. Parallelization Problems • How do we assign work units to workers? • What if we have more work units than workers? • What if workers need to share partial results? • How do we aggregate partial results? • How do we know all the workers have finished? • What if workers die? What is the common theme of all of these problems?

  19. General Theme? • Parallelization problems arise from: • Communication between workers • Access to shared resources (e.g., data) • Thus, we need a synchronization system! • This is tricky: • Finding bugs is hard • Solving bugs is even harder
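To make the shared-resource problem concrete, here is a small sketch (my own example, not from the lecture) of a race condition on a shared counter, plus the lock that serializes access:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write: threads can interleave here

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread at a time touches the counter
            counter += 1

def run(fn):
    global counter
    counter = 0
    threads = [threading.Thread(target=fn, args=(100_000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

print(run(unsafe_increment))  # often less than 400000: updates were lost
print(run(safe_increment))    # always 400000
```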

  20. Multi-Threaded Programming • Difficult because: • You don’t know the order in which threads run • You don’t know when threads interrupt each other • Thus, we need: • Semaphores (lock, unlock) • Condition variables (wait, notify, broadcast) • Barriers • Still, lots of problems: • Deadlock, livelock • Race conditions • ... • Moral of the story: be careful!
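As one illustration of these primitives (my own sketch, not from the lecture), a condition variable lets a consumer thread wait until a producer signals that shared state has changed:

```python
import threading

items = []
cond = threading.Condition()

def consumer():
    with cond:
        while not items:          # guard against spurious wakeups
            cond.wait()           # releases the lock while waiting
        print("consumed:", items.pop(0))

def producer():
    with cond:
        items.append("job-1")
        cond.notify()             # wake one waiting consumer

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```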

  21. Patterns for Parallelism • Several programming methodologies exist to build parallelism into programs • Here are some…

  22. Master/Workers • The master initially owns all data • The master creates workers and assigns tasks • The master waits for workers to report back • [Diagram: one master node fanning out to a pool of workers]
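A minimal master/workers sketch following these rules (the squaring task and thread counts are my invention):

```python
import threading, queue

tasks = queue.Queue()
results = queue.Queue()

def worker():
    """Workers pull assigned tasks and report results back to the master."""
    while True:
        task = tasks.get()
        if task is None:          # sentinel: master says no more work
            break
        results.put(task * task)  # report back

# The master owns the data, creates the workers, and assigns tasks
workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
for n in range(10):               # master assigns work units
    tasks.put(n)
for _ in workers:                 # one shutdown sentinel per worker
    tasks.put(None)
for w in workers:                 # master waits for workers to report back
    w.join()
print(sorted(results.get() for _ in range(10)))
```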

  23. Producer/Consumer Flow • Producers create work items • Consumers process them • Can be daisy-chained • [Diagram: chains of producer (P) → consumer (C) pairs, where each consumer feeds the next producer stage]
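A sketch of a daisy-chained pipeline (the stage logic is my own illustration): the middle stage consumes from one queue and produces into the next, acting as both consumer and producer:

```python
import threading, queue

q1, q2 = queue.Queue(), queue.Queue()
DONE = object()                        # sentinel marking end of stream

def produce():
    for n in range(5):
        q1.put(n)
    q1.put(DONE)

def stage():                           # consumer of q1, producer for q2
    while (item := q1.get()) is not DONE:
        q2.put(item * 10)
    q2.put(DONE)

def consume():
    while (item := q2.get()) is not DONE:
        print("got", item)

threads = [threading.Thread(target=f) for f in (produce, stage, consume)]
for t in threads: t.start()
for t in threads: t.join()
```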

  24. Work Queues • Any available consumer should be able to process data from any producer • Work queues decouple producers from consumers, removing the 1:1 relationship • [Diagram: several producers (P) feed a single shared queue, drained by several consumers (C)]
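A sketch of a shared work queue (again, my own example): several producers feed one queue, and whichever consumer is free takes the next item:

```python
import threading, queue

work = queue.Queue()                   # the shared queue

def producer(pid):
    for n in range(3):
        work.put(f"item-{pid}-{n}")    # any producer can enqueue

def consumer(cid):
    while True:
        item = work.get()
        if item is None:               # shutdown sentinel
            break
        print(f"consumer {cid} handled {item}")

producers = [threading.Thread(target=producer, args=(i,)) for i in range(3)]
consumers = [threading.Thread(target=consumer, args=(i,)) for i in range(2)]
for t in producers + consumers: t.start()
for p in producers: p.join()
for _ in consumers: work.put(None)     # one sentinel per consumer
for c in consumers: c.join()
```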

  25. And finally… • The above solutions represent general patterns • In reality: • Lots of one-off solutions, custom code • Burden on the programmer to manage everything • Can we push the complexity onto the system? • MapReduce…for next time
