
Basics of MapReduce Stanford CS341 Project Course Algorithm Theory for MapReduce



Presentation Transcript


  1. Teaching About MapReduce: Basics of MapReduce, Stanford CS341 Project Course, Algorithm Theory for MapReduce. Jeffrey D. Ullman, Stanford University

  2. Free Book – MapReduce and Much More • Mining of Massive Datasets, J. Leskovec, A. Rajaraman, J. D. Ullman. • Available for free download at i.stanford.edu/~ullman/mmds.html

  3. What Is MapReduce? • A programming system designed by Jeff Dean and Sanjay Ghemawat at Google for easy parallel programming on commodity hardware. • Uses a distributed file system that replicates chunks (typically 64MB) to protect against loss of data. • Architected so hardware failures do not necessitate restart of the entire job. • Apache Hadoop is the most popular open-source implementation of the idea.

  4. Mappers and Reducers • MapReduce job = Map function + Reduce function. • Map function converts a single input element into any number of key-value pairs. • Mapper = application of the Map function to a single input. • Map Task = Map-function execution on a chunk of inputs. • Behind the scenes, the system sorts all generated key-value pairs by key.
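
As a concrete, if standard, illustration that is not taken from these slides, a Map function can be written as an ordinary Python generator. This is only a sketch of the programming model, not of Hadoop's actual API.

    # A minimal sketch of a Map function, using word count as the usual
    # illustration (word count is not an example from these slides).
    # One input element (here a line of text) becomes any number of
    # key-value pairs.
    def map_fn(line):
        for word in line.split():
            yield (word, 1)

    # A mapper is one application of the Map function to a single input element:
    pairs = list(map_fn("the quick brown fox jumps over the lazy dog"))
    # pairs == [('the', 1), ('quick', 1), ..., ('dog', 1)]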

  5. Mappers and Reducers – (2) • Reduce function takes a single key and its list of associated values and produces outputs. • Reducer = application of the Reduce function to a single key-(list of values) pair. • Reduce Task = Reduce-function execution on one or more key-(list of values) pairs.
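
The matching Reduce function for that word-count sketch, again purely illustrative:

    # Sketch of a Reduce function: one key plus the list of all values that
    # were generated for it, producing any number of outputs.
    def reduce_fn(key, values):
        yield (key, sum(values))

    # A reducer is one application of the Reduce function to one
    # key-(list of values) pair:
    print(list(reduce_fn("the", [1, 1])))   # [('the', 2)]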

  6. Execution of a MapReduce Job • The Map tasks run in parallel at all the compute nodes where chunks of the file are found. • After all Map tasks complete, the key-value pairs generated are sorted by key and turned into key-(list of values) pairs. • Some number of Reduce tasks execute in parallel; together they cover all the keys, and each key with its value list is handled independently to produce output.
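
The whole job can be imitated on one machine in a few lines. The sketch below (an assumption of this write-up, not code from the slides) shows only the semantics of the map, sort/group, and reduce phases, not the distributed execution.

    from collections import defaultdict

    def run_mapreduce(inputs, map_fn, reduce_fn):
        # Map phase: every input element is turned into key-value pairs.
        pairs = [kv for x in inputs for kv in map_fn(x)]
        # Shuffle: the system groups the pairs by key into key-(list of values).
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        # Reduce phase: each key and its value list is handled independently.
        return [out for key, values in groups.items()
                for out in reduce_fn(key, values)]

    # With the word-count sketches above:
    # run_mapreduce(["a b a", "b c"], map_fn, reduce_fn) -> [('a', 2), ('b', 2), ('c', 1)]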

  7. CS341 Data-Mining Project Course at Stanford Taught each Spring by Jure Leskovec, Anand Rajaraman, and Jeff Ullman

  8. Organization of Course • About 10 days prior to the start, students who took Jure’s CS246 lecture course are invited to attend an information session. • We tell them about the datasets that are available. • Often contributed by startups who are looking for some help. • But we also have Twitter feeds, Wikipedia history, many others.

  9. Organization – (2) • Students, in teams of 3, submit brief proposals for projects. • What data will they use? • What do they hope to achieve? • How will they evaluate their results? • The three faculty evaluate the proposals and we select about 12 teams. • About 4 are coached by each faculty member. • This year, 13 of 19 teams were selected.

  10. The Drug-Interaction Project – An Example Worth Teaching: the Initial Attempt, the Improvement That Worked, and the Theory of MapReduce Algorithms That Resulted

  11. The “Drug Interaction” Problem • Several years ago, one CS341 team used data from Stanford Medical School. • Data consists of records for 3000 drugs. • Each record lists the patients taking the drug, dates, and diagnoses. • About 1MB of data per drug. • Problem is to find drug interactions. • Example: two drugs that, when taken together, increase the risk of heart attack. • Must examine each pair of drugs and compare their data using a chi-squared test.
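
The slides do not show how two drugs' records are actually compared. The sketch below assumes, purely for illustration, that each pair of records has already been summarized into a 2x2 contingency table, on which SciPy's chi-squared test can be run.

    # Hedged sketch of the per-pair comparison. The table layout is an
    # assumption: rows = (patients on both drugs, patients not on both),
    # columns = (heart attack, no heart attack).
    from scipy.stats import chi2_contingency

    def interaction_test(both_event, both_ok, other_event, other_ok):
        table = [[both_event, both_ok],
                 [other_event, other_ok]]
        chi2, p_value, dof, expected = chi2_contingency(table)
        return chi2, p_value

    # Hypothetical counts, for illustration only:
    print(interaction_test(30, 970, 55, 3945))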

  12. Initial MapReduce Algorithm • The first attempt used the following plan: • Key = set of two drugs {i, j}. • Value = the record for one of these drugs. • Given drug i and its record Ri, the mapper generates all key-value pairs ({i, j}, Ri), where j is any other drug besides i. • Each reducer receives its key and a list of the two records for that pair: ({i, j}, [Ri,Rj]).
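
A sketch of this first attempt, with short strings standing in for the megabyte-long records; the unordered key {i, j} is represented as a sorted tuple.

    def map_drug(i, record, num_drugs=3000):
        # Emit ({i, j}, Ri) for every other drug j.
        for j in range(1, num_drugs + 1):
            if j != i:
                yield (tuple(sorted((i, j))), record)

    def reduce_pair(pair, records):
        # records is the list [Ri, Rj] for the two drugs of this pair;
        # this is where the chi-squared comparison would run.
        yield (pair, records)

    # Every mapper copies its megabyte-long record 2999 times:
    print(len(list(map_drug(1, "record-1"))))   # 2999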

  13. Example: Three Drugs [Diagram: mappers for drugs 1, 2, and 3 each emit their drug’s record under the keys pairing it with the other two drugs, i.e. {1,2}, {1,3}, and {2,3}.]

  14. Example: Three Drugs [Diagram: same picture as slide 13, shown again as the key-value pairs are routed to the reducers for {1,2}, {1,3}, and {2,3}.]

  15. Example: Three Drugs [Diagram: the reducer for {1,2} receives drug 1 data and drug 2 data; the reducer for {1,3} receives drug 1 data and drug 3 data; the reducer for {2,3} receives drug 2 data and drug 3 data.]

  16. What Went Wrong? • 3000 drugs • × 2999 key-value pairs per drug • × 1,000,000 bytes per key-value pair • = 9 terabytes communicated over gigabit Ethernet • = 90,000 seconds of network use.
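
The same arithmetic, spelled out; gigabit Ethernet is taken as roughly 10^8 bytes per second of usable throughput, which is how the 90,000-second figure arises.

    drugs = 3000
    pairs_per_drug = 2999           # one key-value pair for each other drug
    bytes_per_pair = 1_000_000      # the ~1MB record is copied into every pair
    total_bytes = drugs * pairs_per_drug * bytes_per_pair
    print(total_bytes)              # about 9e12 bytes, i.e. ~9 terabytes
    print(total_bytes / 1e8)        # ~90,000 seconds at ~1e8 bytes/s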

  17. A Better Approach • The way to handle this problem is to use fewer keys with longer lists of values. • Suppose we group the drugs into 30 groups of 100 drugs each. • Say G1 = drugs 1-100, G2 = drugs 101-200,…, G30 = drugs 2901-3000.

  18. The Map Function • A key is a set of two group numbers. • The mapper for drug i produces 29 key-value pairs. • Each key is the set containing the group of drug i and one of the other group numbers. • The value is a pair consisting of the drug number i and the megabyte-long record for drug i.
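
A sketch of the grouped Map function, with the slide's numbers (30 groups of 100 drugs) as defaults:

    def group_of(i, group_size=100):
        return (i - 1) // group_size + 1          # drugs 1-100 -> group 1, etc.

    def map_drug_grouped(i, record, num_groups=30):
        gi = group_of(i)
        for h in range(1, num_groups + 1):
            if h != gi:
                # Key = the set of two group numbers; value = (drug number, record).
                yield (tuple(sorted((gi, h))), (i, record))

    # Each mapper now produces only num_groups - 1 = 29 key-value pairs:
    print(len(list(map_drug_grouped(150, "record-150"))))   # 29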

  19. The Reduce Function • The reducer for the pair of groups {m, n} gets that key and a list of 200 drug records – the drugs belonging to groups m and n. • Its job is to compare each record from group m with each record from group n. • Special case: this reducer also compares the records of group n with each other, if m = n+1 or if n = 30 and m = 1, so that within-group pairs are covered exactly once as well. • Notice each pair of records is compared at exactly one reducer, so the total computation is not increased.
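
And a sketch of the grouped Reduce function; it compares every group-m record with every group-n record. The slide's special case for within-group comparisons is omitted to keep the sketch short.

    def group_of(i, group_size=100):              # same helper as above
        return (i - 1) // group_size + 1

    def reduce_group_pair(key, drug_records):
        # drug_records is a list of (drug number, record) pairs from groups m and n.
        m, n = key
        side_m = [(i, r) for i, r in drug_records if group_of(i) == m]
        side_n = [(i, r) for i, r in drug_records if group_of(i) == n]
        for i, ri in side_m:
            for j, rj in side_n:
                # Each cross-group pair {i, j} is compared at exactly this reducer.
                yield (tuple(sorted((i, j))), (ri, rj))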

  20. Why It Works • The big difference is in the communication requirement. • Now, each of 3000 drugs’ 1MB records is replicated 29 times. • Communication cost = 87GB, vs. 9TB.
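
The corresponding arithmetic, using the slide's numbers:

    # With grouping, each of the 3000 ~1MB records is copied only 29 times:
    print(3000 * 29 * 1_000_000)    # 8.7e10 bytes, about 87GB (vs. ~9TB before)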

  21. Theory of Map-Reduce Algorithms: Reducer Size, Replication Rate, Mapping Schemas, Lower Bounds

  22. A Model for Map-Reduce Algorithms • A set of inputs. • Example: the drug records. • A set of outputs. • Example: One output for each pair of drugs. • A many-many relationship between each output and the inputs needed to compute it. • Example: The output for the pair of drugs {i, j} is related to inputs i and j.
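
For a toy instance of the drug problem with four inputs (matching the diagram on the next slide), the inputs, outputs, and the many-many relationship can be written down directly:

    from itertools import combinations

    inputs = [1, 2, 3, 4]                          # the four drug records
    # One output per pair of drugs; the relationship maps each output to the
    # set of inputs needed to compute it.
    related_inputs = {frozenset(p): set(p) for p in combinations(inputs, 2)}
    print(related_inputs[frozenset({1, 2})])       # {1, 2}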

  23. Example: Drug Inputs/Outputs [Diagram: four inputs, Drug 1 through Drug 4, and six outputs, one for each pair 1-2, 1-3, 1-4, 2-3, 2-4, 3-4; each output is connected to the two drug inputs it needs.]

  24. Example: Matrix Multiplication [Diagram: the output element in row i and column j is computed from row i of the first matrix and column j of the second.]

  25. Reducer Size • Reducer size, denoted q, is the maximum number of inputs that a given reducer can have. • I.e., the length of the value list. • Limit might be based on how many inputs can be handled in main memory. • Or: make q low to force lots of parallelism.

  26. Replication Rate • The average number of key-value pairs created by each mapper is the replication rate. • Denoted r. • Represents the communication cost per input. • Often communication cost dominates computation in MapReduce algorithms.

  27. Example: Drug Interaction • Suppose we use g groups and d drugs. • A reducer receives two groups of d/g drugs each, so q = 2d/g. • Each of the d inputs is sent to g-1 reducers, so r = g-1, approximately g. • Replace g by r in q = 2d/g to get r = 2d/q. Tradeoff! The bigger the reducers, the less communication.
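
Plugging in the earlier numbers (d = 3000 drugs, g = 30 groups) makes the tradeoff concrete:

    d, g = 3000, 30
    q = 2 * d // g           # reducer size: two groups of d/g drugs each -> 200
    r = g - 1                # each drug is sent to g - 1 = 29 reducers, roughly g
    print(q, r, 2 * d // q)  # 200 29 30 : r is approximately 2d/q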

  28. Upper and Lower Bounds on r • What we did gives an upper bound on r as a function of q. • When we teach MapReduce algorithms, lower bounds (proofs we cannot do better) are as important as the algorithms themselves. • In this case, proofs that you cannot have lower r for a given q.

  29. Proofs Need Mapping Schemas • A mapping schema for a problem and a reducer size q is an assignment of inputs to sets of reducers, with two conditions: • No reducer is assigned more than q inputs. • For every output, there is some reducer that receives all of the inputs associated with that output. • Say the reducer covers the output.
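
The two conditions can be checked mechanically. The sketch below is a hypothetical helper, not something from the slides, reusing the output-to-inputs relation from the model above.

    def is_mapping_schema(assignment, related_inputs, q):
        # assignment: dict mapping each reducer to the set of inputs sent to it.
        # Condition 1: no reducer is assigned more than q inputs.
        if any(len(ins) > q for ins in assignment.values()):
            return False
        # Condition 2: every output is covered by some reducer that receives
        # all of the inputs that output needs.
        return all(any(needed <= ins for ins in assignment.values())
                   for needed in related_inputs.values())

    # Toy check: 4 drugs, one reducer that receives all of them, q = 4.
    related = {frozenset(p): set(p)
               for p in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]}
    print(is_mapping_schema({"R1": {1, 2, 3, 4}}, related, q=4))   # True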

  30. Mapping Schemas – (2) • Every map-reduce algorithm has a mapping schema. • The requirement that there be a mapping schema is what distinguishes map-reduce algorithms from general parallel algorithms.

  31. Example: Drug Interactions • d drugs, reducer size q. • Each drug has to meet each of the d-1 other drugs at some reducer. • If a drug is sent to a reducer, then at most q-1 other drugs are there. • Thus, each drug is sent to at least (d-1)/(q-1) reducers, and r ≥ (d-1)/(q-1). • This is about half the r of the algorithm we described. • A better algorithm gives r = d/q + 1, so the lower bound is essentially tight.
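
A quick numeric check of the bound against the two algorithms, for d = 3000:

    d = 3000
    for q in (200, 600, 1500):
        lower_bound = (d - 1) / (q - 1)   # no algorithm can have smaller r
        grouped = 2 * d / q               # r of the grouping algorithm above
        improved = d / q + 1              # r of the better algorithm mentioned
        print(q, round(lower_bound, 2), grouped, improved)
    # The grouping algorithm is roughly a factor of 2 above the bound;
    # the improved one stays within about 1 of it.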
