
MapReduce


Presentation Transcript


  1. MapReduce Costin Raiciu Advanced Topics in Distributed Systems, 2011

  2. Motivating App • Web Search • 12PB of Web data • Must be able to search it quickly • How can we do this?

  3. Web Search Primer • Each document is a collection of words • Different frequencies, counts, meaning • Users supply a few words – the query • Task: find all the documents which contain a specified word

  4. Solution: an inverted web index • For each keyword, store a list of the documents that contain it: • Student -> {a,b,c, …} • UPB -> {x,y,z,…} • … • When a query comes: • Lookup all the keywords • Intersect document lists • Order the results according to their importance
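
  To make the structure concrete, here is a minimal in-memory sketch of building and querying such an index; the document ids, contents, and intersection-based lookup are illustrative assumptions, and ranking by importance is omitted.

     import java.util.*;

     public class InvertedIndex {
       // keyword -> set of ids of documents containing it
       private final Map<String, Set<String>> index = new HashMap<>();

       // Index every word of a document.
       void add(String docId, String text) {
         for (String word : text.toLowerCase().split("\\s+")) {
           index.computeIfAbsent(word, k -> new HashSet<>()).add(docId);
         }
       }

       // Query: look up each keyword, then intersect the document lists.
       Set<String> query(String... keywords) {
         Set<String> result = null;
         for (String kw : keywords) {
           Set<String> docs = index.getOrDefault(kw.toLowerCase(), Set.of());
           if (result == null) result = new HashSet<>(docs);
           else result.retainAll(docs);
         }
         return result == null ? Set.of() : result;
       }

       public static void main(String[] args) {
         InvertedIndex idx = new InvertedIndex();
         idx.add("a", "student at UPB");
         idx.add("b", "student elsewhere");
         System.out.println(idx.query("student", "UPB")); // prints [a]
       }
     }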

  5. How do we build an inverted web index? • Read 12PB of web pages • For each page, find its keywords • Slowly build index: 2TB of data • We could run it on a single machine • 100MB/s hard disk read = 1GB read in 10s • 120,000,000 s just to read on a single machine • ≈ 4 years!
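
  The back-of-the-envelope arithmetic behind the four-year figure, written out:

     \[
       \frac{12\,\mathrm{PB}}{100\,\mathrm{MB/s}}
       = \frac{12 \times 10^{15}\,\mathrm{B}}{10^{8}\,\mathrm{B/s}}
       = 1.2 \times 10^{8}\,\mathrm{s} \approx 3.8\ \text{years}
     \]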

  6. We need parallelism! • Want to run this task in 1 day • We would need 1400 machines at least • What functionality might we need? • Move data around • Run processing • Check liveness • Deal with failures (certainty!) • Get results
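
  The machine count follows from the same numbers: to finish the read in one day, divide the single-machine read time by the seconds in a day.

     \[
       \frac{1.2 \times 10^{8}\,\mathrm{s}}{86\,400\,\mathrm{s/day}} \approx 1389 \approx 1400\ \text{machines}
     \]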

  7. Inspiration: Functional Programming

  8. Functional Programming Review • Functional operations do not modify data structures: they always create new ones • Original data still exists in unmodified form • Data flows are implicit in program design • Order of operations does not matter

  9. Functional Programming Review

     fun foo(l: int list) = sum(l) + mul(l) + length(l)

     Order of sum(), mul(), etc. does not matter – they do not modify l

  10. Map map f list Creates a new list by applying f to each element of the input list; returns output in order.

  11. Fold fold f x0 list Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list
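
  A small illustration of both primitives using Java streams (Java 16+); treating the stream API as a stand-in for the list primitives above is an assumption of this sketch:

     import java.util.List;

     public class MapFoldDemo {
       public static void main(String[] args) {
         List<Integer> xs = List.of(1, 2, 3, 4);

         // map f list: apply f to every element, output in order
         List<Integer> squares = xs.stream().map(x -> x * x).toList();

         // fold f x0 list: thread an accumulator through the list
         int sum = xs.stream().reduce(0, (acc, x) -> acc + x);

         System.out.println(squares); // [1, 4, 9, 16]
         System.out.println(sum);     // 10
       }
     }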

  12. Implicit Parallelism in map • In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements • Because applications of f to different elements are independent, we can reorder or parallelize execution • This is the “secret” that MapReduce exploits

  13. MapReduce

  14. Main Observation • A large fraction of distributed systems code has to do with: • Monitoring • Fault tolerance • Moving data around • Problems • Difficult to get right even if you know what you are doing • Every app implements its own mechanisms • Most of this code is app independent!

  15. MapReduce • Automatic parallelization & distribution • Fault-tolerant • Provides status and monitoring tools • Clean abstraction for programmers

  16. Programming Model • Borrows from functional programming • Users implement interface of two functions:

     map (in_key, in_value) -> (out_key, intermediate_value) list
     reduce (out_key, intermediate_value list) -> out_value list
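
  As a sketch only, the two signatures can be written as Java interfaces; these hypothetical types (Pair, Mapper, Reducer) are for illustration and are not Hadoop's actual API, which appears later in the deck:

     import java.util.List;

     record Pair<K, V>(K key, V value) {}

     // map (in_key, in_value) -> (out_key, intermediate_value) list
     interface Mapper<K1, V1, K2, V2> {
       List<Pair<K2, V2>> map(K1 inKey, V1 inValue);
     }

     // reduce (out_key, intermediate_value list) -> out_value list
     interface Reducer<K2, V2, V3> {
       List<V3> reduce(K2 outKey, List<V2> intermediateValues);
     }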

  17. map • Records from the data source (lines out of files, rows of a database, etc.) are fed into the map function as key/value pairs: e.g., (filename, line) • map() produces one or more intermediate values along with an output key from the input

  18. reduce • After the map phase is over, all the intermediate values for a given output key are combined together into a list • reduce() combines those intermediate values into one or more final values for that same output key • In practice, usually only one final value per key

  19. Parallelism • map() functions run in parallel, creating different intermediate values from different input data sets • reduce() functions also run in parallel, each working on a different output key • All values are processed independently • Bottleneck: reduce phase can’t start until map phase is completely finished

  20. How do we place computation? • Master assigns map and reduce jobs to workers • Does this mapping matter?

  21. Data Center Network Architecture (figure): Core switch → 10Gbps → Aggregation switches → 10Gbps → Top-of-rack switches → 1Gbps → Racks of servers

  22. Locality • Master program divides up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack • map() task inputs are divided into 64 MB blocks: same size as Google File System chunks

  23. Communication • Map output stored to local disk • Shuffle phase: • Reducers need to read data from all mappers • Typically cross-rack and expensive • Need full bisection bandwidth in theory • More about good topologies next time!

  24. Fault Tolerance • Master detects worker failures • Re-executes completed & in-progress map() tasks • Why completed also? Map output lives on the failed worker’s local disk, so it is lost with the machine • Re-executes in-progress reduce() tasks

  25. Fault Tolerance (2) • Master notices particular input key/values cause crashes in map(), and skips those values on re-execution. • Effect: Can work around bugs in third-party libraries!

  26. Optimizations • No reduce can start until map is complete: a single slow disk controller can rate-limit the whole process • Master redundantly executes “stragglers”: slow-moving map tasks • Uses results of first copy to finish • Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?

  27. Optimizations • “Combiner” functions can run on same machine as a mapper • Causes a mini-reduce phase to occur before the real reduce phase, to save bandwidth • Under what conditions is it sound to use a combiner? (see the sketch below)
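
  One answer to the question above: the combined operation must be associative and commutative, so that pre-reducing each mapper's output locally cannot change the final result. A minimal sketch with made-up word counts:

     import java.util.List;

     public class CombinerSoundness {
       static int sum(List<Integer> xs) {
         return xs.stream().mapToInt(Integer::intValue).sum();
       }

       public static void main(String[] args) {
         // Counts for one key emitted by two different mappers.
         List<Integer> mapper1 = List.of(1, 1, 1);
         List<Integer> mapper2 = List.of(1, 1);

         // No combiner: the reducer sees all five raw values.
         int direct = sum(List.of(1, 1, 1, 1, 1));

         // Combiner: each mapper pre-sums locally; the reducer sums the
         // partial sums. Equal because + is associative and commutative.
         int combined = sum(List.of(sum(mapper1), sum(mapper2)));

         System.out.println(direct == combined); // true
       }
     }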

  28. More and more MapReduce

  29. Apache Hadoop: An Implementation of MapReduce

  30. http://hadoop.apache.org/ • Open source, written in Java • Scale: thousands of nodes and petabytes of data • Still pre-1.0 release • 22 Apr 2009: release 0.20.0 • 17 Sep 2008: release 0.18.1 • ...but already used by many

  31. Hadoop • MapReduce and Distributed File System framework for large commodity clusters • Master/Slave relationship • JobTracker handles all scheduling & data flow between TaskTrackers • TaskTracker handles all worker tasks on a node • Individual worker task runs map or reduce operation • Integrates with HDFS for data locality

  32. Hadoop Supported File Systems • HDFS: Hadoop's own file system. • Amazon S3 file system. • Targeted at clusters hosted on the Amazon Elastic Compute Cloud server-on-demand infrastructure • Not rack-aware • CloudStore • previously Kosmos Distributed File System • like HDFS, this is rack-aware. • FTP Filesystem • stored on remote FTP servers. • Read-only HTTP and HTTPS file systems.

  33. "Rack awareness" • optimization which takes into account the geographic clustering of servers • network traffic between servers in different geographic clusters is minimized.

  34. Hadoop scheduler • Runs a few map and reduce tasks in parallel on the same machine • To overlap IO and computation • Whenever there is an empty slot the scheduler chooses: • A failed task, if it exists • An unassigned task, if it exists • A speculative task (also running on another node)
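
  A sketch of that slot-filling priority, assuming hypothetical task queues rather than the actual JobTracker code:

     import java.util.ArrayDeque;
     import java.util.Queue;

     class SchedulerSketch {
       static class Task {}

       Queue<Task> failedTasks = new ArrayDeque<>();     // previous attempt failed
       Queue<Task> unassignedTasks = new ArrayDeque<>(); // never run yet
       Queue<Task> runningTasks = new ArrayDeque<>();    // candidates for speculation

       // Called whenever a machine reports an empty map/reduce slot.
       Task chooseTask() {
         if (!failedTasks.isEmpty()) return failedTasks.poll();
         if (!unassignedTasks.isEmpty()) return unassignedTasks.poll();
         // Speculative: start a second copy of a task that is already
         // running on another node; whichever copy finishes first wins.
         return runningTasks.peek(); // may be null if nothing to do
       }
     }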

  35. wordCount: A Simple Hadoop Example http://wiki.apache.org/hadoop/WordCount

  36. Word Count Example • Read text files and count how often words occur. • The input is text files • The output is a text file • each line: word, tab, count • Map: Produce pairs of (word, count) • Reduce: For each word, sum up the counts.
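
  A tiny worked example; the input text is made up, and the output uses the word, tab, count layout described above:

     Input file:
       Hello World Bye World

     Output file:
       Bye     1
       Hello   1
       World   2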

  37. WordCount Overview

     import ...

     public class WordCount {

       public static class Map extends MapReduceBase implements Mapper ... {
         public void map ...
       }

       public static class Reduce extends MapReduceBase implements Reducer ... {
         public void reduce ...
       }

       public static void main(String[] args) throws Exception {
         JobConf conf = new JobConf(WordCount.class);
         ...
         FileInputFormat.setInputPaths(conf, new Path(args[0]));
         FileOutputFormat.setOutputPath(conf, new Path(args[1]));

         JobClient.runJob(conf);
       }
     }

  38. wordCount Mapper

     public static class Map extends MapReduceBase
         implements Mapper<LongWritable, Text, Text, IntWritable> {
       private final static IntWritable one = new IntWritable(1);
       private Text word = new Text();

       // Tokenize the input line and emit (word, 1) for every token.
       public void map(LongWritable key, Text value,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
         String line = value.toString();
         StringTokenizer tokenizer = new StringTokenizer(line);
         while (tokenizer.hasMoreTokens()) {
           word.set(tokenizer.nextToken());
           output.collect(word, one);
         }
       }
     }

  39. wordCount Reducer

     public static class Reduce extends MapReduceBase
         implements Reducer<Text, IntWritable, Text, IntWritable> {

       // Sum all the counts emitted for this word and emit (word, total).
       public void reduce(Text key, Iterator<IntWritable> values,
                          OutputCollector<Text, IntWritable> output,
                          Reporter reporter) throws IOException {
         int sum = 0;
         while (values.hasNext()) {
           sum += values.next().get();
         }
         output.collect(key, new IntWritable(sum));
       }
     }

  40. wordCount JobConf

     JobConf conf = new JobConf(WordCount.class);
     conf.setJobName("wordcount");

     conf.setOutputKeyClass(Text.class);
     conf.setOutputValueClass(IntWritable.class);

     conf.setMapperClass(Map.class);
     conf.setCombinerClass(Reduce.class); // Reduce doubles as the combiner: summing is associative and commutative
     conf.setReducerClass(Reduce.class);

     conf.setInputFormat(TextInputFormat.class);
     conf.setOutputFormat(TextOutputFormat.class);

  41. WordCount main

     public static void main(String[] args) throws Exception {
       JobConf conf = new JobConf(WordCount.class);
       conf.setJobName("wordcount");

       conf.setOutputKeyClass(Text.class);
       conf.setOutputValueClass(IntWritable.class);

       conf.setMapperClass(Map.class);
       conf.setCombinerClass(Reduce.class);
       conf.setReducerClass(Reduce.class);

       conf.setInputFormat(TextInputFormat.class);
       conf.setOutputFormat(TextOutputFormat.class);

       FileInputFormat.setInputPaths(conf, new Path(args[0]));
       FileOutputFormat.setOutputPath(conf, new Path(args[1]));

       JobClient.runJob(conf);
     }

  42. Invocation of wordcount • /usr/local/bin/hadoop dfs -mkdir <hdfs-dir> • /usr/local/bin/hadoop dfs -copyFromLocal <local-dir> <hdfs-dir> • /usr/local/bin/hadoop jar hadoop-*-examples.jar wordcount <in-dir> <out-dir>

  43. Hadoop At Work

  44. Experimental setup • 12 servers connected to a gigabit switch • Same hardware • Single hard disk per server • Filesystem: HDFS with replication 2 • 128MB block size • 3 Map and 2 Reduce tasks per machine • Data • Crawl of the .uk domain (2009) • 50GB (unreplicated)

  45. Monitoring Task Progress • Hadoop estimates task status • Map: % of input data read from HDFS • Reduce • 33% - progress in copy (shuffle) phase • 33% - sorting keys • 33% - writing output in HDFS • Hadoop computes average progress score for each category
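
  A sketch of how such a reduce-side score could be combined from the three equally weighted phases; illustrative only, not Hadoop's actual accounting:

     public class ProgressSketch {
       // Each phase reports a fraction in [0, 1].
       static double reduceProgress(double copyFrac, double sortFrac, double writeFrac) {
         return (copyFrac + sortFrac + writeFrac) / 3.0;
       }

       public static void main(String[] args) {
         // Copy done, sort half-way, no output written yet -> 50%.
         System.out.println(reduceProgress(1.0, 0.5, 0.0)); // 0.5
       }
     }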

  46. 5 Sep, 2011: release 0.20.204.0 available

  47. Back to Hadoop Overview
