
Improving MapReduce Performance Using Smart Speculative Execution Strategy

This paper proposes a smart speculative execution strategy to address stragglers in MapReduce, improving performance and throughput. The strategy selects backup candidates based on per-phase process speed, estimates task remaining time using EWMA, and maximizes the cost performance of speculative execution, shortening job execution time and increasing cluster throughput.


Presentation Transcript


  1. Improving MapReduce Performance Using Smart Speculative Execution Strategy. Qi Chen, Cheng Liu, and Zhen Xiao. Oct 2013. To appear in IEEE Transactions on Computers

  2. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  3. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  4. Introduction • The new era of Big Data is coming! • – 20 PB per day (2008) • – 30 TB per day (2009) • – 60 TB per day (2010) • – petabytes per day • What does big data mean? • Important user information • Significant business value

  5. MapReduce • What is MapReduce? • The most popular parallel computing model, proposed by Google • Applications: database operations (Select, Join, Group); search engines (PageRank, inverted index, log analysis); machine learning (clustering, machine translation, recommendation); scientific computation; cryptanalysis; …

  6. Straggler • What is a straggler in MapReduce? • A node on which tasks take an unusually long time to finish • It will: • Delay the job execution time • Degrade the cluster throughput • How to solve it? • Speculative execution • A slow task is backed up on an alternative machine in the hope that the backup copy finishes faster

  7. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  8. Architecture • The master assigns map and reduce tasks to workers • Input files are divided into splits (Split 1 … Split M), each processed by a map task in the Map Stage • Each map task partitions its output (Part 1, Part 2, …), which the reduce tasks fetch in the Reduce Stage to produce the output files

  9. Programming model • Input: (key, value) pairs • Output: (key*, value*) pairs
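
To make the (key, value) model concrete, here is a minimal word-count sketch in Python; the function names and the in-memory shuffle are purely illustrative, since a real framework such as Hadoop distributes these steps across the cluster:

```python
from collections import defaultdict

def map_fn(doc_id, text):
    # (key, value) = (document id, document text)
    # emit intermediate (key*, value*) = (word, 1) pairs
    for word in text.split():
        yield word, 1

def reduce_fn(word, counts):
    # receives one intermediate key with all of its values, emits (key*, value*)
    yield word, sum(counts)

def run_wordcount(documents):
    # simulate the shuffle: group intermediate values by key
    grouped = defaultdict(list)
    for doc_id, text in documents.items():
        for word, count in map_fn(doc_id, text):
            grouped[word].append(count)
    # reduce stage
    return dict(pair for word, counts in grouped.items()
                for pair in reduce_fn(word, counts))

print(run_wordcount({"d1": "map reduce map", "d2": "reduce"}))
# -> {'map': 2, 'reduce': 2}
```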

  10. Causes of Stragglers

  11. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  12. Previous work • Google and Dryad • When a stage is close to completion • Back up an arbitrary set of the remaining tasks • Hadoop (original) • Back up tasks whose progress falls behind the average by a fixed gap • LATE (OSDI'08) • Back up the task with: 1) the longest remaining time, 2) a progress rate below a threshold • Identify a worker as slow when its performance score is below a threshold • Mantri (OSDI'10) • Saves cluster computing resources • Backs up outliers when they show up • Kill-restart when the cluster is busy, lazy duplication when the cluster is idle

  13. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  14. Pitfalls in Selecting Slow Tasks • Using the average progress rate to identify slow tasks and estimate task remaining time • Hadoop and LATE assume that: • Tasks of the same type process almost the same amount of input data • The progress rate is either stable or accelerating during a task's lifetime • There are scenarios in which these assumptions break down

  15. Input data skew • Sort benchmark on 10 GB of input data following a Zipf distribution (skew parameter 1.0)

  16. Phase percentage varies • Process speed varies across different phases • Different jobs have different phase duration ratios • The same job has different phase duration ratios in different environments

  17. Reduce tasks start asynchronously • Tasks in different phases cannot be compared directly

  18. Taking a long time to identify stragglers • Existing strategies cannot identify stragglers in time

  19. Pitfalls in Selecting Backup Nodes • Identifying slow worker nodes • LATE: sum of the progress of all completed and running tasks on the node • Hadoop: average progress rate of all completed tasks on the node • Some worker nodes may do more time-consuming tasks and unfairly get a lower performance score • e.g. tasks with more data to process, or non-local map tasks • Choosing a backup worker node • LATE and Hadoop: ignore data locality • Our observation: a data-local map task can be over three times faster than a non-local map task

  20. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  21. Selecting Backup Candidates • Using per-phase process speed • Divide each task into multiple phases: a map task has Map and Combine phases; a reduce task has Copy, Sort, and Reduce phases • Use the phase process speed to identify slow tasks and estimate task remaining time

  22. Selecting Backup Candidates • Using EWMA to Predict Process Speed
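
The slide does not reproduce the exact formula, but a standard exponentially weighted moving average (EWMA) over the observed phase speed looks like the sketch below; the smoothing factor alpha = 0.2 is an assumed example value, not taken from the paper:

```python
class EwmaSpeed:
    """Exponentially weighted moving average of a task's per-phase process speed."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha          # weight of the newest sample (assumed value)
        self.speed = None           # current speed estimate (e.g. MB/s)

    def update(self, observed_speed):
        if self.speed is None:
            self.speed = observed_speed
        else:
            # EWMA: new estimate = alpha * latest sample + (1 - alpha) * old estimate
            self.speed = self.alpha * observed_speed + (1 - self.alpha) * self.speed
        return self.speed

# usage: feed in the speed measured at each progress report (heartbeat)
est = EwmaSpeed()
for sample in [10.0, 12.0, 4.0, 11.0]:   # MB/s observed at heartbeats
    print(est.update(sample))
```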

  23. Selecting Backup Candidates • Estimating task remaining time and backup time • Use the phase average process speed to estimate the remaining time of a phase • Because the process speed of the copy phase is often fast at the beginning and drops later, the remaining time of the copy phase is estimated with a separate formula (see the sketch below)
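
A minimal sketch of these estimates, assuming the generic phase estimate is the remaining data divided by the smoothed speed, and using a conservative elapsed-time-based variant for the copy phase (the paper's exact copy-phase formula is on the original slide and is not reproduced here):

```python
def phase_remaining_time(bytes_left, ewma_speed):
    """Generic phase: remaining data divided by the smoothed process speed."""
    return bytes_left / max(ewma_speed, 1e-9)

def copy_phase_remaining_time(progress, elapsed):
    """Copy phase (assumed variant): scale the elapsed time by the fraction of
    work still to do, so an initially fast speed does not give an
    over-optimistic estimate."""
    progress = max(progress, 1e-3)    # avoid division by zero early in the phase
    return elapsed * (1.0 - progress) / progress

# e.g. a copy phase 40% done after 30 s -> estimated 45 s still to go
print(copy_phase_remaining_time(0.4, 30.0))
```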

  24. Selecting Backup Candidates • Maximizing cost performance • Cost: the computing resources occupied by tasks • Performance: the shortening of job execution time and the increase in cluster throughput • We hope that: • when the cluster is idle, the cost of speculative execution is less of a concern • when the cluster is busy, the cost is an important consideration

  25. Selecting Proper Backup Nodes • Assign backup tasks to fast nodes • How to measure the performance of a node? • Use the predicted process bandwidth of data-local map tasks completed on the node to represent its performance • Consider data locality • Note: the process speed of data-local map tasks can be 3 times that of non-local map tasks • Therefore, we keep process speed statistics of data-local, rack-local, and non-local map tasks for each node • For nodes that have not processed any map task at a specific locality level, we use the average process speed of all nodes at this level as an estimate • Launch a backup on node i only if the task's remaining time > its backup time on node i (see the sketch below)
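
The sketch below illustrates this node-selection logic: keep per-node speed samples at each locality level (data-local, rack-local, non-local), fall back to the cluster-wide average for a level the node has never processed, and back up on node i only if the straggler's remaining time exceeds the predicted backup time there. The names and data structures are illustrative, not the paper's implementation:

```python
from statistics import mean

def node_speed(stats, node, level):
    """Predicted map process speed of `node` at a locality level
    ("data_local", "rack_local", or "non_local"); fall back to the average
    over all nodes at that level if this node has no samples there."""
    samples = stats.get(node, {}).get(level, [])
    if samples:
        return mean(samples)
    fallback = [s for n in stats for s in stats[n].get(level, [])]
    return mean(fallback) if fallback else 0.0

def should_backup_on(node, level, task_bytes, remaining_time, stats):
    """Launch a backup on `node` only if the straggler's estimated remaining
    time is longer than the predicted time to rerun the task there."""
    speed = node_speed(stats, node, level)
    if speed <= 0:
        return False
    backup_time = task_bytes / speed
    return remaining_time > backup_time

# toy example: node "n2" processes data-local map input at about 60 MB/s,
# so a 6000 MB task would take ~100 s there; back it up since ~150 s remain
stats = {"n2": {"data_local": [58.0, 62.0]}}
print(should_backup_on("n2", "data_local", task_bytes=6000.0,
                       remaining_time=150.0, stats=stats))   # True
```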

  26. Summary • A task will be backed up when it meets all of the following conditions (see the sketch below): • it has executed for a certain amount of time (i.e., the speculative lag) • both the progress rate and the process bandwidth in the current phase of the task are sufficiently low • the profit of doing the backup outweighs that of not doing it • its estimated remaining time is longer than the predicted time to finish on a backup node • it has the longest remaining time among all the tasks satisfying the conditions above
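
Putting these conditions together, a compact predicate might look like the following; the threshold names (speculative_lag, rate_threshold, bw_threshold) and the dictionary layout are assumptions made for illustration:

```python
def eligible_for_backup(task, now, speculative_lag, rate_threshold, bw_threshold):
    """All conditions from the summary above must hold."""
    return (now - task["start_time"] >= speculative_lag           # ran long enough
            and task["progress_rate"] < rate_threshold            # slow progress rate
            and task["phase_bandwidth"] < bw_threshold            # slow process bandwidth
            and task["backup_profit"] > task["no_backup_profit"]  # backup is worth the cost
            and task["remaining_time"] > task["backup_time"])     # backup would finish first

def pick_backup_candidate(tasks, now, **thresholds):
    """Among the eligible tasks, back up the one with the longest remaining time."""
    eligible = [t for t in tasks if eligible_for_backup(t, now, **thresholds)]
    return max(eligible, key=lambda t: t["remaining_time"], default=None)
```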

  27. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  28. Experiment Environment • Two scales: • Small: 30 virtual machines on 15 physical machines • Large: 100 virtual machines on 30 physical machines • Each physical machine: • dual processors (2.4 GHz Intel Xeon E5620, 16 logical cores in total), 24 GB of RAM, and two 150 GB disks • Organized in three racks connected by 1 Gbps Ethernet • Each virtual machine: • 2 virtual cores, 4 GB of RAM, and 40 GB of disk space • Benchmarks: • Sort, WordCount, Grep, Gridmix

  29. Scheduling in Heterogeneous Environments • Load of each host in heterogeneous environments

  30. Scheduling in Heterogeneous Environments • Working With Different Workloads

  31. Scheduling in Heterogeneous Environments • Analysis (using Word Count and Grep)

  32. Scheduling in Heterogeneous Environments • Handling data skew (Sort) • Execution speed: +17% and +37% • Cluster throughput: +19% and +44%

  33. Scheduling in Heterogeneous Environments • Competing with other applications • Run some I/O-intensive processes on some servers • A dd process that creates large files in a loop, writing random data on some physical machines • MCP runs 36% faster than Hadoop-LATE and increases the cluster throughput by 34%

  34. Large scale Experiment • Load distribution • MCP finishes jobs 21% faster than Hadoop-LATE and improves the cluster throughput by 16%

  35. Scheduling in Homogeneous Environments • Small scale cluster with each host running 2 VMs • There is no straggler node in the cluster • MCP finishes jobs 6% faster than Hadoop-LATE and 2% faster than Hadoop-None. • Hadoop-LATE behaves worse than Hadoop-None due to too many unnecessary reduce backups • MCP improves reduce backup precision by 40% • MCP can achieve better data locality for map tasks

  36. Scheduling Cost • We measure the average time that MCP and Hadoop-LATE spend on speculative scheduling in a job with 350 map tasks and 110 reduce tasks • MCP spends about 0.54 ms, with O(n) complexity • LATE spends about 0.74 ms, with O(n log n) complexity

  37. Outline • 1. Introduction • 2. Background • 3. Previous work • 4. Pitfalls • 5. Our Design • 6. Evaluation • 7. Conclusion

  38. Conclusion • We analyze the pitfalls of current speculative execution strategies in MapReduce • Scenarios: data skew, tasks that start asynchronously, improper configuration of phase percentage, etc. • We develop a new strategy, MCP, to handle these scenarios: • Accurate slow-task prediction and remaining time estimation • Takes the cost performance of computing resources into account • Takes both data locality and data skew into consideration when choosing proper worker nodes • MCP fits well in both heterogeneous and homogeneous environments • Handles data skew well, is quite scalable, and has low overhead

  39. Thank You!
