
Stochastic DAG Scheduling using Monte Carlo Approach

Heterogeneous Computing Workshop (at IPDPS) 2012. Extended version: Elsevier JPDC (accepted July 2013, in press). Wei Zheng, Department of Computer Science, Xiamen University, Xiamen, China; Rizos Sakellariou, School of Computer Science, The University of Manchester, UK





Presentation Transcript


  1. Stochastic DAG Scheduling using Monte Carlo Approach • Heterogeneous Computing Workshop (at IPDPS) 2012 • Extended version: Elsevier JPDC (accepted July 2013, in press) • Wei Zheng, Department of Computer Science, Xiamen University, Xiamen, China • Rizos Sakellariou, School of Computer Science, The University of Manchester, UK

  2. Previous Presentation (9/06/13) • Research Area: Scheduling workflows in heterogeneous environments with variable performance.

  3. This Presentation

  4. Introduction • General DAG scheduling assumption: • The estimated execution time of each task is known in advance. • Several estimation techniques exist, e.g. averaging over several runs. • Similarly, estimated data transfer times are known in advance. • A study* has shown that there may be significant deviations in observed performance on Grids. • To address these deviations, two approaches are prevalent: • Just-in-time (high overhead) • Runtime (static schedule + runtime changes) (hypothesis**: may waste resources and increase the makespan if the static schedule is not very good) • * A. Lastovetsky, J. Twamley, Towards a realistic performance model for networks of heterogeneous computers, in: M. Ng, A. Doncescu, L. Yang, T. Leng (Eds.), High Performance Computational Science and Engineering, IFIP International Federation for Information Processing, vol. 172, Springer, Boston, 2005, pp. 39–57. • ** R. Sakellariou, H. Zhao, A low-cost rescheduling policy for efficient mapping of workflows on grid systems, Sci. Program. 12(4) (2004) 253–262.

  5. Problem Addressed • Generating a better (i.e., makespan-minimizing) "static" schedule based on a stochastic model of the variations in the performance (execution time) of the individual tasks in the graph.

  6. Background and Related Work • Heterogeneous Earliest Finish Time (HEFT) heuristic (discussed in the previous presentation) • List-based scheduling. • Prioritize tasks by their "bLevel" (essentially, tasks on the critical path get higher priority). • Once a task is chosen, map it to the "best" available resource. bLevel(i) = w_i + max_{j∈Succ(i)} { w_{i→j} + bLevel(j) }
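The bLevel recursion above can be sketched as follows; the DAG, task weights w[i] (mean execution times) and edge weights c[(i, j)] (transfer times) are hypothetical example data, not from the paper.

```python
# Minimal sketch of the bLevel (bottom-level) computation used for task
# prioritization in HEFT-style list scheduling.
from functools import lru_cache

succ = {0: [1, 2], 1: [3], 2: [3], 3: []}                 # task -> successors
w = {0: 4.0, 1: 3.0, 2: 5.0, 3: 2.0}                      # mean execution times w_i
c = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.5, (2, 3): 0.5}  # edge weights w_{i->j}

@lru_cache(maxsize=None)
def blevel(i):
    # bLevel(i) = w_i + max over successors j of (w_{i->j} + bLevel(j));
    # for the exit node the max term is empty, so bLevel is just w_i.
    if not succ[i]:
        return w[i]
    return w[i] + max(c[(i, j)] + blevel(j) for j in succ[i])

# Tasks are scheduled in decreasing bLevel order: critical-path tasks first.
order = sorted(succ, key=blevel, reverse=True)
```

Here the critical path is 0 → 2 → 3, so task 0 gets the highest priority and task 2 is ranked before task 1.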

  7. Problem Description • G = (N, E) -> DAG with one entry node and one exit node. • R -> set of heterogeneous resources. • ET_{i,p} -> random variable for the execution time of task i on resource p. • Assumption: network bandwidth is constant. • M -> makespan = finish time of the exit node. Goal: Find a schedule Ω that minimizes the makespan (assign N to R; no overlap, no preemption, no migration).

  8. Methodology • Assumption: analytical methods that solve the probabilistic optimization problem are too expensive. • Use the Monte Carlo Sampling (MCS) method: • Define a space comprising the possible input values: I_G = { ET_{i,p} : i ∈ N, p ∈ R }. • Take an independent random sample from the space: P_G = f_smp(I_G) = { t_{i,p} : i ∈ N, p ∈ R }. • Perform a deterministic computation using the sampled input and store the result: Ω_G = Static_Scheduling_HEFT(G, P_G). • Repeat steps 2 and 3 until some exit condition (a number of repetitions) is met. • Aggregate the stored results of the individual computations into the final result.
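The MCS loop above can be sketched as follows. The distributions, the simplified `heft_schedule` stand-in (a real HEFT would honour precedence constraints and insertion-based resource selection) and the aggregation rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch of Monte Carlo Sampling for stochastic DAG scheduling:
# repeatedly sample execution times, run a deterministic scheduler on each
# sample, then aggregate the candidate schedules into one final schedule.
import random
from collections import Counter

random.seed(0)
tasks, resources = [0, 1, 2], ["p0", "p1"]
# ET[i][p]: (mean, stddev) of the stochastic execution time of task i on p.
ET = {i: {p: (5.0 + i, 1.0) for p in resources} for i in tasks}

def sample_times():
    # One independent sample P_G = { t_{i,p} } drawn from the input space I_G.
    return {i: {p: max(0.1, random.gauss(*ET[i][p])) for p in resources}
            for i in tasks}

def heft_schedule(times):
    # Stand-in for Static_Scheduling_HEFT: map each task to the resource
    # that is fastest for it in this particular sample.
    return tuple(min(resources, key=lambda p: times[i][p]) for i in tasks)

def makespan(schedule, times):
    # Stand-in evaluator: per-resource load sum, makespan = busiest resource.
    load = dict.fromkeys(resources, 0.0)
    for i, p in zip(tasks, schedule):
        load[p] += times[i][p]
    return max(load.values())

# Repeat sampling + deterministic scheduling; store the candidate schedules.
candidates = Counter(heft_schedule(sample_times()) for _ in range(200))

# Aggregate: re-evaluate each candidate on fresh samples and keep the one
# with the lowest average makespan.
best = min(candidates,
           key=lambda s: sum(makespan(s, sample_times()) for _ in range(50)))
```

The two stages mirror the slides' selection and production phases: a first loop generates candidate schedules, a second loop evaluates them under fresh samples.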

  9. MCS-Based Scheduling • Complexity: • Depends on the deterministic scheduling algorithm. • For HEFT it is O(v + e·r) = O(e·r). • First loop: O(e·r·m). • Second loop: O(e·n·k). • Total: O(e·r·m + e·n·k).

  10. Example

  11. Example • 10,000 iterations in the production phase (Gaussian distribution) • 200 iterations in the selection phase • 20% reduction in makespan • Absolute increase in algorithm time: 1.2 s

  12. Evaluation • Graphs

  13. Threshold Calculation

  14. Convergence (no. of repetitions)

  15. Convergence

  16. Makespan performance evaluation • Static HEFT (baseline) with mean ET values • Autopsy – static HEFT with known ET values • MCS-Static • ReStatic • ReMCS • Graph generation (random generator of a given type) • Task execution times for different runs: • Select a "mean" for each task. • Use a probability distribution to select the actual execution time; the variation is bounded by the Quality of Estimation (QoE), 0 < QoE < 1.
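One way the QoE-bounded sampling described above might look is sketched below; the uniform distribution and the exact form of the bound (deviation of at most a (1 − QoE) fraction of the mean) are assumptions for illustration, not the paper's definition.

```python
# Sketch of drawing an "actual" execution time around a task's mean,
# with the deviation bounded by the Quality of Estimation (QoE).
import random

def actual_time(mean, qoe, rng=random):
    # QoE close to 1 -> accurate estimate (narrow interval around the mean);
    # QoE close to 0 -> large possible deviation from the mean.
    lo, hi = mean * qoe, mean * (2 - qoe)
    return rng.uniform(lo, hi)

random.seed(1)
t = actual_time(10.0, qoe=0.8)  # lies in [8.0, 12.0]
```

Under this assumption the baseline scheduler sees only the mean, while the simulated run uses the sampled value, so lower QoE makes the static schedule's predictions less reliable.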

  17. Makespan performance evaluation

  18. Summary • It is possible to obtain a good full-ahead static schedule that performs well under prediction inaccuracy, without too much overhead. • MCS, which has a more robust procedure for selecting an initial schedule, generally results in better performance when rescheduling is applied.
