
A Unified Approach to Approximating Resource Allocation and Scheduling

Amotz Bar-Noy (AT&T and Tel Aviv University), Reuven Bar-Yehuda (Technion IIT), Ari Freund (Technion IIT), Seffi Naor (Bell Labs and Technion IIT), Baruch Schieber (IBM T.J. Watson). Slides and paper at: http://www.cs.technion.ac.il/~reuven



Presentation Transcript


  1. A Unified Approach to Approximating Resource Allocation and Scheduling • Amotz Bar-Noy (AT&T and Tel Aviv University) • Reuven Bar-Yehuda (Technion IIT) • Ari Freund (Technion IIT) • Seffi Naor (Bell Labs and Technion IIT) • Baruch Schieber (IBM T.J. Watson) • Slides and paper at: http://www.cs.technion.ac.il/~reuven

  2. Summary of Results: Discrete • Single Machine Scheduling • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/2, non-combinatorial* • Berman, DasGupta STOC 00: 1/2 • This Talk, STOC 00 (independent): 1/2 • Bandwidth Allocation • Albers, Arora, Khanna SODA 99: O(1) for |Activityi| = 1* • Uma, Phillips, Wein SODA 00: 1/4, non-combinatorial • This Talk, STOC 00 (independent): 1/3 for w ≤ 1/2 • This Talk, STOC 00 (independent): 1/5 for w ≤ 1 • Parallel Unrelated Machines • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3 • Berman, DasGupta STOC 00: 1/2 • This Talk, STOC 00 (independent): 1/2

  3. Summary of Results: Continuous • Single Machine Scheduling • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3, non-combinatorial • Berman, DasGupta STOC 00: 1/2·(1-ε) • This Talk, STOC 00 (independent): 1/2·(1-ε) • Bandwidth Allocation • Uma, Phillips, Wein SODA 00: 1/6, non-combinatorial • This Talk, STOC 00 (independent): 1/3·(1-ε) for w ≤ 1/2, 1/5·(1-ε) for w ≤ 1 • Parallel Unrelated Machines • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/4 • Berman, DasGupta STOC 00: 1/2·(1-ε) • This Talk, STOC 00 (independent): 1/2·(1-ε)

  4. Summary of Results: and more… • General Off-line Caching Problem • Albers, Arora, Khanna SODA 99: O(1) with Cache_Size += O(Largest_Page), O(log(Cache_Size + Max_Page_Penalty)) • This Talk, STOC 00: 4 • Ring topology: • Transformation of the approximation ratio from the line to the ring topology: 1/α ⇒ 1/(α+1+ε) • Dynamic storage allocation (contiguous allocation): • Previous results: none for throughput maximization • Previous result for resource minimization (Kierstead 91): 6 • This paper: 1/35 for throughput maximization, using the result for resource minimization.

  5. The Local-Ratio Technique: Basic Definitions • Given a profit [penalty] vector p. • Maximize [Minimize] p·x • Subject to: feasibility constraints F(x) • x is an r-approximation if F(x) and p·x ≥ [≤] r·(p·x*) • An algorithm is an r-approximation if for any p, F it returns an r-approximation

  6. The Local-Ratio Theorem: • x is an r-approximation with respect to p1 • x is an r-approximation with respect to p - p1 • ⇒ x is an r-approximation with respect to p • Proof (for maximization, with p2 = p - p1): • p1·x ≥ r·p1* • p2·x ≥ r·p2* • ⇒ p·x ≥ r·(p1* + p2*) ≥ r·(p1 + p2)*
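The chain of inequalities can be restated in standard notation (writing q* for the optimum value with respect to a profit vector q, and p2 = p - p1); the last step is the one the slide leaves implicit:

```latex
% Local-Ratio Theorem, maximization version (slide 6).
\textbf{Claim.} If $x$ is feasible and $p_i \cdot x \ge r\, p_i^{*}$ for $i = 1, 2$,
then $p \cdot x \ge r\, p^{*}$, where $p = p_1 + p_2$.
\[
  p \cdot x \;=\; p_1 \cdot x + p_2 \cdot x
            \;\ge\; r\, p_1^{*} + r\, p_2^{*}
            \;\ge\; r\, (p_1 + p_2)^{*} \;=\; r\, p^{*}.
\]
% The last inequality holds because an optimal solution $x^{*}$ for
% $p_1 + p_2$ is itself feasible, hence $p_1 \cdot x^{*} \le p_1^{*}$
% and $p_2 \cdot x^{*} \le p_2^{*}$.
```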

  7. Special case: Optimization is a 1-approximation • x is optimal with respect to p1 • x is optimal with respect to p - p1 • ⇒ x is optimal with respect to p

  8. A Local-Ratio Schema for Maximization [Minimization] Problems: • Algorithm r-ApproxMax[Min]( Set, p ) • If Set = Φ then return Φ ; • If ∃ I ∈ Set with p(I) ≤ 0 then return r-ApproxMax( Set - {I}, p ) ; • [If ∃ I ∈ Set with p(I) = 0 then return {I} ∪ r-ApproxMin( Set - {I}, p ) ;] • Define “good” p1 ; • REC = r-ApproxMax[Min]( Set, p - p1 ) ; • If REC is not an r-approximation w.r.t. p1 then “fix it” ; • return REC ;

  9. The Local-Ratio Theorem: Applications • Applications to some optimization algorithms (r = 1): • (MST) Minimum Spanning Tree (Kruskal) • (SHORTEST-PATH) s-t Shortest Path (Dijkstra) • (LONGEST-PATH) s-t DAG Longest Path (can be done with dynamic programming) • (INTERVAL-IS) Independent Set in Interval Graphs (usually done with dynamic programming) • (LONG-SEQ) Longest (weighted) monotone subsequence (can be done with dynamic programming) • (MIN-CUT) Minimum Capacity s-t Cut (e.g. Ford, Dinitz) • Applications to some 2-approximation algorithms (r = 2): • (VC) Minimum Vertex Cover (Bar-Yehuda and Even) • (FVS) Vertex Feedback Set (Becker and Geiger) • (GSF) Generalized Steiner Forest (Williamson, Goemans, Mihail, and Vazirani) • (Min 2SAT) Minimum Two-Satisfiability (Gusfield and Pitt) • (2VIP) Two Variable Integer Programming (Bar-Yehuda and Rawitz) • (PVC) Partial Vertex Cover (Bar-Yehuda) • (GVC) Generalized Vertex Cover (Bar-Yehuda and Rawitz) • Applications to some other approximations: • (SC) Minimum Set Cover (Bar-Yehuda and Even) • (PSC) Partial Set Cover (Bar-Yehuda) • (MSP) Maximum Set Packing (Arkin and Hassin) • Applications to Resource Allocation and Scheduling: ….

  10. Maximum Independent Set in Interval Graphs • [Figure: Activity1, …, Activity9 drawn as intervals on a time axis] • Maximize Σ p(I)·x(I) • s.t. For each instance I: x(I) ∈ {0, 1} • For each time t: Σ{I active at t} x(I) ≤ 1

  11. Maximum Independent Set in Interval Graphs: How to select p1 to get optimization? • [Figure: Activity1, …, Activity9; Î is the interval ending first; intervals conflicting with Î are labeled p1 = 1, all others p1 = 0] • Let Î be an interval that ends first; • For all intervals I define: p1(I) = 1 if I is in conflict with Î, 0 otherwise • For every feasible x: p1·x ≤ 1 • For every Î-maximal x: p1·x ≥ 1 • ⇒ Every Î-maximal solution is optimal with respect to p1.

  12. Maximum Independent Set in Interval Graphs: An Optimization Algorithm • [Figure: intervals conflicting with Î labeled p1 = p(Î), all others p1 = 0] • Algorithm MaxIS( S, p ) • If S = Φ then return Φ ; • If ∃ I ∈ S with p(I) ≤ 0 then return MaxIS( S - {I}, p ) ; • Let Î ∈ S be an interval that ends first ; • ∀ I ∈ S define: p1(I) = p(Î) · (I is in conflict with Î) ; • IS = MaxIS( S, p - p1 ) ; • If IS is Î-maximal then return IS else return IS ∪ {Î} ;
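MaxIS is concrete enough to run. The sketch below is one reading of slide 12 (the encoding, the function name, and the three-interval example are mine, not from the paper); intervals are half-open (s, e], so since Î ends first, I conflicts with Î exactly when I starts before e(Î):

```python
def max_is(S, p, intervals):
    """Local-ratio algorithm for maximum-weight independent set in an
    interval graph (slide 12).  intervals[i] = (s, e), read as (s, e];
    p maps index -> profit.  Returns an optimal set of indices."""
    S = [i for i in S if p[i] > 0]                 # drop non-positive profits
    if not S:
        return set()
    ihat = min(S, key=lambda i: intervals[i][1])   # interval that ends first
    e_hat = intervals[ihat][1]
    # Since ihat ends first, I conflicts with ihat iff I starts before e(ihat).
    conflict = {i for i in S if intervals[i][0] < e_hat}
    p1 = {i: (p[ihat] if i in conflict else 0) for i in S}
    sol = max_is(S, {i: p[i] - p1[i] for i in S}, intervals)
    if not (sol & conflict):                       # make the solution ihat-maximal
        sol = sol | {ihat}
    return sol


# Tiny example: {0, 2} (profit 5) beats the single middle interval (profit 4).
iv = [(0, 2), (1, 3), (2, 4)]
best = max_is(range(len(iv)), {0: 3, 1: 4, 2: 2}, iv)
```

The recursion follows the schema of slide 8: subtract the p1 layer, solve the remainder, then add Î whenever the returned solution is not Î-maximal.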

  13. Maximum Independent Set in Interval Graphs: Running Example • [Figure: an animated run of MaxIS in which profits are successively reduced, e.g. P(I1) = 5 - 5, P(I2) = 3 - 5, P(I3) = 5 - 5, P(I4) = 9 - 5 - 4, P(I5) = 3 - 4, P(I6) = 6 - 4 - 2]

  14. Single Machine Scheduling • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/2, LP-based • Berman, DasGupta STOC 00: 1/2 • This Talk, STOC 00 (independent): 1/2 • [Figure: Activity1, …, Activity9, each with several alternative instances on the time axis] • Maximize Σ p(I)·x(I) • s.t. For each instance I: x(I) ∈ {0, 1} • For each time t: Σ{I active at t} x(I) ≤ 1 • For each activity A: Σ{I ∈ A} x(I) ≤ 1

  15. Single Machine Scheduling: How to select p1 to get a 1/2-approximation? • [Figure: instances conflicting with Î labeled p1 = 1, all others p1 = 0] • Let Î be an instance that ends first; • For all instances I define: p1(I) = 1 if I is in conflict with Î, 0 otherwise • For every feasible x: p1·x ≤ 2 (every instance in time-conflict with Î contains the point e(Î), so a feasible schedule holds at most one of them, plus at most one more instance of Î's activity) • For every Î-maximal x: p1·x ≥ 1 • ⇒ Every Î-maximal solution is a 1/2-approximation.

  16. Single Machine Scheduling: The 1/2-approximation Algorithm • [Figure: Activity1, …, Activity9 with Î marked] • Algorithm MaxIS( S, p ) • If S = Φ then return Φ ; • If ∃ I ∈ S with p(I) ≤ 0 then return MaxIS( S - {I}, p ) ; • Let Î ∈ S be an instance that ends first ; • ∀ I ∈ S define: p1(I) = p(Î) · (I is in conflict with Î) ; • IS = MaxIS( S, p - p1 ) ; • If IS is Î-maximal then return IS else return IS ∪ {Î} ;
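The same recursion gives slide 16's 1/2-approximation once the conflict test also covers instances of Î's activity. An illustrative sketch under my own encoding (each instance is an (activity, start, end) triple; names and the example are mine):

```python
def half_approx_schedule(S, p, inst):
    """Local-ratio 1/2-approximation for single machine scheduling
    (slide 16).  inst[i] = (activity, s, e) with the interval read as
    (s, e]; p maps index -> profit."""
    S = [i for i in S if p[i] > 0]
    if not S:
        return set()
    ihat = min(S, key=lambda i: inst[i][2])        # instance ending first
    a_hat, s_hat, e_hat = inst[ihat]

    def conflicts(i):                              # same activity or time overlap
        a, s, e = inst[i]
        return a == a_hat or (s < e_hat and s_hat < e)

    p1 = {i: (p[ihat] if conflicts(i) else 0) for i in S}
    sol = half_approx_schedule(S, {i: p[i] - p1[i] for i in S}, inst)
    if not any(conflicts(i) for i in sol):         # make it ihat-maximal
        sol = sol | {ihat}
    return sol


# Activity A has two alternative instances, activity B one; every pair of
# instances conflicts, so the optimum is the single instance 2 (profit 4).
inst = [("A", 0, 2), ("A", 3, 5), ("B", 1, 4)]
sched = half_approx_schedule(range(len(inst)), {0: 3, 1: 2, 2: 4}, inst)
```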

  17. Bandwidth Allocation • Albers, Arora, Khanna SODA 99: O(1) for |Ai| = 1* • Uma, Phillips, Wein SODA 00: 1/4, LP-based • This Talk: 1/3 for w ≤ 1/2 and 1/5 for w ≤ 1 • [Figure: Activity1, …, Activity9; instance I occupies bandwidth w(I) between s(I) and e(I)] • Maximize Σ p(I)·x(I) • s.t. For each instance I: x(I) ∈ {0, 1} • For each time t: Σ{I active at t} w(I)·x(I) ≤ 1 • For each activity A: Σ{I ∈ A} x(I) ≤ 1

  18. Bandwidth Allocation for w ≤ 1/2: How to select p1 to get a 1/3-approximation? • [Figure: instance I with bandwidth w(I) between s(I) and e(I); Î marked] • For all intervals I define: p1(I) = 1 if I is in the same activity as Î; 2·w(I) if I is in time-conflict with Î; 0 otherwise • For every feasible x: p1·x ≤ 3 (at most one same-activity instance contributes 1, and the time-conflicting instances have total width at most 1, contributing at most 2) • For every Î-maximal x: p1·x ≥ 1 (if Î cannot be added, either Î's activity is already represented, or more than 1 - w(Î) ≥ 1/2 of the bandwidth is occupied at some point of Î) • ⇒ Every Î-maximal solution is a 1/3-approximation.
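Slide 18's profit layer plugs into the same recursion. A sketch under my own encoding (inst[i] = (activity, start, end, width), all widths ≤ 1/2; the feasibility helper and the three-instance example are illustrative, not from the paper):

```python
EPS = 1e-9

def feasible(sel, inst):
    """At most one instance per activity, and total width at most 1 at
    every time (it suffices to check just after each selected start)."""
    acts = [inst[i][0] for i in sel]
    if len(acts) != len(set(acts)):
        return False
    return all(sum(inst[j][3] for j in sel
                   if inst[j][1] <= inst[i][1] < inst[j][2]) <= 1 + EPS
               for i in sel)

def bandwidth_third(S, p, inst):
    """Local-ratio 1/3-approximation for bandwidth allocation with all
    widths <= 1/2 (slide 18)."""
    S = [i for i in S if p[i] > EPS]
    if not S:
        return set()
    ihat = min(S, key=lambda i: inst[i][2])        # instance ending first
    a_hat, s_hat, e_hat, _ = inst[ihat]

    def layer(i):                                  # slide 18's p1, scaled by p(ihat)
        a, s, e, w = inst[i]
        if a == a_hat:
            return 1.0                             # same activity as ihat
        if s < e_hat and s_hat < e:
            return 2.0 * w                         # time conflict with ihat
        return 0.0

    sol = bandwidth_third(S, {i: p[i] - p[ihat] * layer(i) for i in S}, inst)
    if feasible(sol | {ihat}, inst):               # make it ihat-maximal
        sol = sol | {ihat}
    return sol


# Three instances of width 0.5: any two fit together, all three do not.
inst = [("A", 0, 2, 0.5), ("B", 0, 2, 0.5), ("C", 1, 3, 0.5)]
alloc = bandwidth_third(range(len(inst)), {0: 2, 1: 2, 2: 3}, inst)
```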

  19. Bandwidth Allocation: The 1/5-approximation for any w ≤ 1 • [Figure: instances with w > 1/2 shown in gray, the rest colored] • Algorithm: • GRAY = a 1/2-approximation for the gray (w > 1/2) instances (no two of them can run concurrently, so this is single machine scheduling); • COLORED = a 1/3-approximation for the colored (w ≤ 1/2) instances; • Return the one with the larger profit. • Analysis: • If GRAY* ≥ 40% OPT then GRAY ≥ 1/2·(40% OPT) = 20% OPT; • else COLORED* ≥ 60% OPT, thus COLORED ≥ 1/3·(60% OPT) = 20% OPT.

  20. Continuous Scheduling • Single Machine Scheduling (w = 1) • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3, non-combinatorial • Berman, DasGupta STOC 00: 1/2·(1-ε) • This Talk, STOC 00 (independent): 1/2·(1-ε) • Bandwidth Allocation • Uma, Phillips, Wein SODA 00: 1/6, non-combinatorial • This Talk, STOC 00 (independent): 1/3·(1-ε) for w ≤ 1/2, 1/5·(1-ε) for w ≤ 1 • [Figure: an instance with window (s(I), e(I)], duration d(I) and bandwidth w(I)]

  21. Continuous Scheduling: Split and Round Profit (lose an additional (1-ε) factor) • If the current p(I1) ≤ ε · the original p(I1) then delete I1; • else split I2 = (s2, e2] into I21 = (s2, s1+d1] and I22 = (s1+d1, e2] • [Figure: window I1 split into I11, I12 with duration d(I1); window I2 split into I21, I22 with duration d(I2)]

  22. Minimization problem: • General Off-line Caching Problem

  23. The Demand Scheduling Problem • [Figure: resource axis from 0.0 to 0.9; an instance of width w(I); the demand curve Demand(t) over time t] • Minimize Σ p(I)·x(I) • s.t. For each instance I: x(I) ∈ {0, 1} • For each time t: Σ{I active at t} w(I)·x(I) ≥ Demand(t)

  24. Special Case: “Min Knapsack” • Demand = 1 • For all intervals I define: p1(I) = min{w(I), 1} • For every feasible x: p1·x ≥ 1 • For every minimal x: p1·x ≤ 2 (removing any single interval from a minimal x leaves total truncated width below 1, so p1·x < 1 + min{w(I), 1} ≤ 2) • ⇒ Every minimal solution is a 2-approximation.
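The min-knapsack special case can be coded iteratively: peel off multiples of the p1 layer until the zero-cost items alone cover the demand, then prune to a minimal (hence 2-approximate) solution. The function name and the small example are mine:

```python
def min_knapsack(w, cost, demand=1.0):
    """Local-ratio 2-approximation for "min knapsack" (slide 24): pick a
    minimum-cost subset of items whose total width is >= demand."""
    EPS = 1e-9
    p = list(cost)
    wt = [min(wi, demand) for wi in w]          # truncated widths: the p1 layer

    def zero_items():
        return [i for i in range(len(w)) if p[i] <= EPS]

    while sum(w[i] for i in zero_items()) < demand:
        live = [i for i in range(len(w)) if p[i] > EPS]
        delta = min(p[i] / wt[i] for i in live)  # largest subtractable multiple of p1
        for i in live:
            p[i] -= delta * wt[i]
    sol = zero_items()
    # Prune to a minimal solution, dropping the priciest redundant items first.
    for i in sorted(sol, key=lambda i: -cost[i]):
        if sum(w[j] for j in sol if j != i) >= demand:
            sol.remove(i)
    return sol


# Either the two 0.5-width items (cost 2) or the big item (cost 3); OPT = 2.
picked = min_knapsack([0.6, 0.5, 0.5, 1.0], [1, 1, 1, 3])
```

Each round zeroes the cost of at least one item, so the loop runs at most n times; the pruning step is exactly the "every minimal solution is a 2-approximation" argument of the slide.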

  25. From Knapsack to Demand Scheduling • Normalize so that the maximum demand is 1, attained at time t' • For all intervals I intersecting time t' define: p1(I) = min{w(I), 1}; p1(all others) = 0 • For every feasible x: p1·x ≥ 1 • p1 of all “right-minimal” instances is at most 2 • p1 of all “left-minimal” instances is at most 2 • For every minimal x: p1·x ≤ 2 + 2 • ⇒ Every minimal solution is a 4-approximation.

  26. General Off-line Caching Problem • Albers, Arora, Khanna SODA 99: O(1) with Cache_Size += O(Largest_Page), O(log(Cache_Size + Max_Page_Penalty)) • This Talk: 4 • [Figure: a cache of size 1 drawn on a 0.0–0.9 axis; request sequence page1 page2 page3 page1 page3 page2 page3; w(page1) = 0.5, w(page2) = 0.7, w(page3) = 0.3] • w(pagei) = size of pagei • p(pagei) = the reload cost of the page

  27. 4-Approximation for Demand Scheduling • Algorithm MinDemandCover( S, p ) • If S = Φ then return Φ ; • If there exists an instance I ∈ S s.t. p(I) = 0 then return {I} ∪ MinDemandCover( S - {I}, p ) ; • Let t' be the time with maximum demand k ; • Let S' be the set of instances intersecting time t' ; • Let δ = MIN{ p(I) / MIN{w(I), k} : I ∈ S' } ; • For all instances I ∈ S define: p1(I) = δ · MIN{w(I), k} if I ∈ S', 0 otherwise ; • C = MinDemandCover( S, p - p1 ) ; • Remove elements from C until it is minimal and return C ;
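A runnable sketch of MinDemandCover, iterative rather than recursive (my encoding: inst[i] = (s, e, w) with intervals read as [s, e), demand given at finitely many time points). Two details are my own reading of the terse pseudocode: δ is taken over p(I)/min{w(I), k} so that at least one profit reaches zero each round, and t' is a still-uncovered time of maximum demand, which guarantees progress:

```python
def min_demand_cover(inst, cost, demand):
    """Local-ratio 4-approximation for demand scheduling (slide 27).
    inst[i] = (s, e, w); demand maps a time point to the total width
    required there.  Assumes the full instance is feasible."""
    EPS = 1e-9

    def load(chosen, t):
        return sum(inst[i][2] for i in chosen if inst[i][0] <= t < inst[i][1])

    def covers(chosen):
        return all(load(chosen, t) >= d - EPS for t, d in demand.items())

    p = list(cost)
    while True:
        zero = [i for i in range(len(inst)) if p[i] <= EPS]  # taken "for free"
        if covers(zero):
            break
        # A still-uncovered time of maximum demand k.
        t1, k = max(((t, d) for t, d in demand.items()
                     if load(zero, t) < d - EPS), key=lambda td: td[1])
        live = [i for i in range(len(inst))
                if p[i] > EPS and inst[i][0] <= t1 < inst[i][1]]
        delta = min(p[i] / min(inst[i][2], k) for i in live)
        for i in live:                                       # subtract the p1 layer
            p[i] -= delta * min(inst[i][2], k)
    # Prune to a minimal cover, dropping the priciest redundant items first.
    C = list(zero)
    for i in sorted(C, key=lambda i: -cost[i]):
        if covers([j for j in C if j != i]):
            C.remove(i)
    return C


# Demand 1.0 at t=1 and 0.5 at t=2.5; only instance 2 reaches t=2.5.
inst = [(0, 2, 0.6), (0, 2, 0.5), (1, 3, 0.5)]
cover = min_demand_cover(inst, [1, 1, 1], {1: 1.0, 2.5: 0.5})
```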

  28. Application: 4-Approximation for the Loss Minimization Problem • The cost of a schedule is the sum of the profits of the instances not in the schedule. • For the special case where Ai is a singleton {Ii}, the problem is equivalent to Min Demand Scheduling where: [the defining formulas are not reproduced in the transcript]

  29. END?

  30. Parallel Unrelated Machines: Discrete / Continuous • Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3, 1/4 • Berman, DasGupta STOC 00: 1/2, 1/2·(1-ε) • This Talk, STOC 00 (independent): 1/2, 1/2·(1-ε) • [Figure: jobs with durations d placed on several machines' time axes]

  31. Parallel Unrelated Machines: • [Figure: instances of jobs i, k, A, c, h, d placed on each machine's time axis]

  32. Parallel Unrelated Machines: 1/5-approximation (not in the paper) • Each machine has resource 1 • p1(red) = p1(orange) = 1; p1(yellow) = 2·width; p1(all others) = 0 • [Figure: colored instances of jobs i, k, A, c, h, d on the machines' time axes]

  33. END!

  34. Preliminaries • [Figure: Activity1, …, Activity9; instance I with bandwidth w(I) between s(I) and e(I) on the time axis]
