
Priority Models


Presentation Transcript


  1. Priority Models Sashka Davis University of California, San Diego June 1, 2003

  2. Goal • Define priority models, a formal framework for greedy algorithms • Develop a technique for proving lower bounds for priority algorithms

  3. The big picture • Classify the kinds of problems on which the different heuristics perform well • Can we build a formal model for each of the algorithmic design paradigms? • Are the known algorithms optimal, or can they be improved? • What are the limitations of each technique? (Figure: polynomial-time algorithms, with greedy, dynamic programming, divide and conquer, and hill-climbing as design paradigms.)

  4. Priority algorithms • Priority algorithms are a formal model for greedy algorithms. (Figure: greedy heuristics and problems such as ShortPath placed within the classes of FIXED and ADAPTIVE PRIORITY ALGORITHMS.)

  5. Common structure of greedy algorithms • They sort items (edges, intervals, etc.) • They consider each item once and either add it to the solution or throw it away

  6. Interval scheduling on a single machine • Instance: a set of intervals I = (i1, i2, …, in), where ij = [rj, dj] • Problem: schedule a set of non-overlapping intervals on a single machine • Solution: S ⊆ I • Objective function: maximize Σ_{ij∈S} (dj − rj)

  7. A simple solution (LPT) Longest Processing Time algorithm Input: I = (i1, i2, …, in) • Initialize S ← ∅ • Sort the intervals in decreasing order of length (dj − rj) • while (I is not empty) • Let ik be the next interval in the sorted order • If ik can be scheduled (it does not overlap any interval in S) then S ← S ∪ {ik} • I ← I \ {ik} • Output S
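
A minimal Python sketch of LPT under the definitions above (the interval representation and the overlap test are my own; the slide gives only pseudocode):

```python
def lpt(intervals):
    """Greedy LPT: consider intervals longest-first, keep each one
    that does not overlap anything already scheduled."""
    schedule = []
    # Fixed priority: sort once, by decreasing length (d - r).
    for r, d in sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True):
        if all(d <= r2 or r >= d2 for r2, d2 in schedule):  # no overlap
            schedule.append((r, d))
    return schedule

# Profit is the total scheduled length.
S = lpt([(0, 10), (0, 4), (4, 7), (7, 11)])
print(S, sum(d - r for r, d in S))   # keeps only (0, 10); OPT here is 11
```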

  8. LPT is a 3-approximation • LPT sorts the intervals in decreasing order according to their length • 3·LPT ≥ OPT (Figure: an interval [ri, di] scheduled by LPT overlapping three intervals of OPT.) • Intuition: each interval of OPT that LPT rejects conflicts with an already-scheduled LPT interval that is at least as long, and the OPT intervals charged to a single LPT interval have total length at most three times its length

  9. The minimum cost spanning tree problem • Instance: an edge-weighted graph G = (V, E) • Problem: find a tree of edges that spans V • Objective function: minimize the total weight of the tree

  10. A solution to the MST problem: Kruskal’s algorithm Input: (G=(V,E), w: E → R) • Initialize an empty solution T ← ∅ • L = list of edges sorted in increasing order of weight • while (L is not empty) • e = next edge in L • If T ∪ {e} remains a forest, add e to T; remove e from L • Output T
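
A short runnable sketch of Kruskal’s algorithm with a simple union-find (the helper names are mine, not from the slide):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) over vertices 0..n-1.
    Fixed priority: the edges are sorted once, by increasing weight."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # keeping this edge leaves T a forest
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
```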

  11. Another solution to the MST problem: Prim’s algorithm Input: G=(V,E), w: E → R • Initialize an empty tree T ← ∅; S ← ∅ • Pick a vertex u; S = {u} • for (i = 1 to |V|−1) • (u,v) = argmin over edges (u,v) in the cut (S, V−S) of w(u,v) • S ← S ∪ {v}; T ← T ∪ {(u,v)} • Output T
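
A sketch of Prim’s algorithm with a heap; here the set of candidate edges, and hence the order, depends on the tree built so far, which is what makes the algorithm adaptive (the adjacency-list format is my choice):

```python
import heapq

def prim(adj, start=0):
    """adj: {u: [(weight, v), ...]} for an undirected weighted graph."""
    S = {start}
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    tree = []
    while heap and len(S) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in S:
            continue                  # edge no longer crosses the cut (S, V - S)
        S.add(v)
        tree.append((u, v, w))
        for w2, x in adj[v]:
            heapq.heappush(heap, (w2, v, x))
    return tree

adj = {0: [(1, 1), (3, 2)], 1: [(1, 0), (2, 2)],
       2: [(3, 0), (2, 1), (4, 3)], 3: [(4, 2)]}
print(prim(adj))   # same total weight as Kruskal on this graph
```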

  12. Classification of the example algorithms (Figure: LPT and Kruskal, which fix their ordering in advance, are FIXED priority algorithms; Prim, whose ordering depends on the partial tree, is ADAPTIVE; all three are PRIORITY ALGORITHMS.)

  13. Talk outline • History of priority algorithms • Priority algorithm framework for scheduling problems • Priority algorithms for facility location • General framework of priority algorithms • Future research

  14. Results [BNR02] • Defined fixed and adaptive priority algorithms • Proved that fixed priority algorithms are less powerful than adaptive priority algorithms • Considered a variety of scheduling problems and proved many non-trivial upper and lower bounds

  15. Results [AB02] Proved tight bounds on performance of • Adaptive priority algorithms for facility location in arbitrary spaces • Fixed priority algorithms for uniform metric facility location • Adaptive priority algorithms for set cover

  16. Results [DI02] • Defined a general model of priority algorithms • Proved a strong separation between the classes of fixed and adaptive priority algorithms • Proved a separation between the class of memoryless adaptive priority algorithms and adaptive priority algorithms with memory • Proved a tight bound on the performance of adaptive priority algorithms for the weighted vertex cover problem

  17. Talk outline • History of priority algorithms • Priority algorithm framework for scheduling problems [BNR02] • Priority algorithms for facility location • General framework of priority algorithms • Future research

  18. The defining characteristics of greedy algorithms [BNR02] • The order in which data items are considered is determined by a priority function, which orders all possible data items • The algorithm sees one data item at a time • The decision made for each data item is irrevocable

  19. Priority models: FIXED priority algorithms and ADAPTIVE priority algorithms • Difference: how the next data item is chosen

  20. Fixed priority algorithms [BNR02] Input: a set of jobs S • Ordering: determine, without looking at S, a total ordering of all possible jobs • while (S is not empty) • Jnext = next job in S according to the order above • Decision: make an irrevocable decision for Jnext • S ← S \ {Jnext}
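
A schematic rendering of the fixed priority template in Python; the callback names (priority, decide) are mine and are only meant to make the structure concrete:

```python
def fixed_priority(items, priority, decide):
    """Fixed priority template: the order is chosen once, before any
    item of the instance is examined; every decision is irrevocable."""
    solution = []
    for item in sorted(items, key=priority):   # single, fixed ordering
        if decide(item, solution):             # irrevocable accept/reject
            solution.append(item)
    return solution
```

LPT and Kruskal fit this template: an item’s priority (interval length, edge weight) is independent of the rest of the instance, and the decision only checks consistency with the partial solution.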

  21. Adaptive priority algorithms [BNR02] Input: a set of jobs S • while (S is not empty) • Ordering: determine, without looking at the remaining jobs in S (but possibly using the jobs already seen and the decisions made for them), a total ordering of all possible jobs • Jnext = next job in S according to the order above • Decision: make an irrevocable decision for Jnext • S ← S \ {Jnext}
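
For contrast, an adaptive sketch: the ranking may be recomputed every round and may use the partial solution, but never the unseen remainder of S (again, the callback names are mine):

```python
def adaptive_priority(items, priority, decide):
    """Adaptive priority template: a fresh ordering each round, based
    only on the partial solution built so far."""
    remaining = list(items)
    solution = []
    while remaining:
        # Re-rank the remaining items; each item's rank may depend on the
        # partial solution but not on the other remaining items.
        item = min(remaining, key=lambda x: priority(x, solution))
        remaining.remove(item)
        if decide(item, solution):             # irrevocable decision
            solution.append(item)
    return solution
```

Prim fits this template: an edge’s priority is its weight if it currently crosses the cut defined by the partial tree, and ∞ otherwise.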

  22. Separation between fixed and adaptive priority algorithms [BNR02] • Theorem: No fixed priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem on a multiple-machine configuration • Theorem: CHAIN-2 is an adaptive priority algorithm achieving an approximation ratio of 2 for interval scheduling on a two-machine configuration

  23. Online algorithms • Must service each request before the next request is received • Several alternatives in servicing each request • Online cost is determined by the options selected

  24. Connection between online and priority algorithms • Similarities • The instance is viewed one input at a time • Decision is irrevocable • Difference • The order of the data items

  25. Competitive analysis of online algorithms t = 1; I = ∅ • Round t • Adversary picks a data item dt; I ← I ∪ {dt} • Algorithm makes a decision σt for dt: A ← A ∪ {(dt, σt)} • Adversary chooses whether to end the game. If not, the next round begins: t ← t+1 • Adversary picks a solution B for I offline • Algorithm is awarded payoff value(A) / value(B)

  26. Fixed priority game • Adversary selects a finite set of data items S0; I ← ∅; t ← 1 • Algorithm picks a total order on S0 • Adversary restricts the remaining data items: S1 ⊆ S0 • Round t • Let dt ∈ St be the next data item in the order • Algorithm makes a decision σt for dt: A ← A ∪ {(dt, σt)} • Adversary restricts the set: St+1 ⊆ St − {dt}; I ← I ∪ {dt} • Adversary chooses whether to end the game. If not, the next round begins: t ← t+1 • Adversary picks a solution B for I • Algorithm is awarded payoff value(A) / value(B)
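
A minimal sketch of this game loop in Python, with the Algorithm and Adversary supplied as callback objects (all names here are illustrative, not from [BNR02]):

```python
def fixed_priority_game(adversary, algorithm, value):
    """Plays the fixed priority game and returns the payoff value(A)/value(B).
    The adversary supplies initial_items, restrict, ends_game and
    offline_solution; the algorithm supplies total_order and decide."""
    S = set(adversary.initial_items())        # S0, a finite set of data items
    order = algorithm.total_order(S)          # fixed order, chosen up front
    S = set(adversary.restrict(S, []))        # S1 ⊆ S0
    I, A = [], []
    while S:
        d = min(S, key=order)                 # next item in the fixed order
        A.append((d, algorithm.decide(d, A)))
        I.append(d)
        S = set(adversary.restrict(S - {d}, I))   # S_{t+1} ⊆ S_t − {d_t}
        if adversary.ends_game(I):
            break
    B = adversary.offline_solution(I)
    return value(A) / value(B)
```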

  27. Example lower bound [BNR02] • Theorem 1: No priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single-machine configuration

  28. Proof of Theorem 1 (Figure: the adversary’s initial set of intervals, labeled 1, 2, 3, with lengths involving q, q−1, and ε.) • Adversary’s move: present the intervals in the figure • Algorithm’s move: the Algorithm selects an ordering • Let i be the interval with highest priority

  29. Adversary’s strategy (Figure: the adversary’s intervals, labeled i, j, k.) • If the Algorithm decides not to schedule i • then during the next round the Adversary removes all remaining intervals and itself schedules interval i

  30. Adversary’s strategy (Figure: the restricted sequence of intervals, labeled i, j, k.) • If i = … and the Algorithm schedules i • then during the next round the Adversary restricts the sequence as shown

  31. Adversary’s strategy (Figure: the restricted sequence of intervals, including m.) • If i = … and the Algorithm schedules i • then during the next round the Adversary restricts the sequence as shown

  32. Conclusion • The Adversary can pick (q, ε) so that the advantage gained is arbitrarily close to 3 • No priority algorithm (fixed or adaptive) can achieve an approximation ratio better than 3 • LPT achieves an approximation ratio of 3 • LPT is optimal within the class of priority algorithms

  33. Talk outline • History of priority algorithms • Priority algorithm framework for scheduling problems • Priority algorithms for facility location [AB02] • General framework of priority algorithms • Future research

  34. [AB02] work on priority algorithms [AB02] proved lower bounds on the performance of adaptive and fixed priority algorithms for the facility location problem in metric and arbitrary spaces, and for the set cover problem

  35. [AB02] result • Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces

  36. Adaptive priority game • Adversary selects a finite set of data items S0; I ← ∅; t ← 1 • Round t • Algorithm picks a data item dt ∈ St and a decision σt for dt: A ← A ∪ {(dt, σt)} • Adversary restricts the set: St+1 ⊆ St − {dt}; I ← I ∪ {dt} • Adversary chooses whether to end the game. If not, the next round begins: t ← t+1 • Adversary picks a solution B for I • Algorithm is awarded payoff value(A) / value(B)

  37. Facility location problem • Instance: a set of cities and a set of facilities • The set of cities is C = {1, 2, …, n} • Each facility fi has an opening cost cost(fi) and connection costs to the cities: {ci1, ci2, …, cin} • Problem: open a collection of facilities such that each city is connected to at least one open facility • Objective function: minimize the opening and connection costs, min over S of ( Σ_{fi∈S} cost(fi) + Σ_{j∈C} min_{fi∈S} cij )
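
A small Python helper that evaluates this objective for a chosen set of facilities (the data layout is my own; an ∞ connection cost can be represented by math.inf):

```python
import math

def facility_location_cost(opening_cost, conn_cost, opened):
    """opening_cost[f]: cost of opening facility f.
    conn_cost[f][j]: cost of connecting city j to facility f.
    opened: the set S of opened facilities."""
    if not opened:
        return math.inf
    n_cities = len(next(iter(conn_cost.values())))
    open_total = sum(opening_cost[f] for f in opened)
    connect_total = sum(min(conn_cost[f][j] for f in opened)
                        for j in range(n_cities))
    return open_total + connect_total
```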

  38. Adversary presents the instance: • Cities: C = {1, 2, …, n}, where n = 2^k • Facilities: • Each facility has opening cost n • City connection costs are 1 or ∞ • Each facility covers exactly n/2 cities • cover(fj) = {i | i ∈ C, cji = 1} • Cu denotes the set of cities not yet covered by the Algorithm’s solution
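
One way to build such an instance in Python, using the cost layout of the helper above; the particular covered sets are my own choice, while the structural constraints (opening cost n, costs 1 or ∞, facilities in complementary pairs covering exactly n/2 cities each) follow the slide:

```python
import math
from itertools import combinations

def build_instance(k, max_facilities=16):
    """Cities 0..n-1 with n = 2**k; facilities added in complementary pairs."""
    n = 2 ** k
    cities = range(n)
    opening_cost, conn_cost = {}, {}
    fid = 0
    for half in combinations(cities, n // 2):
        if 0 not in half:                 # add each complementary pair once
            continue
        for cover in (set(half), set(cities) - set(half)):
            opening_cost[fid] = n         # opening cost n
            conn_cost[fid] = [1 if j in cover else math.inf for j in cities]
            fid += 1
        if fid >= max_facilities:
            break
    return opening_cost, conn_cost
```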

  39. Adversary’s strategy At the beginning of each round t: • The Adversary chooses St to consist of the facilities f with f ∈ St iff |cover(f) ∩ Cu| = n/2^t • The number of uncovered cities is |Cu| = n/2^(t−1) • Two facilities are complementary if together they cover all cities in C • For every round t, St consists of complementary facilities

  40. The game (Figure: the set of uncovered cities Cu shrinking by half in each round.)

  41. End of the game • Either the Algorithm opened log(n) facilities or it failed to produce a valid solution • The cost of the Algorithm’s solution is n·log(n) + n • The Adversary opens two complementary facilities, incurring total cost 2n + n
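
A quick check of the resulting ratio under these costs, assuming n = 2^k as on slide 38:

```python
import math

n = 2 ** 10
alg_cost = n * math.log2(n) + n   # log(n) opened facilities plus n connections
adv_cost = 2 * n + n              # two complementary facilities plus n connections
print(alg_cost / adv_cost)        # ≈ (log n + 1) / 3, i.e. Θ(log n)
```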

  42. Conclusion • The Adversary has a winning strategy • No adaptive priority algorithm can achieve an approximation ratio better than log(n)

  43. Talk outline • History of priority algorithms • Priority algorithm framework for scheduling problems • Priority algorithms for facility location • General framework of priority algorithms [DI02] • Future research

  44. Fixed priority algorithms [DI02] Input: instance I = {d1, d2, …, dn}; Output: solution S 1. Determine an ordering function π 2. Order I according to π 3. Repeat • Let di be the next data item in the ordering • Make a decision σi for di • Update the partial solution S until (decisions are made for all data items) 4. Output S

  45. Adaptive priority algorithms [DI02] Input: instance I = {d1, d2, …, dn}; Output: solution S 1. Initialization: S ← ∅ 2. Repeat • Determine an ordering function π (it may depend on the data items seen and the decisions made so far) • Pick the highest priority unseen data item di according to π • Make an irrevocable decision σi for di • Update the partial solution S until (decisions are made for all data items) 3. Output S

  46. Strong separation between fixed and adaptive priority algorithms • Theorem: No fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem • Dijkstra’s algorithm for the single-source shortest path problem, an adaptive priority algorithm, solves the ShortPath problem exactly

  47. The ShortPath problem Instance: an edge-weighted directed graph G=(V,E) and two nodes s and t Problem: find a directed tree of edges rooted at s Objective function: minimize the combined weight of the edges on the path from s to t
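
For reference, a compact Dijkstra sketch in Python; it is adaptive in the sense above, since an edge’s priority depends on the tentative distances computed so far (the graph format and names are my own):

```python
import heapq

def dijkstra(adj, s, t):
    """adj: {u: [(weight, v), ...]} with non-negative weights.
    Returns the shortest-path distance from s to t (inf if unreachable)."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for w, v in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")
```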

  48. Adversary’s strategy (Figure: the adversary’s initial graph on nodes s, a, b, t, with edges u and w of weight k and edges v, x, y, z of weight 1.)

  49. The Algorithm selects an order on S0 • If … (a condition on where v(1) and w(k) appear in that order) then the Adversary presents: (Figure: the graph on s, a, b, t with edges u(k), y(1), x(1), z(1).)

  50. Adversary’s strategy • Wait until the Algorithm considers edge y(1) • y(1) will be considered before z(1) • The Adversary can remove the data items not yet considered
