Presentation Transcript


  1. BECO 2004. When can one develop an FPTAS for a sequential decision problem? (with apologies to Gerhard Woeginger) James B. Orlin, MIT, working jointly with Mohamed Mostagir

  2. Fully Polynomial Time Approximation Scheme (FPTAS) • INPUT • A sequential decision problem with n stages or decisions. Also, a given accuracy ε. • OUTPUT • A solution that is guaranteed to be within ε of optimal. • RUNNING TIME • Polynomial in the size of the problem and in 1/ε.
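
A compact way to state the guarantee (a sketch, using the maximization convention of the knapsack example below; A denotes the approximation algorithm and OPT(I) the optimal value of instance I):

    c\bigl(A(I,\varepsilon)\bigr) \ \ge\ (1-\varepsilon)\,\mathrm{OPT}(I),
    \qquad
    \mathrm{time}\bigl(A(I,\varepsilon)\bigr) \ \le\ \mathrm{poly}\bigl(|I|,\ 1/\varepsilon\bigr).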

  3. Goal of this talk • Present Branch and Dominate: a generic method for creating FPTASes • Starting point for this research: Woeginger [2000] • applies to all problems considered by Woeginger • Generalizes results to multiple criteria as per Angel, Bampis, and Kononov [2003] • Extends to dynamic lot-sizing problems and many new problems.

  4. Overview of what comes next • Two examples • knapsack • a more complex machine scheduling problem • A generalization to many other FPTASes

  5. Example 1. The Knapsack Problem

  6. The Knapsack Problem as a Decision Problem [Figure: the enumeration (decision) tree, branching on x1 = 0 or 1, then x2 = 0 or 1, then x3 = 0 or 1.]

  7. Domination • Let x denote a decision node at stage j. • Its state is (v, w), where • v = c1x1 + … + cjxj • w = a1x1 + … + ajxj • x is infeasible if w > b. • Let x’ have state (v’, w’) at stage j. • Node x dominates node x’ at stage j if • v ≥ v’ and w ≤ w’.

  8. A pseudo-polynomial time algorithm Branch and Dominate (B & D) • Expand the enumeration tree one stage at a time. • Eliminate any infeasible nodes • Whenever one node dominates another, eliminate the dominated node. (Do this sequentially) • The optimum corresponds to the stage n node with greatest value. Theorem. Branch and dominate is pseudo-polynomial for the knapsack problem.
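
A minimal sketch of the Branch and Dominate loop for the knapsack problem, assuming the instance is given as value list c, weight list a, and capacity b (names chosen to match slide 7's notation):

    def branch_and_dominate(c, a, b):
        """Optimal knapsack value by stage-wise enumeration with elimination
        of infeasible and dominated partial solutions."""
        states = [(0, 0)]                               # stage 0: (value, weight) of the empty solution
        for cj, aj in zip(c, a):
            # Branch: extend every surviving state by x_j = 0 and by x_j = 1 (if feasible).
            expanded = states + [(v + cj, w + aj) for (v, w) in states if w + aj <= b]
            # Dominate: keep a state only if no other state has >= value and <= weight.
            expanded.sort(key=lambda s: (s[1], -s[0]))  # weight ascending, value descending
            states, best_value = [], -1
            for v, w in expanded:
                if v > best_value:                      # no lighter (or equal-weight) state is worth at least v
                    states.append((v, w))
                    best_value = v
        return max(v for v, _ in states)

    # Slide 9's (truncated) instance: branch_and_dominate([6, 8, 14], [7, 5, 12], 100) -> 28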

  9. A simple Example Maximize 6x1 + 8x2 + 14x3 + … Subject to 7x1 + 5x2 + 12x3 + … ≤ 100

  10. The Knapsack Problem as a Decision Problem. Max 6x1 + 8x2 + 14x3 + … s.t. 7x1 + 5x2 + 12x3 + … ≤ 100. [Figure: the enumeration tree with each node labeled by its state (value, weight): the root is (0,0); x1 = 1 gives (6,7) and x1 = 0 gives (0,0); then x2 = 1 gives (14,12) or (8,5), and so on up to (28,24) for x = (1,1,1).]

  11. Δ-domination • Node x Δ-dominates node x’ at stage j if • v ≥ (1 − Δ) v’ and w ≤ w’. • Example (from the slide's figure): the state with value $27,800 and weight 1,200 Δ-dominates the state with value $28,000 and weight 1,201, since 27,800 ≥ (1 − Δ)·28,000 for Δ ≥ 1/140, and 1,200 ≤ 1,201. • The number of undominated states at each stage is O((1/Δ) log(nCmax)). • Theorem. Branch and Δ-dominate with Δ = ε/n is an FPTAS for the knapsack problem. The running time is O(n^2/ε). • Note: we did not use w ≤ (1 + Δ) w’ because there is a hard constraint on knapsack weights, and we cannot approximate it.
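
The FPTAS variant changes only the pruning test: a state may also be dropped when some lighter kept state is within a factor (1 − Δ) of its value, with Δ = ε/n. A sketch under the same assumptions as the previous block:

    def branch_and_delta_dominate(c, a, b, eps):
        """FPTAS sketch: returns a value of at least (1 - eps) * OPT."""
        delta = eps / len(c)                            # per-stage loss, Δ = ε/n as on slide 11
        states = [(0, 0)]
        for cj, aj in zip(c, a):
            expanded = states + [(v + cj, w + aj) for (v, w) in states if w + aj <= b]
            expanded.sort(key=lambda s: (s[1], -s[0]))  # weight ascending, value descending
            states, best_value = [], -1.0
            for v, w in expanded:
                # drop (v, w) if a lighter kept state already has value >= (1 - delta) * v;
                # the weight coordinate is never relaxed (hard capacity constraint)
                if (1 - delta) * v > best_value:
                    states.append((v, w))
                    best_value = v
        return max(v for v, _ in states)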

  12. Outline of Proof of ε-optimality • Let x = (x1, x2, …, xn) be the optimal solution. • Let yj be a partial solution at stage j, for j = 0 to n. • y0 = ∅. • For each j = 1 to n, yj = (yj-1, xj), or else yj is a surviving solution that Δ-dominates (yj-1, xj). • Let xj = (x1, …, xj). • Then yj (jΔ)-dominates xj for each j. This is the standard construction.
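
Spelling out the arithmetic behind the last bullet (my own filling-in of a step the slides leave implicit): since yn (nΔ)-dominates the optimal solution x, its value satisfies

    c(y_n) \ \ge\ (1-\Delta)^{n}\, c(x) \ \ge\ (1 - n\Delta)\, c(x) \ =\ (1-\varepsilon)\,\mathrm{OPT},

so the choice Δ = ε/n keeps the total accumulated loss within ε.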

  13. [Figure: the standard construction on the enumeration tree. The optimal path takes x1 = 0, x2 = 1, x3 = 1, x4 = 0, x5 = 1; the nodes y0, y1, …, y5 are surviving (not Δ-dominated) nodes that track it, while w4 and w5 are Δ-dominated nodes reached from y3 and y4 by the optimal decisions. y2 Δ-dominates x2; y4 Δ-dominates w4; y5 Δ-dominates w5.] Total accumulated error: at most nΔ.

  14. List Scheduling Problems • Scheduling problems in which jobs are constrained to be assigned in sequential order. [Figure: a list schedule of jobs 1–15 on three machines: jobs 5, 6, 12, 13, 15 on Machine 1; jobs 1, 2, 3, 10, 14 on Machine 2; jobs 4, 7, 8, 9, 11 on Machine 3.] • Finding an optimal list schedule: • finds the optimal solution for some problems, e.g., minimizing the sum of completion times on K machines • can be used as a heuristic for an NP-hard problem

  15. A 2-machine List Scheduling Problem • Minimize the sum of the Cj, where Cj = completion time of job j; Cj is defined correctly for j = 1 to n by the list schedule. • Constraints: the same number of jobs on each machine; processing-time bounds on machine 1. • Decision vector: x ∈ {0, 1}^n, with xj = 1 if job j is assigned to machine 1.

  16. Stage j: after j jobs have been assigned. Each partial solution has an associated state vector: • M1(j) = processing time on machine 1 (at most bj) • M2(j) = processing time on machine 2 • d(j) = number of jobs on machine 1 minus number of jobs on machine 2 • z(j) = cumulated objective function

  17. On moving from Stage j to Stage j+1: current state + decision → state at the next stage. Example: the stage-11 state (M1, M2, d, z) = (27, 35, 2, 500), together with the decision x12 = 1 (with p12 = 3), yields the stage-12 state (30, 35, 3, 530); that is, F11(27, 35, 2, 500, 1) = (30, 35, 3, 530).
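
The same transition in code; reading the example above, z grows by the completion time of the newly placed job (consistent with a sum-of-completion-times objective), which is my interpretation rather than something stated explicitly on the slide:

    def transition(state, x_j, p_j):
        """Apply decision x_j (1 = machine 1, 0 = machine 2) with processing time p_j."""
        M1, M2, d, z = state
        if x_j == 1:
            M1 += p_j
            d += 1
            completion = M1                # job finishes when machine 1 does
        else:
            M2 += p_j
            d -= 1
            completion = M2
        return (M1, M2, d, z + completion)

    # Reproduces the slide's example: transition((27, 35, 2, 500), 1, 3) -> (30, 35, 3, 530)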

  18. Polynomially Bounded (PB) Components • A component of the state vector is called polynomially bounded (PB) if the number of different values it can take is polynomially bounded in the size of the input. • The component d is PB: it takes at most 2n + 1 different values.

  19. Monotone Components • A non-PB component is called monotone if, for each stage j, replacing its current value i by i’ > i cannot decrease Fj, and the PB components stay unchanged in Fj(S). • E.g., components 1, 2, and 4 (that is, M1, M2, and z) are monotone. [Figure: the transition F11(27, 35, 2, 500, 1) = (30, 35, 3, 530) again, with M1, M2, and z marked as monotone.]

  20. Domination at Stage j • State (M1, M2, d, z) dominates state (M’1, M’2, d’, z’) at stage j if • ≤ holds for each monotone component (M1 ≤ M’1, M2 ≤ M’2, z ≤ z’), and • = holds for each PB component (d = d’).
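
The same test in code, over a state laid out as the tuple (M1, M2, d, z); the index sets below are assumptions matching slides 16–19:

    MONOTONE = (0, 1, 3)        # indices of M1, M2, z in the state tuple
    PB = (2,)                   # index of d

    def dominates(s, t):
        """True if state s dominates state t at the same stage (slide 20)."""
        return (all(s[i] <= t[i] for i in MONOTONE) and
                all(s[i] == t[i] for i in PB))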

  21. Theorem. B & D is a pseudo-polynomial time algorithm for a list scheduling problem if • There is a fixed number of components of the state vector • Each component is either monotone or PB • The objective is to be minimized • Any constraint on a monotone component is a strict upper bound • All other constraints involve only PB components • Additional technical conditions hold (e.g., all functions can be computed in polynomial time). Proof. The number of undominated state vectors at each stage is pseudo-polynomial.

  22. Moving from Pseudo-polynomial to an FPTAS • We want conditions under which Branch and ε-dominate leads to an FPTAS. • We need to replace domination by ε-domination in all except one of the monotone components.

  23. Good and Bad Monotone Components • A monotone component (say component 1) is good if all of the following conditions are satisfied: • It has no strict upper bound, and • It cannot decrease from stage to stage. (Non-example: suppose we keep track of M1 – M2.) • Fj is not overly sensitive to small changes in s1, e.g., Fj(s1(1 + ε), s2, …, sk) ≤ (1 + nε) Fj(s1, s2, …, sk). • If a monotone component is not good, it is bad.

  24. On bad monotone components • Condition 1. Any monotone component on which there is a hard upper bound is bad. • Example: the processing time on machine 1 at stage j is at most bj, so M1 is bad. • Small relative changes in the value of the component can mean the difference between the constraint being satisfied and being violated.

  25. On bad monotone components • Condition 2. Suppose s1 could decrease from stage to stage. In this case, a small relative change in the value of s1 at some stage could have a very large impact on states at later stages, and so s1 is bad. • E.g., suppose we keep track of |M1 – M2|.

  26. On Bad Monotone States • Condition 3. A state is bad if a small change in its value can cause a large change in the value of some monotone state at the next stage. • Example: suppose that we are minimizing total tardiness. [Figure: two stage-11 states with (M1, M2, d, tardiness) = (2700, 3501, 2, 50) and (2795, 3421, 2, 49); here M1 and M2 are bad monotone, d is PB, and the cumulative tardiness is good monotone.]

  27. ε-Domination at Stage j • One state ε-dominates another at stage j if • ≤ holds on each bad monotone component, • ≤ (1 + ε) times the other state's value holds on each good monotone component, and • = holds on each PB component. • ε-domination is the same as domination except for the (1 + ε) term on the good monotone states. [Figure: the stage-11 state (M1, M2, d, z) = (27, 35, 2, 500) ε-dominates (28, 34, 2, 495) for ε ≥ 1/34, since 27 ≤ 28, 35 ≤ (1 + ε)·34, 2 = 2, and 500 ≤ (1 + ε)·495.]
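
The corresponding test in code, again over (M1, M2, d, z); classifying M1 as the single bad component (it carries the hard bound bj) and M2 and z as good follows slides 24 and 27, and is an assumption specific to this problem:

    BAD_MONOTONE = (0,)         # M1: has a hard upper bound, so it is never relaxed
    GOOD_MONOTONE = (1, 3)      # M2 and z
    PB = (2,)                   # d

    def eps_dominates(s, t, eps):
        """True if state s eps-dominates state t at the same stage (slide 27)."""
        return (all(s[i] <= t[i] for i in BAD_MONOTONE) and
                all(s[i] <= (1 + eps) * t[i] for i in GOOD_MONOTONE) and
                all(s[i] == t[i] for i in PB))

    # The figure's example: eps_dominates((27, 35, 2, 500), (28, 34, 2, 495), 0.05) -> True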

  28. Theorem. If a list scheduling problem satisfies conditions 1-6 from before, and if at most one monotone component is bad, then Branch and ε-dominate can be used to create an FPTAS. Note: use ε-domination on all good monotone components, and use domination on the remaining monotone component.

  29. A list scheduling problem with outsourcing • Schedule jobs on a single machine. • Each job j has a due date dj and a maximum tardiness Lj. • Job j can be outsourced at a cost of cj; at most K jobs can be outsourced. • Objective: minimize the weighted sum of the tardinesses of the jobs plus the outsourcing costs, • subject to: job j must be completed before dj + Lj. • States at stage j: processing time on the machine (bad), number of jobs outsourced (PB), cumulative objective function (good).
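
A sketch of one stage of this problem in code, just to make the state and decisions concrete; the state layout, the tardiness weights w_j, and the exact feasibility checks are my own illustrative assumptions:

    def outsourcing_transition(state, outsource_j, p_j, d_j, L_j, c_j, w_j, K):
        """One stage of the outsourcing problem; returns the next state, or None if infeasible."""
        P, k, z = state                    # machine processing time, jobs outsourced so far, objective so far
        if outsource_j:
            if k + 1 > K:                  # at most K jobs may be outsourced
                return None
            return (P, k + 1, z + c_j)
        P += p_j                           # job j is processed on the machine
        tardiness = max(0, P - d_j)
        if tardiness > L_j:                # job j must finish by d_j + L_j
            return None
        return (P, k, z + w_j * tardiness)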

  30. Theorem. Branch and ε-dominate can be used to create an FPTAS for the list scheduling problem on the previous slide. Note: if we do not require a list schedule, the previous problem is strongly NP-hard.

  31. References • Contrast with Woeginger [2000] • very similar in essence • fewer abstractions here; more direct focus on the components of the transition function • focus on list scheduling helps to clarify contribution • different in generalizations that follow

  32. Other references • A rich history of FPTAS references • Pioneers (1970s): Horowitz & Sahni, Ibarra & Kim, Sethi, Garey, Johnson, Babat, Gens, Levner, Lawler, Lenstra, Rinnooy Kan, and more • 61 references in the ACM digital guide for FPTAS • Domination in Branch and Bound is an old idea, but I don’t have early references.

  33. Multi-criteria FPTAS • Suppose we have two or more objective criteria. • We say that (c, b) ε-dominates (c’, b’) if • c ≤ (1 + ε) c’ and • b ≤ (1 + ε) b’. • A Pareto set consists of a maximal set of undominated solutions. • The size of an ε-Pareto set is polynomial in the size of the data and in 1/ε for any fixed number of objectives.
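
One way to see the size bound (a sketch, not necessarily the talk's argument): bucket each objective geometrically by powers of (1 + ε) and keep one representative per nonempty cell, so for k objectives with values in [1, Cmax],

    \bigl|\,\varepsilon\text{-Pareto set}\,\bigr| \ \le\ \Bigl\lceil \log_{1+\varepsilon} C_{\max} \Bigr\rceil^{\,k}
    \ =\ O\!\bigl( (\varepsilon^{-1} \log C_{\max})^{k} \bigr),

which is polynomial in the size of the data and in 1/ε for any fixed k.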

  34. Multi-criteria FPTAS

  35. Multiple Criteria FPTAS Theorem. If a sequential problem is solvable with an FPTAS for one good objective using the results of Theorem 1, then the multiple criteria version is solvable with an FPTAS for multiple good objectives. Proof. Same argument as in single criterion case.

  36. References • Multi-criteria FPTAS • Hansen [1979] • Orlin [1981] • Warburton [1987] • Safer and Orlin [1995] • Papadimitriou and Yannakakis [2000], [2001] • Angel, Bampis, and Kononov [2003]

  37. Machine Scheduling with “crashing” • 2-machine scheduling problem • Minimize the makespan • Budget B • Processing item j uses up some of the budget • Processing time of item j is pj(b), where $b are used • If b’ > b, then pj(b’) ≤ pj(b). • Note: there is an exponential number of possible decisions to make at stage j. • We will modify B&D.

  38. Machine Scheduling with “crashing” • States at stage j: processing time on machine 1 (good), processing time on machine 2 (good), budget used up (bad). • Decisions at stage j: place job j on machine 1 or 2, and decide how much budget to allocate (exponentially many choices).

  39. Branching at stage j: budget allocation as a binary decision process (b = budget allocated). [Figure: from the current state, branch on xj = 1 or xj = 0; for each choice, the budget interval is halved recursively into [0, B/2] and [B/2, B], then [0, B/4], [B/4, B/2], [B/2, 3B/4], [3B/4, B], and so on. Example: b ∈ [0, B/2].]

  40. Domination Rule • A node labeled [L, U] means: at stage j, from state S, assign job j to machine 1 with a budget between L and U. • During the branching for budget allocation, stop branching at a node denoted [L, U] if allocating the budget L gives a state that ε-dominates the state obtained by allocating U. • That is, if the state Fj(S, 1, L) ε-dominates the state Fj(S, 1, U), then allocate a budget of L and stop branching.
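
A sketch of this halving-with-early-stopping rule in code; F_j and eps_dominates stand in for this problem's transition function and ε-domination test, and integer budgets are assumed so the recursion terminates:

    def branch_budget(F_j, eps_dominates, state, machine, L, U, eps, out):
        """Collect the budget levels worth branching on for one (state, machine) pair.
        The interval [L, U] is halved until allocating L eps-dominates allocating U."""
        if eps_dominates(F_j(state, machine, L), F_j(state, machine, U), eps):
            out.append(L)                  # budget L is (nearly) as good as anything in [L, U]
            return
        if U - L <= 1:                     # adjacent integer budgets: keep both branches
            out.extend([L, U])
            return
        mid = (L + U) // 2
        branch_budget(F_j, eps_dominates, state, machine, L, mid, eps, out)
        branch_budget(F_j, eps_dominates, state, machine, mid, U, eps, out)

    # Usage: budgets = []; branch_budget(F_j, eps_dominates, S, 1, 0, B, eps, budgets)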

  41. Lemma and Theorem • Lemma. In the binary expansion of the budget, the number of nodes starting from any state is polynomial in the size of the problem and in 1/ε. • Theorem. Branch and ε-dominate is an FPTAS whenever it satisfies conditions 1 to 6 from before and whenever the number of bad monotone states is at most 1.

  42. Lot Sizing • The previous analysis extends to dynamic lot-sizing. • Branch and ε-dominate gives an FPTAS, even in the multiple criteria case. • This extends work by • Dada and Orlin [1981] • Safer and Orlin [1995] • Van Hoesel and Wagelmans [2001] • Safer, Orlin, and Dror [2003]

  43. On FPTASes • List Scheduling Problem • polynomially bounded states • monotone states, both good and bad • constraints • ε-domination • Extends and simplifies research by Woeginger
