


Presentation Transcript


  1. Alexander Kononov, Sobolev Institute of Mathematics, Siberian Branch of Russian Academy of Science, Novosibirsk, Russia

  2. Russia, Novosibirsk [map slide]

  3. How to design a PTAS adapted from the novel by P. Schuurman and G. Woeginger directed by Alexander Kononov

  4. Garry Potter problem

  5. "Could you find a schedule for my new project with the minimal cost?" "We can do that! Real sorcerers can do everything! And we guess the cost of the project will be $1,000,000." "Sounds great! Wonderful! Go ahead and determine this schedule! Tomorrow we start my new project!" "We cannot do that by tomorrow…" "Real sorcerers can do everything!" "But finding the schedule is going to take us 23.5 years!"

  6. "But… I want… $1,000,000!" "Then… after 23.5 years." "What if I call you up exactly X days from now?" Tomorrow: $2,000,000. The day after tomorrow: $1,500,000. Three days from now: $1,333,333. In general: 1000000·(1 + 1/X).

  7. NP-hard problems • Almost all interesting combinatorial problems are NP-hard. • Nobody knows a polynomial time exact algorithm for any NP-hard problem. • If there exists a polynomial time exact algorithm for some NP-hard problem, then there exist polynomial time exact algorithms for many NP-hard problems. • Most researchers believe that no polynomial time exact algorithm for an NP-hard problem exists. • We have to solve NP-hard problems approximately.

  8. Approximation algorithm An algorithm A is called a ρ-approximation algorithm for problem Π if, for all instances I of Π, it delivers a feasible solution with objective value A(I) such that A(I) ≤ ρ·OPT(I).

  9. Polynomial time approximation scheme (PTAS) • An approximation scheme for problem Π is a family of (1+ε)-approximation algorithms Aε for problem Π over all 0 < ε < 1. • A polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size.

  10. Fully polynomial time approximation scheme (FPTAS) • A fully polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size and also polynomial in 1/ε.

  11. Remarks • Running time • PTAS: |I|^(2/ε), |I|^(2/ε^10), (|I|^(2/ε))^(1/ε). • FPTAS: |I|^2/ε, |I|/ε^2, |I|^7/ε^3. • With respect to worst-case approximation, an FPTAS is the strongest possible result that we can derive for an NP-hard problem.

  12. P2||Cmax • J = {1,…, n} – jobs. • {M1, M2} – identical machines. • Job j has processing time pj > 0 (j = 1,…, n). • Each job has to be executed by one of the two machines. • All jobs are available at time 0 and preemption is not allowed. • Each machine executes at most one job at a time. • The goal is to minimize the maximum job completion time.

  13. How to get a PTAS • Simplification of instance I. • Partition of output space. • Adding structure to the execution of an algorithm A. [diagram: Instance I → Algorithm A → Output A(I)]

  14. Simplification of instance I The first idea is to turn a difficult instance into a more primitive instance that is easier to tackle. Then we use the optimal solution for the primitive instance to get a near-optimal solution of the original instance. [diagram: I → (simplify) → I#; I# → (solve) → OPT(I#); OPT(I#) → (translate back) → App(I)]

  15. Approaches to simplification • Rounding • Merging • Cutting • Aligning

  16. Rounding [figure: jobs on a time axis 0–32, ticks 3, 4, 12, 16, 29]

  17. Rounding [figure: the rounded instance on the same axis]

  18. Merging [figure: jobs on a time axis 0–32, ticks 2, 4, 6, 29]

  19. Merging [figure: the merged instance, ticks 2, 4, 6, 27, 29]

  20. Cutting [figure: jobs on a time axis 0–32, tick 2]

  21. Cutting [figure: the cut instance on the same axis]

  22. Aligning [figure: jobs on a time axis 0–32, ticks 2, 4, 6, 29]

  23. Aligning [figure: aligned job groups with their total lengths]

  24. P2||Cmax • J = {1,…, n} – jobs. • {M1, M2} – identical machines. • Job j has processing time pj > 0 (j = 1,…, n). • Each job has to be executed by one of the two machines. • All jobs are available at time 0 and preemption is not allowed. • Each machine executes at most one job at a time. • The goal is to minimize the maximum job completion time.

  25. Lower bound L := max{ maxj pj , psum/2 }, where psum = Σj pj. Clearly L ≤ OPT(I).

  26. How to simplify an instance (I → I#) • Big = { j ∈ J | pj ≥ εL } • The new instance I# contains all the big jobs from I. • Small = { j ∈ J | pj < εL } • Let X = Σj∈Small pj. • The new instance I# contains ⌈X/εL⌉ jobs of length εL. • The small jobs in I are first glued together to give one long job of length X, and then this long job is cut into chunks of length εL.
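The simplification step above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture; the function name `simplify` and its return convention are my own, and L is the lower bound max{maxj pj, psum/2} from the previous slide.

```python
from math import ceil

def simplify(jobs, eps):
    """Build the simplified instance I#: keep the big jobs of I
    and replace the small jobs by chunks of length eps*L.
    (Illustrative sketch; names are not from the slides.)"""
    L = max(max(jobs), sum(jobs) / 2)        # lower bound on OPT(I)
    big = [p for p in jobs if p >= eps * L]  # big jobs are kept as-is
    X = sum(p for p in jobs if p < eps * L)  # total size of small jobs
    chunks = ceil(X / (eps * L))             # number of eps*L chunks
    return big + [eps * L] * chunks, L
```

For example, with jobs (1, 1, 1, 1, 8) and ε = 1/2 we get L = 8, one big job of length 8, and the four unit jobs glued and cut into a single chunk of length εL = 4.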

  27. I and I# The optimal makespan of I# is fairly close to the optimal makespan of I: OPT(I#) ≤ (1+ε)·OPT(I).

  28. Proof • Xi – the total size of all small jobs on machine Mi in the optimal schedule for I. • On Mi, leave every big job where it is in the optimal schedule. • Replace the small jobs on Mi by ⌈Xi/εL⌉ chunks of length εL. • ⌈X1/εL⌉ + ⌈X2/εL⌉ ≥ ⌈X1/εL + X2/εL⌉ = ⌈X/εL⌉, so all chunks of I# are placed. • ⌈Xi/εL⌉·εL − Xi ≤ (Xi/εL + 1)·εL − Xi = εL, so each machine load grows by at most εL. • OPT(I#) ≤ OPT(I) + εL ≤ (1+ε)·OPT(I).

  29. How to solve the simplified instance • How many jobs are in instance I#? • pj ≥ εL for all jobs in I#. • The total length of all jobs in I#: psum ≤ 2L. • The number of jobs in I# ≤ 2L/εL = 2/ε. • The number of jobs in I# is independent of n. • We may simply try all possible schedules. • The number of all possible schedules ≤ 2^(2/ε)! • Running time is O(2^(2/ε)·n)!
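Since I# has at most 2/ε jobs, trying all assignments is feasible; a minimal sketch of this exhaustive search (my own helper, not from the slides) could look as follows.

```python
from itertools import product

def brute_force_p2cmax(jobs):
    """Try all 2^n assignments of jobs to the two machines and
    return the optimal makespan. Intended for the constant-size
    simplified instance I#. (Illustrative sketch.)"""
    best = float("inf")
    for mask in product((0, 1), repeat=len(jobs)):  # one bit per job
        loads = [0, 0]
        for p, m in zip(jobs, mask):
            loads[m] += p                           # put job p on machine m
        best = min(best, max(loads))                # makespan of this schedule
    return best
```

On the instance (1, 2, 3, 4) this finds the optimal split {1, 4} vs {2, 3} with makespan 5.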

  30. How to translate the solution back • Let σ# be an optimal schedule for instance I#. • Let Li# be the load of machine Mi in σ#. • Let Bi# be the total length of the big jobs on Mi in σ#. • Let Xi# be the total size of the small chunks on Mi in σ#. • Li# = Bi# + Xi#.

  31. σ#(I#) → σ(I) • Every big job is put onto the same machine as in schedule σ#. • Reserve an interval of length X1# + 2εL on machine M1 and an interval of length X2# on machine M2. • Pack small jobs into the reserved interval on machine M1 until some small job does not fit in anymore. • Pack the remaining small jobs into the reserved interval on machine M2.

  32. PTAS

  33. Structuring the output The main idea is to cut the output space (i.e. the set of feasible solutions) into lots of smaller regions over which the optimization problem is easy to approximate. Solving the problem separately for each smaller region and taking the best approximate solution over all regions will then yield a globally good approximate solution. • Partition. • Find representatives. • Take the best.

  34. Partition [figure] * - the global optimal solution

  35. Find representatives [figure] * - the global optimal solution; * - an optimal solution in its district; * - a representative in its district

  36. Take the best [figure] * - the global optimal solution; * - an optimal solution in its district; * - a representative in its district

  37. P2||Cmax • J = {1,…, n} – jobs. • {M1, M2} – identical machines. • Job j has processing time pj > 0 (j = 1,…, n). • Each job has to be executed by one of the two machines. • All jobs are available at time 0 and preemption is not allowed. • Each machine executes at most one job at a time. • The goal is to minimize the maximum job completion time.

  38. How to define the districts • Big = { j ∈ J | pj ≥ εL } • Small = { j ∈ J | pj < εL } • Let Φ be the set of feasible solutions for I. • Every feasible solution σ ∈ Φ specifies an assignment of the n jobs to the two machines. • Define the districts Φ(1), Φ(2), … according to the assignment of big jobs to the two machines: two feasible solutions σ1 and σ2 lie in the same district if and only if σ1 assigns every big job to the same machine as σ2 does.

  39. Number of districts • The number of big jobs ≤ 2L/εL = 2/ε. • The number of different ways of assigning these jobs to the two machines ≤ 2^(2/ε). • The number of districts ≤ 2^(2/ε)! • The number of districts depends on ε and is independent of the input size!

  40. How to find good representatives • The assignment of big jobs to their machines is fixed in Φ(l). • Let OPT(l) be the makespan of the best schedule in Φ(l). • Let Bi(l) be the total length of the big jobs assigned to machine Mi. • T := max{B1(l), B2(l)} ≤ OPT(l). • The initial workload of machine Mi is Bi(l). • We assign the small jobs one by one to the machines; every time, a job is assigned to the machine with the currently smaller workload. • The resulting schedule σ(l) with makespan A(l) is our representative for the district Φ(l).
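The whole output-partitioning PTAS described above fits in a short sketch: enumerate every assignment of big jobs (one district per assignment), complete each with list scheduling of the small jobs, and keep the best representative. This is my own illustrative code, assuming the lower bound L = max{maxj pj, psum/2} from the earlier slide.

```python
from itertools import product

def ptas_p2cmax(jobs, eps):
    """PTAS sketch for P2||Cmax via output partitioning:
    one district per assignment of big jobs, small jobs added
    greedily, best representative returned. (Illustrative.)"""
    L = max(max(jobs), sum(jobs) / 2)            # lower bound on OPT
    big = [p for p in jobs if p >= eps * L]      # at most 2/eps jobs
    small = [p for p in jobs if p < eps * L]
    best = float("inf")
    for mask in product((0, 1), repeat=len(big)):  # one district per mask
        loads = [0.0, 0.0]
        for p, m in zip(big, mask):
            loads[m] += p                        # big jobs fixed in this district
        for p in small:                          # list scheduling of small jobs
            loads[loads.index(min(loads))] += p
        best = min(best, max(loads))             # representative's makespan
    return best
```

On the instance (3, 3, 2, 2, 2) with ε = 1/2, the district putting both big jobs on one machine yields the optimal makespan 6.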

  41. How close is A(l) to OPT(l)? • If A(l) = T, then A(l) = OPT(l). • Let A(l) > T. • Consider the machine with the higher workload in the schedule σ(l). • Then the last job that was assigned to this machine is a small job, and it has length at most εL. • At the moment when this small job was assigned to the machine, the workload of this machine was at most psum/2. • A(l) ≤ psum/2 + εL ≤ (1+ε)·OPT ≤ (1+ε)·OPT(l).

  42. Structuring the execution of an algorithm • The main idea is to take an exact but slow algorithm A, and to interact with it while it is working. • If the algorithm accumulates a lot of auxiliary data during its execution, then we may remove part of this data and clean up the algorithm’s memory. • As a result the algorithm becomes faster.

  43. P2||Cmax • J = {1,…, n} – jobs. • {M1, M2} – identical machines. • Job j has processing time pj > 0 (j = 1,…, n). • Each job has to be executed by one of the two machines. • All jobs are available at time 0 and preemption is not allowed. • Each machine executes at most one job at a time. • The goal is to minimize the maximum job completion time.

  44. Code of a feasible solution • Let σk be a feasible schedule of the first k jobs {1,…, k}. • We encode a feasible schedule σk with machine loads L1 and L2 by the two-dimensional vector [L1, L2]. • Let Vk be the vector set corresponding to feasible schedules of the first k jobs {1,…, k}.

  45. Dynamic programming Input (J = {1,…, n}, p: J → Z+) • Set V0 = {[0,0]}, i = 0. • While i < n do: for every vector [x, y] ∈ Vi put [x + pi+1, y] and [x, y + pi+1] in Vi+1; i := i + 1. • Find the vector [x*, y*] ∈ Vn that minimizes the value max{x, y} over all [x, y] ∈ Vn. Output([x*, y*])
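The dynamic program above translates almost line for line into Python; this is an illustrative transcription (the function name is my own), representing each Vi as a set of load pairs.

```python
def dp_p2cmax(jobs):
    """Exact dynamic program from the slide: V is the set of
    reachable load vectors [x, y] after assigning the jobs
    processed so far. (Illustrative sketch.)"""
    V = {(0, 0)}                                 # V0 = {[0, 0]}
    for p in jobs:                               # build V_{i+1} from V_i
        V = {(x + p, y) for x, y in V} | {(x, y + p) for x, y in V}
    return min(max(x, y) for x, y in V)          # best makespan in V_n
```

For the instance (1, 2, 3, 4) the returned makespan is 5, from the vector [5, 5].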

  46. Running time • The coordinates of all vectors are integers in the range from 0 to psum. • The cardinality of every vector set Vi is bounded from above by (psum)^2. • The total number of vectors determined by the algorithm is at most n·(psum)^2. • The running time of the algorithm is O(n·(psum)^2). • The size |I| of the input I satisfies |I| ≥ log(psum) = const · ln(psum). • The running time of the algorithm is not polynomial in the size of the input!

  47. How to simplify the vector sets [figure: the square [0, psum] × [0, psum] cut by the geometric grid 1, Δ, Δ^2, Δ^3, …, Δ^K on both axes] Δ = 1 + ε/2n. K = logΔ(psum) = ln(psum)/ln Δ ≤ ((1+2n)/ε)·ln(psum).

  48. Trimmed vector set [figure: the same grid; at most one vector is kept in each box] Δ = 1 + ε/2n. K = logΔ(psum) = ln(psum)/ln Δ ≤ ((1+2n)/ε)·ln(psum).

  49. Algorithm FPTAS Input (J = {1,…, n}, p: J → Z+) • Set V0# = {[0,0]}, i = 0. • While i < n do: • for every vector [x, y] ∈ Vi# put [x + pi+1, y] and [x, y + pi+1] in Vi+1; • trim Vi+1 into Vi+1#; • i := i + 1. • Find the vector [x*, y*] ∈ Vn# that minimizes the value max{x, y} over all [x, y] ∈ Vn#. Output([x*, y*])
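The trimming step can be sketched by bucketing each coordinate into its geometric box [Δ^k, Δ^(k+1)) and keeping one genuine vector per box, as on the grid slides. Again an illustrative sketch with my own names, using Δ = 1 + ε/2n.

```python
from math import floor, log

def fptas_p2cmax(jobs, eps):
    """FPTAS sketch: run the DP, but after each step keep at most
    one vector per box of the geometric grid with base
    Delta = 1 + eps/2n. (Illustrative sketch.)"""
    n = len(jobs)
    delta = 1.0 + eps / (2 * n)

    def box(v):
        # index k of the box [Delta^k, Delta^(k+1)) containing v
        return 0 if v == 0 else floor(log(v, delta))

    V = {(0, 0)}
    for p in jobs:
        V = {(x + p, y) for x, y in V} | {(x, y + p) for x, y in V}
        trimmed = {}
        for x, y in V:                       # one representative per box
            trimmed.setdefault((box(x), box(y)), (x, y))
        V = set(trimmed.values())            # V_i -> V_i#
    return min(max(x, y) for x, y in V)
```

Because only genuine vectors are kept, the result is a real schedule's makespan, and the per-step distortion of at most Δ compounds to Δ^n ≤ 1 + ε overall.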

  50. Running time of FPTAS • The trimmed vector set Vi# contains at most one vector in each box. • There are K^2 boxes. • Running time of the FPTAS: O(n·K^2). • n·K^2 ≤ n·((1+2n)/ε)^2·(ln(psum))^2. • Algorithm FPTAS has a time complexity that is polynomial in the input size and in 1/ε.
