
8.3.2 Constant Distance Approximations



Presentation Transcript


  1. 8.3.2 Constant Distance Approximations Sai Divya Enni

  2. We start with the strictest of performance guarantees: that the approximation remains within a constant distance of the optimal solution. • On the basis of this guarantee, finding an optimal solution can be easier than it appears. • For some problems, the value of the optimal solution never exceeds some constant. • The Chromatic Number of Planar Graphs is a good example: since all planar graphs are four-colorable, and any planar graph can be colored with 5 colors in low polynomial time, we get an approximation within distance one of the optimum. • Not quite as trivial is the Chromatic Index problem, which asks how many colors are needed for a valid edge coloring and is NP-complete; yet Vizing's theorem shows that D(max)+1 colors always suffice, and its proof yields an algorithm of complexity O(|E|·|V|) that edge-colors any graph with at most D(max)+1 colors.

  3. Definition 8.7: An instance of Maximum 2-Binpacking is given by a set S, a size function s: S→ℕ, a bin capacity C, and a positive integer bound k. The question is: “Does there exist a subset S′ ⊆ S with at least k elements that can be partitioned into two subsets, each of which has a total size not exceeding C?” • This problem is obviously NP-complete, since it suffices to set k=|S| and C=⅟₂(Σхєs s(x)) in order to restrict it to the Partition problem. • Consider the following simple approximation. Sort the elements by size; let k′ be the largest index such that the sum of the k′ smallest elements does not exceed C, and let k″ be the largest index such that the sum of the k″ smallest elements does not exceed 2C. Pack the k′ smallest elements into the first bin; then, ignoring the (k′+1)st element, pack the next k″−k′−1 elements into the second bin, thereby packing k″−1 elements in all (the second bin does not overflow, since the first k′+1 elements together exceed C while all k″ together total at most 2C). However, the optimal solution cannot pack more than k″ elements, so our approximation falls short of it by at most one element. • The same idea extends to the Maximum k-Binpacking problem.
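The greedy procedure on this slide can be sketched directly; function and variable names below are ours, not the text's.

```python
def max_two_binpacking(sizes, C):
    """Greedy constant-distance approximation for Maximum 2-Binpacking:
    packs at most one element fewer than the optimal solution."""
    s = sorted(sizes)

    def largest_prefix(cap):
        # largest count such that the sum of that many smallest elements <= cap
        total = count = 0
        for x in s:
            if total + x > cap:
                break
            total += x
            count += 1
        return count

    k1 = largest_prefix(C)       # k' in the text
    k2 = largest_prefix(2 * C)   # k'' in the text
    bin1 = s[:k1]
    # skip the (k'+1)st element, then pack the next k''-k'-1 elements
    bin2 = s[k1 + 1:k2]
    return bin1, bin2
```

On sizes [3, 3, 3, 3] with C = 5, for instance, this packs two elements, which matches the optimum.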

  4. Definition 8.8: An instance of Safe Deposit Boxes is given by a collection of deposit boxes (B₁, B₂, …, Bₙ), each containing a certain amount s(i,j) of each of m currencies (with i ranging over the currencies 1…m and j over the boxes 1…n), a target amount b for each currency, and a bound k>0. The question is: “Does there exist a subcollection of at most k safe deposit boxes that among themselves contain sufficient currency to meet every target?” • This problem resembles resource allocation in operating systems, where each process requires a certain amount of each of several resources. When a deadlock arises, we resolve it by releasing some of the resources so that the tasks can complete.
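For a single currency (m = 1) the problem is easy: opening boxes in decreasing order of content reaches the target with the fewest boxes. A minimal sketch, with names of our own choosing:

```python
def one_currency_boxes(amounts, target):
    """Exact solution for the one-currency case: greedily open the
    richest boxes first. Returns the indices of a minimum-size set of
    boxes reaching the target, or None if all boxes together fall short."""
    order = sorted(range(len(amounts)), key=lambda j: amounts[j], reverse=True)
    opened, total = [], 0
    for j in order:
        if total >= target:
            break
        opened.append(j)
        total += amounts[j]
    return opened if total >= target else None
```

With two or more currencies this greedy order is no longer well defined, which is exactly where the difficulty discussed on the next slide comes from.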

  5. But this solution cannot be applied to currencies, since currencies are not interchangeable. So this problem is NP-complete for each fixed number of currencies greater than one, but it admits a constant-distance approximation. • Theorem 8.12: The Minimum-Degree Spanning Tree problem can be approximated to within one of the minimum degree. • The approximation proceeds through successive improvement iterations from an initial spanning tree. • Most NP-hard problems, however, cannot be approximated to within a constant distance unless P = NP.
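One improvement iteration for the spanning-tree result can be sketched as follows. This conveys only the flavor of the local-improvement step (swap in a non-tree edge whose tree path passes through a maximum-degree vertex); the acceptance test and all names are our own simplification, not the full algorithm behind Theorem 8.12.

```python
from collections import defaultdict

def improve_once(n, edges, tree):
    """Try one degree-reducing swap: find a non-tree edge (u, v) whose
    tree path passes through a maximum-degree vertex w, with deg(u) and
    deg(v) both at most deg(w) - 2; add (u, v), dropping a path edge at w."""
    adj, deg = defaultdict(list), defaultdict(int)
    for a, b in tree:
        adj[a].append(b)
        adj[b].append(a)
        deg[a] += 1
        deg[b] += 1
    maxdeg = max(deg[v] for v in range(n))

    def tree_path(u, v):
        # BFS in the tree to recover the unique u-v path
        parent, queue = {u: None}, [u]
        while queue:
            x = queue.pop(0)
            for y in adj[x]:
                if y not in parent:
                    parent[y] = x
                    queue.append(y)
        path, x = [], v
        while x is not None:
            path.append(x)
            x = parent[x]
        return path  # v ... u

    tree_set = {frozenset(e) for e in tree}
    for u, v in edges:
        if frozenset((u, v)) in tree_set:
            continue
        if deg[u] > maxdeg - 2 or deg[v] > maxdeg - 2:
            continue
        path = tree_path(u, v)
        for i, w in enumerate(path):
            if deg[w] == maxdeg:
                # w is interior (endpoints have low degree), so path[i-1] exists;
                # dropping this path edge and adding (u, v) keeps a spanning tree
                drop = frozenset((w, path[i - 1]))
                return [e for e in tree if frozenset(e) != drop] + [(u, v)]
    return tree  # no improving swap found
```

On a star (all edges meeting at one hub) plus one outer edge, a single call already lowers the maximum degree by one.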

  6. Approximation Schemes • Ratio approximations: This is another technique in approximation algorithms. These approximations are characterized by asking whether an algorithm can provide a fixed ratio guarantee or, for a price, any nonzero ratio guarantee, and if so at what price. • Definition 8.9: An optimization problem ∏ belongs to the class Apx if there exist a precision requirement ε and an approximation algorithm A such that A takes as input an instance I of ∏, runs in time polynomial in |I|, and obeys |f*(I) − f(A(I))| / f*(I) ≤ ε, where f*(I) denotes the value of the optimal solution and f(A(I)) the value of the solution returned by A.

  7. An optimization problem ∏ belongs to the class PTAS (is p-approximable) if there exists a polynomial-time approximation scheme, that is, a family of approximation algorithms {Aᵢ}, such that for each fixed precision requirement ε > 0 there exists an algorithm in the family, say Aᵢ, that takes as input an instance I of ∏, runs in time polynomial in |I|, and obeys |f*(I) − f(Aᵢ(I))| / f*(I) ≤ ε. • An optimization problem ∏ belongs to the class FPTAS (is fully p-approximable) if there exists an approximation algorithm A that takes as input both an instance I of ∏ and a precision requirement ε, runs in time polynomial in |I| and ⅟ε, and obeys |f*(I) − f(A(I))| / f*(I) ≤ ε.

  8. We know that FPTAS ⊆ PTAS ⊆ Apx. • Theorem 8.14: Let ∏ be an optimization problem; if its decision version is strongly NP-complete, then ∏ is not fully p-approximable. • Proof: Let ∏d be the decision version of ∏. The bound B introduced in ∏d to make it a decision problem ranges up to the value of the optimal solution, and since by definition B ≤ max(I), it follows that the value of the optimal solution obeys f*(I) ≤ max(I). • Now set ε = 1/(max(I)+1) < 1/f*(I), so that an ε-approximate solution must be an exact solution: |f*(I) − f(A(I))| ≤ ε·f*(I) < 1, and solution values are integers.

  9. If ∏ were fully p-approximable, then there would exist an ε-approximation algorithm A running in time polynomial in ⅟ε. However, with ε = 1/(max(I)+1), time polynomial in ⅟ε is time polynomial in max(I), which is pseudo-polynomial time. Hence, if ∏ were fully p-approximable, there would exist a pseudo-polynomial-time algorithm solving it, and thus also ∏d, exactly, which would contradict the strong NP-completeness of ∏d. • This result leaves little room in FPTAS for the optimization versions of NP-complete problems, since most NP-complete problems are strongly NP-complete. It does, however, leave room for the optimization versions of problems, such as Knapsack, whose decision versions are NP-complete but not strongly NP-complete.

  10. Example: a fully p-approximable problem, Knapsack. Given an instance with n items where the item of largest value has value V and the item of largest size has size S, the dynamic-programming solution runs in O(n²V·log(nSV)) time, with input size O(n·log(SV)). If we scale the item values down by some factor F, the new running time becomes O(n²(V/F)·log(nSV/F)); with the right choice of F we can make this expression polynomial in the input size. Setting k = ⌈⅟ε⌉ and F = V/(kn) yields an approximation algorithm running in O(⅟ε·n³·log(⅟ε·n²S)) time, which is polynomial in the input size. Hence we have derived a fully polynomial-time approximation scheme for the Knapsack problem.
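A runnable sketch of the value-scaling scheme. We use the common scaling factor F = εV/n, which with k = ⌈1/ε⌉ is essentially the text's V/(kn); the dynamic program and all names below are ours, and we assume every single item fits in the knapsack.

```python
def knapsack_fptas(values, sizes, C, eps):
    """FPTAS for 0/1 Knapsack: scale values down by F, solve the scaled
    instance exactly by dynamic programming over achievable scaled
    values, and return the indices of the chosen items."""
    n = len(values)
    F = eps * max(values) / n          # scaling factor
    scaled = [int(v / F) for v in values]
    total = sum(scaled)
    INF = float("inf")
    # dp[i][v] = minimum capacity needed to reach scaled value exactly v
    #            using only the first i items
    dp = [[INF] * (total + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(1, n + 1):
        sv, sz = scaled[i - 1], sizes[i - 1]
        for v in range(total + 1):
            dp[i][v] = dp[i - 1][v]
            if v >= sv and dp[i - 1][v - sv] + sz < dp[i][v]:
                dp[i][v] = dp[i - 1][v - sv] + sz
    best = max(v for v in range(total + 1) if dp[n][v] <= C)
    # walk the table backwards to recover the chosen items
    items, v = [], best
    for i in range(n, 0, -1):
        if dp[i][v] != dp[i - 1][v]:
            items.append(i - 1)
            v -= scaled[i - 1]
    return items
```

Rounding loses at most F per item, hence at most nF = εV ≤ ε·OPT in total, so the returned set has value at least (1 − ε) times the optimum.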

  11. Theorem 8.15: Let ∏ be an optimization problem with the following properties: • f(I) and max(I) are polynomially related through len(I); that is, there exist bivariate polynomials p and q such that we have both f(I) ≤ p(len(I), max(I)) and max(I) ≤ q(len(I), f(I)); • the objective value of any feasible solution varies linearly with the parameters of the instance; and • ∏ can be solved in pseudo-polynomial time. Then ∏ is fully p-approximable.

  12. Definition 8.10: An optimization problem is simple if, for each fixed bound B, the set of instances with optimal values not exceeding B is decidable in polynomial time. It is p-simple if there exists a fixed bivariate polynomial q such that the set of instances I with optimal values not exceeding B is decidable in time q(|I|, B). • Let ∏ be an optimization problem. • If ∏ is p-approximable (∏ Є PTAS), then it is simple. • If ∏ is fully p-approximable (∏ Є FPTAS), then it is p-simple. • Proof: We argue for a maximization problem; the reasoning for minimization is the same. For any fixed bound B, the approximation scheme can meet the precision requirement ε = 1/(B+2) in time polynomial in the input size |I|.

  13. Hence, if the optimal value is at most B, so is the value returned; conversely, if the optimal value exceeds B, the value returned exceeds (1−ε)(B+1) > B. The optimal value is therefore at most B exactly when the returned value is, and since the second inequality is decidable in polynomial time, so is the first, proving the first statement. Adding uniformity to the running time of the approximation algorithm adds uniformity to the decision procedure for the instances whose optimal value does not exceed B, thus proving the second statement of the theorem.

  14. Theorem 8.17: If ∏ is an NPO problem with an NP-complete decision version, and for each instance I of ∏, f(I) and max(I) are polynomially related through len(I), then ∏ is p-simple if and only if it can be solved in pseudo-polynomial time (for a proof, refer to Exercise 8.28). • The class PTAS is much richer than FPTAS. Our first attempt at an approximation scheme for the Knapsack problem, through exhaustive search over all subsets of at most k elements, provides a general technique for building approximation schemes for a class of NPO problems.
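The subset-enumeration technique mentioned above, instantiated for Knapsack: try every "seed" subset of at most k items and complete it greedily by value density (this is the classic Sahni scheme, whose relative error is known to be at most 1/(k+1); names below are ours).

```python
from itertools import combinations

def knapsack_ptas(values, sizes, C, k):
    """PTAS for 0/1 Knapsack: for every subset of at most k items that
    fits, complete it greedily in order of decreasing value density and
    keep the best packing found."""
    n = len(values)
    by_density = sorted(range(n), key=lambda i: values[i] / sizes[i],
                        reverse=True)
    best_val, best_set = 0, []
    for r in range(k + 1):
        for seed in combinations(range(n), r):
            size = sum(sizes[i] for i in seed)
            if size > C:
                continue
            val = sum(values[i] for i in seed)
            chosen = list(seed)
            for i in by_density:          # greedy completion
                if i not in seed and size + sizes[i] <= C:
                    size += sizes[i]
                    val += values[i]
                    chosen.append(i)
            if val > best_val:
                best_val, best_set = val, chosen
    return best_val, best_set
```

Since there are O(nᵏ) subsets for each fixed k, the running time is polynomial for each member of the family, which is exactly the shape of a PTAS (but not an FPTAS: the exponent grows as ε shrinks).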

  15. Definition 8.11: An instance of the Maximum Independent Subset problem is given by a collection of items, each with a value. The feasible solutions of the instance form an independence system; that is, every subset of a feasible solution is also a feasible solution. The goal is to maximize the sum of the values of the items included in the solution.

  16. Theorem 8.18: A Maximum Independent Subset problem is in PTAS if and only if, for every k, it admits a polynomial-time k-completion algorithm. • This theorem uses the technique of shifting. Shifting decomposes a problem into suitably sized adjacent sub-pieces and then creates subproblems by grouping a number of sub-pieces. • Think of a linear array of kl sub-pieces grouped into l groups of k sub-pieces each; the grouping has no preferred boundaries, so the boundaries can be shifted to regroup the elements. For each choice of boundaries, the approximation algorithm solves every subproblem and combines the answers into a solution to the entire problem.

  17. Consider the Disk Covering problem: given n points in the plane and disks of a fixed diameter D, cover all the points with the smallest number of disks. Our algorithm divides the area in which the n points lie into strips of width D. For some natural number k, we group k consecutive strips into a single strip of width kD, obtaining one partition; shifting the boundaries of the partition by D gives a new partition, and repeating this k−1 times yields k distinct partitions in all. We solve the subproblem within each group of every partition, combine the groups' covers into a solution for each partition, and choose the best of the k approximations.
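The shifting loop itself is problem-independent and can be sketched as a skeleton. Here `solve_group` stands for whatever subroutine covers the points inside one group of k strips; for a runnable illustration we substitute a greedy one-dimensional cover by intervals of length D, which is our stand-in, not the text's disk subroutine.

```python
def shifting_cover(xs, D, k):
    """Shifting skeleton: try the k shifted partitions into groups of
    k strips of width D, solve each group independently, and keep the
    cheapest overall cover found."""

    def solve_group(points):
        # stand-in subroutine: greedy 1-D cover with intervals [x, x + D]
        cover, limit = [], None
        for x in sorted(points):
            if limit is None or x > limit:
                cover.append(x)        # open a new interval at x
                limit = x + D
        return cover

    best = None
    for shift in range(k):
        groups = {}
        for x in xs:
            g = int((x + shift * D) // (k * D))   # group index under this shift
            groups.setdefault(g, []).append(x)
        cover = []
        for pts in groups.values():
            cover.extend(solve_group(pts))
        if best is None or len(cover) < len(best):
            best = cover
    return best
```

The point of trying all k shifts is that, for any fixed optimal cover, at least one shift cuts through few of its disks, which is what bounds the error of the best of the k answers.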

  18. Questions??? Thank you
