
Chapter 15 Approximation Algorithm


  1. Chapter 15 Approximation Algorithm • Introduction • Basic Definitions • Difference Bounds • Relative Performance Bounds • Polynomial Approximation Schemes • Fully Polynomial Approximation Schemes

  2. 15.1 Introduction • There are many hard combinatorial optimization problems that cannot be solved efficiently using techniques such as backtracking or randomization. An alternative for tackling some of these problems is to devise an approximation algorithm, given that we will be content with a "reasonable" solution that approximates an optimal one. • A distinguishing characteristic of approximation algorithms is that they are fast, as they are mostly greedy heuristics.

  3. 15.2 Basic Definitions • A combinatorial optimization problem Π is either a minimization problem or a maximization problem. It consists of three components: • (1) A set DΠ of instances. • (2) For each instance I ∈ DΠ, there is a finite set SΠ(I) of candidate solutions for I. • (3) Associated with each solution σ ∈ SΠ(I) to an instance I in DΠ, there is a value f(σ) called the solution value for σ.

  4. 15.2 Basic Definitions • If Π is a minimization problem, then an optimal solution σ* for an instance I ∈ DΠ has the property that for all σ ∈ SΠ(I), f(σ*) ≤ f(σ); for a maximization problem, f(σ*) ≥ f(σ). • An approximation algorithm A for an optimization problem Π is a (polynomial time) algorithm such that, given an instance I ∈ DΠ, it outputs some solution σ ∈ SΠ(I). We will denote by A(I) the value f(σ).

  5. 15.2 Example 15.1 Consider the problem BIN PACKING: Given a collection of items of sizes between 0 and 1, it is required to pack these items into the minimum number of bins of unit capacity. Obviously, this is a minimization problem. The set of instances DΠ consists of all sets I = {s1, s2, …, sn} such that for all j, 1 ≤ j ≤ n, sj is between 0 and 1. The set of solutions SΠ(I) consists of all partitions σ = {B1, B2, …, Bk} of I into disjoint subsets such that for all j, 1 ≤ j ≤ k, the sizes in Bj sum to at most 1. Given a solution σ, its value f(σ) is simply |σ| = k. An optimal solution for this problem is a solution σ of least cardinality. Let A be the (trivial) algorithm that assigns one bin to each item. Then, by definition, A is an approximation algorithm. Clearly, it is not a good one.

  6. 15.3 Difference Bounds • Perhaps the most we can hope for from an approximation algorithm is that the difference between the value of the optimal solution and the value of the solution obtained by the approximation algorithm is always constant. That is, for all instances I of the problem, the most desirable behavior of an approximation algorithm A is that |A(I) − OPT(I)| ≤ K for some constant K. • There are very few NP-hard optimization problems for which approximation algorithms with difference bounds are known.

  7. 15.3.1 Planar graph coloring • Let G = (V, E) be a planar graph. By the Four Color Theorem, every planar graph is four-colorable, and it is fairly easy to determine whether a graph is 2-colorable. • Given an instance I consisting of G, an approximation algorithm A may proceed as follows. Assume G is nontrivial, i.e., it has at least one edge, so at least two colors are needed. Determine whether G is 2-colorable. If it is, then output 2; otherwise output 4. If G is 2-colorable, then |A(I) − OPT(I)| = 0. If it is not, then OPT(I) is 3 or 4, so |A(I) − OPT(I)| ≤ 1. Thus A achieves a difference bound of 1.
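The slide's algorithm can be sketched in Python. This is a minimal sketch; `approx_planar_coloring` is an illustrative name, and the 2-colorability test is the standard BFS bipartiteness check:

```python
from collections import deque

def approx_planar_coloring(n, edges):
    """Output 2 if the graph is 2-colorable (bipartite), else 4.

    For a nontrivial planar graph the Four Color Theorem guarantees
    OPT is at most 4, so the answer differs from OPT by at most 1.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if color[w] == -1:
                    color[w] = 1 - color[u]
                    q.append(w)
                elif color[w] == color[u]:
                    return 4          # odd cycle found: not 2-colorable
    return 2

# A 4-cycle is 2-colorable; a triangle is not.
print(approx_planar_coloring(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
print(approx_planar_coloring(3, [(0, 1), (1, 2), (2, 0)]))          # 4
```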

  8. 15.3.2 Hardness result: the knapsack problem • Knapsack problem: Given n items {u1, u2, …, un} with integer sizes s1, s2, …, sn and integer values v1, v2, …, vn, and a knapsack capacity C that is a positive integer, the problem is to fill the knapsack with some of these items whose total size is at most C and whose total value is maximum. In other words, find a subset S ⊆ U such that Σuj∈S sj ≤ C and Σuj∈S vj is maximum.

  9. 15.3.2 Hardness result: the knapsack problem • Suppose there is an approximation algorithm A that solves the knapsack problem with difference bound K. Given an instance I, we can use algorithm A to output an optimal solution as follows. Construct a new instance I′ such that for all j, 1 ≤ j ≤ n, s′j = sj and v′j = (K+1)vj. It is easy to see that any solution to I′ is a solution to I and vice versa, and every solution value in I′ is (K+1) times the corresponding value in I. So |A(I′) − OPT(I′)| = (K+1)|A(I) − OPT(I)| ≤ K, which forces |A(I) − OPT(I)| = 0. • This means that A always gives the optimal solution, i.e., it solves the knapsack problem exactly. Hence no such polynomial time algorithm exists unless NP = P.

  10. 15.4 Relative Performance Bounds • Approximation ratio: Let Π be a minimization problem and I an instance of Π. Let A be an approximation algorithm to solve Π. We define the approximation ratio RA(I) to be RA(I) = A(I)/OPT(I). • If Π is a maximization problem, then we define RA(I) to be RA(I) = OPT(I)/A(I). In both cases RA(I) ≥ 1.

  11. 15.4 Relative Performance Bounds • Absolute performance ratio: the absolute performance ratio RA for the approximation algorithm A is defined by RA = inf{r | RA(I) ≤ r for all instances I ∈ DΠ}. • The asymptotic performance ratio RA∞ for the approximation algorithm A is defined by RA∞ = inf{r ≥ 1 | for some N, RA(I) ≤ r for all instances I ∈ DΠ with OPT(I) ≥ N}.

  12. 15.4.1 The bin packing problem Problem: Given a collection of items u1, u2, …, un of sizes s1, s2, …, sn, where each sj is between 0 and 1, we are required to pack these items into the minimum number of bins of unit capacity.

  13. 15.4.1 The bin packing problem • Heuristics for the bin packing problem: • First Fit (FF). In this method, the bins are indexed as 1, 2, …. All bins are initially empty. The items are considered for packing in the order u1, u2, …, un. To pack item ui, find the least index j such that bin j is filled to level at most 1 − si, and add item ui to the items packed in bin j. • Best Fit (BF). This method is the same as FF except that when item ui is to be packed, we look for the bin that is filled to level l ≤ 1 − si with l as large as possible.
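The First Fit rule, for example, can be sketched in a few lines of Python (`first_fit` is an illustrative name):

```python
def first_fit(sizes):
    """First Fit: place each item in the lowest-indexed bin it fits in."""
    levels = []                      # levels[j] = total size packed in bin j
    for s in sizes:
        for j, level in enumerate(levels):
            if level + s <= 1.0:     # fits: bin j is filled to at most 1 - s
                levels[j] = level + s
                break
        else:
            levels.append(s)         # no open bin fits: open a new one
    return len(levels)

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))  # 4
```

Best Fit differs only in the inner search: instead of the first bin with enough room, it picks the fullest such bin.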

  14. 15.4.1 The bin packing problem • First Fit Decreasing (FFD). In this method, the items are first sorted in decreasing order of size and then packed using the FF method. • Best Fit Decreasing (BFD). In this method, the items are first sorted in decreasing order of size and then packed using the BF method.

  15. 15.4.1 Theorem 15.1 For all instances I of the BIN PACKING problem, FF(I) ≤ (17/10) OPT(I) + 2.

  16. 15.4.1 Theorem 15.2 For all instances I of the BIN PACKING problem, FFD(I) ≤ (11/9) OPT(I) + 4.

  17. 15.4.2 The Euclidean traveling salesman problem • Problem: Given a set S of n points in the plane, find a tour τ on these points of shortest length. Here a tour is a circular path that visits every point exactly once. • This problem is a special case of the traveling salesman problem and is commonly referred to as the EUCLIDEAN TRAVELING SALESMAN problem (ETSP).

  18. 15.4.2 The Euclidean traveling salesman problem • Solution: • Nearest Neighbor (NN): Let p1 be an arbitrary starting point. An intuitive method would proceed in a greedy manner, visiting first the point closest to p1, say p2, then the point closest to p2, and so on. This method is referred to as the nearest neighbor (NN) heuristic; its performance ratio is RNN(I) = NN(I)/OPT(I) = O(log n).

  19. 15.4.2 The Euclidean traveling salesman problem • Solution: • Minimum matching (MM) heuristic (see Algorithm 15.1). • The performance ratio of this algorithm is less than 3/2, i.e., RMM(I) = MM(I)/OPT(I) < 3/2.

  20. 15.4.2 Algorithm 15.1 ETSPAPPROX
Input: An instance I of EUCLIDEAN TRAVELING SALESMAN, i.e., a set S of points in the plane.
Output: A tour τ for instance I.
1. Find a minimum spanning tree T of S.
2. Identify the set X of vertices of odd degree in T.
3. Find a minimum weight matching M on X.
4. Find an Eulerian tour τe in the multigraph T ∪ M.
5. Traverse τe edge by edge and bypass each previously visited vertex. Let τ be the resulting tour.
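Step 3 (minimum weight matching) is the involved part of Algorithm 15.1, so as an illustration here is the simpler MST-doubling heuristic, which skips the matching step and achieves ratio 2 instead of 3/2. All names are mine; the preorder walk of the tree plays the role of the shortcut Eulerian tour:

```python
import math

def mst_tour(points):
    """MST-doubling heuristic: build a minimum spanning tree with Prim's
    algorithm, then shortcut a preorder walk of it into a tour (ratio 2)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n            # cheapest edge from each point into T
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    # Preorder walk of T = Eulerian walk of the doubled tree, shortcut.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

print(mst_tour([(0, 0), (0, 1), (1, 0), (1, 1)]))
```

The full 3/2-ratio algorithm would replace the doubling by a minimum weight matching on the odd-degree vertices, exactly as in steps 2 to 4 above.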

  21. 15.4.3 The vertex cover problem • Problem: A vertex cover C in a graph G = (V, E) is a set of vertices such that each edge in E is incident to at least one vertex in C. The objective is to find a vertex cover of minimum size. • The problem of deciding whether a graph contains a vertex cover of size k, where k is a positive integer, is NP-complete.

  22. 15.4.3 The vertex cover problem • Solution: • Repeat the following step until E becomes empty: pick an edge e arbitrarily and add one of its endpoints, say v, to the vertex cover; then delete e and all other edges incident to v. This is an approximation algorithm that outputs a vertex cover, but it can be shown that its performance ratio is unbounded. Surprisingly, if, when considering an edge e, we add both of its endpoints to the vertex cover, then the performance ratio becomes 2.

  23. 15.4.3 Algorithm 15.2 VCOVERAPPROX
Input: An undirected graph G = (V, E).
Output: A vertex cover C for G.
1. C ← {}
2. while E ≠ {}
3.     Let e = (u, v) be any edge in E.
4.     C ← C ∪ {u, v}
5.     Remove e and all edges incident to u or v from E.
6. end while
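A direct Python rendering of Algorithm 15.2 follows; scanning the edge list once and skipping already-covered edges is equivalent to repeatedly picking an arbitrary remaining edge (`vertex_cover_approx` is an illustrative name):

```python
def vertex_cover_approx(edges):
    """Take both endpoints of every edge not yet covered (ratio 2)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.update((u, v))                # add both endpoints
    return cover

# On the path 1-2-3-4 the optimal cover is {2, 3}; the heuristic
# returns a cover of at most twice that size.
print(vertex_cover_approx([(1, 2), (2, 3), (3, 4)]))
```

The added endpoints form a maximal matching, and any vertex cover must contain at least one endpoint of each matched edge, which is where the factor of 2 comes from.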

  24. 15.4.4 Hardness result: the traveling salesman problem • In the last sections, we have presented approximation algorithms with reasonable performance ratios. It turns out that there are many problems that do not admit bounded performance ratios. • For example, the problems COLORING, CLIQUE, INDEPENDENT SET and the general TRAVELING SALESMAN problem have no known approximation algorithms with bounded ratios.

  25. 15.4.4 Theorem 15.3 There is no approximation algorithm A for the problem TRAVELING SALESMAN with RA< ∞ unless NP = P.

  26. 15.5 Polynomial Approximation Schemes • So far we have seen that for some NP-hard problems there exist approximation algorithms with bounded approximation ratio. • On the other hand, for some problems, it is impossible to devise an approximation algorithm with bounded ratio unless NP = P. • At the other extreme, there are problems for which there exists a series of approximation algorithms whose performance ratio converges to 1. Examples of these problems include KNAPSACK, SUBSET-SUM and MULTIPROCESSOR SCHEDULING.

  27. 15.5 Definition 15.1 An approximation scheme for an optimization problem is a family of algorithms {Aε | ε > 0} such that RAε ≤ 1 + ε.

  28. 15.5 Definition 15.2 A polynomial approximation scheme (PAS) is an approximation scheme {Aε}, where each algorithm Aε runs in time that is polynomial in the length of the input instance I.

  29. 15.5.1 The knapsack problem • Problem: Let U = {u1, u2, …, un} be a set of items to be packed in a knapsack of size C. For 1 ≤ j ≤ n, let sj and vj be the size and value of the jth item, respectively. The objective is to fill the knapsack with some items in U whose total size is at most C and such that their total value is maximum.

  30. 15.5.1 The knapsack problem • Solution: Consider the greedy algorithm that first orders the items by decreasing value-to-size ratio (vj/sj) and then considers the items one by one for packing. If the current item fits in the available space, it is included; otherwise the next item is considered. The procedure terminates as soon as all items have been considered or no more items can fit in the knapsack. • The performance ratio of this greedy algorithm alone is unbounded. • A simple modification, shown as Algorithm 15.3, results in a performance ratio of 2.

  31. 15.5.1 Algorithm 15.3 KNAPSACKGREEDY
Input: 2n+1 positive integers corresponding to item sizes {s1, s2, …, sn}, item values {v1, v2, …, vn} and the knapsack capacity C.
Output: A subset Z of the items whose total size is at most C.
1. Renumber the items so that v1/s1 ≥ v2/s2 ≥ … ≥ vn/sn
2. j ← 0; K ← 0; V ← 0; Z ← {}
3. while j < n and K < C
4.     j ← j + 1
5.     if sj ≤ C − K then
6.         Z ← Z ∪ {uj}
7.         K ← K + sj
8.         V ← V + vj
9.     end if
10. end while
11. Let Z′ = {us}, where us is an item of maximum value.
12. if V ≥ vs then return Z
13. else return Z′
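Algorithm 15.3 can be sketched in Python as follows. The name `knapsack_greedy` is mine, and the sketch assumes, as the slide implicitly does, that every item fits in the knapsack by itself:

```python
def knapsack_greedy(sizes, values, C):
    """Greedy by value/size ratio, then take the better of the greedy
    packing and the single item of maximum value (this comparison is
    what brings the performance ratio down to 2)."""
    n = len(sizes)
    order = sorted(range(n), key=lambda j: values[j] / sizes[j], reverse=True)
    Z, K, V = [], 0, 0
    for j in order:
        if sizes[j] <= C - K:        # item j still fits
            Z.append(j)
            K += sizes[j]
            V += values[j]
    s = max(range(n), key=lambda j: values[j])   # max-value item (assumed to fit)
    if V >= values[s]:
        return V, Z
    return values[s], [s]

print(knapsack_greedy([1, 2, 3], [2, 3, 4], 4))  # (5, [0, 1])
```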

  32. 15.5.1 Theorem 15.4 Let ε = 1/k for some k ≥ 1. Then, the running time of Algorithm Aε is O(kn^(k+1)), and its performance ratio is 1 + ε.

  33. 15.6 Fully Polynomial Approximation Schemes • The polynomial approximation scheme described in Sec. 15.5 runs in time that is exponential in 1/ε, the reciprocal of the desired error bound. • In this section, we demonstrate an approximation scheme in which the approximation algorithm runs in time that is also polynomial in 1/ε.

  34. 15.6 Definition 15.3 A fully polynomial approximation scheme (FPAS) is an approximation scheme {Aε}, where each algorithm Aε runs in time that is polynomial in both the length of the input instance and 1/ε.

  35. 15.6 Definition 15.4 A pseudopolynomial time algorithm is an algorithm that runs in time that is polynomial in both the length of the input instance and the value of L, where L is the largest number in the instance.

  36. 15.6.1 The subset-sum problem • The subset-sum problem is a special case of the knapsack problem in which the item values are identical to their sizes. • Given n items of sizes s1, s2, …, sn, and a positive integer C, the knapsack capacity, the objective is to find a subset of the items that maximizes the total sum of their sizes without exceeding the knapsack capacity C. • An exact dynamic programming algorithm is shown below as Algorithm SUBSETSUM.

  37. 15.6.1 Algorithm 15.4 SUBSETSUM
Input: A set of items U = {u1, u2, …, un} with sizes s1, s2, …, sn and a knapsack capacity C.
Output: The maximum value of Σuj∈S sj subject to Σuj∈S sj ≤ C for some subset of items S ⊆ U.
1. for i ← 0 to n
2.     T[i, 0] ← 0
3. end for
4. for j ← 0 to C
5.     T[0, j] ← 0
6. end for
7. for i ← 1 to n
8.     for j ← 1 to C
9.         T[i, j] ← T[i−1, j]
10.        if si ≤ j then
11.            x ← T[i−1, j−si] + si
12.            if x > T[i, j] then T[i, j] ← x
13.        end if
14.    end for
15. end for
16. return T[n, C]
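Algorithm 15.4 translates almost line for line into Python (0-based list indexing replaces the slide's 1-based table):

```python
def subset_sum(sizes, C):
    """T[i][j] = largest total size achievable from the first i items
    without exceeding capacity j; runs in Theta(nC) time."""
    n = len(sizes)
    T = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        s = sizes[i - 1]
        for j in range(1, C + 1):
            T[i][j] = T[i - 1][j]                    # skip item i
            if s <= j:                               # or pack it, if it fits
                T[i][j] = max(T[i][j], T[i - 1][j - s] + s)
    return T[n][C]

print(subset_sum([4, 5, 6], 10))  # 10, achieved by 4 + 6
```

The Θ(nC) running time is pseudopolynomial: it is polynomial in the value of C, not in the number of bits needed to write C.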

  38. 15.6.1 The subset-sum problem • Now, we develop an approximation algorithm Aε, where ε = 1/k for some positive integer k. The algorithm is such that for any instance I, OPT(I)/Aε(I) ≤ 1 + ε. • Let K = C/(kn).

  39. 15.6.1 The subset-sum problem • First, we set s′j = ⌊sj/K⌋ for all j, 1 ≤ j ≤ n, and C′ = ⌊C/K⌋ to obtain a new instance I′. Next, we apply Algorithm SUBSETSUM to I′. The running time is now reduced to Θ(nC/K) = Θ(kn²). • Now we estimate the error in the approximate solution: each item size is underestimated by less than K, so OPT(I) − Aε(I) < nK = C/k. • Thus, the algorithm's performance ratio is 1 + ε, and its running time is Θ(n²/ε).
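The scaling idea can be sketched as follows, under stated assumptions: the value K = C/(kn) is reconstructed from the Θ(kn²) running time above, the sizes are rounded with ceilings rather than floors so that the returned packing is always feasible, and the scaled instance is solved with a small reachable-sums dictionary instead of the full table of Algorithm 15.4:

```python
def subset_sum_fpas(sizes, C, k):
    """Scaling scheme for subset-sum with epsilon = 1/k (a sketch)."""
    n = len(sizes)
    K = max(1, C // (k * n))              # scaling factor, at least 1
    scaled = [-(-s // K) for s in sizes]  # ceil(s / K): never undercounts
    Cs = C // K                           # scaled capacity
    # Exact DP on the scaled instance, one representative subset per sum.
    best = {0: []}                        # scaled sum -> chosen item indices
    for i, s in enumerate(scaled):
        for t, items in list(best.items()):
            u = t + s
            if u <= Cs and u not in best:
                best[u] = items + [i]
    chosen = best[max(best)]              # subset with largest scaled sum
    return sum(sizes[i] for i in chosen)  # report its true total size

result = subset_sum_fpas([104, 102, 201, 101], 300, k=2)
print(result)  # 206, and always at most C
```

Because ceilings overestimate each size, any subset feasible in the scaled instance is feasible in the original one, which is why the result never exceeds C.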
