
Chapter 8 Integer Programming Methods


Presentation Transcript


  1. Chapter 8 Integer Programming Methods

  2. The hard IP problems form a class of their own and are among the most difficult optimization problems to solve. For these IPs, the number of steps required to find optimal solutions in the worst case increases exponentially with problem size. To appreciate such growth, assume that n is the problem size and that a solution algorithm for a particular class of problems requires on the order of n^3 steps [written O(n^3)]. Doubling n thus increases the effort by approximately 2^3 = 8. If the solution algorithm takes O(3^n) steps, however, doubling n increases the effort by a factor of 3^n, which can be enormous even for relatively small values of n.
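The arithmetic can be checked directly. The short Python snippet below is purely illustrative and simply compares the two growth factors.

# Purely illustrative arithmetic: the factor by which the work grows when the
# problem size n is doubled, for an O(n^3) algorithm versus an O(3^n) one.
for n in (10, 20, 40):
    poly_factor = (2 * n) ** 3 / n ** 3   # always 2^3 = 8
    expo_factor = 3 ** (2 * n) / 3 ** n   # equals 3^n, astronomical even for modest n
    print(f"n = {n}: polynomial factor {poly_factor:.0f}, exponential factor {expo_factor:.2e}")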

  3. The implication is that, although we may be able to solve a problem of a given size, there is some problem, perhaps only a bit larger, for which the O(3^n) algorithm will not be practical. Two general types of methods are described: enumerative approaches and cutting plane techniques. These are the most common and form the basis of more advanced methods.

  4. GREEDY ALGORITHMS
Some integer programming problems are still easy.
Minimal Spanning Tree Problem
Consider the undirected network shown in Figure 8.1. The graph consists of 11 nodes and 20 edges, where each edge has an associated length. The problem is to select a set of edges such that there is a path between every pair of nodes. The sum of the edge lengths is to be minimized. When the edge lengths are all nonnegative, as assumed here, the optimal selection of edges forms a spanning tree. This characteristic gives rise to the name minimal spanning tree, or MST. The problem can be solved with a greedy algorithm attributed to R. Prim.

  5. Greedy Algorithm
Step 1 (Initialization): Let m be the number of nodes in the graph, and let S1 and S2 be two disjoint sets. Arbitrarily select any node i and place it in S1. Place the remaining m – 1 nodes in S2.
Step 2 (Selection): From all the edges with one end in S1 and the other end in S2, select the edge with the smallest length. Call this edge (i, j), where i ∈ S1 and j ∈ S2.
Step 3 (Construction): Add edge (i, j) to the spanning tree. Remove node j from set S2 and place it in set S1. If S2 = Ø, stop with the MST; otherwise, repeat Step 2.

  6. Applying the greedy algorithm to the graph in Figure 8.1, node 2 was arbitrarily selected as the starting point. A crude implementation has complexity O(m^2): the optimal solution is found after executing the selection step m – 1 times, where at most m – 1 edges must be checked at each iteration.
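As a sketch of the procedure, a crude O(m^2) implementation of the three steps above might look like the following Python fragment. The small graph at the end is made up for illustration; it is not the network of Figure 8.1.

def prim_mst(graph, start):
    """Greedy MST construction; scans all nodes at each selection step, O(m^2)."""
    nodes = list(graph)
    best_len = {v: float("inf") for v in nodes}    # cheapest known edge from S1 to v
    best_from = {v: None for v in nodes}
    in_tree = {v: False for v in nodes}            # membership in S1
    best_len[start] = 0
    tree = []
    for _ in nodes:
        # Selection step: node outside S1 with the cheapest connecting edge
        j = min((v for v in nodes if not in_tree[v]), key=lambda v: best_len[v])
        in_tree[j] = True
        if best_from[j] is not None:               # Construction step
            tree.append((best_from[j], j, best_len[j]))
        for v, length in graph[j].items():         # update candidate edges
            if not in_tree[v] and length < best_len[v]:
                best_len[v], best_from[v] = length, j
    return tree

# Illustrative undirected graph stored as a dict of dicts (each edge in both directions)
g = {1: {2: 4, 3: 3}, 2: {1: 4, 3: 2, 4: 5}, 3: {1: 3, 2: 2, 4: 6}, 4: {2: 5, 3: 6}}
print(prim_mst(g, start=2))   # [(2, 3, 2), (3, 1, 3), (2, 4, 5)]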

  7. A Machine Sequencing Problem
We have n jobs that we wish to sequence through a single machine. The time required to perform job i once it is on the machine is p(i). At completion, a penalty cost is incurred equal to c(i)T(i), where c(i) is a positive quantity and T(i) is the time at which job i is finished. It is assumed that the time associated with setting up the machine for each job is negligible. A sequence of jobs is described by the vector (J1, J2, …, Jn), where Jk = j indicates that job j is the kth job in the sequence.

  8. The job completion times and the total penalty cost are determined by the following equations.
Completion time through the ith job: T(J_i) = p(J_1) + p(J_2) + … + p(J_i)
Total penalty cost: z = c(J_1)T(J_1) + c(J_2)T(J_2) + … + c(J_n)T(J_n)
The problem is to determine the sequence that minimizes the total penalty cost z. In all, there are n! feasible solutions.

  9. The greedy algorithm first computes the c(i)/p(i) ratios for all i and then sequences the jobs in decreasing order of this ratio. Ties may be broken arbitrarily. For the data in Table 8.2, we get
Optimal sequence: (2, 1, 8, 4, 9, 6, 5, 10, 7, 3)
The corresponding total penalty cost is 2252, which is the minimum. Because sorting of n items can be done in O(n log n) time and computing the ratios can be done while sorting, this is the complexity of the algorithm.
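A sketch of this ratio rule in Python appears below. The processing times and penalty coefficients are made-up values, not the data of Table 8.2.

def sequence_by_ratio(p, c):
    """Order jobs by decreasing c(i)/p(i), then compute the total penalty cost."""
    jobs = sorted(p, key=lambda i: c[i] / p[i], reverse=True)
    t, z = 0, 0
    for j in jobs:
        t += p[j]        # completion time T(j) of job j in this sequence
        z += c[j] * t    # penalty c(j) * T(j)
    return jobs, z

p = {1: 4, 2: 2, 3: 7}   # processing times (illustrative)
c = {1: 8, 2: 6, 3: 5}   # penalty coefficients (illustrative)
print(sequence_by_ratio(p, c))   # ([2, 1, 3], 125)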

  10. Heuristic Methods (for hard problems)
Consider the following one-constraint 0-1 IP problem in which the coefficients cj and aj are positive for all j. This is called the knapsack problem, because it describes the dilemma of a camper selecting items for a knapsack. The variable xj represents the decision whether or not to include item j. The quantities cj and aj are the benefit and weight of item j, and b is the maximum weight the camper can carry.
Maximize z = Σj cj xj
subject to Σj aj xj ≤ b, xj = 0 or 1, j = 1, 2, …, n
The problem is to select the set of items that provides the maximum total benefit without exceeding the weight limitation.

  11. A heuristic for finding solutions uses the benefit/weight ratio for each item. The item with the largest ratio appears to be the best from a greedy, myopic point of view, so it is chosen first. Subsequent items are considered in order of decreasing ratio. If the item will fit within the remaining weight limitation, it is included in the knapsack; if it will not, it is excluded. This greedy method does not always find the optimal solution, but it is very easy to execute and often yields a good solution.
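A Python sketch of this greedy heuristic follows. The benefits, weights, and capacity are made-up values, chosen so that the greedy answer is noticeably worse than the optimum.

def greedy_knapsack(c, a, b):
    """Consider items in decreasing benefit/weight order; take each one that still fits."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    x, weight = [0] * len(c), 0
    for j in order:
        if weight + a[j] <= b:   # item j fits within the remaining capacity
            x[j] = 1
            weight += a[j]
    return x, sum(c[j] * x[j] for j in range(len(c)))

c = [10, 7, 25, 24]   # benefits (illustrative)
a = [2, 1, 6, 5]      # weights (illustrative)
print(greedy_knapsack(c, a, b=7))   # ([1, 1, 0, 0], 17); the best choice is items 1 and 4, worth 34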

  12. SOLUTION BY ENUMERATION Example 1 Consider the following integer linear program (ILP).

  13. As shown in Figure 8.2, there are 10 feasible solutions, and the optimal solution is x1 = 2, x2 = 2, with z = –2. From the nonnegativity conditions coupled with the second constraint, we see that and . When , the first constraint tells us that and when , the third constraint implies that . Thus, constraints and along with the integrality requirements limit the number of feasible points to no more than 30.

  14. We add the second and third constraints to arrive at When , and integrality imply that . This reduces the upper bound on the number of feasible points from 30 to 24. Also, multiplying the first constraint by –1 and the second by 4 and adding yields or . This further reduces the number of feasible points to no more than 16.

  15. Next we can pick a feasible solution such as (2, 0) and evaluate the objective function, yielding z(2,0) = –4. Thus, every optimal solution to Example 1 must satisfy z ≥ –4. Now, implies or . This means that the candidates for optimality have been reduced to the set

  16. Of these eight points, (3,1) and (3,0) yield objective values smaller than –4. The optimal value of the corresponding LP is at . Since the objective value at optimality for Example 1 must be integral, it follows that . This constraint is not satisfied at (2,3), so the candidates for optimality have been reduced to the set. In the analysis, we made use of implied bounds on variables, rounding, linear combinations of constraints, and bounds on the objective function value.

  17. Exhaustive Enumeration
The pure 0–1 integer program is written, in general, as
Maximize z = Σj cj xj
subject to Σj aij xj ≤ bi, i = 1, 2, …, m
xj = 0 or 1, j = 1, 2, …, n    (2)

  18. Decisions are made as indicated by the numbers on the branches. A negative number, –j, implies that the variable xj has been set equal to 0; whereas a positive number, +j, implies that xj has been set equal to 1. Node 6 represents the solution x = (1, 0, 1), whereas node 10 represents x = (0, 1, 1). Each node of the tree resides at a particular level that indicates the number of decisions that have been made to reach that point. A complete search tree will have 2^(n+1) – 1 nodes.

  19. For each node k, there is a path Pk leading to it from node 0, which corresponds to an assignment of binary values to a subset of the variables. Such an assignment is called a partial solution. We denote the index set of assigned variables by … and let …

  20. At any node that is not a leaf, some variable, called the separation variable, is fixed at its two possible values at the next level. Choosing a separation variable and moving to the next level is called branching. In the enumeration process, the most common option is to pursue a "depth-first" search strategy. That is, we first create a direct path from the root node to some leaf of the tree and then backtrack to explore other paths diverging from this first path. The node numbers assigned in Figure 8.3 indicate the order in which the nodes are enumerated under this strategy.

  21. Depth-First Search—Branching to the Left
To implement this process, we need a data structure that gives the status of the tree at any point. The vector Pk will be used for this purpose. For node k at level l of the tree, Pk is defined as follows.
• The length of the vector is l, and the vector is written Pk = (j1, j2, …, jl).
• The absolute magnitude of ji is the separation variable at level i.

  22. • The sign of ji indicates the value of the separation variable on the current path. A negative sign indicates that the variable is set equal to zero, and a positive sign (or no sign) indicates that it is set equal to 1.
• The component ji can be underlined or not. If it is underlined, the alternative node at level i has already been explored; if it is not underlined, the alternative has yet to be explored.
• The variables not mentioned in Pk are the free variables.

  23. Table 8.4 lists the Pk vectors for the nodes in Figure 8.3 in the order in which they were generated. Note: for depth-first search, only negative numbers are underlined.
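One possible way to hold this bookkeeping in code is sketched below. The pair representation and the helper functions are assumptions made for illustration; they are not the book's implementation.

# Each entry of Pk is a pair (j, underlined): the sign of j gives the value of
# variable |j| (negative = 0, positive = 1), and the boolean plays the role of
# the underline, marking that the alternative branch at that level has already
# been explored.
Pk = [(1, False), (2, False), (-3, True)]   # x1 = 1, x2 = 1, x3 = 0 (x3 = 1 already explored)

def free_variables(Pk, n):
    """Variables not mentioned in Pk are the free variables."""
    fixed = {abs(j) for j, _ in Pk}
    return [i for i in range(1, n + 1) if i not in fixed]

def backtrack(Pk):
    """Drop underlined entries from the right, then flip the last remaining entry and underline it."""
    while Pk and Pk[-1][1]:
        Pk.pop()
    if Pk:
        j, _ = Pk.pop()
        Pk.append((-j, True))
    return Pk

print(free_variables(Pk, n=5))   # [4, 5]
print(backtrack(Pk))             # [(1, False), (-2, True)]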

  24. Depth-First Search—Arbitrary Branching
If …, we let it appear in Pk as …. For example, if the order of nodes considered in Figure 8.4 had been 1, 3, 2, 4, the sequence of Pk vectors would have been ….

  25. Algorithm for Exhaustive Enumeration
To illustrate this procedure, consider the following knapsack problem.

  26. The nodes of the search tree, along with the path vectors and the partial solutions determined in the course of the iterations, appear in Table 8.5. The notation jk is used here to identify the separation variable at node k. The optimal solution is found, at node 6, to be xB = (1, 0, 1) with zB = 10.
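Since the transcript does not reproduce the instance data, the brute-force sketch below uses assumed coefficients, chosen only so that its optimum matches the quoted answer x = (1, 0, 1) with z = 10.

from itertools import product

# Exhaustive enumeration of a small 0-1 knapsack (illustrative data, not the book's)
c = [4, 9, 6]    # objective coefficients (assumed)
a = [5, 8, 6]    # constraint coefficients (assumed)
b = 12           # right-hand side (assumed)

best_x, best_z = None, float("-inf")
for x in product((1, 0), repeat=len(c)):           # try x_j = 1 before x_j = 0
    if sum(a[j] * x[j] for j in range(len(c))) <= b:
        z = sum(c[j] * x[j] for j in range(len(c)))
        if z > best_z:
            best_x, best_z = x, z
print(best_x, best_z)   # (1, 0, 1) 10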

  27. BRANCH AND BOUND
Depending on the implementation, the terms implicit enumeration, tree search, and strategic partitioning are sometimes used. Regardless of the name, B&B has two appealing qualities. First, it can be applied to the mixed-integer problem and to the pure integer problem in essentially the same way, so a single method works for both problems. Second, it typically yields a succession of feasible integer solutions, so if the computations have to be terminated as a result of time restrictions, the current best solution can be accepted as an approximate solution.

  28. General Ideas
To formalize B&B concepts, let zB again denote the objective function value of the incumbent and let zk represent the objective function value of the corresponding LP relaxation at some node k. Then, whenever an IP maximization problem is solved as an LP, one of the following four alternatives arises.

  29. 1. The LP has no feasible solution (in which case the current IP also has no feasible solution).
2. The LP has an optimal solution with zk ≤ zB (in which case the current IP cannot provide an improvement over the incumbent).
3. The optimal solution to the LP is integer valued and feasible, and yields zk > zB (in which case the solution is optimal for the current IP and provides an improved incumbent for the original IP, and thus zB is reset to zk).
4. None of the foregoing occurs—i.e., the optimal LP solution satisfies zk > zB, but is not integer valued.

  30. In each of the first three cases, the IP at node k is disposed of simply by solving the LP. That is, the IP is fathomed. A problem that is fathomed as a result of case 3 yields particularly useful information because it allows us to update the incumbent. If the problem is not fathomed and hence winds up in case 4, further exploration or branching is required. Once again we note that the relaxed problem associated with each node does not have to be an LP. A second choice could be an IP that is easier to solve than the original. Typical relaxations of the traveling salesman problem, for instance, are the assignment problem and the MST problem.

  31. B&B Subroutines
Bound: This procedure examines the relaxed problem at a particular node and tries to establish a bound on the optimal solution. It has two possible outcomes:
1. An indication that there is no feasible solution in the set of integer solutions represented by the node.
2. A value zUB—an upper bound on the objective function for all solutions at the node and its descendant nodes (for maximization problems).

  32. Approximate: This procedure attempts to find a feasible integer solution from the solution of the relaxed problem. If one is found, its objective value is a lower bound on the optimal solution for a maximization problem.
Variable Fixing: This procedure performs logical tests on the solution found at a node. The goal is to determine whether any of the free binary variables are necessarily 0 or 1 in an optimal integer solution at the current node or at its descendants, or whether they must be set to 0 or 1 to ensure feasibility as the computations progress.

  33. Branch: A procedure aimed at selecting one of the free variables for separation. Also decided is the first direction (0 or 1) to explore.
Backtrack: This is primarily a bookkeeping procedure that determines which node to explore next when the current node is fathomed. It is designed to enumerate systematically all remaining live nodes of the B&B tree while ensuring that the optimal solution to the original IP is not overlooked.

  34. We illustrate some of the concepts using the knapsack problem.

  35. Fathoming by Bounds
Assume that the enumeration process has arrived at some node in the search tree and that, in some manner, a feasible solution xB with objective value zB has been obtained. As we have mentioned, this solution is called the incumbent. Because xB is feasible for the original IP, zB is a lower bound on the value of the optimal solution zIP—that is, zB ≤ zIP.

  36. The current node represents some set of feasible solutions that could be enumerated and identified by setting the free variables equal to every possible combination. By solving a relaxed problem (or solving a problem by some other method), we obtain an upper bound zUB on the objective values for all solutions in this set (but not on the solution to the original IP unless we are at node 0). If it happens that
zUB ≤ zB        (3)

  37. then all the solutions in the set can be judged nonoptimal or at least no better than the solution currently in hand. All solutions in the tree that are descendants of the node under consideration can be fathomed, and the procedure can backtrack to another part of the tree. On the other hand, if zUB > zB, the node cannot be fathomed, and so we must continue to branch.

  38. Returning now to the knapsack problem, an upper bound can be obtained by relaxing the integrality restrictions on the variables. The resulting problem is
Maximize z = Σj cj xj
subject to Σj aj xj ≤ b, 0 ≤ xj ≤ 1, j = 1, 2, …, n

  39. which is an LP with a single constraint. Computing the benefit/cost ratio for each variable and then using a greedy algorithm will easily solve this problem. Here we are interpreting the problem as having the goal of maximizing the benefit subject to a budget constraint. The benefit/cost ratio for variable j is its objective coefficient cj divided by its constraint coefficient aj. For our example, the data are presented in Table 8.6.

  40. The solution is obtained by considering the variables in decreasing order of the benefit/cost ratio, which is (2, 3, 1). Each variable is set equal to 1 until the constraint is violated. The variable for which this occurs is then reduced to the fractional value that will just use up the budget. For the example problem, x2 = 1, x3 = 0.667, x1 = 0, with zUB = 13. The optimal objective value for the relaxed problem is the desired upper bound. Since this is a pure integer problem and the objective coefficients are integer valued, we can round this down to the nearest integer. The rounded value is indicated as follows. (4)
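A sketch of this bounding procedure follows. Table 8.6 is not reproduced in the transcript, so the coefficients below are assumed values chosen to reproduce the quoted numbers (ratio order (2, 3, 1), x2 = 1, x3 = 0.667, zUB = 13); they are not necessarily the book's data.

import math

def knapsack_lp_bound(c, a, b):
    """Fractional (LP-relaxation) bound: fill the budget in decreasing benefit/cost order."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    remaining, z = b, 0.0
    x = [0.0] * len(c)
    for j in order:
        take = min(1.0, remaining / a[j])   # fraction of item j that still fits
        x[j] = take
        z += c[j] * take
        remaining -= a[j] * take
        if remaining <= 0:
            break
    return x, math.floor(z)                 # round down: a valid integer upper bound

c = [4, 9, 6]    # assumed benefits
a = [5, 8, 6]    # assumed weights
b = 12           # assumed budget
print(knapsack_lp_bound(c, a, b))   # ([0.0, 1.0, 0.667], 13) approximately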

  41. For example, at node 4 in Figure 8.6, the set of variables fixed at 1 is Ø, the set fixed at 0 is {3}, the free variables are {1, 2}, and zB = 10. The relaxation over the free variables has the solution x1 = 0.8, x2 = 1, with objective value 12.2, which rounds down to an upper bound of 12. Since 12 > zB = 10, the node cannot be fathomed.

  42. Now, examining node 8, the set of variables fixed at 1 is Ø, the set fixed at 0 is {1, 3}, and the free variable is {2}. The corresponding relaxation has the solution x2 = 1 with upper bound zUB = 9. Because this upper bound is less than the current best solution zB = 10, we can fathom the node.
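The bounding step and the fathoming test can be combined into a compact depth-first branch-and-bound sketch like the one below. It reuses the assumed coefficients from the earlier sketches and is an illustration of the ideas rather than the book's algorithm statement.

def knapsack_bb(c, a, b):
    """Depth-first B&B for the 0-1 knapsack: branch left (x_j = 1 first), fathom by bounds."""
    n = len(c)
    order = sorted(range(n), key=lambda j: c[j] / a[j], reverse=True)
    best = {"x": None, "z": float("-inf")}          # incumbent xB and zB

    def bound(level, weight, value):
        """Fractional bound on all completions of the current partial solution."""
        z, room = value, b - weight
        for j in order[level:]:
            take = min(1.0, room / a[j])
            z += c[j] * take
            room -= a[j] * take
            if room <= 0:
                break
        return z

    def search(level, weight, value, x):
        if weight > b:                              # infeasible node: fathom
            return
        if level == n:                              # leaf: feasible integer solution
            if value > best["z"]:
                best["x"], best["z"] = x[:], value  # update the incumbent
            return
        if bound(level, weight, value) <= best["z"]:
            return                                  # fathom by bounds (zUB <= zB)
        j = order[level]                            # separation variable
        x[j] = 1                                    # branch to the left: x_j = 1 first
        search(level + 1, weight + a[j], value + c[j], x)
        x[j] = 0                                    # then the alternative x_j = 0
        search(level + 1, weight, value, x)

    search(0, 0, 0, [0] * n)
    return best

print(knapsack_bb(c=[4, 9, 6], a=[5, 8, 6], b=12))   # {'x': [1, 0, 1], 'z': 10}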
