
4.4 Geometry and solution of Lagrangean duals



  1. 4.4 Geometry and solution of Lagrangean duals • Solving the Lagrangean dual z_D = max{ z(u) : u ≥ 0 } (u is unrestricted in sign if the dualized constraints are equalities). Let X = {x^1, ..., x^K} be the (assumed finite) set of solutions of the kept constraints and z(u) = min{ cx + u(b − Ax) : x ∈ X }. Then z(u) = min_{1≤k≤K} { cx^k + u(b − Ax^k) } is the pointwise minimum of finitely many affine functions of u (piecewise linear and concave). • z(u) is nondifferentiable, but we mimic the idea used for differentiable functions. • Prop 4.5: A function f : R^m → R is concave if and only if for any u* ∈ R^m there exists a vector s ∈ R^m such that f(u) ≤ f(u*) + s(u − u*) for all u ∈ R^m. • The vector s is called the gradient of f at u* (if f is differentiable). We generalize the above property to nondifferentiable functions.
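To make the piecewise structure concrete, here is a small Python sketch with made-up data (the c, A, b and the finite set X below are illustrative, not from the lecture): z(u) is evaluated as the pointwise minimum of the affine pieces cx^k + u(b − Ax^k).

```python
import numpy as np

# Hypothetical toy data: objective c, one dualized constraint Ax >= b,
# and the kept-constraint solutions X enumerated explicitly.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
X = [np.array(x, dtype=float) for x in [(0, 0), (2, 0), (0, 2), (1, 2)]]

def z(u):
    """Lagrangean function z(u) = min_k { c x^k + u (b - A x^k) }."""
    u = np.atleast_1d(u)
    return min(c @ x + u @ (b - A @ x) for x in X)

# Here z(u) = min(2u, 2, 4, 5 - u): piecewise linear, concave,
# with kinks at u = 1 and u = 3 and dual optimum z_D = 2 on [1, 3].
for u in np.linspace(0.0, 4.0, 9):
    print(f"u = {u:4.2f}   z(u) = {z(u):6.2f}")
```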

  2. Def 4.3: Let f : R^m → R be a concave function. A vector s such that f(u) ≤ f(u*) + s(u − u*) for all u ∈ R^m is called a subgradient of f at u*. The set of all subgradients of f at u* is denoted ∂f(u*) and is called the subdifferential of f at u*. • Prop: The subdifferential ∂f(u*) is closed and convex. Pf) ∂f(u*) = ∩_{u ∈ R^m} { s : f(u) ≤ f(u*) + s(u − u*) } is the intersection of closed half spaces (in s). • If f is differentiable at u*, then ∂f(u*) = {∇f(u*)}. For convex functions, a subgradient is defined as a vector s which satisfies f(u) ≥ f(u*) + s(u − u*) for all u. • Geometric intuition: think in the (u, f(u))-space; each subgradient at u* defines an affine function that touches the graph of f at (u*, f(u*)) and lies above it.

  3. Picture: [figure not included in the transcript]

  4. For convex functions: [figure not included in the transcript]

  5. Prop 4.6: Let f : R^m → R be a concave function. A vector u* maximizes f over R^m if and only if 0 ∈ ∂f(u*). Pf) A vector u* maximizes f over R^m if and only if f(u) ≤ f(u*) = f(u*) + 0·(u − u*) for all u, which is equivalent to 0 ∈ ∂f(u*). • Prop 4.7: Let z(u) = min_{1≤k≤K} { cx^k + u(b − Ax^k) } and let E(u*) = { k : z(u*) = cx^k + u*(b − Ax^k) } be the index set of the pieces active at u*. Then, for every u*, the following relations hold: (a) For every k ∈ E(u*), the vector b − Ax^k is a subgradient of the function z at u*. (b) ∂z(u*) = conv{ b − Ax^k : k ∈ E(u*) }, i.e., a vector is a subgradient of the function z at u* if and only if it is a convex combination of the vectors b − Ax^k, k ∈ E(u*).
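A sketch of Prop 4.7 on the same made-up data as above: at a given u* we collect the active index set E(u*) and the vectors b − Ax^k; by (b), the subdifferential is their convex hull.

```python
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
X = [np.array(x, dtype=float) for x in [(0, 0), (2, 0), (0, 2), (1, 2)]]

def active_set_and_subgradients(u, tol=1e-9):
    """Return E(u) (indices of minimizing x^k) and the vectors b - A x^k, k in E(u)."""
    u = np.atleast_1d(u)
    values = [c @ x + u @ (b - A @ x) for x in X]
    zu = min(values)
    E = [k for k, v in enumerate(values) if v <= zu + tol]
    return E, [b - A @ X[k] for k in E]

# At the kink u* = 1 two pieces are active, with slopes 2 and 0,
# so the subdifferential of z at u* = 1 is the interval [0, 2].
print(active_set_and_subgradients(np.array([1.0])))
```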

  6. Pf) (a) By the definition of z, we have, for every u and every k ∈ E(u*): z(u) ≤ cx^k + u(b − Ax^k) = cx^k + u*(b − Ax^k) + (u − u*)(b − Ax^k) = z(u*) + (u − u*)(b − Ax^k), so b − Ax^k is a subgradient of z at u*. (b) We have shown that each b − Ax^k, k ∈ E(u*), is a subgradient, and since a convex combination of two subgradients is also a subgradient, it follows that conv{ b − Ax^k : k ∈ E(u*) } ⊆ ∂z(u*). Assume that there is s ∈ ∂z(u*) such that s ∉ conv{ b − Ax^k : k ∈ E(u*) } and derive a contradiction. By the separating hyperplane theorem, there exist a vector d and a scalar α, such that ds < α and d(b − Ax^k) > α for all k ∈ E(u*). (4.29) Since z(u*) < cx^k + u*(b − Ax^k) for all k ∉ E(u*), for sufficiently small θ > 0 we have z(u* + θd) = cx^k + (u* + θd)(b − Ax^k) for some k ∈ E(u*). Since z(u*) = cx^k + u*(b − Ax^k) for k ∈ E(u*), we obtain that z(u* + θd) = z(u*) + θ d(b − Ax^k) for some k ∈ E(u*).

  7. (continued) From (4.29), we have z(u* + θd) = z(u*) + θ d(b − Ax^k) > z(u*) + θα. (4.30) Since s ∈ ∂z(u*), we have z(u* + θd) ≤ z(u*) + θ ds (def'n of subgradient). From (4.29), ds < α, so z(u* + θd) < z(u*) + θα, contradicting (4.30). ∎

  8. Subgradient algorithm • For differentiable functions, the gradient ∇f(u) is the direction of steepest ascent of f at u. But for nondifferentiable functions, the direction of a subgradient may not be an ascent direction. However, by moving along the direction of a subgradient with a suitably small step, we can move closer to the maximum point.

  9. Ex: f: a concave function for which the direction of a subgradient may not be an ascent direction for f, yet a small step along the subgradient moves us closer to the maximum point; see the numerical sketch below.
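A hypothetical numerical illustration of this phenomenon (not the instructor's figure): for f(x1, x2) = −|x1| − 2|x2|, which is concave and maximized at the origin, the vector (−1, 2) is a subgradient at (1, 0); a small step along it decreases f yet brings the point closer to the maximizer.

```python
import numpy as np

# Hypothetical example: f(x1, x2) = -|x1| - 2|x2|, concave, maximized at (0, 0).
f = lambda x: -abs(x[0]) - 2 * abs(x[1])

x = np.array([1.0, 0.0])       # current point, f(x) = -1
g = np.array([-1.0, 2.0])      # a subgradient of f at x (second component may be anything in [-2, 2])

for t in (0.1, 0.2, 0.3):
    x_new = x + t * g
    print(f"t={t}: f drops to {f(x_new):.2f} (< {f(x):.2f}), "
          f"but the distance to the maximizer shrinks to {np.linalg.norm(x_new):.3f} (< 1)")
```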

  10. The subgradient optimization algorithm. Input: a nondifferentiable concave function z(u). Output: a maximizer of z(u) subject to u ≥ 0. Algorithm: • Step 1: Choose a starting point u^1 ≥ 0; let t = 1. • Step 2: Given u^t, check whether 0 ∈ ∂z(u^t). If so, then u^t is optimal and the algorithm terminates. Else, choose a subgradient s^t of the function z at u^t (e.g., s^t = b − Ax^k for a minimizing x^k ∈ E(u^t)). • Step 3: Let u^{t+1} = max{ u^t + θ_t s^t, 0 } componentwise (projection onto u ≥ 0), where θ_t is a positive step size parameter. Increment t and go to Step 2.
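A minimal sketch of this projected subgradient iteration on the earlier made-up data, using the diminishing step size θ_t = 1/t discussed on the next slide; the projection onto {u ≥ 0} is a componentwise maximum with 0.

```python
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
X = [np.array(x, dtype=float) for x in [(0, 0), (2, 0), (0, 2), (1, 2)]]

def z_and_subgradient(u):
    """Evaluate z(u) and return one subgradient b - A x^k for a minimizing x^k."""
    values = [c @ x + u @ (b - A @ x) for x in X]
    k = int(np.argmin(values))
    return values[k], b - A @ X[k]

def projected_subgradient(u0, iters=50):
    u, best = np.asarray(u0, dtype=float), -np.inf
    for t in range(1, iters + 1):
        zu, g = z_and_subgradient(u)
        best = max(best, zu)
        theta = 1.0 / t                      # diminishing step size (rule (a) on the next slide)
        u = np.maximum(u + theta * g, 0.0)   # move along the subgradient, project onto u >= 0
    return best, u

# On this toy instance the dual optimum z_D = 2 is reached after two iterations
# (the subgradient becomes 0 at u = 2, i.e., 0 is in the subdifferential there).
print(projected_subgradient(np.array([0.0])))
```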

  11. Choosing the step lengths: • Thm: (a) If θ_t → 0 and Σ_{t=1}^∞ θ_t = ∞, then z(u^t) converges to the optimal value z_D of the Lagrangean dual. (b) If θ_t = θ_0 ρ^t for some parameter 0 < ρ < 1, then z(u^t) → z_D if θ_0 and ρ are sufficiently large. (c) If θ_t = ε_t (ẑ − z(u^t)) / ||s^t||^2, where 0 < ε_t < 2 and ẑ = z_D, then z(u^t) → z_D, or the algorithm finds u^t with z(u^t) = z_D for some finite t. • (a) guarantees convergence, but is slow (e.g., θ_t = 1/t). (b) In practice, one may halve the value of θ after a fixed number of iterations. (c) z_D is typically unknown; one may use a good primal upper bound ẑ in place of z_D. If the iterates do not converge, decrease ε_t. • In practice, the stopping condition 0 ∈ ∂z(u^t) is rarely met. Usually we only find an approximate optimal solution fast and resort to branch-and-bound. Convergence is not monotone. Solving the Lagrangean dual as a linear program (e.g., by constraint generation) may give better results (monotone convergence).
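The three step-length rules might be coded as follows (the parameter values θ_0, ρ, ε and the target ẑ are illustrative choices, not prescribed by the lecture):

```python
def step_divergent(t):
    """Rule (a): theta_t -> 0 with sum theta_t = infinity, e.g. theta_t = 1/t.
    Guaranteed but slow convergence."""
    return 1.0 / t

def step_geometric(t, theta0=2.0, rho=0.95):
    """Rule (b): theta_t = theta0 * rho**t with 0 < rho < 1; works if theta0 and rho
    are large enough.  A common variant halves theta after a fixed number of iterations."""
    return theta0 * rho ** t

def step_polyak(z_u, g, z_target, eps=1.0):
    """Rule (c): theta = eps * (z_target - z(u)) / ||g||^2 with 0 < eps < 2, where
    z_target is z_D if known, otherwise a good primal bound (decrease eps if no progress)."""
    norm_sq = sum(gi * gi for gi in g)
    return eps * (z_target - z_u) / max(norm_sq, 1e-12)
```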

  12. Ex: Traveling salesman problem. For TSP, we dualize the degree constraints Σ_{e ∈ δ(i)} x_e = 2 for all nodes i except node 1, so the remaining problem is a minimum one-tree problem. The step direction is s^t with s^t_i = 2 − Σ_{e ∈ δ(i)} x_e(u^t), where x(u^t) is an optimal one-tree for the current multipliers u^t; the step size using rule (c) is θ_t = ε_t (ẑ − z(u^t)) / ||s^t||^2. • Note that the i-th coordinate of the subgradient direction is two minus the number of edges incident to node i in the optimal one-tree. We do not have the restriction u ≥ 0 here (no projection is needed), since the dualized constraints are equalities.
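A sketch of evaluating z(u) and the subgradient for this relaxation with networkx (the graph, its 'weight' attributes, and the helper name one_tree_bound are assumptions for illustration): the one-tree is a minimum spanning tree on V \ {1} plus the two cheapest edges at node 1, computed under the modified costs c_ij − u_i − u_j.

```python
import networkx as nx

def one_tree_bound(G, u, root=1):
    """Held-Karp style bound: z(u) = 2*sum(u_i) + cost of a minimum one-tree under
    the modified costs c_ij - u_i - u_j (u is a dict over nodes != root; u_root = 0).
    Assumes every edge of G has a 'weight' attribute."""
    H = nx.Graph()
    for i, j, data in G.edges(data=True):
        H.add_edge(i, j, weight=data["weight"] - u.get(i, 0.0) - u.get(j, 0.0))
    others = [v for v in G.nodes if v != root]
    T = nx.minimum_spanning_tree(H.subgraph(others))            # spanning tree on V \ {root}
    root_edges = sorted(H.edges(root, data=True), key=lambda e: e[2]["weight"])[:2]
    one_tree_edges = list(T.edges) + [(i, j) for i, j, _ in root_edges]
    cost = T.size(weight="weight") + sum(d["weight"] for _, _, d in root_edges)
    deg = {v: 0 for v in G.nodes}
    for i, j in one_tree_edges:
        deg[i] += 1
        deg[j] += 1
    subgrad = {v: 2 - deg[v] for v in others}                   # 2 minus one-tree degree
    return cost + 2 * sum(u.values()), subgrad

# One subgradient step (u is unrestricted, since the dualized constraints are equalities):
#   z_u, g = one_tree_bound(G, u)
#   u = {v: u[v] + theta * g[v] for v in u}
```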

  13. Lagrangean heuristic and variable fixing: • The solution obtained from solving the Lagrangean relaxation may not be feasible to IP, but it is often close to a feasible solution of IP. We can obtain a feasible solution to IP by applying heuristic procedures to it, which then gives a good upper bound on the optimal value. This is also called a 'primal heuristic'. • We may also fix the values of some variables using information from the Lagrangean relaxation (refer to W, pp. 177-178).

  14. Choosing a Lagrangean dual: how do we determine which constraints to relax? Criteria: • strength of the Lagrangean dual bound, • ease of solution of the relaxation z(u), • ease of solution of the Lagrangean dual problem max_u z(u). • Ex: Generalized Assignment Problem (max problem) (refer to W p. 179): Z = max Σ_i Σ_j c_ij x_ij subject to Σ_i x_ij = 1 for j = 1, ..., n, Σ_j a_ij x_ij ≤ b_i for i = 1, ..., m, x_ij ∈ {0, 1}.

  15. Dualize both sets of constraints: z_1(u, v) = max{ Σ_i Σ_j (c_ij − u_j − v_i a_ij) x_ij + Σ_j u_j + Σ_i v_i b_i : x_ij ∈ {0, 1} }, where v ≥ 0 and u is unrestricted; the inner problem is solved by setting x_ij = 1 exactly when c_ij − u_j − v_i a_ij > 0. • Dualize the first set of constraints (the assignment constraints): z_2(u) = max{ Σ_i Σ_j (c_ij − u_j) x_ij + Σ_j u_j : Σ_j a_ij x_ij ≤ b_i for i = 1, ..., m, x_ij ∈ {0, 1} }, where u is unrestricted; the inner problem separates into m 0-1 knapsack problems, one for each machine i. • Dualize the knapsack constraints: z_3(v) = max{ Σ_i Σ_j (c_ij − v_i a_ij) x_ij + Σ_i v_i b_i : Σ_i x_ij = 1 for j = 1, ..., n, x_ij ∈ {0, 1} }, where v ≥ 0; the inner problem separates by job.

  16. For each job j, the inner problem of z_3(v) is max{ Σ_i (c_ij − v_i a_ij) x_ij : Σ_i x_ij = 1, x_ij ∈ {0, 1} }: assign job j to a machine i with the largest reduced profit c_ij − v_i a_ij, so z_3(v) can be solved by inspection. Calculating z_3(v) looks easier than calculating z_2(u), since there are m dual variables compared to n for z_2(u). • To find z_2(u) we need to solve m 0-1 knapsack problems. Also note that the information that we may obtain while solving the knapsacks cannot be stored and used for subsequent optimization (the profits change with u).
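A sketch of evaluating the knapsack-dualized bound z_3(v) by inspection (the instance data and the function name are made up): each job is assigned to the machine with the largest reduced profit c_ij − v_i a_ij, and the subgradient component for machine i is b_i − Σ_j a_ij x_ij.

```python
import numpy as np

def gap_dual_knapsacks_relaxed(c, a, b_cap, v):
    """Bound from dualizing the knapsack constraints of the (max) GAP:
    z_3(v) = sum_j max_i (c_ij - v_i a_ij) + sum_i v_i b_i,  v >= 0.
    Each job is assigned to its best machine under reduced profits, by inspection."""
    reduced = c - v[:, None] * a          # reduced profit of (machine i, job j)
    assign = reduced.argmax(axis=0)       # exactly one machine per job (equality constraints)
    value = reduced.max(axis=0).sum() + v @ b_cap
    subgrad = b_cap - np.array([a[i, assign == i].sum() for i in range(c.shape[0])])
    return value, assign, subgrad

# Hypothetical instance with 2 machines and 3 jobs:
c = np.array([[6.0, 4.0, 5.0], [5.0, 6.0, 3.0]])
a = np.array([[3.0, 2.0, 4.0], [2.0, 3.0, 2.0]])
b_cap = np.array([5.0, 4.0])
print(gap_dual_knapsacks_relaxed(c, a, b_cap, v=np.array([0.5, 0.5])))
```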

  17. We may also solve the Lagrangean dual by constraint generation (NW pp. 411-412): • Recall that z_D = max{ η : η ≤ cx^k + u(b − Ax^k) for k = 1, ..., K, u ≥ 0 }. • Given a relaxed master problem over a subset of these constraints with optimal solution (u*, η*), calculate z(u*) = min{ cx + u*(b − Ax) : x ∈ X }. If z(u*) ≥ η*, stop: u* is an optimal solution and η* = z_D. If z(u*) < η*, an inequality is violated. • If the inner minimization is unbounded, there is an extreme ray w of conv(X) such that (c − u*A)w < 0, hence the constraint (c − uA)w ≥ 0 is violated and is added. • Otherwise there is an extreme point x^k such that cx^k + u*(b − Ax^k) < η*. Since η* > cx^k + u*(b − Ax^k), the constraint η ≤ cx^k + u(b − Ax^k) is violated and is added. • Note that max/min are interchanged in NW.
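A sketch of this constraint-generation loop using scipy's LP solver on the earlier toy data (the artificial bound M keeps the first master LP bounded, and ray handling is omitted because X is finite here):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
X = [np.array(x, dtype=float) for x in [(0, 0), (2, 0), (0, 2), (1, 2)]]
m, M = len(b), 100.0                      # number of multipliers, artificial bound

def solve_master(cuts):
    """max eta  s.t.  eta <= c x^k + u(b - A x^k) for generated x^k,  0 <= u <= M."""
    A_ub = [np.append(-(b - A @ x), 1.0) for x in cuts]       # variables (u, eta)
    b_ub = [c @ x for x in cuts]
    res = linprog(c=np.append(np.zeros(m), -1.0), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, M)] * m + [(-M, M)], method="highs")
    return res.x[:m], res.x[m]            # (u*, eta*)

cuts = [X[0]]                             # start with one generated point x^k
while True:
    u_star, eta_star = solve_master(cuts)
    values = [c @ x + u_star @ (b - A @ x) for x in X]        # evaluate z(u*)
    k = int(np.argmin(values))
    if values[k] >= eta_star - 1e-9:      # z(u*) >= eta*: u* optimal, eta* = z_D
        print("z_D =", eta_star, "at u =", u_star)
        break
    cuts.append(X[k])                     # add violated cut: eta <= c x^k + u(b - A x^k)
```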

  18. Nonlinear Optimization Problems • Geometric understanding of the strength of the bounds provided by the Lagrangean dual. • Let f, g_1, ..., g_m : R^n → R be continuous functions and let X ⊆ R^n. Consider: minimize f(x) subject to g_j(x) ≤ 0, j = 1, ..., m, x ∈ X. (4.31) Let Z* be the optimal cost. Consider the Lagrangean function Z(u) = min_{x ∈ X} { f(x) + Σ_j u_j g_j(x) }. • For all u ≥ 0, Z(u) ≤ Z*, and Z(u) is a concave function. Lagrangean dual: Z_D = max{ Z(u) : u ≥ 0 }. • Let Y = { (g_1(x), ..., g_m(x), f(x)) : x ∈ X } ⊆ R^{m+1}. Problem (4.31) can be restated as: minimize y_{m+1} subject to (y_1, ..., y_m, y_{m+1}) ∈ Y, y_j ≤ 0 for j = 1, ..., m.

  19. Figure 4.7: The geometric interpretation of the Lagrangean dual

  20. Given that Z(u) = min_{x ∈ X} { f(x) + Σ_j u_j g_j(x) }, we have, for a fixed u ≥ 0 and for all x ∈ X, f(x) + Σ_j u_j g_j(x) ≥ Z(u), i.e., y_{m+1} + Σ_j u_j y_j ≥ Z(u) for all (y_1, ..., y_{m+1}) ∈ Y. Geometrically, this means that the hyperplane { y ∈ R^{m+1} : y_{m+1} + Σ_j u_j y_j = Z(u) } lies below the set Y. For y_1 = ... = y_m = 0, we obtain y_{m+1} = Z(u); that is, Z(u) is the intercept of the hyperplane with the vertical axis. Maximizing Z(u) thus corresponds to finding a hyperplane of this form which lies below the set Y and whose intercept with the vertical axis is the largest. • Thm 4.10: The value Z_D of the Lagrangean dual is equal to the value of the following optimization problem: minimize y_{m+1} subject to (y_1, ..., y_{m+1}) ∈ conv(Y), y_j ≤ 0 for j = 1, ..., m.

  21. Ex 4.7 (Ex 4.3 revisited): X = {(1, 0), (2, 0), (1, 1), (2, 1), (0, 2), (1, 2), (2, 2), (1, 3), (2, 3)}, and the corresponding points of Y = {(g(x), f(x)) : x ∈ X} are {(3, -2), (6, -3), (2, -1), (5, -2), (-2, 1), (1, 0), (4, -1), (0, 1), (3, 0)}. Plotting Y and conv(Y) as in Figure 4.7 shows the duality gap between Z* and Z_D.
