
EXTREMAL VALUES OF TOLERANCES IN COMBINATORIAL OPTIMIZATION








  1. EXTREMAL VALUES OF TOLERANCES IN COMBINATORIAL OPTIMIZATION Boris Goldengorin Nizhny Novgorod Branch of Higher School of Economics, Russian Federation and University of Groningen, The Netherlands

  2. Outline of this Talk • Elementary introduction • Sensitivity analysis and tolerances • Non-trivial tolerances are invariants for the set of optimal solutions • Why extremal values of tolerances • Partial enumeration by means of tolerances • Computational study • Concluding remarks

  3. Sensitivity analysis • After an optimal solution to a combinatorial optimization problem has been determined, a natural next step is to apply sensitivity analysis, sometimes also referred to as post-optimality analysis or what-if analysis.

  4. Elementary Introduction (upper tolerances for elements in an optimal solution) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 1. Find a smallest single element in E. Solution 1. It is 3, attained at the first place, (1). Related question 1. By how much are we allowed to increase the optimal value 3 so that the element (1) remains optimal, keeping all other entries unchanged? It is just the difference between an optimal value in E\{3} (the second optimal value) and the first optimal value 3, i.e. u(1) = 4 – 3 = 1. We call u(1) the upper tolerance of the given optimal solution (1), with value 1.

  5. Elementary Introduction (Problem 1 cont.) (lower tolerances for elements in an optimal solution) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 1. Find a smallest single element in E. Solution 1. It is 3, attained at the first place, (1). Related question 2. By how much are we allowed to decrease the optimal value 3 so that the element (1) remains optimal? By as much as we like, i.e. infinity. We call l(1) = ∞ the lower tolerance of the given optimal solution (1), with value ∞.

  6. Elementary Introduction (lower tolerances for elements outside of an optimal solution) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 1. Find a smallest single element in E. Solution 1. It is 3, attained at the first place, (1). Related question 3. By how much are we allowed to decrease an element in E\{3} so that the element (1) remains optimal, keeping all other entries unchanged? It is just the difference between such an element, say 5, and the first optimal value 3, i.e. l(2) = 5 – 3 = 2. We call l(2) the lower tolerance of element (2) w.r.t. the given optimal solution (1). Similarly, the lower tolerances of all other elements in E\{3} are l(3) = 1, l(4) = 9, l(5) = 4, l(6) = 7.
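The calculations on the last three slides can be reproduced by brute force. Below is a minimal Python sketch (illustrative only, not part of the original talk) computing the upper tolerance of the optimal element and the lower tolerances of all other elements for Problem 1; indices here are 0-based, while the slides use 1-based positions.

```python
E = [3, 5, 4, 12, 7, 10]
opt = min(range(len(E)), key=E.__getitem__)      # index of the smallest element

# Upper tolerance of the optimal element: second-smallest value minus optimum.
u_opt = min(E[i] for i in range(len(E)) if i != opt) - E[opt]

# Lower tolerance of every other element: its value minus the optimum.
lower = {i: E[i] - E[opt] for i in range(len(E)) if i != opt}

print(u_opt)   # 1  (= 4 - 3)
print(lower)   # {1: 2, 2: 1, 3: 9, 4: 4, 5: 7}
```

The printed lower tolerances match l(2) = 2, l(3) = 1, l(4) = 9, l(5) = 4, l(6) = 7 from the slide after shifting to 0-based indices.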

  7. Elementary Introduction (upper tolerances for elements in an optimal solution) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 2. Find a smallest sum of two elements in E. Solution 1. It is 7, attained at the optimal solution (1,3). Related question 1. By how much are we allowed to increase the weight of the element 3 so that the optimal solution (1,3) remains optimal, keeping all other entries unchanged? To find another (second) optimal solution it is enough to replace either the element (1) or (3) by the element with the smallest weight in E\{3, 4}, i.e. c(2) = 5. Hence, u(1) = 5 – 3 = 2 and u(3) = 5 – 4 = 1 w.r.t. the optimal solution (1,3). We call u(1), u(3) the upper tolerances for the given optimal solution (1,3).

  8. Elementary Introduction (lower tolerances for elements outside of an optimal solution) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 2. Find the smallest sum of two elements in E. Solution 1. It is 7, attained at the optimal solution (1,3). Related question 2. By how much are we allowed to decrease the weights of elements outside of the optimal solution (1,3) so that (1,3) remains optimal, keeping all other entries unchanged? As is easy to see, any element outside (1,3) may be reduced down to the largest value among the elements of the given optimal solution, i.e. c(3) = 4. Hence, the lower tolerances are l(2) = 5 – 4 = 1, l(4) = 12 – 4 = 8, l(5) = 7 – 4 = 3, l(6) = 10 – 4 = 6, and the upper tolerances are u(2) = u(4) = u(5) = u(6) = ∞.

  9. Elementary Introduction (extremal values of upper and lower tolerances) Example. We are given an ordered set of numbers, say E = {3, 5, 4, 12, 7, 10}. Problem 2. Find the smallest sum of two elements in E. Solution 1. It is 7, attained at the optimal solution (1,3). Related questions 3. – Is the smallest value among all upper tolerances equal to the smallest value among all lower tolerances? – Is the largest value among all upper tolerances equal to the largest value among all lower tolerances? As is easy to see, min{u(1), u(3)} = u(3) = 1 and min{l(2), l(4), l(5), l(6)} = l(2) = 1. Hence, the smallest upper and lower tolerances are equal! Does such an equality hold for any other combinatorial optimization problem?
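The equality observed on this slide can be verified by enumerating all feasible pairs. The sketch below (an illustration, 0-based indices) computes the upper tolerances of the elements in the optimal pair and the lower tolerances of the remaining elements, then checks that the smallest values coincide:

```python
from itertools import combinations

E = [3, 5, 4, 12, 7, 10]
pairs = list(combinations(range(len(E)), 2))   # feasible solutions: index pairs
cost = lambda S: E[S[0]] + E[S[1]]

S_star = min(pairs, key=cost)                  # optimal pair (0-based: (0, 2))
f_star = cost(S_star)                          # optimal value 7

# Upper tolerances inside S*, lower tolerances outside S*.
upper = {e: min(cost(S) for S in pairs if e not in S) - f_star for e in S_star}
outside = [e for e in range(len(E)) if e not in S_star]
lower = {e: min(cost(S) for S in pairs if e in S) - f_star for e in outside}

assert min(upper.values()) == min(lower.values()) == 1
```

The dictionaries reproduce the slide values u(1) = 2, u(3) = 1 and l(2) = 1, l(4) = 8, l(5) = 3, l(6) = 6 (1-based positions).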

  10. Sensitivity analysis • After an optimal solution to a combinatorial optimization problem has been determined, a natural next step is to apply sensitivity analysis, sometimes also referred to as post-optimality analysis or what-if analysis. • Sensitivity analysis is also a well-established topic in linear programming and mixed integer programming. The purpose of sensitivity analysis is to determine how the optimality of the given optimal solution depends on the input data.

  11. Why Sensitivity Analysis • In many cases the data used are inexact or uncertain. In such cases sensitivity analysis is necessary to determine the credibility of the optimal solution and conclusions based on that solution.

  12. Why Sensitivity Analysis • In many cases the data used are inexact or uncertain. In such cases sensitivity analysis is necessary to determine the credibility of the optimal solution and conclusions based on that solution. • Another reason for performing sensitivity analysis is that sometimes rather significant considerations have not been built into the model due to the difficulty of formulating them. Having solved the simplified model, the decision maker wants to know how well the optimal solution fits in with the other considerations.

  13. Tolerances • The most interesting topic of sensitivity analysis is the special case when the value of a single element in the optimal solution is subject to change.

  14. Tolerances • The most interesting topic of sensitivity analysis is the special case when the value of a single element in the optimal solution is subject to change. • The goal of making such perturbations is to determine the tolerances, defined as the maximum change of a given individual cost (weight, distance, time, etc.) that preserves all other entries of the input file and the optimality of the given optimal solution.

  15. Tolerances vs. Algorithms Today we are going to discuss how to apply tolerances of easily solvable relaxations in order to improve the efficiency of either exact or approximation algorithms for either Polynomially Solvable or NP-hard problems in Combinatorial Optimization.

  16. Upper Tolerances • An upper tolerance is the largest increase in the cost of an element such that our current optimal solution remains optimal, keeping all other elements unchanged.

  17. Upper Tolerances • Upper tolerance: the largest increase in the cost of an element such that our current optimal solution remains optimal, keeping all other elements unchanged. • It corresponds to the increment of a cheapest solution after removing a single element from the optimal solution.

  18. Lower Tolerances • A lower tolerance is the largest decrease in the cost of an element such that our current optimal solution remains optimal keeping all other elements unchanged.

  19. Our class of combinatorial optimization problems • Additive linear functions are objective functions; • A feasible solution becomes infeasible after deletion (insertion) of a single element from (to) the feasible solution (non-embedded set of feasible solutions). • Examples of problems defined on simple weighted graphs with non-negative weights: – Polynomially solvable: Shortest Path, Minimum Spanning Tree (1-Tree), Assignment (Matching), Max-Flow-Min-Cut; – NP-hard: Traveling Salesman, Maximum Weighted Independent Set, Linear Ordering Problem, Max-Cut, Capacitated Vehicle Routing.

  20. Libura's Theorem for the TSP, Discrete Applied Mathematics, 1991 Let f(S*) = min{f(S) : S ∈ D} be the optimal value, attained at an optimal solution S*. For e ∈ S*, the upper tolerance is u(e) = min{f(S) : S ∈ D−(e)} − f(S*), and for e ∉ S*, u(e) = ∞. For e ∉ S*, the lower tolerance is l(e) = min{f(S) : S ∈ D+(e)} − f(S*), and for e ∈ S*, l(e) = ∞.
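For any problem whose feasible set D can be enumerated explicitly, these definitions can be evaluated directly. The sketch below is an illustration only (D is passed in as a list of element tuples, `f` as a cost function) and mirrors the theorem's formulas term by term:

```python
from itertools import combinations

def tolerances(D, f):
    """Brute-force u(e) and l(e) per Libura's definitions.
    D: explicit list of feasible solutions; f: cost function."""
    f_star = min(f(S) for S in D)
    S_star = min(D, key=f)

    def u(e):
        if e not in S_star:
            return float('inf')                    # trivial value
        D_minus = [S for S in D if e not in S]     # D-(e): solutions avoiding e
        return min(map(f, D_minus)) - f_star if D_minus else float('inf')

    def l(e):
        if e in S_star:
            return float('inf')                    # trivial value
        D_plus = [S for S in D if e in S]          # D+(e): solutions containing e
        return min(map(f, D_plus)) - f_star if D_plus else float('inf')

    return u, l

# Demo on the pairs problem from the earlier slides (0-based indices).
E = [3, 5, 4, 12, 7, 10]
D = list(combinations(range(len(E)), 2))
u, l = tolerances(D, lambda S: sum(E[i] for i in S))
```

Applied to the earlier example, it reproduces u(1) = 2, u(3) = 1, l(2) = 1, l(4) = 8 in the slides' 1-based notation.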

  21. Upper and Lower Basic Solutions min{f(S) : S ∈ D−(e)} = f[S*−(e)] ⇒ upper basic solutions D*−(e) = {S*−(e) : e ∈ S*}; min{f(S) : S ∈ D+(e)} = f[S*+(e)] ⇒ lower basic solutions D*+(e) = {S*+(e) : e ∉ S*}

  22. (Non-)Trivial Tolerances • The upper tolerance of an element outside of the given optimal solution is either zero or infinity. • The lower tolerance of an element of the given optimal solution is either zero or infinity. • The above-mentioned infinity values of upper and lower tolerances are called trivial, and all other tolerances are called non-trivial.

  23. Non-trivial tolerances are invariants for the set of optimal solutions Non-trivial tolerances are independent of the chosen optimal solution.

  24. Application of Lower Tolerances The first application of lower tolerances (called alpha-values) was made by Helsgaun (EJOR 2000) in the modified Lin-Kernighan heuristic for the Symmetric TSP (STSP).

  25. Computing Tolerances for the Assignment and 1-Tree Problems In the case of the Assignment Problem (AP) or the 1-Tree Problem (1-TP), an additional problem must be solved (Libura, 1991) to compute each single non-trivial tolerance value.

  26. Helsgaun's observation for the STSP (EJOR 2000) • An optimal tour normally contains between 70% and 80% of the edges of a minimum 1-tree. • The 5 alpha-nearest edges (those with the smallest lower tolerances) are used within the Lin-Kernighan heuristic.

  27. How to compute the five smallest alpha-values (lower tolerances) • Helsgaun has designed an algorithm with quadratic time and linear space complexity. • Note that Kruskal's algorithm returns all upper and lower tolerances together with an optimal Minimum Spanning Tree. • We have suggested studying the properties of upper and lower tolerances with the purpose of reducing Helsgaun's computational complexity.

  28. Assignment Problem (1) Find a set of cycles such that each location is in exactly one cycle and the sum of the lengths of the arcs in the cycles is as small as possible.
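Equivalently, the AP asks for a minimum-cost permutation of the locations, since every permutation decomposes into such a cycle cover. A tiny brute-force sketch (for toy instances only; the Hungarian algorithm discussed later is the polynomial method, and the cost matrix below is a made-up example):

```python
from itertools import permutations

def solve_ap(C):
    """Brute-force AP: each permutation is a set of cycles covering
    every location exactly once; return the cheapest one."""
    n = len(C)
    best = min(permutations(range(n)),
               key=lambda p: sum(C[i][p[i]] for i in range(n)))
    return best, sum(C[i][best[i]] for i in range(n))

C = [[4, 2, 8], [4, 3, 7], [3, 1, 6]]    # hypothetical 3x3 cost matrix
perm, value = solve_ap(C)
```

Enumerating all n! permutations is obviously only feasible for very small n, which is exactly why polynomial algorithms such as the Hungarian method matter.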

  29. How many elements are in and outside of an optimal solution • For the following problems the number of elements in an optimal (feasible) solution is linear, while the number of elements outside of an optimal solution is quadratic: • TSP, AP, Minimum Spanning Tree, 1-Tree, and Shortest Path Problems.

  30. Smallest upper and smallest lower tolerances • The smallest value of the non-trivial upper tolerances, u_s, equals the smallest value of the non-trivial lower tolerances, l_s, i.e. u_s = l_s. • If u_s > 0, then there is no feasible solution of the original COP whose value f satisfies f(S*) < f < f(S*) + u_s.

  31. Upper and Lower Basic Solutions Sets for All Optimal Solutions G is the set of elements with the largest upper tolerance defined on the intersection of all optimal solutions. H is the set of elements with the largest lower tolerance defined on the complement to the intersection of all optimal solutions. A is the set of upper basic solutions for all optimal solutions. B is the set of all complements to lower basic solutions for all optimal solutions.

  32. Connected Feasible Solutions The set of feasible solutions is called connected if both conditions a) and b) hold: • A ∩ H is non-empty; • B ∩ G is non-empty.

  33. Sufficient Conditions of Equality for Largest Upper and Lower Tolerances (Goldengorin and Sierksma, 2003) If the set of feasible solutions is connected, then the largest values of upper and lower tolerances are equal (see, e.g., Goldengorin, Jager and Molitor, Journal of Computer Science, 2006).

  34. Necessary and Sufficient Conditions of Equality for Largest Upper and Lower Tolerances Definition. An upper covering is the union of all upper basic solutions minus the chosen optimal solution. Main Theorem. The largest values of upper and lower tolerances are equal iff the upper covering equals the complement of the chosen optimal solution in the ground set.

  35. Enumeration Algorithms for the AP How to design and improve the Hungarian Algorithm based on the non-trivial upper and lower tolerances

  36. Hungarian Algorithm 1. Reduce rows and columns by their smallest entries (creating at least one zero in each row and column: the reduced matrix). 2. Cover all zeros in the reduced matrix by a minimum number k of lines (horizontal and vertical), applying König-Egerváry's theorem. 3. If k = n, output an optimal AP solution; otherwise reduce all uncovered entries by the smallest uncovered entry (adding it to the doubly covered entries) and return to step 2.
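Step 1 can be sketched in plain Python. This shows the row and column reductions only, not a full Hungarian implementation, and the cost matrix is a made-up example rather than the Machineco data from the next slides:

```python
def reduce_matrix(C):
    """Row reduction then column reduction: guarantees at least one
    zero in every row and every column of the reduced matrix."""
    C = [row[:] for row in C]                   # work on a copy
    for row in C:                               # subtract each row's minimum
        m = min(row)
        for j in range(len(row)):
            row[j] -= m
    for j in range(len(C[0])):                  # subtract each column's minimum
        m = min(row[j] for row in C)
        for row in C:
            row[j] -= m
    return C

C = [[14, 5, 8, 7], [2, 12, 6, 5], [7, 8, 3, 9], [2, 4, 6, 10]]
R = reduce_matrix(C)
```

After the reduction, every row and every column of `R` contains a zero, which is the precondition for the line-covering step 2.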

  37. [Figures: cost matrix for Machineco with row minima; the cost matrix after row minima are subtracted]

  38. [Figures: the cost matrix after the column minimum is subtracted; four lines are required, so an optimal solution is available]

  39. Tolerance Based Algorithm for the AP [Figure: bipartite graph of machines M1–M4 and jobs J1–J4]

  40. Tolerance Based Algorithm for the AP [Figure: bipartite graphs of machines M1–M4 and jobs J1–J4 with tolerance values 3, 3, 2 marking good and bad arcs, and the optimal solution to the AP]

  41. Asymmetric Traveling Salesman Problem (ATSP) Suppose locations are given, together with the distance between each pair.

  42. Asymmetric Traveling Salesman Problem (2) Objective: find a shortest tour visiting all locations exactly once.

  43. Asymmetric Traveling Salesman Problem (3) • In the worst case, exponentially many solutions have to be searched through. • The problem is hard to solve to optimality. • It occurs frequently in practice.

  44. Enumeration of AP solutions w.r.t. an optimal ATSP solution [Figure: number of feasible solutions vs. AP solutions]

  45. Bottleneck Tolerances [Figure: two tour diagrams, I and II, with arc weights]

  46. A Largest Jump [Figure: two tour diagrams, I and II; AP solutions vs. number of feasible solutions]

  47. Branch and Bound • Branch and Bound (BnB) is a method to solve NP-hard problems such as the ATSP. • BnB solves an easy relaxation of the problem; if its solution is not feasible for the original problem, the problem is partitioned into subproblems (branching). • A subproblem can be discarded if its lower bound is higher than the value of the current best solution. • Subproblems are listed in a search tree.
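The scheme above can be sketched for the ATSP with a deliberately crude lower bound (current cost plus each unvisited city's cheapest outgoing arc). Everything here is illustrative, including the distance matrix; it is not one of the cited state-of-the-art algorithms:

```python
import math

def atsp_bnb(dist):
    """Depth-first Branch and Bound for a tiny ATSP instance."""
    n = len(dist)
    best = {'cost': math.inf, 'tour': None}

    def bound(visited, cost):
        # Valid lower bound: cost so far plus the cheapest outgoing arc
        # of every city not yet visited.
        return cost + sum(min(dist[i][j] for j in range(n) if j != i)
                          for i in range(n) if i not in visited)

    def dfs(tour, cost):
        if len(tour) == n:                        # close the tour
            total = cost + dist[tour[-1]][tour[0]]
            if total < best['cost']:
                best['cost'], best['tour'] = total, tour[:]
            return
        if bound(set(tour), cost) >= best['cost']:
            return                                # prune this subproblem
        last = tour[-1]
        for j in range(n):                        # branch on the next city
            if j not in tour:
                dfs(tour + [j], cost + dist[last][j])

    dfs([0], 0)
    return best['cost'], best['tour']

dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
cost, tour = atsp_bnb(dist)
```

Tighter relaxations such as the AP bound used by the algorithms on the next slide prune far more of the tree than this row-minimum bound.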

  48. State-of-the-art BnB algorithms for the ATSP • Miller & Pekny (1991) • Carpaneto et al. (1995) • Fischetti & Toth (1997) Best-first-search algorithms: search the most promising subproblem first.

  49. Depth First Search (DFS) • Search strategy in BnB. • Search through the most recently generated subproblem first. • Used for very difficult problems because: • Only linear memory space is required; • A good solution is likely even if terminated prematurely (Truncated BnB; see for example Zhang, 2000).

  50. Branching rule (Carpaneto, 1980) [Figure: delete the red arc a to create subproblem 1]
