
GRASP: A Sampling Meta-Heuristic

Presentation Transcript


  1. GRASP: A Sampling Meta-Heuristic Topics • What is GRASP • The Procedure • Applications • Merit

  2. What is GRASP • GRASP: Greedy Randomized Adaptive Search Procedure • Random Construction: TSP: randomly select the next city to add • High Solution Variance • Low Solution Quality • Greedy Construction: TSP: select the nearest city to add • High Solution Quality • Low Solution Variance • GRASP tries to combine the advantages of random and greedy solution construction

  3. The Knapsack Example • Knapsack problem • Backpack: 8 units of space, 4 items to pick • Item Value in terms of dollars: 2,5,7,9 • Item Cost in terms of space units: 1,3,5,7 • Construction Heuristic • Pick the Most Valuable Item • Pick the Most Valuable Per Unit
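The two construction heuristics can be sketched as follows (a minimal sketch; the `greedy` helper and variable names are illustrative, not from the slides). Note that the value-per-unit rule only manages to pack items 1 and 2:

```python
# Greedy construction for the slide's knapsack instance.
values = [2, 5, 7, 9]    # item values in dollars (items 1..4)
costs = [1, 3, 5, 7]     # item costs in space units
capacity = 8

def greedy(score):
    """Add items in decreasing score order while they still fit."""
    order = sorted(range(4), key=score, reverse=True)
    chosen, space = [], capacity
    for i in order:
        if costs[i] <= space:
            chosen.append(i + 1)   # 1-based item numbers, as on the slides
            space -= costs[i]
    return sorted(chosen)

h1 = greedy(lambda i: values[i])             # most valuable item first
h2 = greedy(lambda i: values[i] / costs[i])  # most valuable per unit of space
print(h1, sum(values[i - 1] for i in h1))    # → [1, 4] 11
print(h2, sum(values[i - 1] for i in h2))    # → [1, 2] 7
```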

  4. Solution Quality • Solution Quality • For Heuristic 1: (1,4), Value 11 • For Heuristic 2: (1,2), Value 7 • Optimal Solution: (2,3), Value 12 • Neither heuristic gives the optimal solution • This is true for any heuristic • Theoretically, for an NP-hard problem, no polynomial-time algorithm is known (unless P = NP)

  5. Semi-Greedy Heuristics • At each step, add not necessarily the highest-rated solution component • Repeat the following until a full solution is constructed • Put highly rated (not only the highest) solution components into a restricted candidate list (RCL) • Choose one element of the RCL at random and add it to the partial solution • Adaptive element: the greedy function depends on the partial solution constructed so far
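The loop above can be sketched on the knapsack instance, ranking candidates by value per unit of space (a sketch; `semi_greedy` and its parameters are illustrative names, not from the slides):

```python
import random

values = [2, 5, 7, 9]    # item values in dollars (items 1..4)
costs = [1, 3, 5, 7]     # item costs in space units
capacity = 8

def semi_greedy(rcl_size=2, seed=None):
    """Build one solution: repeatedly pick a random member of the RCL."""
    rng = random.Random(seed)
    remaining, chosen, space = set(range(4)), [], capacity
    while True:
        # Adaptive element: the candidate set depends on the partial solution.
        fits = [i for i in remaining if costs[i] <= space]
        if not fits:
            return sorted(chosen)
        fits.sort(key=lambda i: values[i] / costs[i], reverse=True)
        pick = rng.choice(fits[:rcl_size])   # restricted candidate list
        chosen.append(pick + 1)              # 1-based item numbers
        space -= costs[pick]
        remaining.discard(pick)

print(semi_greedy(rcl_size=1))  # → [1, 2]  (|RCL| = 1 is pure greedy)
```

With `rcl_size=1` the choice collapses to the pure greedy heuristic; with `rcl_size=4` every item that fits is a candidate, i.e. pure random construction.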

  6. Mechanism of RCL • Size of the Restricted Candidate List • 1) If the RCL is made very large, the semi-greedy heuristic turns into a pure random heuristic • 2) If the size of the RCL is set to 1, the semi-greedy heuristic turns into the pure greedy heuristic • Typically, the size is set between 3 and 5

  7. GRASP • Repeat the following while the stopping criterion is unsatisfied • Phase I: construct a solution according to a greedy myopic measure of goodness (GMMOG), with random selection from a restricted candidate list • Phase II: apply a local search improvement heuristic to obtain a better solution
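The two-phase loop can be sketched end-to-end on the knapsack instance, using value per unit of space as the greedy measure and a 1-exchange local search in Phase II (a sketch; all helper names are illustrative, not from the slides):

```python
import random

values = [2, 5, 7, 9]    # item values (items 1..4)
costs = [1, 3, 5, 7]     # item costs in space units
capacity = 8

def value(sol):          # sol is a set of 0-based item indices
    return sum(values[i] for i in sol)

def feasible(sol):
    return sum(costs[i] for i in sol) <= capacity

def construct(rng, rcl_size=2):
    """Phase I: semi-greedy construction with a restricted candidate list."""
    sol, space = set(), capacity
    while True:
        fits = [i for i in range(4) if i not in sol and costs[i] <= space]
        if not fits:
            return sol
        fits.sort(key=lambda i: values[i] / costs[i], reverse=True)
        pick = rng.choice(fits[:rcl_size])
        sol.add(pick)
        space -= costs[pick]

def local_search(sol):
    """Phase II: best-improvement 1-exchange (swap one item in for one out)."""
    while True:
        best = sol
        for i in sol:
            for j in set(range(4)) - sol:
                cand = (sol - {i}) | {j}
                if feasible(cand) and value(cand) > value(best):
                    best = cand
        if best == sol:
            return sol
        sol = best

def grasp(iterations=20, seed=0):
    rng = random.Random(seed)
    best = set()
    for _ in range(iterations):
        sol = local_search(construct(rng))
        if value(sol) > value(best):
            best = sol
    return best

best = grasp()
print(sorted(i + 1 for i in best), value(best))  # → [2, 3] 12
```

On this instance GRASP reaches the optimum (2,3) with value 12, which both pure greedy constructions missed.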

  8. GRASP • GRASP is a combination of a semi-greedy heuristic with a local search procedure • Local search from a random construction: • Best solution over many runs is often better than greedy, with non-negligible probability • Average solution quality worse than the greedy heuristic • High variance • Local search from a greedy construction: • Average solution quality better than random • Low (or no) variance

  9. The Knapsack Example • Knapsack problem • Backpack: 8 units of space, 4 items to pick • Item Value in terms of dollars: 2,5,7,9 • Item Cost in terms of space units: 1,3,5,7 • Two Greedy Functions • Pick the Most Valuable Item • Pick the Most Valuable Per Unit

  10. GRASP • The Most Valuable Item with |RCL| = 2 • Items 4 and 3 with values 9 and 7 are in the RCL • Flip a coin, we select …. • The Most Valuable Per Unit with |RCL| = 2 • Items 1 and 2 are selected with values 2/1 = 2 and 5/3 ≈ 1.7 • Flip a coin, we select ….
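The two coin flips can be reproduced directly (a sketch; only the RCL contents come from the slide, the variable names are illustrative):

```python
import random

values = [2, 5, 7, 9]    # items 1..4
costs = [1, 3, 5, 7]
rng = random.Random()

# Most valuable item, |RCL| = 2: items 4 and 3 (values 9 and 7).
rcl_value = sorted(range(4), key=lambda i: values[i], reverse=True)[:2]
pick_a = rng.choice(rcl_value) + 1   # flip a coin between items 4 and 3

# Most valuable per unit, |RCL| = 2: items 1 and 2 (ratios 2 and 5/3).
rcl_ratio = sorted(range(4), key=lambda i: values[i] / costs[i], reverse=True)[:2]
pick_b = rng.choice(rcl_ratio) + 1   # flip a coin between items 1 and 2
```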

  11. GRASP Extensions • Merits • Fast • High-quality solutions • Suited to time-critical decisions • Few parameters to tune • Extensions • Reactive GRASP – adapts the RCL size • Use of the elite solutions found • Long-term memory, path relinking

  12. Literature • T.A. Feo and M.G.C. Resende, “A Probabilistic Heuristic for a Computationally Difficult Set Covering Problem,” Operations Research Letters, 8:67-71, 1989 • P. Festa and M.G.C. Resende, “GRASP: An Annotated Bibliography,” in P. Hansen and C.C. Ribeiro, editors, Essays and Surveys in Metaheuristics, Kluwer Academic Publishers, 2001 • M.G.C. Resende and C.C. Ribeiro, “Greedy Randomized Adaptive Search Procedures,” in Handbook of Metaheuristics, F. Glover and G. Kochenberger, eds., Kluwer Academic Publishers, 219-249, 2002

  13. Neighbourhood • For each solution S in the solution space 𝒮, N(S) ⊆ 𝒮 is a neighbourhood • Each T ∈ N(S) is in some sense “close” to S • Defined in terms of some operation • Very like the “action” in search

  14. Neighbourhood • Exchange neighbourhood: exchange k things in a sequence or partition • Examples: • Knapsack problem: exchange k1 things inside the bag with k2 not in (for k1, k2 ∈ {0, 1, 2, 3}) • Matching problem: exchange one marriage for another

  15.–20. 2-opt Exchange (animation frames: two arcs of the tour are removed and the two resulting paths are reconnected, reversing one of them)
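The 2-opt move animated above can be sketched as a simple local search: each move deletes two arcs of the tour and reconnects it by reversing the segment between them (a sketch; the distance-matrix interface and the square example are assumptions, not from the slides):

```python
import math

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Apply improving 2-opt exchanges until none remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would delete and re-add the same two arcs
                # Reverse the segment between the two deleted arcs.
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# Four cities on a unit square; the tour [0, 1, 2, 3] crosses itself.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = two_opt([0, 1, 2, 3], dist)
print(tour, round(tour_length(tour, dist), 6))  # → [0, 1, 3, 2] 4.0
```

The crossing tour of length 2 + 2√2 is repaired by a single 2-opt exchange into the perimeter tour of length 4.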

  21. 3-opt exchange • Select three arcs • Replace with three others • 2 orientations possible

  22.–32. 3-opt exchange (animation frames illustrating the move)

  33. Neighbourhood • Strongly connected: any solution can be reached from any other (e.g. 2-opt) • Weakly optimally connected: the optimum can be reached from any starting solution

  34. Neighbourhood • Hard constraints create impenetrable mountain ranges in the solution landscape • Soft constraints allow passes through the mountains • E.g. Map Colouring (k-colouring) • Colour a map (graph) so that no two adjacent countries (nodes) have the same colour • Use at most k colours • Minimize the number of colours

  35. Map Colouring • Starting solution and two optimal solutions (figure omitted) • Define the neighbourhood as: change the colour of at most one vertex • Make the k-colour constraint soft…
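Softening the k-colour constraint, as the slide suggests, means allowing conflicting colourings but penalising them; the neighbourhood changes the colour of at most one vertex (a sketch; the triangle-graph example and helper names are mine, not from the slides):

```python
def conflicts(colouring, edges):
    """Soft-constraint penalty: edges whose endpoints share a colour."""
    return sum(1 for u, v in edges if colouring[u] == colouring[v])

def neighbourhood(colouring, k):
    """All colourings differing from `colouring` in at most one vertex."""
    for v in range(len(colouring)):
        for c in range(k):
            if c != colouring[v]:
                yield colouring[:v] + [c] + colouring[v + 1:]

# Triangle graph, k = 3, starting from an all-conflicting colouring.
edges = [(0, 1), (1, 2), (0, 2)]
start = [0, 0, 0]
best = min(neighbourhood(start, 3), key=lambda c: conflicts(c, edges))
print(conflicts(start, edges), conflicts(best, edges))  # → 3 1
```

A single recolouring cannot fix a fully conflicting triangle, but the penalty lets the search move through infeasible colourings instead of being walled off by the hard constraint.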

  36. Variable Neighbourhood Search • Large neighbourhoods are expensive • Small neighbourhoods are less effective • Only search a larger neighbourhood when the smaller one is exhausted

  37. Variable Neighbourhood Search • m neighbourhoods Ni, with |N1| < |N2| < |N3| < … < |Nm| • Find an initial solution S; best = z(S); k = 1 • Repeat: search Nk(S) to find the best solution T • If z(T) < z(S) then S = T, k = 1; else k = k + 1
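The pseudocode above can be written out directly, modelling each neighbourhood as a function that returns the candidate solutions in Nk(S) (a sketch; the interface and the toy objective are assumptions, not from the slides):

```python
def vns(s0, z, neighbourhoods):
    """Slide 37's loop: restart from N1 whenever a neighbourhood improves S."""
    s, k = s0, 0
    while k < len(neighbourhoods):
        t = min(neighbourhoods[k](s), key=z)   # best solution T in N_k(S)
        if z(t) < z(s):
            s, k = t, 0                        # improvement: go back to N1
        else:
            k += 1                             # N_k exhausted: try the larger one
    return s

# Toy objective: minimise (x - 7)^2 over the integers.
z = lambda x: (x - 7) ** 2
n1 = lambda x: [x - 1, x + 1]    # small neighbourhood
n2 = lambda x: [x - 3, x + 3]    # larger neighbourhood
print(vns(0, z, [n1, n2]))  # → 7
```

The loop terminates once every neighbourhood, smallest to largest, fails to improve the current solution.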

  38. VNS does not follow a trajectory • Unlike SA and tabu search, which do
