
  1. Cooperative search and hyper-heuristics in combinatorial optimisation *Djamila Ouelhadj, *Simon Martin, **Patrick Beullens and ***Ender Özcan *Logistics and Management Mathematics Group, Department of Mathematics, University of Portsmouth **School of Mathematics - School of Management, University of Southampton ***Automated Scheduling, optimisAtion And Planning (ASAP), School of Computer Science, University of Nottingham

  2. Outline of the talk • Introduction and aims • Hyper-heuristics • Agent based framework for cooperative search • Case Studies: Permutation flow shop, nurse rostering • Computational results • Conclusions • Future work

  3. Motivations • Meta-heuristics have been successfully used to solve a wide range of optimisation problems. • Meta-heuristics developed for a specific problem domain cannot necessarily achieve the same performance on other instances of the same problem or on instances from another problem domain. • This frequently requires parameter re-tuning and/or the design of new neighbourhood operators for the new problem domain.

  4. Motivations • Which heuristic or meta-heuristic is best for my problem? • There is almost no guidance available for choosing the best meta-heuristic for solving the problem at hand.

  5. Motivations • To build a generic framework where cooperating agents can use different heuristics and meta-heuristics to solve complex OR problems and increase the level of generality.

  6. Motivations • The key ideas of the generic framework: • Automated selection of (meta-)heuristics: propose a hyper-heuristic framework to automatically find the combination of heuristics/meta-heuristics and parameters that best solves a particular problem. • Use cooperative search to combine the strengths of different meta-heuristics to balance intensification and diversification and direct the search towards promising regions of the search space.

  7. Hyper-heuristics • The aim of hyper-heuristics is to develop general, domain-independent search methodologies that are capable of performing well enough, soon enough, and cheaply enough across a wide range of optimisation problems.

  8. Hyper-heuristics • Research on hyper-heuristics dates back to the 1960s (Fisher and Thompson, 1963). • The term hyper-heuristic itself has only been introduced recently (Cowling et al., 2000).

  10. Hyper-heuristics • A hyper-heuristic is a high-level heuristic which, when given a particular problem instance and a number of low level heuristics, selects and applies an appropriate low level heuristic at each decision point (Cowling et al., 2000; Soubeiga, 2003; Burke et al., 2003). • Low level heuristics are simple local search operators, domain dependent heuristics, or meta-heuristics.
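
To make the mechanics concrete, the following is a minimal, illustrative Python sketch of a single-point hyper-heuristic loop (not the authors' implementation); `select`, `accept`, `evaluate` and the contents of `low_level_heuristics` are hypothetical placeholders.

```python
import random

def hyper_heuristic(initial, low_level_heuristics, evaluate, select, accept,
                    iterations=1000):
    """Illustrative single-point hyper-heuristic loop.

    select(heuristics, history) -> chooses one low level heuristic
    accept(old_cost, new_cost)  -> decides whether to keep the candidate
    """
    current, current_cost = initial, evaluate(initial)
    history = []  # per-heuristic performance data for learning selection strategies
    for _ in range(iterations):
        h = select(low_level_heuristics, history)   # heuristic selection
        candidate = h(current)                      # applied below the domain barrier
        candidate_cost = evaluate(candidate)
        history.append((h, current_cost - candidate_cost))
        if accept(current_cost, candidate_cost):    # move acceptance
            current, current_cost = candidate, candidate_cost
    return current
```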

  11. Properties • Take advantage of strengths and avoid weaknesses of each low level heuristic. • No problem-specific knowledge is required. • Problem independent. • Goal: increase the level of generality and reusability.

  12. Hyper-heuristic framework (Cowling et al., 2000) [diagram]: the hyper-heuristic level contains the heuristic selection mechanism and the acceptance criteria; a domain barrier separates it from the set of low level heuristics h1, h2, …, hk, and only non-domain data flows across the barrier in each direction.

  13. Hyper-heuristic framework A hyper-heuristic pairs a heuristic selection strategy with a move acceptance criterion. Selection strategies: Random (R), Choice function (CF), Greedy (GR), Tabu Search based (TS). Acceptance criteria: All Moves (AM), Improving Only (IO), Monte Carlo (MC), Great Deluge (GD), Simulated Annealing (SA).
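
As an illustration only (generic textbook formulations, not necessarily the exact rules used in this work), two of these acceptance criteria can be written as:

```python
import math
import random

def improving_only(old_cost, new_cost):
    # IO: accept the candidate only if it does not worsen the objective.
    return new_cost <= old_cost

def simulated_annealing_accept(old_cost, new_cost, temperature):
    # SA: always accept improvements; accept a worsening move with
    # probability exp(-delta / T) (minimisation assumed).
    delta = new_cost - old_cost
    return delta <= 0 or random.random() < math.exp(-delta / temperature)
```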

  14. Random hyper-heuristics • Simple Random (SR) Do Select a low level heuristic uniformly at random and apply it once. Until stopping condition is met
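
A minimal Python sketch of SR (illustrative; `heuristics` is assumed to be a list of callables mapping a solution to a neighbouring solution, and `stop` a callable implementing the stopping condition):

```python
import random

def simple_random(solution, heuristics, stop):
    # SR: repeatedly choose a low level heuristic uniformly at random
    # and apply it exactly once.
    while not stop():
        h = random.choice(heuristics)
        solution = h(solution)
    return solution
```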

  15. Random hyper-heuristics • Random Descent (RD) Do Select a low level heuristic uniformly at random and apply it until no further improvement is possible. Until stopping condition is met
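
A corresponding sketch of RD under the same assumptions (`evaluate` returns the cost to be minimised):

```python
import random

def random_descent(solution, heuristics, evaluate, stop):
    # RD: choose a heuristic at random and keep applying it
    # as long as it improves the solution.
    while not stop():
        h = random.choice(heuristics)
        while not stop():
            candidate = h(solution)
            if evaluate(candidate) >= evaluate(solution):
                break  # h gives no further improvement; pick a new heuristic
            solution = candidate
    return solution
```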

  16. Random hyper-heuristics • Random Permutation Heuristic (RP) Create a random permutation of all low level heuristics. Do Select the next low level heuristic in the sequence and apply it once. Until stopping condition is met
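
An illustrative sketch of RP under the same assumptions:

```python
import random
from itertools import cycle

def random_permutation(solution, heuristics, stop):
    # RP: fix a random ordering of the heuristics once, then cycle
    # through it, applying each heuristic once per visit.
    order = random.sample(heuristics, len(heuristics))
    for h in cycle(order):
        if stop():
            return solution
        solution = h(solution)
    return solution
```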

  17. Random hyper-heuristics • Random Permutation Descent (RPD) Create a random permutation of all low level heuristics Do Select the next low level heuristic in the sequence and apply it in a steepest descent fashion. Until stopping condition is met
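
And RPD, which differs from RP only in that each chosen heuristic is applied in a descent fashion:

```python
import random
from itertools import cycle

def random_permutation_descent(solution, heuristics, evaluate, stop):
    # RPD: cycle through a fixed random ordering of the heuristics,
    # applying each one repeatedly while it keeps improving the solution.
    order = random.sample(heuristics, len(heuristics))
    for h in cycle(order):
        if stop():
            return solution
        while not stop():
            candidate = h(solution)
            if evaluate(candidate) >= evaluate(solution):
                break
            solution = candidate
    return solution
```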

  18. Greedy hyper-heuristic Do Apply all the low level heuristics and select the heuristic providing the best improvement. Until stopping condition is met
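
A sketch of the greedy hyper-heuristic under the same assumptions (every low level heuristic is tried at each step and the best resulting solution is kept):

```python
def greedy(solution, heuristics, evaluate, stop):
    # GR: apply all low level heuristics to the current solution and
    # move to the candidate with the best (lowest) objective value.
    while not stop():
        solution = min((h(solution) for h in heuristics), key=evaluate)
    return solution
```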

  19. Cooperative hyper-heuristics • Different heuristics have different strengths and weaknesses. • Cooperation allows the strengths of heuristics to compensate for the weaknesses of others. • Undertake a novel investigation into the role of asynchronous cooperation in a hyper-heuristic framework.

  20. Cooperative search Cooperative optimisation consists of a search performed by agents that exchange information about states, models, entire sub-problems, solutions or other search space characteristics (Blum and Roli, 2003).

  21. Agent-based cooperation Parallelism vs. agents • Parallelism in combinatorial optimisation is based on speed-up. • An agent-based approach is focused on cooperation.

  22. What are agents? Wooldridge and Jennings (1995): “An agent is a computer system that is situated in some environment and that is capable of autonomous action in the environment in order to meet its design objectives.” 1. Wooldridge, M. and Jennings, N.R. (1995) Intelligent Agents: Theory and Practice. The Knowledge Engineering Review, 10(2), 115-152.

  23. The agent-based framework [diagram: a set of cooperating meta-heuristic agents]

  24. JADE Platform The Java Agent Development Framework (JADE) is an open-source platform for peer-to-peer agent-based applications. It provides all the infrastructure necessary to develop and run agent-based applications.

  25. Structure of an agent [diagram: the agent reads its parameters from a config file; meta-heuristics (e.g. simulated annealing, neighbourhood search) create new solutions; good patterns and solutions are stored, sent to and received from other agents, and used when generating new solutions]

  26. Intensification and diversification [diagram: meta-heuristics provide diversification across the search space, while heuristic searches provide intensification within promising regions]

  27. Cooperation protocol • Agents cooperate by looking for good patterns that are the constituent parts of a good solution. • Reinforcement learning is used to score good patterns. • Good patterns are shared amongst the agents and are used to generate new solutions.

  28. Cooperation protocol • Any combinatorial optimisation problem that involves manipulating a non-repeating list of size n can be split into a list of n pairs. • Take the permutation where n = 10: 2,4,7,6,5,8,9,0,1,3. The following n pairs can be generated: (2,4)(4,7)(7,6)(6,5)(5,8)(8,9)(9,0)(0,1)(1,3)(3,2).
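
The pair decomposition above (including the wrap-around pair from the last element back to the first) is easy to reproduce; the following snippet prints exactly the pairs listed on the slide:

```python
def to_pairs(perm):
    # Split a non-repeating list of size n into n pairs of consecutive
    # elements, wrapping around from the last element to the first.
    return [(perm[i], perm[(i + 1) % len(perm)]) for i in range(len(perm))]

print(to_pairs([2, 4, 7, 6, 5, 8, 9, 0, 1, 3]))
# [(2, 4), (4, 7), (7, 6), (6, 5), (5, 8), (8, 9), (9, 0), (0, 1), (1, 3), (3, 2)]
```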

  29. Reinforcement learning • Reward/penalty values are used to score the patterns. • Selection function: roulette wheel.
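
The specific reward/penalty values are not reproduced here, but a minimal sketch of roulette-wheel selection over learned pattern scores could look like this (illustrative assumption: each pattern carries a non-negative score updated by the reinforcement learning step):

```python
import random

def roulette_wheel(patterns, scores):
    # Select a pattern with probability proportional to its current score.
    pick = random.uniform(0, sum(scores))
    cumulative = 0.0
    for pattern, score in zip(patterns, scores):
        cumulative += score
        if pick <= cumulative:
            return pattern
    return patterns[-1]  # guard against floating-point rounding

# Example: the pair (2, 4) is twice as likely to be chosen as the others.
print(roulette_wheel([(2, 4), (4, 7), (7, 6)], [2.0, 1.0, 1.0]))
```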

  30. Flexibility using ontologies • The system is able to solve problems in different domains by using ontologies. • Ontologies are generic conceptualisations of combinatorial optimisation models. • The heuristics and messaging use the same generic representation of the model. • Agents cooperate by communication through ontologies.

  31. Ontologies • JADE has good support for ontologies. • Ontologies are parsed into XML, so they can be communicated to other agents across the web if required.

  32. Ontologies

  33. Testing and results • Flexibility: to test if the system is flexible enough to work on different problem domains. • Cooperation: to test if cooperating agents produce better results than the equivalent meta-heuristic combinations running as stand-alone programs. • Scalability: to see if more agents cooperating produce better results than fewer agents.

  34. Case studies • Permutation Flow Shop Problem (PFSP) • Nurse Rostering (NR)

  35. Testing and results Meta-heuristic agents: • Tabu search, • Simulated annealing, • Variable neighbourhood search

  36. Testing and results For the experiments, we have considered the following scenarios: • 1 agent in stand alone mode. • 5 agents, 1 launcher and 4 meta-heuristic agents. • 9 agents, 1 launcher and 8 meta-heuristic agents. • 13 agents, 1 launcher and 12 meta-heuristic agents.

  37. PFSP • Process n jobs on m machines. • Jobs are processed in the same sequence on all machines. • Operations are not preemptable and set-up times of operations are included in the processing times. • Objective function: minimisation of the makespan. • Benchmark problems: Taillard (1990, 1993) • 120 instances of 12 different sizes, n × m: 20x5, 20x10, 20x20, 50x5, 50x10, 50x20, 100x5, 100x10, 100x20, 200x10, 200x20, 500x20.
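
As an illustration of the objective function, the makespan of a job permutation can be computed with the standard completion-time recursion (a generic sketch, not the authors' code); `proc[j][m]` is assumed to be the processing time of job `j` on machine `m`:

```python
def makespan(permutation, proc):
    # C[m] holds the completion time of the most recently scheduled job
    # on machine m; every job visits machines 0..m-1 in the same order.
    n_machines = len(proc[0])
    C = [0] * n_machines
    for j in permutation:
        C[0] += proc[j][0]
        for m in range(1, n_machines):
            # job j starts on machine m once it has left machine m-1
            # and machine m has finished its previous job
            C[m] = max(C[m], C[m - 1]) + proc[j][m]
    return C[-1]

# Tiny example: 3 jobs on 2 machines.
print(makespan([0, 1, 2], [[3, 2], [1, 4], [2, 2]]))
```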

  38. Results

  39. Results

  40. Results

  41. The Nurse Rostering problem The nurse rostering problem consists of the assignment of shifts to nurses subject to several constraints such as workload, legal and contractual restrictions, personal preferences, etc. (Burke et al., 2004) • There are different staffing needs on different days and shifts • Staff work in shifts • Healthcare institutions work around the clock: need for day and night shifts • The correct staff mix is needed for each ward • Many different employment contracts: part-time, full-time, etc.

  42. The Nurse Rostering problem Hard constraints: over-cover and under-cover are not permitted, a nurse may not work more than one shift of the same type on the same day, a shift which requires a certain skill can only be assigned to a nurse who has that skill, etc. Soft constraints: time-related constraints, rest times, weekend shifts, etc.

  43. Fairness in Nurse Rostering Traditionally, solutions are evaluated using a weighted sum of soft constraint violations. How can we guarantee fairness?

  44. Models of fairness The standard objective function: a weighted sum of soft constraint violations. Let C be the set of constraints, w_c the weight associated with constraint c, and N the number of nurses.
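
The formula on the original slide did not survive the transcript; with the notation above, a plausible form of this weighted sum (an assumption, writing v_c(r, i) for the number of violations of soft constraint c incurred by nurse i in roster r) is:

```latex
F(r) = \sum_{i=1}^{N} \sum_{c \in C} w_c \, v_c(r, i)
```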

  45. Models of fairness New fairness objective functions: • MinMax: minimise the worst nurse violation • MinDev: minimise the sum of deviations from the average • MinError: minimise the differences between the best and the worst rosters
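
One plausible formalisation of the three objectives (an assumption based on the descriptions above, writing f_i(r) for the total weighted violation assigned to nurse i in roster r and \bar{f}(r) for its average over the N nurses):

```latex
\text{MinMax:}\quad   \min_r \max_{1 \le i \le N} f_i(r)
\qquad
\text{MinDev:}\quad   \min_r \sum_{i=1}^{N} \left| f_i(r) - \bar{f}(r) \right|
\qquad
\text{MinError:}\quad \min_r \left( \max_i f_i(r) - \min_i f_i(r) \right)
```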

  46. Fairness evaluation Fairness is evaluated using Jain's index function (Jain et al., 1984; Muhlenthaler and Wanka, 2012). Its values range from the worst case 1/N to 1, where the roster is completely fair.
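
Writing x_i for the fairness-related value assigned to nurse i (e.g. that nurse's violation penalty), Jain's index has the standard form:

```latex
J(x_1, \dots, x_N) = \frac{\left( \sum_{i=1}^{N} x_i \right)^{2}}{N \sum_{i=1}^{N} x_i^{2}}
```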

  47. Testing and results Instances from two Belgian hospitals:
  Instance     Nr/nurses   Nr/shifts   Planning period
  Emergency    27          27          28 days
  Geriatrics   21          9           28 days
  Psychiatry   19          14          31 days
  Reception    19          19          42 days

  48. Testing and results For the experiments, we have considered the following scenarios: • 1 meta-heuristic agent in stand alone mode. • 13 agents, 1 launcher and 12 meta-heuristic agents.

  49. Results Table: the average Jain's fairness index over 20 runs of a given fairness-based objective function for each benchmark instance, where bold entries indicate the best one for a given instance.

  50. Results
