
An automatic algorithm selection approach for nurse rostering



Presentation Transcript


  1. An automatic algorithm selection approach for nurse rostering Tommy Messelis, Patrick De Causmaecker CODeS research group, member of ITEC-IBBT-K.U.Leuven

  2. outline • introduction • automatic algorithm selection • our case: nurse rostering • experimental setup • results • conclusions • future work

  3. observation • Many different algorithms exist that tackle the same problem class • Most of them perform well on some instances, while on other instances their performance is worse • There is no single best algorithm that outperforms all others on all instances

  4. how to pick the best algorithm? • It would be great to know in advance which algorithm to run on a given instance • minimize the total cost over all instances • use resources as efficiently as possible → automatic algorithm selection in a portfolio

  5. empirical hardness • hardness or complexity is linked to the solution method that is used • a hard instance for one algorithm can be easy to solve for another algorithm • empirical hardness models • map problem instance features onto performance measures of an algorithm • such models are used for performance prediction

  6. automatic algorithm selection • learn an empirical hardness model for each algorithm in a portfolio • when presented with a new, unseen instance: • predict the performance of each algorithm • run the algorithm with the best predicted performance (see the sketch below) • hopefully achieve a better overall performance than any of the components individually!
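A minimal sketch of this selection loop, for illustration only (not the authors' implementation). It assumes one trained empirical hardness model per algorithm, each exposing a scikit-learn-style predict() method, and that the performance measure is a solution cost where lower is better:

```python
def select_and_run(instance, features, models, algorithms):
    """Predict each algorithm's cost on an instance and run the best one."""
    predicted_cost = {name: model.predict([features])[0]     # predicted solution cost
                      for name, model in models.items()}
    best = min(predicted_cost, key=predicted_cost.get)       # lowest predicted cost wins
    return best, algorithms[best](instance)                  # run only the chosen solver
```

Here `models` would map algorithm names to their hardness models (e.g. hypothetical `model_A`, `model_B`) and `algorithms` maps the same names to solver functions; all of these names are assumptions for the sketch.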

  7. outline • introduction • automatic algorithm selection • our case: nurse rostering • experimental setup • results • conclusions • future work

  8. nurse rostering problem • the problem of finding an assignment of nurses to a set of shifts during a scheduling period that satisfies: • all hard constraints • e.g. minimal coverage on every day • as many soft constraints as possible • e.g. a nurse may want to be free on Wednesdays, but it might not always be possible • hard combinatorial optimisation problem • too complex to solve to optimality • use approximation methods (metaheuristics)
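To make the hard/soft split concrete, a minimal sketch of how a roster's quality could be evaluated. The constraint objects and their methods are hypothetical, not the INRC 2010 evaluator:

```python
def roster_cost(roster, hard_constraints, soft_constraints):
    """Return the accumulated soft-constraint penalty, or None if infeasible."""
    # a roster is acceptable only if every hard constraint is satisfied
    if any(not c.is_satisfied(roster) for c in hard_constraints):
        return None
    # quality = weighted sum of soft-constraint violations (lower is better)
    return sum(c.weight * c.violations(roster) for c in soft_constraints)
```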

  9. INRC 2010 • first International Nurse Rostering Competition 2010 • co-organized by our research group (CODeS) • well-specified format for generic nurse rostering problem (NRP) instances • set of competitive algorithms, working on the same instance specification • ideal sandbox for an automatic algorithm selection tool

  10. experiments • building empirical hardness models for a set of algorithms • six-step procedure, as first introduced by K. Leyton-Brown, E. Nudelman, Y. Shoham. Learning the empirical hardness of optimisation problems: The case of combinatorial auctions. In Principles and Practice of Constraint Programming, 2002. • step 1: instance distribution • step 2: algorithm selection • step 3: feature selection • step 4: data generation • step 5: feature elimination • step 6: model construction • using the models to predict performance of the algorithms • allows for automatic algorithm selection

  11. empirical hardness models • instance distribution • use of an instance generator that produces real-world-like instances, similar to the competition instances • algorithms & performance criteria • two competitors of the INRC 2010 • alg. A: variable neighbourhood search • alg. B: tabu search • quality of the solutions • measured as the accumulated cost of constraint violations (the lower the better)

  12. empirical hardness models • feature set • 305 features: easily computable properties of the instances • size • scheduling period • workforce structure • contract regulations • nurses’ requests • described in detail by T. Messelis, P. De Causmaecker. An NRP feature set. Technical report, 2010. http://www.kuleuven-kortrijk.be/codes/
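As an illustration of what such features look like, a small sketch computing a handful of size-related properties. The instance fields and feature names are hypothetical; the actual set contains 305 features:

```python
def basic_features(instance):
    """Compute a few easily computable instance properties."""
    n_nurses = len(instance.nurses)
    n_days = len(instance.days)
    total_demand = sum(instance.coverage[day][shift]        # required staffing level
                       for day in instance.days
                       for shift in instance.shift_types)
    return {
        "n_nurses": n_nurses,
        "n_days": n_days,
        "n_shift_types": len(instance.shift_types),
        "demand_per_nurse": total_demand / n_nurses,         # workload pressure
        "requests_per_nurse": len(instance.requests) / n_nurses,
    }
```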

  13. empirical hardness models • data generation • instance set: 500 instances • 400 training instances • 100 test instances • all feature values are computed • algorithms are run on all instances and the quality of the solutions is determined • using the computer cluster of the Flemish Supercomputer Center (VSC) • feature elimination • 201 useless (univalued) or correlated features are eliminated
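A sketch of the elimination step, assuming the feature values of the training instances are held in an all-numeric pandas DataFrame (pandas and the 0.95 correlation threshold are assumptions for illustration, not the tooling or value used in the study):

```python
import pandas as pd

def eliminate_features(train_df: pd.DataFrame, corr_threshold: float = 0.95) -> pd.DataFrame:
    """Drop univalued (constant) features, then one of each highly correlated pair."""
    df = train_df.loc[:, train_df.nunique() > 1]            # remove constant columns
    corr = df.corr().abs()                                   # pairwise |correlation|
    to_drop = set()
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if a not in to_drop and b not in to_drop and corr.loc[a, b] > corr_threshold:
                to_drop.add(b)                               # keep a, drop b
    return df.drop(columns=sorted(to_drop))
```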

  14. empirical hardness models • model learning • using several learning methods provided by the Weka tool • tree learning techniques were most accurate • example: Alg. A • R² = 0.93
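The models in the talk were built with Weka's learners; the sketch below uses a scikit-learn regression tree as an illustrative substitute (the hyperparameter setting is a guess, not taken from the study):

```python
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

def learn_hardness_model(X_train, y_train, X_test, y_test):
    """Fit features -> solution cost and report R² on the held-out test set."""
    model = DecisionTreeRegressor(min_samples_leaf=5)    # hyperparameter is an assumption
    model.fit(X_train, y_train)
    r2 = r2_score(y_test, model.predict(X_test))         # e.g. ~0.93 for Alg. A in the talk
    return model, r2
```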

  15. automatic algorithm selection • given an unseen instance: • use the empirical hardness models to predict the performance of both algorithms • run the algorithm with the best predicted performance • unfortunately, the results were not good • performance of the portfolio was worse than ‘always Alg. A’

  16. automatic algorithm selection • performance of Alg. A and Alg. B is very similar • in most cases, the difference is small • performance predictions include a certain error • comparing these predictions does not produce an accurate outcome • other approaches are also possible!

  17. automatic algorithm selection • building a classifier • predicts either ‘Alg. A’ or ‘Alg. B’ for a given instance • 67% of the test instances are correctly classified • for wrongly classified instances, the difference between both algorithms is not large • automatic algorithm selection tool • use the classifier to predict the algorithm that will perform best • run this algorithm
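A sketch of this classifier-based selector, again with scikit-learn standing in for Weka. Training labels are derived from which algorithm actually achieved the lower cost on each training instance; variable names and the hyperparameter are assumptions:

```python
from sklearn.tree import DecisionTreeClassifier

def train_selector(X_train, costs_A, costs_B):
    """Label each training instance with the algorithm that performed best on it."""
    labels = ["A" if a <= b else "B" for a, b in zip(costs_A, costs_B)]
    clf = DecisionTreeClassifier(min_samples_leaf=5)     # hyperparameter is an assumption
    clf.fit(X_train, labels)
    return clf

def select_algorithm(clf, features):
    return clf.predict([features])[0]                    # "A" or "B"
```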

  18. results • portfolio performs better than any of the components individually • measuring the sum of the costs of obtained solutions for the test instances
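The comparison criterion can be written down directly: sum the costs of the solutions obtained on the test instances under each strategy. The names below are hypothetical, and `clf` is the classifier from the previous sketch:

```python
def portfolio_total_cost(clf, test_features, costs_A, costs_B):
    """Total cost when running, per instance, the algorithm the selector picks."""
    total = 0.0
    for x, a, b in zip(test_features, costs_A, costs_B):
        total += a if clf.predict([x])[0] == "A" else b
    return total

# the portfolio improves on its components when this total is lower than
# both sum(costs_A) and sum(costs_B)
```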

  19. outline • introduction • automatic algorithm selection • our case: nurse rostering • experimental setup • results • conclusions • future work

  20. conclusions • it is possible • to build a portfolio • containing state-of-the-art algorithms • to construct an automatic algorithm selection tool • that accurately selects the best algorithm to run • improving the performance of good algorithms • using simple machine learning techniques • considering the existing algorithms as black-boxes • good strategy to overcome the weaknesses of certain algorithms with the strengths of other algorithms

  21. future work • improving the performance even more: • adding more algorithms • different variants of the current algorithms • learn other things • difference in performance • probability that a certain algorithm will perform best • applying this to other domains • currently working on project scheduling problems

  22. thank you! any questions?
