
Hydra-MIP: Automated Algorithm Configuration and Selection for Mixed Integer Programming



Presentation Transcript


  1. Hydra-MIP: Automated Algorithm Configuration and Selection for Mixed Integer Programming
  Lin Xu, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown
  Department of Computer Science, University of British Columbia

  2. Solving MIP more effectively
  • Portfolio-based algorithm selection (SATzilla) [Xu et al., 2007; 2008; 2009]
  • Where are the solvers? Parameter settings of a single solver (e.g. CPLEX)
  • How to find good settings? Automated algorithm configuration tools [Hutter et al., 2007; 2009]
  • How to find good candidates for algorithm selection? Algorithm configuration with a dynamic performance metric [Xu et al., 2010]

  3. Hydra combines two lines of work:
  • Portfolio-based algorithm selection. Some particularly related work: [Rice, 1976]; [Leyton-Brown, Nudelman & Shoham, 2003; 2009]; [Guerri & Milano, 2004]; [Nudelman, Leyton-Brown, Shoham & Hoos, 2004]
  • Automated algorithm configuration, which supplies new solvers for the selector to put to better use. Some particularly related work: [Gratch & Dejong, 1992]; [Balaprakash, Birattari & Stuetzle, 2007]; [Hutter, Babic, Hoos & Hu, 2007]; [Hutter, Hoos, Stuetzle & Leyton-Brown, 2009]

  4. Outline
  • Improve algorithm selection
    - SATzilla
    - Drawbacks of SATzilla
    - New SATzilla with cost-sensitive classification
    - Results
  • Reduce the construction cost
    - Hydra
    - The cost
    - Making full use of configuration
    - Results
  • Conclusion

  5. SATzilla: Portfolio-Based Algorithm Selection [Xu, Hutter, Hoos, Leyton-Brown, 2007; 2008]
  • Given: a training set of instances, a performance metric, candidate solvers, and a portfolio builder (incl. instance features)
  • Training: collect performance data; the portfolio builder learns predictive models
  • At runtime: predict each solver's performance on the novel instance and select a solver
  [Diagram: candidate solvers, training set, and metric feed the portfolio builder, which outputs a portfolio-based algorithm selector; given a novel instance, the selector returns the selected solver.]

  6. Drawback of SATzilla
  Algorithm selection in SATzilla is based on regression:
  • Predict each solver's performance independently
  • Select the solver with the best predicted performance
  • Classification is thus implemented via regression
  The goal of regression is to accurately predict each solver's performance. The goal of algorithm selection is to pick solvers on a per-instance basis so as to minimize some overall performance metric. Better regression therefore does not necessarily yield a better algorithm selector.
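To make the mismatch concrete, here is a minimal sketch of the regression-based workflow from slides 5 and 6. It is an illustration, not the actual SATzilla code: the feature matrix and runtime data are hypothetical, and scikit-learn's ridge regression stands in for SATzilla's learned runtime models.

```python
import numpy as np
from sklearn.linear_model import Ridge

class RegressionSelector:
    """Fit one runtime model per solver; at runtime pick the argmin."""

    def fit(self, features, runtimes):
        # features: (n_instances, n_features); runtimes: (n_instances, n_solvers)
        self.models = []
        for j in range(runtimes.shape[1]):
            model = Ridge()
            # Predict log runtime; note that the regression loss treats
            # every error alike, regardless of whether it changes which
            # solver ends up being selected.
            model.fit(features, np.log(np.maximum(runtimes[:, j], 1e-6)))
            self.models.append(model)
        return self

    def select(self, x):
        preds = [m.predict(x.reshape(1, -1))[0] for m in self.models]
        return int(np.argmin(preds))  # solver with best predicted runtime
```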

  7. Cost-sensitive classification for SATzilla
  Loss function: the performance difference between solvers
  • Punish misclassifications in direct proportion to their impact on portfolio performance
  • No need to predict runtime at all
  Implementation: binary cost-sensitive classifiers, realized as decision forests (DF)
  • Build a DF for each pair of candidate solvers
  • Each pairwise DF casts one vote for the predicted better solver
  • The solver with the most votes is selected (see the sketch below)
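A minimal sketch of this pairwise voting scheme, assuming hypothetical training data; scikit-learn's RandomForestClassifier with per-instance sample weights stands in for the paper's cost-sensitive decision forests.

```python
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class PairwiseCostSensitiveSelector:
    def fit(self, features, runtimes):
        # features: (n_instances, n_features); runtimes: (n_instances, n_solvers)
        self.n_solvers = runtimes.shape[1]
        self.classifiers = {}
        for i, j in combinations(range(self.n_solvers), 2):
            label = (runtimes[:, j] < runtimes[:, i]).astype(int)  # 1: j beats i
            # Weight each training instance by the performance gap, so a
            # misclassification is punished in proportion to its impact
            # on portfolio performance.
            gap = np.abs(runtimes[:, i] - runtimes[:, j])
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(features, label, sample_weight=gap)
            self.classifiers[(i, j)] = clf
        return self

    def select(self, x):
        votes = np.zeros(self.n_solvers)
        for (i, j), clf in self.classifiers.items():
            winner = j if clf.predict(x.reshape(1, -1))[0] == 1 else i
            votes[winner] += 1  # one vote for the predicted better solver
        return int(np.argmax(votes))  # solver with the most votes wins
```

Note the design choice: because each classifier only has to rank two solvers, instances where the pair performs nearly identically carry almost no weight, and no runtime prediction is ever made.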

  8. SATzillaDF performance
  [Performance plot. LR: linear regression as used in previous SATzilla; DF: cost-sensitive decision forest.]

  9. SATzillaDF performance (continued)
  [Performance plot. LR: linear regression as used in previous SATzilla; DF: cost-sensitive decision forest.]

  10. MIPzillaDF performance
  [Performance plot.]

  11. MIPzillaDF performance (continued)
  [Performance plot.]

  12. Hydra Procedure: Iteration 1
  [Diagram: the algorithm configurator takes the parameterized algorithm, the training set, and the metric, and produces a candidate solver; the candidate solver joins the candidate solver set, from which the portfolio builder constructs a portfolio-based algorithm selector.]

  13. Hydra Procedure: Iteration 2
  [Diagram: as in iteration 1; the configurator produces a second candidate solver, which is added to the candidate solver set, and the selector is rebuilt.]

  14. Hydra Procedure: Iteration 3
  [Diagram: the same step repeats; each iteration grows the candidate solver set by one configured solver and rebuilds the selector.]

  15. Hydra Procedure: After Termination
  [Diagram: given a novel instance, the final portfolio-based algorithm selector outputs the selected solver.]
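The loop in slides 12-15 can be summarized in a short sketch. All of configure(), build_selector(), and selector.performance() are hypothetical placeholders (in the paper these roles are filled by FocusedILS, MIPzilla, and PAR-based evaluation); the dynamic metric follows the idea cited on slide 2 [Xu et al., 2010], under which a new configuration is credited on an instance only to the extent it improves on the current portfolio.

```python
def hydra(parameterized_algorithm, training_set, metric,
          configure, build_selector, n_iterations):
    candidate_solvers = []
    selector = None
    for _ in range(n_iterations):
        def dynamic_metric(solver, instance):
            # Score a configuration as if it were added to the portfolio:
            # it cannot look worse than the portfolio on instances the
            # portfolio already handles well.
            cost = metric(solver, instance)
            if selector is None:
                return cost
            return min(cost, selector.performance(instance))

        new_solver = configure(parameterized_algorithm,
                               training_set, dynamic_metric)
        candidate_solvers.append(new_solver)
        selector = build_selector(candidate_solvers, training_set, metric)
    return selector  # at runtime: selector.select(novel_instance)
```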

  16. We are wasting configuration results!
  [Diagram: the algorithm configurator, run on the parameterized algorithm with the training set and metric, evaluates many configurations but returns only a single candidate solver per iteration; the other configurations found along the way are discarded.]

  17. Make full use of configurations
  [Diagram: the same configuration setup, but the configurator now contributes k candidate solvers per run.]

  18. Make full use of configurations
  Advantages:
  • Adds k solvers instead of 1 in each iteration (good for algorithm selection)
  • No validation step is needed during configuration (saves time)
  Disadvantage:
  • Runtime data must be collected for more solvers (costs time)
  In our experiments, the time saved roughly equaled the extra cost at k = 4. A sketch of the resulting loop follows below.
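A sketch of this variant, in the same hypothetical vocabulary as the earlier Hydra sketch; configure_k() is an assumed wrapper that returns the configurator's k best configurations instead of validating to pick one (the dynamic metric is omitted here for brevity).

```python
def hydra_k(parameterized_algorithm, training_set, metric,
            configure_k, build_selector, n_iterations, k=4):
    candidate_solvers = []
    selector = None
    for _ in range(n_iterations):
        # Keep the k best configurations from each configurator run,
        # skipping the per-run validation step entirely.
        new_solvers = configure_k(parameterized_algorithm,
                                  training_set, metric, k)
        candidate_solvers.extend(new_solvers)
        selector = build_selector(candidate_solvers, training_set, metric)
    return selector
```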

  19. Experimental Setup: Hydra's Inputs
  • Portfolio builders: MIPzillaLR (SATzilla for MIP) [Xu et al., 2008]; MIPzillaDF (MIPzilla using cost-sensitive DF)
  • Parameterized solver: CPLEX 12.1
  • Algorithm configurator: FocusedILS 2.4.3 [Hutter, Hoos, Leyton-Brown, 2009]
  • Performance metric: penalized average runtime (PAR)
  • Instance sets: 4 heterogeneous sets formed by combining homogeneous subsets [Hutter et al., 2010]; [Kadioglu et al., 2010]; [Ahmadizadeh et al., 2010]
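For reference, penalized average runtime is easy to state in code. A minimal sketch; the penalty factor here is an assumption (PAR-10, which counts each timeout as 10 times the cutoff, is a common choice in this line of work).

```python
def penalized_average_runtime(runtimes, cutoff, penalty_factor=10):
    """PAR score: runs that hit the cutoff count as penalty_factor * cutoff."""
    penalized = [t if t < cutoff else penalty_factor * cutoff
                 for t in runtimes]
    return sum(penalized) / len(penalized)

# e.g. two solved runs and one timeout with a 300 s cutoff:
# penalized_average_runtime([12.0, 45.0, 300.0], cutoff=300)  # -> 1019.0
```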

  20. Three versions of Hydra for MIP
  • HydraLR,1: original Hydra for MIP [Xu et al., 2010]
  • HydraDF,1: Hydra for MIP with Improvement I (cost-sensitive DF selection)
  • HydraDF,4: Hydra for MIP with Improvements I and II (k = 4 solvers per iteration)

  21. Hydra-MIP performance on MIX
  • HydraDF,* performs better than HydraLR,1
  • HydraDF,4 performs similarly to HydraDF,1 but converges faster
  • Performance is close to that of the Oracle and MIPzillaDF

  22. Conclusion
  • SATzilla based on cost-sensitive classification outperforms the original SATzilla
  • The new Hydra-MIP outperforms the CPLEX default, algorithm configuration alone, and the original Hydra on four heterogeneous MIP sets
  Technical contributions:
  • Cost-sensitive classification results in better algorithm selection for SAT and MIP
  • Using multiple configurations per iteration speeds up the convergence of Hydra
