
Experimental Algorithmics Reading Group, UBC, CS

Presented paper: Fine-tuning of Algorithms Using Fractional Experimental Designs and Local Search by Belarmino Adenso-Díaz (Barcelona) and Manuel Laguna (Colorado), OR Journal 2006

Presenter: Frank Hutter, 23 Aug 2006

Motivation
  • Anecdotal evidence that, of the total time for designing and testing a new (meta-)heuristic,
    • 10% is spent on development
    • 90% is spent on fine-tuning parameters
    • (In my opinion, 90% is maybe a bit too high)
    • If you ever see real statistics on this, please let me know!

Fine-tuning of algorithms

Motivation (2)
  • Barr et al. (1995) (we read this in Nov 2004)

“The selection of parameter values that drive heuristics is itself a scientific endeavor and deserves more attention than it has received in the operations research literature. This is an area where the scientific method and statistical analysis could and should be employed.”

Motivation (3)
  • Parameter tuning in the OR literature
    • 1) "Parameter values have been established experimentally" (without stating the procedure)
    • 2) Parameters are just given without explanation, often differing across problem classes or even per instance
    • 3) Parameter values are reused that were previously determined to be effective (simulated annealing, Guided Local Search for MPE)
    • 4) Sometimes the employed experimental design is stated

Objective function to be minimized
  • Runtime for solving a training set of decision problems
    • They do only one run per instance and (wrongly?) refer to runs on different instances as replications
  • Average deviation from the optimal solution for optimization algorithms
  • In general: some combination of speed and accuracy

Design of experiments
  • Includes
    • 1) The set of treatments included in the study
    • 2) The set of experimental units included in the study
    • 3) The rules and procedures by which treatments are assigned to experimental units
    • 4) Analysis (measurements that are made on the experimental units after the treatments have been applied)

Different designs
  • Full factorial experimental design
    • 2^k factorial – 2 levels (critical values) per variable
    • 3^k factorial
  • Fractional factorial experiment
    • Orthogonal array with n=8 runs, k=5 factors, s=2 levels, and strength t=2
    • An n × k array with entries 0 to s−1 and the property that in any t columns each of the s^t possible combinations appears equally often (the projections to lower dimensions are balanced)
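The strength-t property can be checked mechanically. A small Python sketch (not from the paper), using an OA(8, 5, 2, 2) constructed from the 2^3 full factorial plus two interaction columns:

```python
from itertools import combinations, product

def is_orthogonal_array(array, s, t):
    """Check the strength-t property: in every choice of t columns,
    all s**t level combinations appear equally often."""
    n = len(array)
    k = len(array[0])
    expected = n // s**t
    for cols in combinations(range(k), t):
        counts = {}
        for row in array:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if any(counts.get(c, 0) != expected for c in product(range(s), repeat=t)):
            return False
    return True

# OA(8, 5, 2, 2): columns are a, b, c, a XOR b, a XOR c over the 2^3 full factorial.
oa = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 0, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(is_orthogonal_array(oa, s=2, t=2))  # → True
```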

Aside: Taguchi design of experiments
  • Genichi Taguchi
    • Robust parameter design
  • Set controllable parameters to achieve maximum output with low variance

Taguchi design applied here
  • L9(3^4) is a design with nine runs, 4 variables with 3 values each, and strength 2 (for each pair of variables, each of the 9 level combinations occurs exactly once)
  • Based on this, you can estimate the "optimal condition", even if it is not one of the 9 runs performed (how? → separate topic)
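The "how" is main-effects analysis: under the assumption of additive effects (no interactions), averaging the responses per level of each factor identifies the best level even for untested combinations. A sketch, with a hypothetical additive response whose true best levels are [2, 0, 1, 2]:

```python
# The standard L9(3^4) Taguchi array, levels coded 0/1/2.
L9 = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
]

def predicted_optimum(responses):
    """Estimate the best level per factor from the 9 measured responses,
    assuming additive main effects (no interactions): for each factor,
    average the responses at each level and pick the level with the
    lowest mean (we minimize)."""
    best = []
    for j in range(4):
        level_means = []
        for lvl in range(3):
            vals = [responses[i] for i, run in enumerate(L9) if run[j] == lvl]
            level_means.append(sum(vals) / len(vals))
        best.append(min(range(3), key=lambda lvl: level_means[lvl]))
    return best

# Hypothetical additive cost: each factor contributes (level - target)^2.
targets = [2, 0, 1, 2]
responses = [sum((run[j] - targets[j]) ** 2 for j in range(4)) for run in L9]
print(predicted_optimum(responses))  # → [2, 0, 1, 2]
```

Because the array has strength 2, the other factors' contributions average out identically at each level, so the per-factor means isolate the main effects.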

The CALIBRA software
  • Limited to 5 parameters
  • Starts with a full factorial two-level design (2^5 = 32 runs)
    • 25% and 75% "quantiles" of each parameter as levels
    • Fixes the parameter with the least significant main effect to its best value
  • From then on, does a "local search"
    • Choose 3 levels around the last best setting
    • Run an L9(3^4) Taguchi design
    • Narrow down the levels around the best predicted solution
  • When a local optimum is reached
    • Build a new starting point for the local search by combining previous local optima / previous worst solutions
    • This is meant to trade off exploration and exploitation but seems fairly ad hoc
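The narrowing step can be illustrated in one dimension. This is a simplified sketch, not the actual CALIBRA code: it tries three levels around the incumbent, re-centers on the best, and shrinks the interval, whereas the real tool crosses the levels of up to four parameters with an L9 Taguchi design per step:

```python
def narrowing_search(objective, lo, hi, iters=20, shrink=0.5):
    """Simplified one-parameter sketch of CALIBRA's local search:
    evaluate three levels around the incumbent, move to the best,
    and narrow the interval around it."""
    center = (lo + hi) / 2
    width = (hi - lo) / 2
    for _ in range(iters):
        levels = [max(lo, center - width), center, min(hi, center + width)]
        center = min(levels, key=objective)  # best of the three tried levels
        width *= shrink                      # narrow around the new incumbent
    return center

# Hypothetical objective: runtime as a smooth function of one parameter,
# minimized at 0.37.
best = narrowing_search(lambda x: (x - 0.37) ** 2, lo=0.0, hi=1.0)
print(round(best, 2))  # → 0.37
```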

The CALIBRA software (2)
  • Windows only
  • Requires the algorithm being tuned to be a .exe file
    • Just write a .exe wrapper

The CALIBRA software (3)
  • Objective function can be based on multiple instances
  • Deal with that inside the algorithm
    • Well, in the wrapper

The CALIBRA software - Live demo
  • Let’s hope it works …
  • They do some caching that’s not mentioned in the paper

Backup in case it doesn’t work

Experimental analysis
  • Pretty straightforward
  • MAXEX is a parameter of major importance!
  • They do slightly better than the manually found parameter settings (or those found with Taguchi designs)
  • For these domains, per-instance tuning shows little promise (Table 5 compared to Table 2)
  • Figure 9 vs. 10 probably only shows that their performance metric means different things in different domains

Points for improvements
  • Objective function evaluation requires solving many instances (possibly many times)
    • Takes a lot of time even if results are abysmal
    • Could stop an evaluation once it is (statistically) clear that the result won't beat the best one we already have
  • "CALIBRA should be more effective in situations when the interactions among parameters are negligible"
    • But then you really don't need anything like this!
    • Related work (DACE) builds a model of the whole surface – I expect that to work better
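The early-stopping idea from the first point can be sketched with a simple deterministic cutoff (a statistical race, e.g. a rank test, would stop even earlier at the cost of occasional errors). Everything here is illustrative, not part of CALIBRA:

```python
def evaluate_with_cutoff(run, instances, incumbent_total):
    """Evaluate a configuration instance by instance, aborting as soon as the
    accumulated cost already exceeds the incumbent's total cost.
    `run(instance)` returns the (nonnegative) cost on one instance."""
    total = 0.0
    for inst in instances:
        total += run(inst)
        if total > incumbent_total:  # cannot beat the incumbent any more
            return None              # abandoned early, saving the remaining runs
    return total

# Hypothetical per-instance runtimes for some configuration.
costs = {"i1": 2.0, "i2": 9.0, "i3": 1.0}
run = lambda inst: costs[inst]
print(evaluate_with_cutoff(run, ["i1", "i2", "i3"], incumbent_total=5.0))   # → None
print(evaluate_with_cutoff(run, ["i1", "i2", "i3"], incumbent_total=20.0))  # → 12.0
```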
