Heuristic Optimization - Athens 2004

Department of Architecture and Technology

Universidad Politécnica de Madrid

Víctor Robles

vrobles@fi.upm.es

Teachers
  • Universidad Politécnica de Madrid
    • Víctor Robles (coordinator)
    • María S. Pérez
    • Vanessa Herves
  • Universidad del País Vasco
    • Pedro Larrañaga
Course outline and Class hours
  • Day 1 / 10:00-14:00 / Víctor
    • Introduction to optimization
    • Some optimization problems
    • About heuristics
      • Greedy algorithms
      • Hill climbing
    • Simulated Annealing
Course outline and Class hours
  • Day 2 / 9:30-13:30 / Víctor, María
    • Learn by practice: Simulated Annealing
    • Genetic Algorithms
  • Day 3 / 9:30-13:30 / Vanessa
    • Learn by practice: Genetic Algorithms
  • Day 4 / 10:00-13:30 / Pedro
    • Estimation of Distribution Algorithms
Introduce yourself
  • Name
  • University / Country
  • Optimization experience
  • Expectations of the course
Optimization
  • An optimization problem is a pair (S, f),

with S being the search space (all the possible solutions), and f : S → ℝ a function to be optimized

  • A solution s* ∈ S is optimal if f(s*) ≥ f(s) for every s ∈ S (for maximization; ≤ for minimization)
  • Combinatorial optimization
  • Systematic search
State space landscape
  • Objective function defines state space landscape
Some optimization problems
  • TSP – Travelling Salesman Problem
  • The assignment problem
  • SAT – Satisfiability problem
  • The 0-1 knapsack problem

Important tasks

  • Find a representation of possible solutions
  • To be able to evaluate each of the possible solutions → “fitness function” or “objective function”
TSP
  • A salesman has to find a route which visits each of n cities, and which minimizes the total distance travelled
  • Given an integer n and an n × n matrix C = (c_ij),

where each c_ij is a nonnegative integer: which cyclic permutation π of the integers 1 to n minimizes the sum c_{1,π(1)} + c_{2,π(2)} + … + c_{n,π(n)} ?

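To make the objective concrete, here is a minimal Python sketch (not part of the original slides) that evaluates this sum for a candidate cyclic permutation; the matrix and permutation below are illustrative.

def tour_cost(dist, perm):
    """Cost of the cyclic permutation `perm` under the cost matrix `dist`.

    perm[i] is the city visited after city i (0-based), so the total is
    the sum over i of dist[i][perm[i]].
    """
    return sum(dist[i][perm[i]] for i in range(len(perm)))

# Illustrative 4-city instance.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
perm = [1, 3, 0, 2]           # the cycle 0 -> 1 -> 3 -> 2 -> 0
print(tour_cost(dist, perm))  # 2 + 4 + 8 + 9 = 23
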
TSP representations
  • Binary representation
    • Each city is encoded as a string of ⌈log₂ n⌉ bits.

Example: 8 cities → 3 bits

  • Path representation
    • A tour is represented as a list of n cities. If city i is the j-th element of the list, city i is the j-th city to be visited
  • Adjacency representation
    • City j is listed in position i if the tour leads from city i to city j

(3 5 7 6 4 8 2 1) → tour: 1-3-7-2-5-4-6-8

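A small Python sketch (not from the slides) that decodes the adjacency representation into the visiting order; it reproduces the example above.

def adjacency_to_path(adj, start=1):
    """Decode an adjacency representation into the visiting order.

    adj[i-1] = j means the tour goes from city i to city j (cities are
    1-based, as in the example above).
    """
    path, city = [start], adj[start - 1]
    while city != start:
        path.append(city)
        city = adj[city - 1]
    return path

print(adjacency_to_path([3, 5, 7, 6, 4, 8, 2, 1]))  # [1, 3, 7, 2, 5, 4, 6, 8]
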
The assignment problem
  • A set of n resources is available to carry out n tasks. If resource i is assigned to task j, it costs c_ij units.

Find an assignment that minimizes the total cost c_{1,π(1)} + c_{2,π(2)} + … + c_{n,π(n)}

  • Solution: a permutation π of the numbers 1, …, n
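
As an illustration (not from the slides), evaluating the total cost of an assignment is straightforward once the assignment is stored as a permutation; the cost matrix below is made up.

def assignment_cost(cost, assignment):
    """Total cost when resource i carries out task assignment[i].

    `assignment` is a permutation of 0..n-1; cost[i][j] is the cost of
    assigning resource i to task j.
    """
    return sum(cost[i][j] for i, j in enumerate(assignment))

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
print(assignment_cost(cost, [1, 0, 2]))  # 2 + 4 + 6 = 12
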
SAT
  • The satisfiability problem consists of finding a truth assignment that satisfies a well-formed Boolean expression
  • Many applications: VLSI test and verification, consistency maintenance, fault diagnosis, etc.
  • MAX-SAT: Find an assignment which satisfies the maximum number of clauses
SAT
  • Data sets in conjunctive normal form (cnf)
  • Example
    • Literals:
    • Clauses:
  • Representation? Fitness function?

...

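One possible answer to the representation and fitness questions, sketched in Python (not from the slides): represent an assignment as a list of booleans and, for MAX-SAT, count satisfied clauses. The DIMACS-style clause encoding used here is an assumption, not something the slides specify.

def satisfied_clauses(clauses, assignment):
    """Count the clauses satisfied by a truth assignment.

    Clauses use a DIMACS-like encoding: literal k means variable k is true,
    literal -k means variable k is false; assignment[k-1] is the value of
    variable k.
    """
    count = 0
    for clause in clauses:
        if any((lit > 0) == assignment[abs(lit) - 1] for lit in clause):
            count += 1
    return count

clauses = [[1, -2, 3], [-1, 2], [2, 3], [-3]]   # illustrative CNF formula
assignment = [True, True, False]                # x1=T, x2=T, x3=F
print(satisfied_clauses(clauses, assignment))   # 4 (all clauses satisfied)
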
The 0-1 knapsack problem
  • A set of n items is available to be packed into a knapsack with capacity C units. Item i has value v_i and uses up c_i units of capacity. Determine the subset of items which should be packed to maximize the total value without exceeding the capacity
  • Representation? Fitness function?
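
One possible answer, sketched in Python (not from the slides): represent a solution as a 0-1 vector over the items and use the total value as fitness, giving infeasible solutions fitness 0 (a penalty or repair scheme would be a common alternative).

def knapsack_fitness(values, costs, capacity, selection):
    """Total value of the selected items, or 0 if the capacity is exceeded."""
    weight = sum(c for c, s in zip(costs, selection) if s)
    value = sum(v for v, s in zip(values, selection) if s)
    return value if weight <= capacity else 0

values, costs = [60, 100, 120], [10, 20, 30]           # illustrative instance
print(knapsack_fitness(values, costs, 50, [0, 1, 1]))  # 220
print(knapsack_fitness(values, costs, 50, [1, 1, 1]))  # 0 (weight 60 > 50)
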
Heuristics
  • Faster than mathematical optimization (branch & bound, simplex, etc.)
  • Well developed → good solutions for some problems
  • Special heuristics:
    • Greedy algorithms – systematic
    • Hill-climbing (based on neighbourhood search) – randomized
Greedy algorithms
  • Step-by-step algorithms
  • Sometimes work well for optimization problems
  • A greedy algorithm works in phases. At each phase:
    • You take the best you can get right now, without regard for future consequences
    • You hope that by choosing a local optimum at each step, you will end up at a global optimum
Example: Counting money
  • Suppose you want to count out a certain amount of money, using the fewest possible bills and coins
  • A greedy algorithm to do this would be: at each step, take the largest possible bill or coin that does not overshoot
    • Example: To make $6.39, you can choose:
      • a $5 bill
      • a $1 bill, to make $6
      • a 25¢ coin, to make $6.25
      • a 10¢ coin, to make $6.35
      • four 1¢ coins, to make $6.39
  • For US money, the greedy algorithm always gives the optimum solution
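
A minimal Python sketch of this greedy counting procedure (not from the slides); amounts are in cents and the denomination list is illustrative.

def greedy_change(amount_cents, denominations):
    """Repeatedly take the largest denomination that does not overshoot."""
    taken = []
    for coin in sorted(denominations, reverse=True):
        while amount_cents >= coin:
            amount_cents -= coin
            taken.append(coin)
    return taken

us = [10000, 5000, 2000, 1000, 500, 100, 25, 10, 5, 1]  # US bills and coins, in cents
print(greedy_change(639, us))  # [500, 100, 25, 10, 1, 1, 1, 1] -> $6.39, as above
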
A failure of the greedy algorithm
  • In some (fictional) monetary system, “krons” come in 1 kron, 7 kron, and 10 kron coins
  • Using a greedy algorithm to count out 15 krons, you would get
    • A 10 kron piece
    • Five 1 kron pieces, for a total of 15 krons
    • This requires six coins
  • A better solution would be to use two 7 kron pieces and one 1 kron piece
    • This only requires three coins
  • The greedy algorithm results in a solution, but not in an optimal solution
Practice
  • Develop a greedy algorithm for the knapsack problem. Work in groups of two; one possible approach is sketched below
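
One possible greedy strategy for this exercise, sketched in Python; the slides do not say which greedy criterion is intended, so the value-to-cost ratio used here is an assumption.

def greedy_knapsack(values, costs, capacity):
    """Take items in decreasing value/cost ratio while they still fit.

    Like the kron example above, this heuristic can miss the optimal subset.
    """
    order = sorted(range(len(values)), key=lambda i: values[i] / costs[i], reverse=True)
    selection, remaining = [0] * len(values), capacity
    for i in order:
        if costs[i] <= remaining:
            selection[i] = 1
            remaining -= costs[i]
    return selection

print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50))  # [1, 1, 0]: value 160, while the optimum is 220
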
Local search algorithms
  • Based on neighbourhood system
  • Neighbourhood system:

with X the search space, we define a neighbourhood system N in X that assigns to each solution x ∈ X a set of neighbouring solutions N(x) ⊆ X

  • Examples: TSP (2-opt), SAT, assignment and knapsack
Local search algorithms
  • Basic principles:
    • Keep only a single (complete) state in memory
    • Generate only the neighbours of that state
    • Keep one of the neighbours and discard others
  • Key features:
    • No search paths
    • Neither systematic nor incremental
  • Key advantages:
    • Use very little memory (constant amount)
    • Find solutions in search spaces too large for systematic algorithms
TSP – 2 opt

[Figure: a tour in which edges A-D and B-C are removed and replaced by A-B and C-D]

New distance = old distance – dist(A-D) – dist(B-C) + dist(A-B) + dist(C-D)

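A Python sketch of a 2-opt move on the path representation (not from the slides): reversing a segment of the tour removes two edges and replaces them with two new ones, which is the kind of exchange the formula above describes. The distance matrix is illustrative.

def tour_length(dist, tour):
    """Length of the closed tour given in path representation (0-based cities)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour, i, k):
    """Reverse tour[i:k+1], replacing edges (tour[i-1], tour[i]) and
    (tour[k], tour[k+1]) by (tour[i-1], tour[k]) and (tour[i], tour[k+1]).
    Assumes 0 < i <= k < len(tour) - 1."""
    return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
tour = [0, 2, 1, 3]
print(tour_length(dist, tour), tour_length(dist, two_opt_move(tour, 1, 2)))  # 29 26
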
Neighbourhood search (Reeves93)
  • Step 1 (Initialization)
    • (i) Select a starting solution x_now ∈ X
    • (ii) Record the current best: x_best = x_now and best_cost = f(x_best)
  • Step 2 (Choice and termination)
    • Choose x_next ∈ N(x_now). If the choice criteria cannot be satisfied or if other termination criteria apply, then the method stops
  • Step 3 (Update)
    • Re-set x_now = x_next, and if f(x_now) is better than best_cost, perform step 1.ii. Return to Step 2
Hill climbing
  • Different procedures depending on choice criteria and termination criteria
  • Hill climbing: only permit moves to neighbours that improve the current solution
  • (Choice and termination)
  • Choose x_next ∈ N(x_now) such that f(x_next) is better than f(x_now),

and terminate if no such x_next can be found

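A minimal Python hill climber following this choice rule (not from the slides); `neighbours` and `fitness` are placeholders for problem-specific functions, and maximization is assumed.

def hill_climb(start, neighbours, fitness):
    """Move to the best improving neighbour until no neighbour improves."""
    current = start
    while True:
        best = max(neighbours(current), key=fitness, default=current)
        if fitness(best) <= fitness(current):
            return current          # local (possibly global) maximum
        current = best

# Toy usage: maximize -(x - 7)^2 over the integers, neighbours are x - 1 and x + 1.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2))  # 7
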
Hill-climbing: 8-Queens problem
  • Complete state formulation:
    • All 8 queens on the board,one per column
    • Neighbourhood: move one queen to a different place in the same column
    • Fitness function: number of pairs of queens that are attacking each other
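
A Python sketch of this formulation (not from the slides): a state is a tuple giving the row of the queen in each column, the fitness is the number of attacking pairs (0 means solved), and neighbours move one queen within its column.

import random

def attacking_pairs(state):
    """Number of pairs of queens that attack each other (same row or diagonal)."""
    n, count = len(state), 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1:
                count += 1
    return count

def queen_neighbours(state):
    """All states reachable by moving one queen to another row in its column."""
    return [state[:col] + (row,) + state[col + 1:]
            for col in range(len(state))
            for row in range(len(state)) if row != state[col]]

state = tuple(random.randrange(8) for _ in range(8))
print(attacking_pairs(state), len(queen_neighbours(state)))  # e.g. 4 56 (first value varies)
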
Problematic landscapes
  • Local maximum: a peak that is higher than all its neighbours, but not a global maximum
  • Plateau: an area where the elevation is constant
    • Local maximum
    • Shoulder
  • Ridge: a long, narrow, almost plateau-like landscape
Random-Restart Hill-Climbing
  • Method:
    • Conduct a series of hill-climbing searches from randomly generated initial states
    • Stop when a goal is found
  • Analysis:
    • Requires about 1/p restarts in expectation, where p is the probability that a single hill-climbing search succeeds

(1 success + roughly (1/p – 1) failures)

Hill-Climbing: Performance on the 8-Queen Problem
  • From randomly generated start state
  • Success rate:
    • 86% - gets stuck
    • 14% - solves problem
  • Average cost:
    • 4 steps to success
    • 3 steps to get stuck
Hill-Climbing with Sideways Moves
  • Sideways moves: moves at same fitness
  • Must limit number of sideways moves!
  • Performance on the 8-Queen problem:
    • Success rate:
      • 6% - gets stuck
      • 94% - solves problem
    • Average cost:
      • 21 steps to succeed
      • 64 steps to get stuck
Hill-climbing: Further variants
  • Stochastic hill-climbing:
    • Choose at random from among uphill moves
  • First-choice hill-climbing:
    • Generate neighbourhood in random order
    • Move to the first generated neighbour that represents an uphill move
Practice
  • Develop a hill-climbing algorithm for the knapsack problem. Work in groups of two
Shape of State Space Landscape
  • Success of hill-climbing depends on shape of landscape
  • Shape of landscape depends on problem formulation and fitness function
  • Landscapes for realistic problems often look like a worst-case scenario
  • NP-hard problems typically have an exponential number of local maxima
  • Despite the above, hill-climbers tend to have good performance
Simulated annealing
  • Failings of neighbourhood search:
    • Propensity to deliver solutions which are only local optima
    • Solutions depend on the initial solution
  • Reduce the chance of getting stuck in a local optimum by allowing moves to inferior solutions
  • Developed by Kirkpatrick et al. ’83: simulation of the cooling of material in a heat bath could be used to search the feasible solutions of an optimization problem
Simulated annealing
  • If a move from one solution x to a neighbouring but inferior solution x'

results in a change in value δ (the amount by which the solution is worse), the move to x' is still accepted if exp(-δ / T) > R

T (temperature) – control parameter

R – uniform random number in [0, 1)

Simulated annealing: Intuition
  • Minimization problem; imagine the state space landscape laid out on a table
  • Drop a ping-pong ball at a random point → it rolls into a local minimum
  • Shake the table → the ball tends to find a different minimum
  • Shake hard at first (high temperature) but gradually reduce the intensity (lower temperature)
Simulated annealing: Algorithm

current = problem.initialState
for t = 1 to ∞
    T = schedule(t)
    if T = 0 then return current
    next = a random neighbour of current
    ΔE = f(next) - f(current)
    if ΔE > 0 then current = next
    else current = next with probability exp(ΔE / T)

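A runnable Python version of this pseudocode, assuming maximization; the geometric schedule, the cut-off temperature and the toy 5-bit objective below are illustrative assumptions rather than anything the slides prescribe.

import math
import random

def simulated_annealing(initial, neighbour, fitness, t0=1000.0, alpha=0.9, steps=10000):
    """Simulated annealing for maximization.

    neighbour(s) returns a random neighbour of s; the temperature follows the
    geometric schedule T(t) = t0 * alpha**t and the run stops once T is tiny
    (standing in for the pseudocode's "T = 0").
    """
    current = initial
    for t in range(steps):
        temperature = t0 * alpha ** t
        if temperature < 1e-6:
            break
        candidate = neighbour(current)
        delta = fitness(candidate) - fitness(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
    return current

f = lambda x: x ** 3 - 60 * x ** 2 + 900 * x + 100  # assumed example objective
flip = lambda x: x ^ (1 << random.randrange(5))     # flip one of the 5 bits
print(simulated_annealing(random.randrange(32), flip, f))  # prints the final x; the global maximum is x = 10 with f = 4100
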
Simulated annealing: Simple example
  • Maximize f(x) = x³ - 60x² + 900x + 100

x coded as a 5-bit binary integer in [0, 31]

maximum at x = 10 (01010) → f = 4100

  • With ‘greedy’ we can find 3 local maxima
Simulated annealing: Simple example

The temperature is not high enough to move out of this local optimum

Simulated annealing: Generic decisions
  • Initial temperature

Should be ‘suitably high’: most of the initial moves must be accepted (> 60%)

  • Cooling schedule

Temperature is reduced after every move

Two methods:

T ← αT, with α close to 1 (geometric reduction)

T ← T / (1 + βT), with β close to 0 (Lundy and Mees)

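For illustration, the two reduction rules above written as Python functions (not from the slides); the parameter values are arbitrary.

def geometric_cooling(t, alpha=0.95):
    """T <- alpha * T, with alpha close to 1."""
    return alpha * t

def lundy_mees_cooling(t, beta=0.001):
    """T <- T / (1 + beta * T), with beta close to 0."""
    return t / (1 + beta * t)

t1 = t2 = 100.0
for _ in range(50):
    t1, t2 = geometric_cooling(t1), lundy_mees_cooling(t2)
print(round(t1, 2), round(t2, 2))  # 7.69 16.67: with these parameters the geometric schedule cools faster
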
Simulated annealing: Generic decisions
  • Number of iterations
  • Other factors:
    • Reannealing
    • Restricted neighbourhoods
    • Order of searching