PSO and ASO Variants/Hybrids/Example Applications & Results


Lecture 12 of Biologically Inspired Computing

Purpose:

Not just to show variants etc. for these specific algorithms, but to present them as examples of the typical ways in which these and similar methods (especially EAs) are engineered for a given problem.

Hybridisation
• A very common algorithm engineering method.
• It simply means: creating an algorithm that combines aspects of two or more different algorithms
• Very many hybridisations have been researched.
• E.g.:
• ACO + Local Search (e.g. hillclimbing)
• 1. Run 1 generation of ACO with m ants
• 2. Run LS on each of the m ants
• 3. Repeat from 1 until a termination condition is met
• EA + Local Search
• The same story, but these are called Memetic Algorithms
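The hybrid loop above can be sketched in a few lines. This is a toy illustration only: the one-max bit-string objective and the single-flip "EA generation" are assumptions chosen to keep the sketch self-contained, not part of any specific published memetic algorithm.

```python
import random

def fitness(sol):
    # Toy objective (an assumption for illustration): maximise the number of 1s.
    return sum(sol)

def hillclimb(sol, n_trials=30):
    # Local search: flip a random bit, keep the change only if it improves.
    best = sol[:]
    for _ in range(n_trials):
        cand = best[:]
        i = random.randrange(len(cand))
        cand[i] = 1 - cand[i]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def hybrid_generation(population):
    # 1. one (toy) EA generation: mutate each of the m individuals
    offspring = []
    for ind in population:
        child = ind[:]
        i = random.randrange(len(child))
        child[i] = 1 - child[i]
        offspring.append(child)
    # 2. run local search on each of the m offspring
    return [hillclimb(c) for c in offspring]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(5)]
for _ in range(3):            # 3. repeat until a termination condition is met
    pop = hybrid_generation(pop)
best = max(pop, key=fitness)
```

The same skeleton works for ACO + LS: replace the mutation step with one generation of ant tour construction and run the local search on each ant's tour.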
Hybridisation II
• Hybridisation with appropriate constructive heuristics is common, as you know. E.g.:
• Generate Initial Population with several runs of a stochastic constructive heuristic.
• Carry on from there using ACO, PSO, or whatever.
• Another very common hybrid approach:
• Run your standard, single method (e.g. an EA).
• Take the best solution from 1, and run Local Search (e.g. hillclimbing) on it for a while to try to find improvements.
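The first of these hybrids, seeding the initial population with a stochastic constructive heuristic, might look like this for the TSP. Nearest-neighbour is a standard constructive heuristic; making the start city random is one simple way to get a different tour on each run (function names are my own):

```python
import random

def nearest_neighbour_tour(dist, start):
    # Constructive heuristic: repeatedly move to the nearest unvisited city.
    n = len(dist)
    tour = [start]
    remaining = set(range(n)) - {start}
    while remaining:
        cur = tour[-1]
        nxt = min(remaining, key=lambda c: dist[cur][c])
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def seeded_population(dist, pop_size):
    # Several runs of the heuristic (stochastic via the random start city)
    # give the initial population; ACO, PSO, or whatever carries on from there.
    return [nearest_neighbour_tour(dist, random.randrange(len(dist)))
            for _ in range(pop_size)]
```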
• All the algorithms we have looked at have parameters. E.g. Mutation rate, crossover rate, population size, number of ants, C1 and C2 in PSO, and so on.
• Instead of (say) keeping the mutation strength at 1 gene per chromosome, why not vary it during the algorithm run? Why?
• ALL search-based optimisation algorithms balance two things:
• Exploration: to boldly go where no chromosome/particle/ant/… has gone before. Reaching out to new areas of the search space.
• Exploitation: focussing narrowly on investigating the region close to the best so far point, in search of better points close by.
• Too much exploration?

Much too slow progress – high risk/low reward

• Too much exploitation?

Too often stuck in suboptimal areas

• In PSO, let d be the average distance that a particle moves, at the beginning of the run.
• How does d change with time?
• What is d like after running the algorithm for a very long time? Is this a good thing?
• How important is C2 (influence of global best) at the beginning of the run, and at the end of the run?
• In an EA with both crossover and mutation: how important is mutation at the beginning of the run, and at the end?

In EAs, it may seem sensible for mutation strength to start low, say at mS, and gradually increase linearly to mF (e.g. perhaps 1/L and 5/L, where L is the chromosome length). Given a limit of, say, 1,000 generations, the rate will increase by a simple pre-calculated amount per generation.
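The pre-calculated schedule is one line of arithmetic. A sketch, using the example values above (the function name and defaults are illustrative):

```python
def linear_mutation_rate(gen, max_gens=1000, L=50):
    # Pre-calculated schedule: rises linearly from mS = 1/L to mF = 5/L
    # by the same fixed amount each generation.
    m_start, m_final = 1.0 / L, 5.0 / L
    return m_start + (m_final - m_start) * gen / max_gens
```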

But this is not really adapting to the current situation in the search process. It may be fine for some problems, but on others the population may converge at 100 generations, or it may still be very diverse at 1,000 generations, or (of course), the rates themselves may be very unsuitable for the problem.

So it is common to use ‘truly’ adaptive techniques that react to the current state of the population. The mutation rate may be based directly on measures of diversity in the current population, measures of success in the last few mutation events, and so on.
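One way to react to "success in the last few mutation events" is a multiplicative update in the spirit of Rechenberg's 1/5 success rule from evolution strategies; the sketch below is a generic illustration, with the target ratio, factor, and bounds chosen as plausible assumptions:

```python
def adapt_mutation_rate(rate, successes, attempts,
                        target=0.2, factor=1.1,
                        lo=0.001, hi=0.5):
    # 'Truly' adaptive control: react to how often recent mutations
    # actually produced an improvement.
    if attempts == 0:
        return rate
    if successes / attempts > target:
        rate *= factor        # mutations are paying off: be bolder
    else:
        rate /= factor        # mostly failing: be more cautious
    return min(hi, max(lo, rate))
```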

Useful metrics

Distance between two individuals:

Hamming distance: no. of genes that are different

Euclidean distance: only suitable for …. ?

Edge distance:

How different are the permutations 1. ABCDEF and 2. EDFABC?

Edge-sets: 1. AB, BC, CD, DE, EF, FA

2. AB, BC, CE, ED, DF, FA

There are various ways to base a distance metric on edge sets
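One of those ways, counting the edges of one tour that are missing from the other, is easy to sketch (treating the permutation as a cyclic tour with undirected edges; function names are my own):

```python
def edges(perm):
    # Undirected edge set of a cyclic tour given as a permutation.
    n = len(perm)
    return {frozenset((perm[i], perm[(i + 1) % n])) for i in range(n)}

def edge_distance(p, q):
    # One simple metric: number of edges of p that are absent from q.
    return len(edges(p) - edges(q))
```

On the example above, tours ABCDEF and EDFABC share edges AB, BC, DE, FA, so they differ in 2 of their 6 edges.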

Distance metric for grouping problems?

Population diversity:

Mean values and variances for each gene.

Mean distance between pairs of individuals

Others?
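The "mean distance between pairs of individuals" measure is straightforward to compute; a sketch using Hamming distance as the pairwise metric (any of the metrics above could be plugged in instead):

```python
from itertools import combinations

def hamming(a, b):
    # No. of genes that are different.
    return sum(x != y for x, y in zip(a, b))

def mean_pairwise_distance(population, dist=hamming):
    # Population diversity: mean distance over all pairs of individuals.
    pairs = list(combinations(population, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)
```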

Ants:

Population diversity measures applied to the set of ants’ solutions

Diversity measures for the amount of pheromone per link

Mutation strength = f(D), some function that increases as diversity falls

(where D varies from 0 (all the same) to 1 (maximally different))

Or: a form of genewise mutation:

Prob of mutating gene rises as the variance of that gene in the population falls.

Or: Run EA as normal, but:

pop diversity > Dthresh: only use crossover

pop diversity < Dthresh: only use mutation

Or: Run PSO as normal, but:

pop diversity < Dthresh: do one iteration where the particle velocities are randomly chosen, then move on.
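Both of these threshold-based variants reduce to a few lines of control logic. A sketch (the threshold value, v_max, and function names are illustrative assumptions):

```python
import random

def ea_operator(diversity, d_thresh=0.3):
    # EA variant: crossover only while the population is diverse,
    # mutation only once diversity drops below the threshold.
    return "crossover" if diversity > d_thresh else "mutation"

def pso_diversity_kick(velocities, diversity, d_thresh=0.3, v_max=1.0):
    # PSO variant: if diversity has collapsed, do one iteration with
    # randomly chosen particle velocities, then carry on as normal.
    if diversity < d_thresh:
        return [[random.uniform(-v_max, v_max) for _ in v] for v in velocities]
    return velocities
```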

The Vehicle Routing Problem

Depot

Customer

A number of vehicles are at the depot. Each customer has to be visited precisely once. Each vehicle needs a route that takes it from the depot and back to the depot. What is the least-cost (distance/fuel/vehicles) solution?

Several vehicles are normally needed, since (e.g.) visiting 3 or 4 customers may take the whole day, but all customers need to be serviced today. Also, vehicles have capacity constraints.
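Evaluating a candidate VRP solution means summing route distances while checking the two constraints just described. A minimal sketch, assuming node 0 is the depot and the solution is a list of routes (one per vehicle):

```python
def vrp_cost(routes, dist, demand, capacity):
    # Total distance of a candidate VRP solution. Every route starts and
    # ends at depot 0. Returns None if a route exceeds vehicle capacity
    # or the customers are not each visited exactly once.
    visited = []
    total = 0.0
    for route in routes:
        if sum(demand[c] for c in route) > capacity:
            return None
        stops = [0] + route + [0]      # out from the depot and back
        total += sum(dist[a][b] for a, b in zip(stops, stops[1:]))
        visited.extend(route)
    if sorted(visited) != list(range(1, len(demand))):
        return None                    # each customer precisely once
    return total
```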

2-opt mutation for TSP/VRP etc…

Choose two edges at random

Reconnect in a different way (there is only one valid new way)

“2-Opt heuristic”

2-opt turns out to be an excellent mutation operator for the TSP and similar problems.

The “2-opt heuristic” is an algorithm based on it. There are variants, but it comes down to hillclimbing for n trials, until no further improvement is possible, using 2-opt as the mutation operator.
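A minimal sketch of the 2-opt move and the resulting hillclimbing heuristic (tours are permutations of city indices; the exhaustive scan over edge pairs is one common variant):

```python
def two_opt_move(tour, i, j):
    # Remove edges (i, i+1) and (j, j+1); the only valid reconnection
    # reverses the segment between them.
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_opt_hillclimb(tour, dist):
    # The "2-opt heuristic": keep applying improving 2-opt moves
    # until no further improvement is possible.
    n = len(tour)
    def length(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = two_opt_move(tour, i, j)
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour
```

On a unit square, the self-crossing tour 0→2→1→3 is repaired into the optimal perimeter tour in a single move.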

Bullnheimer et al (97) Ant System for the VRP

The main difference: the standard transition rule is kept, but extra pheromone is laid by the ant with the best tour.

Results on benchmark problems: deviation from best-known solution (smaller is better), and run times.

AS (ant) is pretty near state of the art, with good run times.

The [rectangular] Cutting Stock problem

You have a number of shapes to cut out from a large sheet.

How to do it, so as to minimise waste?
