Particle swarm optimization algorithms to continuous problem


Particle Swarm Optimization Algorithms to Continuous Problem

Monday, March 10, 2014


Yoon-Teck Bau, Hong-Tat Ewe, Chin-Kuan Ho

Faculty of Information Technology

Multimedia University, Malaysia

{ytbau, htewe, ckho}

Talk Outlines

  • Research Objective

  • Particle Swarm Optimization (PSO) Algorithms Overview

  • PSO to Continuous Problem

  • PSO and Non-linear Maximization Problem

  • Experiments and Results

  • Conclusions

  • References

Research Objective

  • To study PSO on continuous problems

  • To compare the performance of genetic algorithms with PSO on a maximization problem

  • To share and exchange knowledge related to PSO and swarm intelligence

PSO Algorithms Overview

  • Introduced by Russell Eberhart (an electrical engineer) and James Kennedy (a social psychologist) in 1995

  • Belongs to the categories of swarm intelligence techniques and evolutionary algorithms for optimization

  • Inspired by the social behavior of bird flocking, which was studied by Craig Reynolds (a computer graphics researcher) in the late 1980s and early 1990s

  • The optimization problem representation is similar to the gene encoding methods used in GAs, but in PSO the variables are called dimensions, which together form a multi-dimensional hyperspace.

  • "Particles" fly in this hyperspace and try to find the global minimum/maximum, their movement being governed by simple mathematical equations.

PSO Basic Mathematical Equations

  • Basic mathematical equations in PSO, updating the velocity and position of particle i in dimension j:

V[i][j] = C1*V[i][j] + C2*(P[i][j] - X[i][j]) + C3*(G[i][j] - X[i][j])

X[i][j] = X[i][j] + V[i][j]

  • The first term depends on the particle itself (inertia), the second on the particle's personal best P, and the third on the particle's neighbours' best G.

Repulsive PSO (1)

  • RPSO is a global optimization algorithm belonging to the class of stochastic evolutionary global optimizers; it is a variant of particle swarm optimization (PSO).

  • Symbols used in the RPSO velocity update:
R1, R2, R3 : random numbers between 0 and 1
ω : inertia weight between 0.01 and 0.7
y : best position of a particle
ŷ : best position of a randomly chosen other particle from within the swarm
z : a random velocity vector
a, b, c : constants

Repulsive PSO (2)

  • RPSO is a realization of PSO in which there is a repulsion between particles; this can prevent the swarm from being trapped in local minima (a premature convergence that would lead the optimization algorithm to fail to find the global optimum).

  • The main difference between PSO and RPSO is the propagation mechanism, i.e., the velocity update v(t+1) used to determine new positions for a particle in the search space.

  • RPSO is capable of finding global optima in more complex search spaces. On the other hand, compared to PSO it may be slower on certain types of optimization problems.
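The repulsive propagation mechanism can be sketched in Java for a single dimension of one particle. This is a minimal sketch of one common RPSO formulation, v(t+1) = ω·v + a·R1·(y − x) + ω·b·R2·(ŷ − x) + ω·c·R3·z, using the symbols listed above; the class and method names are illustrative, not from the paper:

```java
import java.util.Random;

// Sketch of the RPSO velocity update (one dimension of one particle).
// w is the inertia weight, y the particle's own best position, yHat the
// best position of a randomly chosen other particle, z a random velocity
// component, and a, b, c are constants. The yHat and z terms supply the
// repulsion/perturbation that helps the swarm escape local minima.
public class RpsoUpdate {
    static final Random RNG = new Random();

    static double nextVelocity(double v, double x, double y, double yHat,
                               double z, double w, double a, double b, double c) {
        double r1 = RNG.nextDouble(), r2 = RNG.nextDouble(), r3 = RNG.nextDouble();
        // v(t+1) = w*v + a*R1*(y - x) + w*b*R2*(yHat - x) + w*c*R3*z
        return w * v + a * r1 * (y - x) + w * b * r2 * (yHat - x) + w * c * r3 * z;
    }

    public static void main(String[] args) {
        double v = nextVelocity(0.5, 1.0, 2.0, 3.0, 0.1, 0.5, 1.0, 1.0, 1.0);
        System.out.println("new velocity = " + v);
    }
}
```

Note that the ŷ and z terms are scaled by the inertia weight ω, so they fade together with the particle's own momentum as ω is reduced.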

PSO Pseudocode

for i = 1 to number of particles n
    for j = 1 to number of dimensions m
        C2 = uniform random number in [0, 1]
        C3 = uniform random number in [0, 1]
        V[i][j] = C1*V[i][j] + C2*(P[i][j] - X[i][j]) + C3*(G[i][j] - X[i][j])
        X[i][j] = X[i][j] + V[i][j]
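The pseudocode can be fleshed out into a small, self-contained Java sketch. The objective function below (maximize f(x) = −Σ x_j², optimum 0 at the origin) is an illustrative stand-in, since the paper's own objective is not reproduced in the transcript; the constant names match the pseudocode:

```java
import java.util.Random;

// Minimal global-best PSO following the pseudocode above. C1 is the inertia
// weight; C2 and C3 are fresh uniform random numbers per dimension.
public class SimplePso {
    static final Random RNG = new Random(42);

    // Stand-in objective: maximize -sum(x_j^2), best value 0 at the origin.
    static double fitness(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return -s;
    }

    static double[] optimize(int n, int m, int iterations) {
        double c1 = 0.7;                        // inertia weight
        double[][] X = new double[n][m];        // positions
        double[][] V = new double[n][m];        // velocities
        double[][] P = new double[n][m];        // personal best positions
        double[] pFit = new double[n];
        double[] G = new double[m];             // global best position
        double gFit = Double.NEGATIVE_INFINITY;

        for (int i = 0; i < n; i++) {           // random initialization
            for (int j = 0; j < m; j++) X[i][j] = RNG.nextDouble() * 10 - 5;
            P[i] = X[i].clone();
            pFit[i] = fitness(X[i]);
            if (pFit[i] > gFit) { gFit = pFit[i]; G = X[i].clone(); }
        }
        for (int t = 0; t < iterations; t++) {
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < m; j++) {
                    double c2 = RNG.nextDouble();
                    double c3 = RNG.nextDouble();
                    V[i][j] = c1 * V[i][j] + c2 * (P[i][j] - X[i][j])
                                           + c3 * (G[j] - X[i][j]);
                    X[i][j] += V[i][j];
                }
                double f = fitness(X[i]);        // update personal/global bests
                if (f > pFit[i]) { pFit[i] = f; P[i] = X[i].clone(); }
                if (f > gFit)    { gFit = f;   G = X[i].clone(); }
            }
        }
        return G;
    }

    public static void main(String[] args) {
        double[] best = optimize(20, 3, 200);
        System.out.printf("best fitness = %.6f%n", fitness(best));
    }
}
```

With 20 particles and 200 iterations this sketch drives the stand-in objective very close to its optimum of 0.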

PSO Algorithms Common Parameters

  • c1 (the inertia weight ω) is an inertia constant. Good values are usually slightly less than 1.

  • c2 and c3 are two random vectors, with each component generally a uniform random number between 0 and 1.

  • Very frequently the value of c1/ω is made to decrease over time; e.g., one might run the PSO for a certain number of iterations and decrease c1 linearly from a starting value (say 0.9) to a final value (say 0.4), in order to favor exploitation over exploration in the later stages of the search.
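The linearly decreasing inertia schedule described above is a one-liner; a small sketch (names illustrative):

```java
// Linearly decreasing inertia weight: starts at wStart (e.g., 0.9) and
// falls to wEnd (e.g., 0.4) over maxIter iterations, shifting the search
// from exploration toward exploitation.
public class InertiaSchedule {
    static double inertia(int iter, int maxIter, double wStart, double wEnd) {
        return wStart - (wStart - wEnd) * iter / (double) maxIter;
    }

    public static void main(String[] args) {
        for (int t = 0; t <= 100; t += 25)
            System.out.printf("iter %3d: w = %.3f%n", t, inertia(t, 100, 0.9, 0.4));
    }
}
```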

PSO to Continuous Problem

  • In a continuous optimization problem, as opposed to discrete optimization, the variables used in the objective function can assume real values, e.g., values from intervals of the real line.

  • The particles "communicate" information they find about each other by updating their velocities in terms of local and global bests; when a new best is found, the particles will change their positions accordingly so that the new information is "broadcast" to the swarm.

  • The particles are always drawn back both to their own personal best positions and also to the best position of the entire swarm.

  • They also have a stochastic exploration capability via the random multipliers c2 and c3.

  • Typical convergence conditions include reaching a certain number of iterations, reaching a certain fitness value, and so on.

PSO and Non-linear Maximization Problem

Non-linear maximization problem:

f(x1, x2, x3) is maximum, subject to 0 <= x1, x2, x3 <= 10, at

x1 = 10
x2 = 0
x3 = 10

where f(x1, x2, x3) = 110.

Experiments and Results (1)

  • Both the PSO and GA approaches are implemented in Java v6.0 on a Pentium 4 1.80 GHz CPU with 512 MB RAM, running Windows XP.

  • The GA uses a roulette-wheel selection scheme, an elitist model, one-point crossover, and uniform mutation.
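Two of the listed GA operators can be sketched in Java. These are the standard textbook forms, not necessarily the paper's exact implementation; roulette-wheel selection assumes non-negative fitness values:

```java
import java.util.Random;

// Sketch of roulette-wheel selection (probability proportional to fitness)
// and one-point crossover on real-valued chromosomes.
public class GaOperators {
    static final Random RNG = new Random();

    // Spin the wheel: pick index i with probability fitness[i] / sum(fitness).
    static int rouletteSelect(double[] fitness) {
        double total = 0;
        for (double f : fitness) total += f;
        double r = RNG.nextDouble() * total;
        double cum = 0;
        for (int i = 0; i < fitness.length; i++) {
            cum += fitness[i];
            if (r <= cum) return i;
        }
        return fitness.length - 1;               // guard against rounding
    }

    // Exchange the tails of two parents after a random cut point.
    static double[][] onePointCrossover(double[] a, double[] b) {
        int point = 1 + RNG.nextInt(a.length - 1);   // cut in 1..len-1
        double[] c1 = a.clone(), c2 = b.clone();
        for (int j = point; j < a.length; j++) { c1[j] = b[j]; c2[j] = a[j]; }
        return new double[][] { c1, c2 };
    }

    public static void main(String[] args) {
        int i = rouletteSelect(new double[] {1.0, 5.0, 4.0});
        double[][] kids = onePointCrossover(new double[] {1, 2, 3},
                                            new double[] {7, 8, 9});
        System.out.println("selected " + i + ", child0[0] = " + kids[0][0]);
    }
}
```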


Experiments and Results (2)

GA's Parameters

PSO's Parameters

Experiments and Results (3)


Best max fitness value = 109.78

Best member: x1 = 9.9931, x2 = 0.0075, x3 = 9.9949

Total time (ms) = 3469

Best max fitness value = 110.00

Best member: x1 = 10.0, x2 = 0.0, x3 = 10.0

Total time (ms) = 344

Mean # of iterations = 72.54

Mean fn val = 110.000000

Std. dev. fn val = 0.000000

Success rate = 100.00%

Conclusions

  • PSO has proven both very effective and quick when applied to a diverse set of optimization problems.

  • The GA's results can be much better if uniform mutation, MU(x) := U([a, b]), is replaced by a Gaussian mutation, where x ∈ [a, b], m is the mean, s is the variance, and Ri is a sum of 12 random numbers from the range [0, 1].

  • In the future, it will be interesting to study and compare the performance of PSO with GAs and also ACO on discrete types of problems.
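The Gaussian mutation suggested in the conclusions can be sketched as follows. The sum of 12 uniform [0, 1] draws has mean 6 and variance 1, so (sum − 6) approximates a standard normal; scaling and clamping to [a, b] is an assumption about how the mutated gene is kept feasible:

```java
import java.util.Random;

// Sketch of a Gaussian mutation built from 12 uniform draws, as suggested
// in the conclusions. approxStdNormal() uses the classic trick that
// sum of 12 U(0,1) samples, minus 6, approximates N(0, 1).
public class GaussianMutation {
    static final Random RNG = new Random();

    static double approxStdNormal() {
        double sum = 0;
        for (int i = 0; i < 12; i++) sum += RNG.nextDouble();
        return sum - 6.0;       // mean 0, variance 1
    }

    // Perturb gene value x by a Gaussian step of scale s, clamped to [a, b].
    static double mutate(double x, double s, double a, double b) {
        double y = x + s * approxStdNormal();
        return Math.max(a, Math.min(b, y));
    }

    public static void main(String[] args) {
        System.out.println("mutated = " + mutate(5.0, 0.5, 0.0, 10.0));
    }
}
```

Unlike uniform mutation, which replaces the gene with any value in [a, b], this keeps most offspring near the parent value while still allowing occasional larger jumps.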

References

  • Kennedy, J., Eberhart, R. C., and Shi, Y. (2001). Swarm Intelligence. USA: Academic Press.

  • Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. 3rd, Revised and Extended Edition. USA: Springer.