
A review of Particle Swarm Optimization



  1. A review of Particle Swarm Optimization. Jeonghwa Moon. Advisor: Andreas A. Linninger. 6/21/2006. Laboratory for Product and Process Design, Department of Chemical Engineering, University of Illinois, Chicago, IL 60607, U.S.A.

  2. Contents
  • Particle Swarm Optimization
  • Basic concept and definition
  • Equations of PSO
  • Comparison of PSO and GA
  • Particle Swarm for Multiobjective Optimization
  • Weighted aggregation approach
  • Vector Evaluated Particle Swarm Optimization (VEPSO)
  • Conclusion

  3. Introduction to particle swarm optimization
  • Definition and characteristics
  • PSO is a swarm intelligence method formulated by Kennedy and Eberhart (1995).
  • It models the social behavior of animals such as bird flocking and fish schooling (moving synchronized, without colliding).
  • Each particle moves with a velocity and updates that velocity based on the local and global best solutions.
  • Comparison with the Genetic Algorithm
  • PSO is similar to GA in that it uses a population,
  • but it has no evolution operators: natural selection, crossover, mutation.
  • It is an evolutionary algorithm that does not use "survival of the fittest".
  • Advantages
  • Easy to implement, computationally inexpensive.
  • It does not require gradient information.
  • Well suited for unconstrained global optimization problems.
  [Figure: a swarm is a set of particles in the search area; each particle knows its position, velocity, best historical position, and the best present position]

  4. Comparison of GA & PSO
  [Figure: side-by-side flowcharts of the two algorithms, traced on a four-particle example]
  • PARTICLE SWARM OPTIMIZATION: Initialize swarm → fitness calculation → best particle update → velocity update → position update
  • GENETIC ALGORITHM: Initial population → fitness calculation → selection → mating → mutation

  5. Equations of PSO
  • Notation
  • The search space is D-dimensional
  • Particle: xi = (xi1, xi2, ..., xiD); velocity: vi = (vi1, vi2, ..., viD)
  • The index of the best particle in the swarm: g
  • The best previous position of the i-th particle: Pi = (pi1, pi2, ..., piD)
  • Update equations (for each dimension d):
  vid = w·vid + c1·r1·(pid − xid) + c2·r2·(pgd − xid)
  xid = xid + vid
  where w is the inertia weight, c1 and c2 are acceleration constants, and r1, r2 are uniform random numbers in [0, 1]
  • The three velocity terms:
  (1) w·vid: the particle's previous velocity (inertia)
  (2) c1·r1·(pid − xid): distance between the best previous position of the particle and its current position (cognitive term)
  (3) c2·r2·(pgd − xid): distance between the globally best particle and the current position (social term)
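The update equations above can be sketched in Python. This is a minimal illustration for a single particle; the parameter values (w = 0.7, c1 = c2 = 1.5) are assumed defaults, not values taken from the slides:

```python
import random

def update_particle(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a single particle (positions as lists of floats).

    Term (1): w*v           -- inertia (the particle's previous velocity)
    Term (2): c1*r1*(p - x) -- pull toward the particle's own best position
    Term (3): c2*r2*(g - x) -- pull toward the swarm's globally best position
    """
    new_v = []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        new_v.append(w * v[d]
                     + c1 * r1 * (p_best[d] - x[d])
                     + c2 * r2 * (g_best[d] - x[d]))
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

With c1 = c2 = 0 and w = 1 the particle simply coasts along its current velocity, which makes the role of the inertia term easy to see.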

  6. Particle Swarm for Multiobjective Optimization
  • Problem formulation of multiobjective optimization: minimize the objective functions fi(x), i = 1..k, simultaneously, subject to the inequality constraints gi(x) < 0, i = 1..m.
  • The goal of MO is to provide a set of solutions that are Pareto optimal.
  • Traditional gradient-based optimization techniques can be used to detect Pareto-optimal solutions, but:
  • the objectives have to be aggregated into one single objective function, and
  • only one solution can be detected per optimization run.
  • PSO is well suited to MO because it can search for multiple Pareto-optimal solutions in a single run.

  7. MO: Weighted aggregation approach
  • The most common approach for coping with MO
  • Definition: the objectives are combined into a single function F(x) = Σ wi·fi(x), i = 1..k, with Σ wi = 1 and wi ≥ 0
  • Three types of weighted aggregation approach: conventional (CWA, fixed weights), bang-bang (BWA), and dynamic (DWA), in which the weights are changed abruptly or gradually during the run
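As a sketch of the dynamic variant (DWA), the weight schedule below follows the form used in the weighted-aggregation literature cited here, w1(t) = |sin(2πt/F)| for two objectives, so the scalarized objective slowly sweeps across trade-offs; the period F = 100 is an assumed value:

```python
import math

def dwa_weights(t, F=100):
    """Dynamic Weighted Aggregation (DWA) schedule for two objectives:
    w1 oscillates slowly between 0 and 1 over a period of F iterations,
    and w2 = 1 - w1, so the weights always sum to 1."""
    w1 = abs(math.sin(2 * math.pi * t / F))
    return w1, 1.0 - w1

def aggregate(f1, f2, x, t, F=100):
    """Scalarize two objectives at iteration t: F(x) = w1*f1(x) + w2*f2(x)."""
    w1, w2 = dwa_weights(t, F)
    return w1 * f1(x) + w2 * f2(x)
```

At t = 0 the swarm minimizes f2 alone; a quarter-period later it minimizes f1 alone, visiting intermediate weightings in between.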

  8. Comparison of results of each weighted aggregation approach-1
  • Test problems

  9. Comparison of results of each weighted aggregation approach-2

  10. Comparison of results of each weighted aggregation approach-3

  11. Vector Evaluated Particle Swarm Optimization (VEPSO)
  • The main idea of VEGA [Schaffer, 1985] is adopted and modified to fit the PSO framework.
  • N swarms are used for solving a problem with N objective functions.
  • Each swarm is evaluated according to one of the objectives,
  • but information coming from the other swarms is used to determine the change of velocities.
  • Equations for the i-th particle in the j-th swarm: the velocity update is the standard one, except that the social term uses the best particle of a neighbouring swarm (the ring migration scheme), so each swarm is steered by knowledge about the other objectives.
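A minimal sketch of the VEPSO update, assuming the ring migration scheme described above (swarm j borrows the global best of swarm j-1, wrapping around); the coefficients are illustrative, not from the slides:

```python
import random

def ring_best(j, swarm_bests):
    """Ring migration scheme: swarm j uses the global best of swarm j-1
    (wrapping around) as the social attractor in its velocity update."""
    return swarm_bests[(j - 1) % len(swarm_bests)]

def vepso_update(x, v, p_best, j, swarm_bests, w=0.7, c1=1.5, c2=1.5):
    """Velocity/position update for a particle of the j-th swarm: same as
    standard PSO, but the social term pulls toward another swarm's best."""
    g = ring_best(j, swarm_bests)
    new_v = [w * v[d]
             + c1 * random.random() * (p_best[d] - x[d])
             + c2 * random.random() * (g[d] - x[d])
             for d in range(len(x))]
    return [x[d] + new_v[d] for d in range(len(x))], new_v
```

Because each swarm is evaluated on only one objective yet is attracted toward a neighbour's best, the swarms collectively spread along the trade-off surface.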

  12. Results of VEPSO_1
  • Left: the Pareto fronts obtained using only the best particle of the other swarm.
  • Right: the Pareto fronts obtained using both the best particle and the best previous positions of the other swarm.

  13. Results of VEPSO_2

  14. Conclusion
  • PSO is a very useful technique for solving global optimization problems and a good alternative in cases where other methods fail.
  • Three types of weighted aggregation methods were presented; DWA gives the best results when the Pareto front is convex.
  • A modified version of PSO that resembles the VEGA idea was also developed and efficiently solved well-known test problems.
  • Future work
  • Comparisons with other MO genetic algorithms are needed: VEGA (Schaffer, 1985), niche methods (Goldberg and Richardson, 1987), MOGA (Fonseca and Fleming, 1993), NSGA, NSGA-II
  • Parallelization of PSO: PSO is also easy to parallelize

  15. References
  • Parallelization
  • Nowostawski, M. and R. Poli (1999). Parallel genetic algorithm taxonomy. KES'99, Adelaide, South Australia.
  • Gordon, V.S. and D. Whitley (1993). Serial and parallel genetic algorithms as function optimizers. 5th Int. Conf. on Genetic Algorithms (ICGA).
  • Konfršt, Z. (2004). Parallel genetic algorithms: advances, computing trends, applications and perspectives. IPDPS'04.
  • Cantú-Paz, E. (1997). A survey of parallel genetic algorithms. IlliGAL Report 97003, University of Illinois.
  • PSO
  • Parsopoulos, K.E. and M.N. Vrahatis (2003). Particle swarm optimization method in multiobjective problems.
  • Parsopoulos, K.E. and M.N. Vrahatis (2002). Recent approaches to global optimization problems through particle swarm optimization.
  • Parsopoulos, K.E., D.K. Tasoulis and M.N. Vrahatis (?). Multiobjective optimization using parallel vector evaluated particle swarm optimization.
  • Others
  • Fonseca, C.M. and P.J. Fleming (1993). Genetic algorithms for multiobjective optimization: formulation, discussion and generalization.

  16. Swarm intelligence (SI)
  • Swarm intelligence (SI) is an artificial intelligence technique based around the study of collective behaviour in decentralised, self-organised systems. The expression "swarm intelligence" was introduced by Beni and Wang in 1989, in the context of cellular robotic systems (see also cellular automata, evolutionary computation).
  • SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralised control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behaviour. Examples of such systems can be found in nature, including ant colonies, bird flocking, animal herding, bacterial growth and fish schooling.
  • Three interesting swarm intelligence techniques currently in existence are Ant Colony Optimization (ACO), Stochastic Diffusion Search (SDS) and Particle Swarm Optimization (PSO).
  • ACO is a metaheuristic optimization algorithm that can be used to find approximate solutions to difficult combinatorial optimization problems. In ACO, artificial ants build solutions by moving on the problem graph and, mimicking real ants, deposit artificial pheromone on the graph in such a way that future artificial ants can build better solutions. ACO has been successfully applied to an impressive number of optimization problems.
  • SDS is an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into many simpler functions. Unlike the stigmergetic communication used in ACO, in SDS agents communicate hypotheses via a one-to-one communication strategy analogous to the tandem running procedure observed in some species of ant. A positive feedback mechanism ensures that, over time, a population of agents stabilises around the global-best solution. SDS is both an efficient and robust search and optimisation algorithm, which has been extensively mathematically described.
  • PSO is a global minimisation technique for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space and are evaluated according to some fitness criterion after each timestep. Over time, particles are accelerated towards the particles within their communication grouping that have better fitness values. The main advantage of such an approach over other global minimisation strategies such as simulated annealing is that the large number of members making up the particle swarm makes the technique impressively resilient to the problem of local minima.
  • Swarm robotics is the application of swarm intelligence principles to large numbers of cheap robots. A particularly interesting application of swarm robotics principles can be found in the SWARM-BOTS project.

  17. Pareto optimality
  [Figure: objective space F2(x) vs F1(x), showing dominated solutions, non-dominated solutions forming the Pareto front, and the unattainable point corresponding to the optimum of both objectives]
  • A vector u dominates v if and only if ui ≤ vi for all i and ui < vi for at least one i (for minimization).
  • The non-dominated solutions are the Pareto-optimal points; together they form the Pareto front.
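The dominance relation above translates directly into code. This is a generic sketch for minimization, not tied to any particular test problem from the slides:

```python
def dominates(u, v):
    """u dominates v iff u_i <= v_i for all i and u_i < v_i for at
    least one i (minimization convention)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, among the objective vectors (1, 3), (2, 2), (3, 1) and (2, 3), the first three are mutually non-dominated, while (2, 3) is dominated by (1, 3) and is excluded from the front.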

  18. Flowchart of PSO
  START → Initialize variables → Initialize swarm and velocities → Evaluate initial population → Find best positions, best particle
  Evaluation loop: Update velocity → Update swarm → Evaluate new swarm cost → Update the best position for each particle and the index of the best particle → Convergence check → Done
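The flowchart can be turned into a minimal single-objective PSO loop. The swarm size, iteration budget, search bounds, and coefficients below are illustrative assumptions, not values from the slides:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO following the flowchart: initialize swarm and
    velocities, evaluate the initial population, then loop
    (update velocity -> update swarm -> evaluate -> update bests)
    until the iteration budget is spent. `f` maps a position to a cost."""
    # Initialize swarm and velocities
    xs = [[random.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    # Evaluate initial population; find best positions and best particle
    pbest = [x[:] for x in xs]
    pcost = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            # Update velocity, then position, for each dimension
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            # Evaluate new swarm cost; update per-particle and global bests
            c = f(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost
```

On a smooth test function such as the sphere, f(x) = Σ x_d², this loop typically drives the cost close to zero within the default budget.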
