
PGA – Parallel Genetic Algorithm


Presentation Transcript


  1. PGA – Parallel Genetic Algorithm Hsuan Lee

  2. Reference • E. Cantú-Paz, A Survey of Parallel Genetic Algorithms, Calculateurs Parallèles, Réseaux et Systèmes Répartis, 1998

  3. Classes of Parallel Genetic Algorithm • 3 major classes of PGA • Global Single-Population Master-Slave PGA • Single-Population Fine-Grained PGA • Multiple-Population Coarse-Grained PGA • Hybrid of the above PGAs • Hierarchical PGA

  4. Classes of Parallel Genetic Algorithm • Global Single-Population Master-Slave PGA • Lowest level of parallelism • Parallelize the calculation of fitness, Selection, Crossover… • Also known as Global PGA.

  5. Classes of Parallel Genetic Algorithm • Single-Population Fine-Grained PGA • Consists of one spatially structured population • Selection and Crossover are restricted to a small neighborhood, but neighborhoods overlap, permitting some interaction among all the individuals • Similar to the idea of niching • Suited for massively parallel computers
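A minimal sketch of this overlapping-neighborhood idea, assuming a small wraparound grid, OneMax bit-string fitness, radius-1 neighborhoods and one-point crossover (these choices are illustrative, not taken from the survey):

# Sketch of fine-grained (cellular) selection on a 2D grid.
# Assumptions: bit strings as genomes, fitness = number of ones (OneMax),
# radius-1 neighborhoods that wrap at the edges, one-point crossover.
import random

W, H, L = 8, 8, 16                      # grid width/height, genome length
grid = [[[random.randint(0, 1) for _ in range(L)]
         for _ in range(W)] for _ in range(H)]

def fitness(ind):
    return sum(ind)                     # OneMax: count the 1-bits

def neighbors(x, y, radius=1):
    """Cells within `radius`; neighborhoods of adjacent cells overlap."""
    cells = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            cells.append(grid[(y + dy) % H][(x + dx) % W])
    return cells

def step():
    """One generation: every cell mates with the best of its neighborhood."""
    new_grid = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            mate = max(neighbors(x, y), key=fitness)      # purely local selection
            cut = random.randrange(1, L)                  # one-point crossover
            child = grid[y][x][:cut] + mate[cut:]
            new_grid[y][x] = max(child, grid[y][x], key=fitness)
    return new_grid

for _ in range(20):
    grid = step()
print(max(fitness(grid[y][x]) for y in range(H) for x in range(W)))

Because each cell talks only to its immediate neighbors, every cell can in principle be mapped to its own processor, which is why this class suits massively parallel machines.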

  6. Classes of Parallel Genetic Algorithm • Multiple-Population Coarse-Grained PGA • Consists of several subpopulations • Exchange individuals occasionally. The exchange operation is called migration. • Also known as multiple-deme PGA, distributed GA, coarse-grained PGA or “island” PGA • Most popular PGA • Most difficult to analyze • Suited for fewer but more powerful parallel computers

  7. Classes of Parallel Genetic Algorithm • 3 major classes of PGA • Global Single-Population Master-Slave PGA • Single-Population Fine-Grained PGA • Multiple-Population Coarse-Grained PGA. The first does not affect the behavior of the GA, but the latter two do. • Hybrid of the above PGAs • Hierarchical PGA

  8. Classes of Parallel Genetic Algorithm • Hierarchical PGA • Combines multiple-population PGA (at higher level) with master-slave PGA or fine-grained PGA (at lower level) • Combines the advantages of its components

  9. Master-Slave Parallelization The master does the global work that involves population-wide computation and assigns local tasks to its slaves • What can be parallelized? • Evaluation of fitness • Selection • Some selection schemes require population-wide calculation, so they cannot be parallelized • Selection schemes that don’t require global computation are usually too simple to be worth parallelizing, e.g. tournament selection • Crossover • Usually too simple to parallelize • But for a complex crossover that involves finding a min-cut, parallelization may be an option
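A minimal sketch of the master-slave scheme, using Python's multiprocessing.Pool as the slaves; the placeholder fitness function, population size and mutation scheme are assumptions for illustration, not part of the survey:

# Sketch of global master-slave parallelization: the master holds one
# population and farms fitness evaluation out to slave processes.
# The Rastrigin-style fitness below is only a placeholder for an
# expensive evaluation, which is what makes this scheme worthwhile.
import math
import random
from multiprocessing import Pool

def fitness(ind):
    return -sum(x * x - 10 * math.cos(2 * math.pi * x) + 10 for x in ind)

def evolve(pop, workers=4, generations=50):
    with Pool(workers) as pool:                 # slave processes
        for _ in range(generations):
            scores = pool.map(fitness, pop)     # parallel evaluation, master gathers
            # Selection, crossover and mutation stay on the master, exactly
            # as in a sequential GA, so the GA's behavior is unchanged.
            ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: len(pop) // 2]
            pop = parents + [
                [x + random.gauss(0, 0.1) for x in random.choice(parents)]
                for _ in range(len(pop) - len(parents))
            ]
    return max(pop, key=fitness)

if __name__ == "__main__":
    population = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(40)]
    print(evolve(population))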

  10. Master-Slave Parallelization • Computer architecture makes a difference • Shared memory: Simpler. The population may be stored in shared memory, and each slave processor can work on these individuals without conflict • Distributed memory: The individuals to be processed are sent to the slave processors, creating a communication overhead. This discourages parallelizing tasks that are too simple.

  11. Fine-Grained Parallel GAs • Neighborhood size • The performance of the algorithm degrades as the size of the neighborhood increases • The ratio of the radius of the neighborhood to the radius of the whole grid is a critical parameter

  12. Fine-Grained Parallel GAs • Topology: Different individual-placement topologies can result in different performance • 2D mesh: Most commonly used, because this is usually the physical topology of the processors • Ring • Cube • Torus (doughnut): Converges the fastest on some problems, due to the high connectivity of the structure • Hypercube
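A small sketch contrasting two of these placement topologies, a plain 2D mesh and a torus (the same mesh with wrapped edges, so every cell keeps full connectivity); the grid dimensions are illustrative assumptions:

# A plain mesh loses neighbors at its borders; a torus wraps around,
# which is the higher connectivity the slide credits for faster convergence.
W, H = 6, 4

def mesh_neighbors(x, y):
    """4-neighborhood on a 2D mesh; border cells simply have fewer neighbors."""
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(cx, cy) for cx, cy in cand if 0 <= cx < W and 0 <= cy < H]

def torus_neighbors(x, y):
    """Same 4-neighborhood, but coordinates wrap around (doughnut shape)."""
    return [((x - 1) % W, y), ((x + 1) % W, y),
            (x, (y - 1) % H), (x, (y + 1) % H)]

print(mesh_neighbors(0, 0))    # only 2 neighbors on the corner of a mesh
print(torus_neighbors(0, 0))   # always 4 neighbors on a torus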

  13. Multiple-Deme Parallel GAs • Subpopulation size • A small population converges faster, but is more likely to converge to a local optimum rather than the global optimum • The idea is to use many small subpopulations that communicate occasionally, speeding up the GA while preventing convergence to a local optimum
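A minimal sketch of this idea: several small demes evolve independently and exchange a few individuals every few generations. OneMax fitness, binary tournament selection, the static ring destination and all numeric parameters are illustrative assumptions:

# Multiple-deme ("island") sketch with occasional, synchronous migration.
import random

GENOME, DEMES, DEME_SIZE = 20, 4, 10
MIGRATION_INTERVAL, MIGRANTS = 5, 2

def fitness(ind):
    return sum(ind)                                     # OneMax: count the 1-bits

def new_individual():
    return [random.randint(0, 1) for _ in range(GENOME)]

def evolve_deme(deme):
    """One ordinary sequential GA generation inside a single deme."""
    nxt = []
    for _ in range(len(deme)):
        p1 = max(random.sample(deme, 2), key=fitness)   # binary tournament
        p2 = max(random.sample(deme, 2), key=fitness)
        cut = random.randrange(1, GENOME)               # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:                       # occasional bit-flip mutation
            child[random.randrange(GENOME)] ^= 1
        nxt.append(child)
    return nxt

demes = [[new_individual() for _ in range(DEME_SIZE)] for _ in range(DEMES)]
for gen in range(1, 51):
    demes = [evolve_deme(d) for d in demes]             # demes evolve independently
    if gen % MIGRATION_INTERVAL == 0:                   # occasional, synchronous migration
        outgoing = [sorted(d, key=fitness, reverse=True)[:MIGRANTS] for d in demes]
        for i, best in enumerate(outgoing):
            dest = demes[(i + 1) % DEMES]               # static ring destination
            dest.sort(key=fitness)                      # migrants replace the worst
            dest[:MIGRANTS] = [list(m) for m in best]

print(max(fitness(ind) for deme in demes for ind in deme))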

  14. Multiple-Deme Parallel GAs • Migration Timing • Synchronous: What is the optimal frequency of migration? Is the communication cost small enough to make this PGA a good alternative to the traditional GA? • Asynchronous: When is the right time to migrate?

  15. Multiple-Deme Parallel GAs • Topology – Migration destination • Static • Any topology with “high connectivity and small diameter” • Random destination • Dynamic: According to the destination subpopulation’s diversity?
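A tiny sketch of two of these destination rules, written as interchangeable functions; the dynamic, diversity-based rule is only suggested as a question on the slide, so it is not implemented here:

# Two migration-destination rules for a multiple-deme PGA (illustrative only).
import random

def ring_destination(source, n_demes):
    """Static: always send migrants to the next deme on a ring."""
    return (source + 1) % n_demes

def random_destination(source, n_demes):
    """Random: any deme except the source may receive the migrants."""
    return random.choice([d for d in range(n_demes) if d != source])

print(ring_destination(3, 4))      # deme 3 always sends to deme 0
print(random_destination(3, 4))    # deme 3 sends to one of 0, 1, 2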

  16. Conclusion • There is still a lot to be investigated in the field of PGA. • Theoretical work is scarce.
