

  1. Sisteme de programe pentru timp real (Real-Time Software Systems) Universitatea “Politehnica” din Bucuresti 2003-2004 Adina Magda Florea http://turing.cs.pub.ro/sptr_2004

  2. Lectures 7 and 8 Genetic Algorithms • Introduction • Basic schema • GA Functioning • An example • Selection • Recombination • Mutation • Using GAs to solve the TSP • Parallel implementations of GAs • Co-evolution 2

  3. 1. Introduction • Genetic Algorithms (GAs) are adaptive heuristic search algorithms • Basic concept = simulate the processes in natural systems necessary for evolution - the principles of survival of the fittest first laid down by Charles Darwin. • GAs provide an alternative method for solving optimization problems (but not only) GAs are useful and efficient when: • The search space is large, complex or poorly understood. • Domain knowledge is scarce or expert knowledge is difficult to encode to narrow the search space. • No mathematical analysis is available. • Traditional search methods fail. 3

  4. Introduction - cont Main schools of evolutionary algorithms: • genetic algorithms, mainly developed in the USA by J. H. Holland (1975) • evolutionary strategies, developed in Germany by I. Rechenberg and H.-P. Schwefel (1975-1980) • evolutionary programming (1960-1980) There are many ways to view genetic algorithms. • Optimisation tasks: numerical optimisation, and combinatorial optimisation problems such as the traveling salesman problem (TSP), circuit design, job shop scheduling and video & sound quality optimisation. • Automatic Programming: evolve computer programs for specific tasks or design other computational structures, for example cellular automata. 4

  5. Introduction - cont • Machine and robot learning: classification, prediction, protein structure prediction, evolve rules for classifier systems or symbolic production systems, and to design and control robots. • Economic models: model processes of innovation, the development of bidding strategies, and the emergence of economic markets. • Ecological models: biological arms races, host-parasite co-evolutions, symbiosis and resource flow in ecologies. • Population genetics models: study questions in population genetics, such as “under what conditions will a gene for recombination be evolutionarily viable?” • Models of social systems: study evolutionary aspects of social systems, such as the evolution of cooperation, the evolution of communication, and trail-following behaviour in ants. 5

  6. Introduction - cont • GAs operate on a population of potential solutions applying the principle of survival of the fittest to produce better and better approximations to a solution. • At each generation, a new set of approximations is created by the process of selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics. • This process leads to the evolution of populations of individuals that are better suited to their environment than the individuals that they were created from, just as in natural adaptation. • GAs model natural processes, such as selection, recombination, mutation, migration, locality and neighbourhood. • GAs work on populations of individuals instead of single solutions. • In this way the search is performed in a parallel manner. 6

  7. 2. Basic schema [Flowchart] Inputs: problem representation, fitness function. Start → generate initial population of individuals (genes) → evaluate objective function → are optimization criteria met? • yes: the best individuals are returned as the result • no: Selection → Crossover/Mate → Mutation → generate new population of offspring, then evaluate again, over successive generations 7
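As a minimal sketch only, this loop could look like the following Python fragment; the fitness, select, crossover and mutate functions and all parameter values are illustrative assumptions, not part of the course material:

    import random

    POP_SIZE, GENE_LEN, GENERATIONS = 50, 32, 100

    def random_individual():
        return [random.randint(0, 1) for _ in range(GENE_LEN)]

    def evolve(fitness, select, crossover, mutate):
        # generate initial population of individuals (genes)
        population = [random_individual() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):                          # stand-in for "optimization criteria met?"
            scores = [fitness(ind) for ind in population]     # evaluate objective function
            parents = select(population, scores)              # selection
            offspring = crossover(parents)                    # crossover / mate
            population = [mutate(child) for child in offspring]   # mutation -> new population
        return max(population, key=fitness)                   # best individual as result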

  8. 2.1 Problem solving using GAs 8

  9. Problem solving using GAs - cont • Better results can be obtained by introducing many populations, called subpopulations. • Every subpopulation evolves in isolation for a few generations (like the single-population evolutionary algorithm). • Then one or more individuals are exchanged between the subpopulations. • The multipopulation GA models the evolution of a species in a way more similar to nature than the single-population evolutionary algorithm. 9

  10. 3. GA Functioning Initial Population • A gene (individual) is a string of bits; some other representations can be used • The initial population of genes (bitstrings) is usually created randomly. • The length of the individual depends on the problem to be solved Selection (1) • Selection means extracting a subset of genes from an existing population, according to some definition of quality. • Selection determines which individuals are chosen for mating (recombination) and how many offspring each selected individual produces. 10

  11. 3.1 Selection The first step is fitness assignment by: • proportional fitness assignment or • rank-based fitness assignment • Actual selection: parents are selected according to their fitness by means of one of the following algorithms: • roulette-wheel selection • stochastic universal sampling • local selection • truncation selection • tournament selection [BT95]. 11

  12. Selection - cont Proportional fitness assignment • Consider the population being rated, that is: each gene has an associated fitness. The higher the value of the fitness, the better. • The mean fitness of the population is calculated. • Every individual is copied to the new population the more often, the better its fitness is compared to the average fitness. E.g.: if the average fitness is 5.76 and the fitness of one individual is 20.21, this individual will be copied 3 times. All genes with a fitness at the average or below will be removed. • Following these steps, in many cases the new population will be a little smaller than the old one, so the new population is filled up to the size of the old one with randomly chosen individuals from the old population. 12
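A rough Python sketch of this copying rule; the exact rounding and the removal of at-or-below-average genes follow the reading of the slide above and are an interpretation, not a definitive implementation:

    import random

    def proportional_reproduction(population, fitnesses):
        # mean fitness of the population
        average = sum(fitnesses) / len(fitnesses)
        new_population = []
        for individual, fit in zip(population, fitnesses):
            # copy better-than-average individuals roughly fit/average times
            # (e.g. 20.21 with average 5.76 -> 3 copies); at or below average -> removed
            copies = int(fit / average) if fit > average else 0
            new_population.extend([individual] * copies)
        # fill up to the old population size with randomly chosen old individuals
        while len(new_population) < len(population):
            new_population.append(random.choice(population))
        return new_population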

  13. 3.2 Reinsertion • If fewer offspring are produced than the size of the original population, the offspring have to be reinserted into the old population. • Similarly, if not all offspring are to be used at each generation or if more offspring are generated than needed, then a reinsertion scheme must be used to determine which individuals should be inserted into the new population. The selection algorithm used determines the reinsertion scheme: • global reinsertion for all population based on the selection algorithm (roulette-wheel selection, stochastic universal sampling, truncation selection), • local reinsertion for local selection. 13

  14. 3.3 Crossover/Recombination • Recombination produces new individuals by combining the information contained in the parents (the parents form the mating population). • Several recombination schemes exist • The b_nX crossover - random mating with a defined probability. This type is described most often as the parallel to the crossing over in genetics • PM percent of the individuals of the new population will be selected randomly and mated in pairs. • A crossover point will be chosen for each pair • The information after the crossover point will be exchanged between the two individuals of each pair. 14

  15. Crossover 15

  16. 3.4 Mutation • After recombination every offspring undergoes mutation. • Offspring variables are mutated by small perturbations (the size of the mutation step), with low probability. • The representation of the variables determines the algorithm used. • Mutation adds some exploration effect to the algorithm. • One simple mutation scheme, sketched below: • Each bit in every gene has a defined probability P of getting inverted 16
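A minimal Python sketch of this bit-flip scheme (the default value of P is only an illustrative assumption):

    import random

    def mutate(gene, p=0.01):
        # invert each bit of the gene independently with probability p
        return [1 - bit if random.random() < p else bit for bit in gene]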

  17. Mutation 17

  18. The effect of mutation is in some ways antagonistic to selection 18

  19. 4 An example Compute the maximum of a function • f(x1, x2, ... xn) • The task is to calculate x1, x2, ... xn for which the function is maximum • Using GAs can be a good solution. 19

  20. Example - Representation • Scale the variables to integer variables by multiplying them with 10^n, where n is the desired precision New Variable = integer(Old Variable × 10^n) • Represent the variables in binary form • Connect all binary representations of these variables to form an individual of the population • If the sign is necessary, two ways are possible: • Add the lowest allowed value to each variable and transform the variables to positive ones • Reserve one bit per variable for the sign Usually the Gray-code representation is used rather than the plain binary representation (see the sketch below) 20
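A small Python sketch of this encoding, assuming non-negative scaled values and a fixed number of bits per variable; the Gray-code step uses the standard binary-to-Gray conversion and all names are illustrative:

    def encode(variables, n=3, bits=16):
        # New Variable = integer(Old Variable * 10^n), n = desired precision
        individual = []
        for value in variables:
            scaled = int(value * 10 ** n)
            gray = scaled ^ (scaled >> 1)          # binary -> Gray code
            individual += [(gray >> i) & 1 for i in reversed(range(bits))]
        return individual                          # concatenation of all variables' bits

    # e.g. encode([1.234, 0.5]) -> one 32-bit individual for two variables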

  21. Example- Representation 21

  22. Example - Computation 1. Make a random initial population 2. Perform selection 3. Perform crossover 4. Perform mutation 5. Transform the bitstring of each individual back to the model variables: x1, x2, ... xn 6. Test the quality of fit for each individual, e.g., f(x1, x2, ... xn) 7. Check if the quality of the best individual is good enough (not significantly improved over the last steps) 8. If yes then stop the iterations, else go to 2 22

  23. 5. Selection • The first step is fitness assignment. • Each individual in the selection pool receives a reproduction probability depending on its own objective value and the objective values of all other individuals in the selection pool. • This fitness is used for the actual selection step afterwards. 23

  24. Terms • selective pressure: probability of the best individual being selected compared to the average probability of selection of all individuals • bias: absolute difference between an individual's normalized fitness and its expected probability of reproduction • spread: range of possible values for the number of offspring of an individual • loss of diversity: proportion of individuals of a population that is not selected during the selection phase • selection intensity: expected average fitness value of the population after applying a selection method to the normalized Gaussian distribution • selection variance: expected variance of the fitness distribution of the population after applying a selection method to the normalized Gaussian distribution 24

  25. 5.1 Rank-based fitness assignment • The population is sorted according to the objective values (fitness). • The fitness assigned to each individual depends only on its position in the ranking of the individuals and not on the actual objective value. • Rank-based fitness assignment overcomes the scaling problems of proportional fitness assignment: stagnation in the case where the selective pressure is too small, or premature convergence where selection has caused the search to narrow down too quickly. • The reproductive range is limited, so that no individual generates an excessive number of offspring. • Ranking introduces a uniform scaling across the population and provides a simple and effective way of controlling selective pressure. • Rank-based fitness assignment behaves in a more robust manner than proportional fitness assignment. 25

  26. Rank-based fitness assignment - cont • Nind - the number of individuals in the population • Pos - the position of an individual in this population (the least fit individual has Pos=1, the fittest individual has Pos=Nind) • SP - the selective pressure. • The fitness value for an individual is calculated as: Linear ranking Fitness(Pos) = 2 - SP + 2*(SP - 1)*(Pos - 1) / (Nind - 1) • Linear ranking allows values of SP in [1.0, 2.0]. 26
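A short Python sketch of linear rank-based fitness assignment using this formula (the function name is an assumption):

    def linear_ranking(objectives, sp=2.0):
        # objectives: raw objective values, higher = better; SP in [1.0, 2.0]
        nind = len(objectives)
        order = sorted(range(nind), key=lambda i: objectives[i])   # first = least fit
        fitness = [0.0] * nind
        for pos, i in enumerate(order, start=1):                   # Pos = 1 .. Nind
            fitness[i] = 2 - sp + 2 * (sp - 1) * (pos - 1) / (nind - 1)
        return fitness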

  27. Rank-based fitness assignment - cont Non-linear ranking: Fitness(Pos) = Nind * X^(Pos - 1) / sum(i = 1..Nind) X^(i - 1) • X is computed as the root of the polynomial: 0 = (SP - 1)*X^(Nind - 1) + SP*X^(Nind - 2) + ... + SP*X + SP • Non-linear ranking allows values of SP in [1.0, Nind - 2.0] • The use of non-linear ranking permits higher selective pressures than the linear ranking method. • The probability of each individual being selected for mating is its fitness normalized by the total fitness of the population. 27

  28. Rank-based fitness assignment - cont Fitness assignment for linear and non-linear ranking 28

  29. Rank-based fitness assignment - cont Properties of linear ranking Selection intensity: SelIntRank(SP) = (SP - 1)*(1/sqrt(pi)). Loss of diversity: LossDivRank(SP) = (SP - 1)/4. Selection variance: SelVarRank(SP) = 1 - ((SP - 1)^2 / pi) = 1 - SelIntRank(SP)^2. 29

  30. 5.2 Roulette wheel selection The simplest selection scheme is roulette-wheel selection, also called stochastic sampling with replacement. Example: 11 individuals, linear ranking and selective pressure of 2.

  Number of individual    1     2     3     4     5     6     7     8     9     10    11
  Fitness value           2.0   1.8   1.6   1.4   1.2   1.0   0.8   0.6   0.4   0.2   0.0
  Selection probability   0.18  0.16  0.15  0.13  0.11  0.09  0.07  0.06  0.03  0.02  0.0

  Sample of 6 random numbers (uniformly distributed between 0.0 and 1.0): • 0.81, 0.32, 0.96, 0.01, 0.65, 0.42. 30

  31. Roulette wheel selection - cont • After selection the mating population consists of the individuals: 1, 2, 3, 5, 6, 9. • The roulette-wheel selection algorithm provides a zero bias but does not guarantee minimum spread 31
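A minimal Python sketch of roulette-wheel selection over such selection probabilities (stochastic sampling with replacement; illustrative only):

    import random

    def roulette_wheel(probabilities, n_select):
        # probabilities are assumed to sum to (approximately) 1
        selected = []
        for _ in range(n_select):
            r, cumulative = random.random(), 0.0
            chosen = len(probabilities) - 1          # fallback for rounding at the end
            for index, p in enumerate(probabilities):
                cumulative += p
                if r <= cumulative:
                    chosen = index
                    break
            selected.append(chosen)
        return selected

    # e.g. roulette_wheel([0.18, 0.16, 0.15, 0.13, 0.11, 0.09, 0.07, 0.06, 0.03, 0.02, 0.0], 6)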

  32. 5.3 Stochastic universal sampling • Equally spaced pointers are placed over the line, as many as there are individuals to be selected. • NPointer - the number of individuals to be selected • The distance between the pointers is 1/NPointer • The position of the first pointer is given by a randomly generated number in the range [0, 1/NPointer]. • For 6 individuals to be selected, the distance between the pointers is 1/6=0.167. • sample of 1 random number in the range [0, 0.167]: 0.1. 32

  33. Stochastic universal sampling - cont • After selection the mating population consists of the individuals: 1, 2, 3, 4, 6, 8. • Stochastic universal sampling ensures a selection of offspring which is closer to what is deserved than roulette wheel selection 33
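A possible Python sketch of stochastic universal sampling following this description (names are illustrative; probabilities assumed to sum to 1):

    import random

    def stochastic_universal_sampling(probabilities, n_pointers):
        distance = 1.0 / n_pointers                     # distance between the pointers
        start = random.uniform(0.0, distance)           # first pointer in [0, 1/NPointer]
        pointers = [start + i * distance for i in range(n_pointers)]
        # cumulative probability line
        cumulative, total = [], 0.0
        for p in probabilities:
            total += p
            cumulative.append(total)
        selected, index = [], 0
        for pointer in pointers:
            while index < len(cumulative) - 1 and cumulative[index] < pointer:
                index += 1
            selected.append(index)
        return selected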

  34. 5.4 Local selection • Every individual resides inside a constrained environment called the local neighbourhood • The neighbourhood is defined by the structure in which the population is distributed. • The neighbourhood can be seen as the group of potential mating partners. • Linear neighbourhood: full and half ring 34

  35. Local selection - cont The structure of the neighbourhood can be: • Linear: full ring, half ring • Two-dimensional: • full cross, half cross • full star, half star • Three-dimensional and more complex with any combination of the above structures. 35

  36. Local selection - cont • The distance between possible neighbours together with the structure determines the size of the neighbourhood. • The first step is the selection of the first half of the mating population uniformly at random (or using one of the other mentioned selection algorithms, for example, stochastic universal sampling or truncation selection). • Then a local neighbourhood is defined for every selected individual. Inside this neighbourhood the mating partner is selected (best, fitness proportional, or uniformly at random). 36

  37. Local selection - cont • Between individuals of a population an 'isolation by distance' exists. • The smaller the neighbourhood, the bigger the isolation distance. • However, because of overlapping neighbourhoods, propagation of new variants takes place. • This assures the exchange of information between all individuals. • The size of the neighbourhood determines the speed of propagation of information between the individuals of a population, thus deciding between rapid propagation or maintenance of a high diversity/variability in the population. • A higher variability is often desired, thus preventing problems such as premature convergence to a local minimum. 37

  38. 5.5 Tournament selection • A number Tour of individuals is chosen randomly from the population and the best individual from this group is selected as parent. • This process is repeated as many times as there are individuals to choose. • The parameter for tournament selection is the tournament size Tour. • Tour takes values ranging from 2 to Nind (the number of individuals in the population). • Relation between tournament size and selection intensity:

  tournament size       1     2     3     5     10    30
  selection intensity   0     0.56  0.85  1.15  1.53  2.04
  38
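A small Python sketch of tournament selection as described above (maximisation assumed; names are illustrative):

    import random

    def tournament_selection(population, fitness, tour=2):
        # choose Tour individuals at random, keep the best; repeat once per parent needed
        return [max(random.sample(population, tour), key=fitness)
                for _ in range(len(population))]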

  39. Tournament selection - cont • Selection intensity: SelIntTour(Tour) = sqrt(2*(log(Tour) - log(sqrt(4.14*log(Tour))))) • Loss of diversity: LossDivTour(Tour) = Tour^(-1/(Tour-1)) - Tour^(-Tour/(Tour-1)) (About 50% of the population is lost at tournament size Tour=5). • Selection variance: SelVarTour(Tour) = 1 - 0.096*log(1 + 7.11*(Tour-1)), SelVarTour(2) = 1 - 1/pi 39

  40. 6. Crossover / recombination • Recombination produces new individuals by combining the information contained in the parents 40

  41. 6.1 Binary valued recombination (crossover) • One crossover position k is selected uniformly at random and the corresponding parts (or variables) beyond this point are exchanged between the individuals → two new offspring are produced. 41

  42. Binary valued recombination - cont • Consider the following two individuals with 11 bits each: • individual 1: 0 1 1 1 0 0 1 1 0 1 0 • individual 2: 1 0 1 0 1 1 0 0 1 0 1 • crossover position is 5 • After crossover the new individuals are created: • offspring 1: 0 1 1 1 0 | 1 0 0 1 0 1 • offspring 2: 1 0 1 0 1 | 0 1 1 0 1 0 42

  43. 6.2 Multi-point crossover • m crossover positions ki • individual 1: 0 1 1 1 0 0 1 1 0 1 0 • individual 2: 1 0 1 0 1 1 0 0 1 0 1 • cross pos. (m=3): 2 6 10 • offspring 1: 0 1 | 1 0 1 1 | 1 1 0 1 | 1 • offspring 2: 1 0 | 1 1 0 0 | 0 0 1 0 | 0 43
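A possible Python sketch of single- and multi-point crossover on bit lists that reproduces the two examples above (the function name is an assumption):

    def multi_point_crossover(parent1, parent2, positions):
        # swap the tails of the two children at each crossover position in turn
        child1, child2 = list(parent1), list(parent2)
        for k in positions:
            child1[k:], child2[k:] = child2[k:], child1[k:]
        return child1, child2

    # single-point crossover at position 5 is the special case positions=[5]
    # multi_point_crossover([0,1,1,1,0,0,1,1,0,1,0], [1,0,1,0,1,1,0,0,1,0,1], [2, 6, 10])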

  44. 6.3 Uniform crossover • Uniform crossover generalizes single and multi-point crossover to make every locus a potential crossover point. • A crossover mask, the same length as the individual structure, is created at random and the parity of the bits in the mask indicates which parent will supply the offspring with which bits. • individual 1: 0 1 1 1 0 0 1 1 0 1 0 • individual 2: 1 0 1 0 1 1 0 0 1 0 1 • mask 1: 0 1 1 0 0 0 1 1 0 1 0 • mask 2: 1 0 0 1 1 1 0 0 1 0 1 (inverse of mask 1) • offspring 1: 1 1 1 0 1 1 1 1 1 1 1 • offspring 2: 0 0 1 1 0 0 0 0 0 0 0 • Spears and De Jong (1991) have proposed that uniform crossover may be parameterised by applying a probability to the swapping of bits. 44
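A brief Python sketch of uniform crossover with a random mask, using the same convention as the example above (a mask bit of 1 means offspring 1 takes the bit from parent 1; illustrative only):

    import random

    def uniform_crossover(parent1, parent2):
        mask = [random.randint(0, 1) for _ in parent1]       # random crossover mask
        child1 = [a if m else b for a, b, m in zip(parent1, parent2, mask)]
        child2 = [b if m else a for a, b, m in zip(parent1, parent2, mask)]
        return child1, child2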

  45. 6.4 Real valued recombination 6.4.1 Discrete recombination • Performs an exchange of variable values between the individuals. • individual 1: 12 25 5 • individual 2: 123 4 34 • For each variable, the parent who contributes its variable to the offspring is chosen randomly with equal probability. • sample 1: 2 2 1 • sample 2: 1 2 1 • After recombination the new individuals are created: • offspring 1: 123 4 5 • offspring 2: 12 4 5 45

  46. Real valued recombination - cont Discrete recombination - cont • Possible positions of the offspring after discrete recombination • Discrete recombination can be used with any kind of variables (binary, real or symbols). 46
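A compact Python sketch of discrete recombination (illustrative; as noted above it works for binary, real or symbolic variables alike):

    import random

    def discrete_recombination(parent1, parent2):
        # for each variable, take the value from parent 1 or parent 2 with equal probability
        return [random.choice(pair) for pair in zip(parent1, parent2)]

    # e.g. discrete_recombination([12, 25, 5], [123, 4, 34]) might return [123, 4, 5]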

  47. Real valued recombination - cont 6.4.2 Intermediate recombination • A method only applicable to real variables (and not binary variables). • The variable values of the offspring are chosen somewhere around and between the variable values of the parents. • Offspring are produced according to the rule: offspring = parent 1 + Alpha * (parent 2 - parent 1) where Alpha is a scaling factor chosen uniformly at random over an interval [-d, 1 + d]. • In intermediate recombination d = 0, for extended intermediate recombination d > 0. • A good choice is d = 0.25. • Each variable in the offspring is the result of combining the variables according to the above expression with a new Alpha chosen for each variable. 47

  48. Real valued recombination - cont Intermediate recombination - cont • Area for variable value of offspring compared to parents in intermediate recombination 48

  49. Real valued recombination - cont Intermediate recombination - cont • individual 1: 12 25 5 • individual 2: 123 4 34 The Alpha values are: • sample 1: 0.5 1.1 -0.1 • sample 2: 0.1 0.8 0.5 • The new individuals are calculated as (offspring = parent 1 + Alpha * (parent 2 - parent 1)): • offspring 1: 67.5 1.9 2.1 • offspring 2: 23.1 8.2 19.5 49
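A short Python sketch of (extended) intermediate recombination following the rule above; with the Alpha samples fixed to the values shown it reproduces the offspring in the example (the function name is illustrative):

    import random

    def intermediate_recombination(parent1, parent2, d=0.25):
        # offspring_i = parent1_i + alpha_i * (parent2_i - parent1_i), alpha_i ~ U[-d, 1 + d]
        return [p1 + random.uniform(-d, 1 + d) * (p2 - p1)
                for p1, p2 in zip(parent1, parent2)]

    # with alphas 0.5, 1.1, -0.1: parents [12, 25, 5] and [123, 4, 34] -> [67.5, 1.9, 2.1]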

  50. Real valued recombination - cont • Intermediate recombination is capable of producing any point within a hypercube slightly larger than that defined by the parents • Possible area of the offspring after intermediate recombination 50
