

Definition: Optimization is the process of adjusting the inputs to or characteristics of a device, mathematical process, or experiment to find the minimum or maximum output or result. In the mathematical approach, optimization finds the zeros of the function's derivative. We like the simplicity of root finding, but many times the derivative does not exist or is very difficult to find. Another difficulty with optimization is determining whether a given minimum is the best (global) minimum or a suboptimal (local) minimum.


Presentation Transcript


    1. affirouzi@gmail.com firouzi@aut.ac.ir

    2. Definition: Optimization is the process of adjusting the inputs to or characteristics of a device, mathematical process, or experiment to find the minimum or maximum output or result.

    4. Categories of optimization:
    Trial and Error vs. Function: Adjusting the rabbit ears on a TV to get the best picture and audio reception is trial-and-error optimization; experimentalists prefer this approach (the discovery and refinement of penicillin is another example). In function optimization, a mathematical formula describes the objective function; theoreticians favour this approach.
    Single Variable vs. Multivariable: If there is only one variable, the optimization is one-dimensional. A problem having more than one variable requires multidimensional optimization. Optimization becomes increasingly difficult as the number of dimensions increases, and many multidimensional approaches generalize to a series of one-dimensional approaches.
    Dynamic vs. Static: Dynamic optimization means that the output is a function of time, while static means that the output is independent of time. Finding the best route to work is dynamic when conditions change with time.
    Discrete vs. Continuous Variable: In discrete optimization the optimum solution consists of a certain combination of variables from the finite pool of all possible variables; continuous variables may take any value.
    Constrained vs. Unconstrained: Constrained optimization incorporates variable equalities and inequalities into the cost function. Unconstrained optimization allows the variables to take any value.

    11. Typically, the initial population is held as binary encodings (or strings) of the true variables, although an increasing number of GAs use "real-valued" (i.e. base-10) encodings. This initial population is then processed by three main operators.
    Selection attempts to apply pressure upon the population in a manner similar to that of natural selection found in biological systems. Poorer-performing individuals are weeded out, and better-performing, or fitter, individuals have a greater-than-average chance of promoting the information they contain to the next generation.
    Crossover allows solutions to exchange information in a way similar to that used by a natural organism undergoing sexual reproduction. One method (termed single point crossover) is to choose pairs of individuals promoted by the selection operator, randomly choose a single locus (point) within the binary strings, and swap all the information (digits) to the right of this locus between the two individuals.
    Mutation is used to randomly change (flip) the value of single bits within individual strings; it is typically used very sparingly.
    After selection, crossover and mutation have been applied to the initial population, a new population will have been formed and the generational counter is increased by one. This process of selection, crossover and mutation is continued until a fixed number of generations have elapsed or some form of convergence criterion has been met.

    12. A trivial problem might be to maximize a function, f(x), where f(x) = x², for integer x and 0 < x < 4095. First Step: Binary GA.
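To make the binary encoding concrete, here is a minimal Python sketch that builds a random population of 12-bit strings, decodes each string to its integer x, and evaluates f(x) = x². The 12-bit length covers the integers 0 to 4095; the population size is an illustrative value of my own, not taken from the slides.

    import random

    N_BITS = 12     # 12-bit strings cover the integers 0..4095
    POP_SIZE = 8    # illustrative value only, not from the slides

    def decode(bits):
        """Convert a binary string such as '101100000001' to its integer x."""
        return int(bits, 2)

    def fitness(bits):
        """Objective to maximise: f(x) = x**2."""
        x = decode(bits)
        return x * x

    # Random initial population of binary strings
    population = ["".join(random.choice("01") for _ in range(N_BITS))
                  for _ in range(POP_SIZE)]

    for individual in population:
        print(individual, decode(individual), fitness(individual))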

    16. String = chromosome = genotype. Solution vector = genotype within the environment = organism = phenotype.

    20. MULTIPARAMETER PROBLEMS: Extending the representation to problems with more than one unknown proves to be particularly simple. The M unknowns are each represented as sub-strings of length l_k. These sub-strings are then concatenated (joined together) to form an individual population member of length L, where L = l_1 + l_2 + ... + l_M = Σ_{k=1}^{M} l_k. For example, given a problem with two unknowns a and b, then if a = 10110 and b = 11000 for one guess at the solution, then by concatenation the genotype is 1011011000.
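As a small illustration of this concatenation, the Python sketch below joins the two sub-strings a = 10110 and b = 11000 into a single genotype and splits it back, given the sub-string lengths l_k. The helper names are my own, chosen only for this example.

    def concatenate(substrings):
        """Join the per-parameter sub-strings into one genotype of length L = sum(l_k)."""
        return "".join(substrings)

    def split(genotype, lengths):
        """Recover the per-parameter sub-strings from a genotype, given each l_k."""
        parts, start = [], 0
        for l in lengths:
            parts.append(genotype[start:start + l])
            start += l
        return parts

    a, b = "10110", "11000"           # the two guesses from the slide
    genotype = concatenate([a, b])    # -> '1011011000', length L = 5 + 5 = 10
    print(genotype, split(genotype, [5, 5]))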

    24. The Simple Genetic Algorithm uses single point crossover as the recombination operator (in the natural world, between one and eight crossover points have been reported). The pairs of individuals selected undergo crossover with probability Pc: a random number Rc is generated in the range 0-1 and the individuals undergo crossover if and only if Rc < Pc; otherwise the pair proceeds without crossover. Typical values of Pc are 0.4 to 0.9. (If Pc = 0.5, then half the new population will be formed by selection and crossover, and half by selection alone.)
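This crossover rule can be sketched in a few lines of Python; Pc = 0.7 below is merely an illustrative value inside the quoted 0.4-0.9 range.

    import random

    def single_point_crossover(parent1, parent2, pc=0.7):
        """With probability pc, swap all bits to the right of a random locus;
        otherwise return the pair unchanged (no crossover)."""
        if random.random() >= pc:                    # Rc >= Pc: pair proceeds without crossover
            return parent1, parent2
        point = random.randint(1, len(parent1) - 1)  # random locus, never at the string ends
        child1 = parent1[:point] + parent2[point:]
        child2 = parent2[:point] + parent1[point:]
        return child1, child2

    print(single_point_crossover("101100", "010011"))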

    25. Elitism: Fitness-proportional selection does not guarantee the selection of any particular individual, including the fittest. Unless the fittest individual is much, much fitter than any other, it will occasionally not be selected. To not be selected is to die. Thus with fitness-proportional selection the best solution to the problem discovered so far can be regularly thrown away. Ensuring the propagation of the elite member is termed elitism and requires not only that the elite member is selected, but also that a copy of it does not become disrupted by crossover or mutation. Here, the use of elitism is indicated by e (which can only take the value 0 or 1); if e = 1 then elitism is applied, if e = 0 it is not.
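A minimal sketch of this rule, assuming a fitness function such as the one from the f(x) = x² example is passed in: if no member of the new population is at least as fit as the stored elite member, one member chosen at random is overwritten with the elite copy.

    import random

    def apply_elitism(new_population, elite, fitness, e=1):
        """If elitism is on (e = 1) and the new population holds no individual at
        least as fit as the stored elite member, replace a random member with it."""
        if e == 1 and all(fitness(ind) < fitness(elite) for ind in new_population):
            new_population[random.randrange(len(new_population))] = elite
        return new_population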

    26. A single point mutation changes a 1 to a 0, and vice versa. Mutation points are randomly selected from the Npop * Nbits total number of bits in the population matrix. Increasing the number of mutations increases the algorithm's freedom to search outside the current region of variable space. The probability of mutation, Pm, is typically of the order of 0.001, i.e. one bit in every thousand will be mutated. However, just like everything else about GAs, the correct setting for Pm will be problem dependent. (Many have used Pm = 1/L, others Pm = 1/(N*sqrt(L)), where N is the population size.)
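Bit-wise mutation can be written directly from this description; the sketch below flips each bit with probability Pm (the demo call uses a much larger Pm than 0.001 only so that a flip is likely to be visible in a single run).

    import random

    def mutate(bits, pm=0.001):
        """Step bit-wise through the string, flipping each bit with probability pm."""
        return "".join(b if random.random() >= pm else ("1" if b == "0" else "0")
                       for b in bits)

    print(mutate("101100000001", pm=0.1))  # larger pm so a flip is likely in a one-off demo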

    27. Algorithm
    1- Generate an initial (g = 1) population of random binary strings of length L = Σ_{k=1}^{M} l_k, where M is the number of unknowns and l_k the length of binary string required by unknown k. In general l_k ≠ l_j for k ≠ j.
    2- Decode each individual, i, within the population to integers z_{i,k} and then to real numbers r_{i,k} to obtain the unknown parameters.
    3- Test each individual in turn on the problem at hand and convert the objective function or performance, O_i, of each individual to a fitness f_i, where a better solution implies a higher fitness.
    4- Select, by using fitness-proportional selection, pairs of individuals and apply with probability Pc single point crossover. Repeat until a new temporary population of N individuals is formed.

    28. 5- Apply the mutation operator to every individual in the temporary population, by stepping bit-wise through each string, occasionally flipping a 0 to a 1 or vice versa. The probability of any bit mutating is given by Pm and is typically very small (for example, 0.001).
    6- If elitism is required, and the temporary population does not contain a copy of an individual with at least the fitness of the elite member, replace (at random) one member of the temporary population with the elite member.
    7- Replace the old population by the new temporary generation.
    8- Increment, by 1, the generational counter (i.e. g = g + 1) and repeat from Step 2 until G generations have elapsed.
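Putting Steps 1-8 together, here is a compact Python sketch of the whole loop for the earlier f(x) = x² example. It is only an illustration under assumptions of my own: the population size, Pc, Pm and generation count are illustrative values, and fitness-proportional selection is implemented as a simple roulette wheel.

    import random

    N_BITS, POP_SIZE, GENERATIONS = 12, 20, 50   # illustrative settings, not from the slides
    PC, PM, ELITISM = 0.7, 0.001, 1

    def decode(bits):            # Step 2: binary string -> integer x (one unknown, so no real-number mapping needed)
        return int(bits, 2)

    def fitness(bits):           # Step 3: objective f(x) = x**2, already non-negative
        return decode(bits) ** 2

    def select_pair(population): # Step 4: fitness-proportional (roulette-wheel) selection
        weights = [fitness(ind) + 1 for ind in population]   # +1 so the total weight is never zero
        return random.choices(population, weights=weights, k=2)

    def crossover(p1, p2):       # Step 4: single point crossover with probability PC
        if random.random() < PC:
            point = random.randint(1, N_BITS - 1)
            return p1[:point] + p2[point:], p2[:point] + p1[point:]
        return p1, p2

    def mutate(bits):            # Step 5: flip each bit with probability PM
        return "".join(b if random.random() >= PM else ("1" if b == "0" else "0") for b in bits)

    # Step 1: random initial population
    population = ["".join(random.choice("01") for _ in range(N_BITS)) for _ in range(POP_SIZE)]

    for g in range(1, GENERATIONS + 1):          # Step 8: generational counter
        elite = max(population, key=fitness)
        temp = []
        while len(temp) < POP_SIZE:              # Step 4: build the temporary population
            c1, c2 = crossover(*select_pair(population))
            temp += [mutate(c1), mutate(c2)]     # Step 5: mutation
        temp = temp[:POP_SIZE]
        # Step 6: elitism - do not let the best solution found so far be thrown away
        if ELITISM and all(fitness(ind) < fitness(elite) for ind in temp):
            temp[random.randrange(POP_SIZE)] = elite
        population = temp                        # Step 7: replace the old population

    best = max(population, key=fitness)
    print("best x =", decode(best), "f(x) =", fitness(best))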

    29. Continuous GA
