
Imagine that I am in a good mood. Imagine that I am going to give you some money!


Presentation Transcript


  1. Imagine that I am in a good mood. Imagine that I am going to give you some money! In particular, I am going to give you z dollars after you tell me the values of x and y, where x and y must be in the range 0 to 10.

  2. [Figure: a surface plot of the payoff z over the (x, y) plane; axes labeled z, x, and y.]

  3. Optimizing Search (Iterative Improvement Algorithms), e.g., hill climbing, simulated annealing, genetic algorithms. • Optimizing search differs from the path-finding search we have studied in many ways. • The problems are ones for which exhaustive and heuristic search are intractable (NP-hard). • The path is not important (for that reason we typically don't bother to keep a tree around); thus we are CPU bound, not memory bound. • Every state is a "solution". • The search space is (often) continuous. • Usually we abandon hope of finding the best solution and settle for a very good solution. • The task is usually to find the minimum (or maximum) of a function.

  4. Example Problem I (Continuous): y = f(x). Finding the maximum (or minimum) of some function within a defined range. A brute-force sketch follows.
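
For concreteness, here is a minimal brute-force sketch of the continuous problem. The objective f is hypothetical (the slide only says y = f(x)), and the range [0, 10] is borrowed from the opening example.

```python
import math

def f(x):
    # Hypothetical objective; the slide only specifies y = f(x).
    return math.sin(x) + 0.1 * x

# Sample the defined range [0, 10] on a fine grid and keep the best point.
xs = [i * 0.01 for i in range(1001)]
best_x = max(xs, key=f)
print(best_x, f(best_x))
```

Grid sampling works only in one or two dimensions; the iterative improvement algorithms on the later slides are what replaces it when the space is too large to sweep.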

  5. Example Problem II (Discrete): The Traveling Salesman Problem (TSP). A salesman spends his time visiting n cities. In one tour he visits each city just once and finishes up where he started. In what order should he visit them to minimize the distance traveled? There are (n − 1)!/2 possible tours: fixing the starting city leaves (n − 1)! orderings, and each tour is counted twice, once for each direction.
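
A brute-force check of that count, with hypothetical city coordinates (for n = 5 there are only 4!/2 = 12 distinct tours, so enumeration is still feasible):

```python
import itertools
import math

cities = {  # hypothetical coordinates
    'A': (0, 0), 'B': (1, 5), 'C': (5, 2), 'D': (6, 6), 'E': (8, 3),
}

def tour_length(order):
    # Sum of leg lengths, closing the loop back to the starting city.
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Fix the start city: (n-1)! orders remain, and each tour is seen twice
# (once per direction), so there are (n-1)!/2 distinct tours.
start, rest = 'A', ['B', 'C', 'D', 'E']
best = min(([start] + list(p) for p in itertools.permutations(rest)),
           key=tour_length)
print(best, tour_length(best))
```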

  6. Example Problem III (Continuous and/or Discrete): Function Fitting. Depending on the way the problem is set up, this can be continuous and/or discrete. Discrete part: finding the form of the function (is it X², X⁴, or ABS(log(X)) + 75?). Continuous part: finding the value for X (is it X = 3.1 or X = 3.2?). A toy sketch of the discrete part follows.
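
A toy sketch of the discrete part: scoring the slide's candidate forms against data. The data points are made up for illustration.

```python
import math

data = [(1.0, 1.1), (2.0, 3.9), (3.0, 9.2)]   # made-up (x, y) observations

candidates = {                                 # the discrete part: which form?
    'X^2': lambda x: x ** 2,
    'X^4': lambda x: x ** 4,
    'ABS(log(X)) + 75': lambda x: abs(math.log(x)) + 75,
}

def error(f):
    # Sum of squared differences between the candidate and the data.
    return sum((f(x) - y) ** 2 for x, y in data)

best_form = min(candidates, key=lambda name: error(candidates[name]))
print(best_form)   # X^2 fits this toy data best
```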

  7. Assume that we can: • Represent a state. • Quickly evaluate the quality of a state. • Define operators to change from one state to another. Function-optimizing example: y = log(x) + sin(tan(y − x)); with x = 2 and y = 7, log(2) + sin(tan(7 − 2)) ≈ 0.93 (natural log, radians). Operators: x = add_10_percent(x), y = subtract_10_percent(y), and so on. Traveling Salesman example: a state is a tour A C F K W….. Q A; its quality is the total distance (A to C = 234, C to F = 142, …, total 10,231); an operator reorders cities, e.g., A C F K W….. Q A → A C K F W….. Q A (F and K swapped). A Python sketch of these ingredients follows.
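
The three ingredients for the function-optimizing example might look like this in Python; the ±10% operators follow add_10_percent and subtract_10_percent on the slide.

```python
import math

def evaluate(state):
    # Quality of a state (x, y): the function being maximized.
    x, y = state
    return math.log(x) + math.sin(math.tan(y - x))

def neighbors(state):
    # Operators: nudge one coordinate by +/-10%, as on the slide.
    x, y = state
    return [(x * 1.1, y), (x * 0.9, y), (x, y * 1.1), (x, y * 0.9)]

state = (2.0, 7.0)
print(evaluate(state))                      # quality of the current state
print(max(neighbors(state), key=evaluate))  # best single move
```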

  8. Hill-Climbing I

  function Hill-Climbing(problem) returns a solution state
    inputs: problem            // a problem
    local variables: current   // a node
                     next      // a node
    current ← Make-Node(Initial-State[problem])   // make a random initial state
    loop do
      next ← a highest-valued successor of current
      if Value[next] < Value[current] then return current
      current ← next
    end
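
A runnable Python version of the same loop, using the evaluate/neighbors shape from slide 7. The step cap is an addition: on an unbounded objective, greedy ascent may otherwise never terminate.

```python
import math

def hill_climb(state, evaluate, neighbors, max_steps=1000):
    # Greedy ascent: move to the best neighbor until none improves.
    for _ in range(max_steps):
        best = max(neighbors(state), key=evaluate)
        if evaluate(best) <= evaluate(state):
            return state           # no uphill move left: a local maximum
        state = best
    return state

# Using the evaluate/neighbors from the slide-7 sketch:
f = lambda s: math.log(s[0]) + math.sin(math.tan(s[1] - s[0]))
moves = lambda s: [(s[0] * 1.1, s[1]), (s[0] * 0.9, s[1]),
                   (s[0], s[1] * 1.1), (s[0], s[1] * 0.9)]
print(hill_climb((2.0, 7.0), f, moves))
```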

  9. How would hill climbing do on the following problems? How can we improve hill climbing? Random restarts! Intuition: call hill climbing as many times as you can afford, then choose the best answer, as in the wrapper below.
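
Random restarts as a thin wrapper around the hill climber above; the restart count and the sampling range for the initial state are arbitrary choices for this sketch.

```python
import random

def random_restart(climb, evaluate, restarts=20):
    # Run hill climbing from several random starts and keep the best result.
    best = None
    for _ in range(restarts):
        start = (random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
        candidate = climb(start)
        if best is None or evaluate(candidate) > evaluate(best):
            best = candidate
    return best

# e.g., with the slide-8 sketch: random_restart(lambda s: hill_climb(s, f, moves), f)
```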

  10. function Simulated-Annealing(problem, schedule) returns a solution state
    inputs: problem            // a problem
            schedule           // a mapping from time to "temperature"
    local variables: current   // a node
                     next      // a node
                     T         // a "temperature" controlling the probability of downward steps
    current ← Make-Node(Initial-State[problem])
    for t ← 1 to ∞ do
      T ← schedule[t]
      if T = 0 then return current
      next ← a randomly selected successor of current
      ΔE ← Value[next] − Value[current]
      if ΔE > 0 then current ← next
      else current ← next only with probability e^(ΔE/T)
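
The same procedure in Python. The linear cooling schedule and step budget are placeholders, since the slide leaves schedule abstract.

```python
import math
import random

def simulated_annealing(state, evaluate, random_successor, t_max=10000):
    for t in range(1, t_max + 1):
        T = 1.0 - t / t_max            # placeholder schedule[t]; ends at T = 0
        if T <= 0:
            return state
        nxt = random_successor(state)
        delta_e = evaluate(nxt) - evaluate(state)
        # Uphill moves are always taken; downhill moves only with
        # probability e^(dE/T), which shrinks as T cools.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            state = nxt
    return state
```

With the slide-7 sketch, random_successor could be as simple as lambda s: random.choice(neighbors(s)).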

  11. Genetic Algorithms I (R and N, pages 619-621) • Variation (members of the same species differ in some ways). • Heritability (some of the variability is inherited). • Finite resources (not every individual will live to reproductive age). • Given the above, the basic idea of natural selection is this: some of the characteristics that are variable will be advantageous to survival. Thus, the individuals with the desirable traits are more likely to reproduce and have offspring with similar traits... • And therefore the species evolves over time... [Photo: Richard Dawkins] Since natural selection is known to have solved many important optimization problems, it is natural to ask: can we exploit the power of natural selection?

  12. Genetic Algorithms II • The basic idea of genetic algorithms (evolutionary programming): • Initialize a population of n states (randomly). • While time allows: • Measure the quality of the states using some fitness function. • "Kill off" some of the states. • Allow the surviving states to reproduce (sexually or asexually or...). • end • Report the best state as the answer. All we need do is: (A) figure out how to represent the states, (B) figure out a fitness function, and (C) figure out how to allow our states to reproduce. A minimal sketch of this loop follows.
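
A minimal sketch of the loop. The operator signatures (fitness, random_state, crossover, mutate) are assumptions, to be filled in per problem as slides 13-16 describe; so are the culling policy (keep the top half) and the parameter defaults.

```python
import random

def genetic_algorithm(fitness, random_state, crossover, mutate,
                      n=50, generations=100):
    population = [random_state() for _ in range(n)]
    for _ in range(generations):
        # (A)/(B): measure quality, then "kill off" the weaker half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: n // 2]
        # (C): survivors reproduce (sexually) to refill the population.
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(n - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)
```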

  13. Genetic Algorithms III. One possible representation of the states is a tree structure, e.g., for log(xy) + sin(tan(y − x)). [Figure: the expression tree, built from the nodes +, log, pow, sin, tan, −, x, and y.] Another is a bitstring: 100111010101001. For problems where we are trying to find the best order to do something (TSP), a linked list might work... All three are sketched below.
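
The three representations, sketched as Python values. Reading the slide's pow node as x raised to the power y is an assumption.

```python
# 1. A tree, as nested tuples: log(x^y) + sin(tan(y - x))
tree = ('+',
        ('log', ('pow', 'x', 'y')),
        ('sin', ('tan', ('-', 'y', 'x'))))

# 2. A bitstring, as on the slide:
bits = '100111010101001'

# 3. An ordering for TSP (a plain list standing in for the linked list):
tour = ['A', 'C', 'F', 'K', 'W', 'Q']
```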

  14. Genetic Algorithms IV. [Figure: the expression tree again, alongside a small table of data values (23, 12, 56, 77, 36, 83).] Usually the fitness function is fairly trivial. For the function-maximizing problem we can evaluate the given function with the state (the values for x, y, z, etc.). For the function-finding problem we can evaluate the function and see how closely it matches the data. For TSP the fitness function is just the length of the tour represented by the linked list.
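
Hypothetical fitness functions for these cases. Since a GA maximizes fitness, costs (error, tour length) are negated; tour_length is as in the slide-5 sketch.

```python
def fitness_fitting(f, data):
    # Function finding: how closely does candidate f match the data?
    # (Negated squared error, since fitter should mean larger.)
    return -sum((f(x) - y) ** 2 for x, y in data)

def fitness_tsp(tour, tour_length):
    # TSP: shorter tours are fitter, so negate the tour length.
    return -tour_length(tour)

# For the function-maximizing problem, the fitness of a state is simply
# evaluate(state) from the slide-7 sketch.
```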

  15. Genetic Algorithms V: Sexual Reproduction (crossover). [Figure: a subtree of parent state A is exchanged with a subtree of parent state B to produce the child of A and B; likewise for bitstrings, where parents 10011101 and 10011000 combine to produce a child such as 11101000.]
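
One-point crossover for the bitstring case is only a few lines; crossover for trees (swapping random subtrees between parents) follows the same idea but needs tree surgery, so only the bitstring version is sketched here.

```python
import random

def crossover(parent_a, parent_b):
    # Splice a prefix of parent A onto the matching suffix of parent B.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

print(crossover('10011101', '10011000'))
```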

  16. Genetic Algorithms VI: Asexual Reproduction (mutation). [Figure: the child of A is a copy of parent state A with a single node changed; likewise for bitstrings, where 10011101 mutates to 10011111 (a single flipped bit).]
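
Bitstring mutation, with the per-bit flip rate as a tunable assumption:

```python
import random

def mutate(bits, rate=0.1):
    # Flip each bit independently with probability `rate`.
    flipped = lambda b: '1' if b == '0' else '0'
    return ''.join(flipped(b) if random.random() < rate else b for b in bits)

print(mutate('10011101'))  # e.g. '10011111' when only the seventh bit flips
```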

  17. Discussion of Genetic Algorithms • It turns out that the policy of "keep the best n individuals" is not the best idea... • Genetic algorithms require many parameters (population size, fraction of the population generated by crossover, mutation rate, number of sexes...). How do we set these? • Genetic algorithms are really just a kind of hill-climbing search, but they seem to have fewer problems with local maxima... • Genetic algorithms are very easy to parallelize... • Applications: protein folding, circuit design, the job-shop scheduling problem, timetabling, designing wings for aircraft...

  18. [Figure: the expression trees from the crossover and mutation examples, drawn from the nodes +, −, /, tan, log, cos, sin, pow, x, y, and 5.]
