
Biologically Inspired Intelligent Systems

Lecture 10

Dr. Roger S. Gaborski

Roger S. Gaborski

- WEEKLY UPDATES
- PRESENTATIONS START Tuesday, 10/25
- Updates even after you present
- Final project material due 11/8 (no exceptions)
- Random selection
- Volunteers?


- Essentials of Metaheuristics
- Sean Luke, George Mason University
- Available online at no cost:
http://cs.gmu.edu/~sean/book/metaheuristics/

Some lecture material taken from pp. 1-49


- Darwin: in 1859, Charles Darwin proposed a model explaining evolution


- Diversity of a population is critical for adaptability to the environment. A population is a heterogeneous set of structures; individual phenotypes express different traits.
- Populations evolve over many generations. Reproduction of a specific individual depends on the specific conditions of the environment and the organism’s ability to survive and produce offspring.
- Offspring inherit their parents’ fitness, but the genetic material of the parents merges, and mutation introduces slight variations.


- Rote Learning: no inference, direct implantation of knowledge
- Learning by Instruction: knowledge acquired from a teacher or organized source
- Learning by Deduction: deductive, truth-preserving inferences and memorization of useful conclusions
- Learning by Analogy: transformation of existing knowledge that bears similarity to the new desired concept
- Learning by Induction:
- Learning from examples (concept acquisition): based on a set of examples and counterexamples, induce a general concept description that explains the examples and counterexamples.
- Learning by observation and discovery (descriptive generalization, unsupervised learning): search for regularities and general rules explaining observations, without a teacher


(Michalski, Carbonell, Mitchell,…)

- Inductive learning by observation and discovery
- No teacher exists who presents examples; the system develops examples on its own
- Creation of new examples (search points) by the algorithm is an inductive guess on the basis of existing knowledge
- Knowledge base – population
- If a new example is good, it is added to the population (knowledge added to the population)


- Evaluate every solution in the population and determine its fitness
- Fitness is a measure of how close the solution matches the problem’s objective
- Fitness is calculated by a fitness function
- Fitness functions are problem dependent
- Fitness values are usually positive, with zero being a perfect score (the larger the fitness value the worse the solution)


[Figure: a two-neuron network with inputs In1 and In2, a +1 bias input, and output y.]

There are 7 weights. The solution is a point in a 7-dimensional space.


- Write a Matlab function that will accept weight matrix W, two input vectors and a target vector. The function should return the total error value
- Implement XOR with known weight matrix W
- Use the following variables:
- Weight matrix: W size 2x5
- Input values: Iin1 = [ 1 1 0 0]
and Iin2 = [ 1 0 1 0]

- Target Outputs: Y = [ 0 1 1 0]
- 2 Neurons with outputs x1 and x2


- When the program executes with the correct W matrix performance should be similar to the example given in the previous lecture.
- W = [-2.19 -2.20 .139 0 0;-2.81 -2.70 3.90 -31.8 0]
- Use a sigmoid fcn, not tanh:
1/(1+exp(-5*x))

(The 5 controls the slope.)

- Total error should be approximately 0
- Test on AND and OR target vectors – are the results correct (using the given W matrix)?
- Experiment with random W matrices (but with 0’s in the same locations)
- Can you find another weight matrix that solves the problem?
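The error computation can be sketched in Python (the exercise itself asks for Matlab; this is only an illustration). The column ordering of W as [In1, In2, bias, x1, x2] and the helper names `sigmoid`/`xor_error` are assumptions, not part of the assignment:

```python
import math

def sigmoid(x):
    # Sigmoid with slope 5, as specified in the exercise: 1/(1+exp(-5x))
    return 1.0 / (1.0 + math.exp(-5.0 * x))

def xor_error(W, In1, In2, Y):
    """Total absolute error of the two-neuron network.
    Assumes W columns are ordered [In1, In2, bias, x1, x2]."""
    total = 0.0
    for a, b, t in zip(In1, In2, Y):
        x1 = sigmoid(W[0][0]*a + W[0][1]*b + W[0][2])                # neuron 1
        x2 = sigmoid(W[1][0]*a + W[1][1]*b + W[1][2] + W[1][3]*x1)  # neuron 2 (output y)
        total += abs(t - x2)
    return total

W = [[-2.19, -2.20, 0.139, 0.0, 0.0],
     [-2.81, -2.70, 3.90, -31.8, 0.0]]
err = xor_error(W, [1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 0])
print(err)  # approximately 0
```

With the given W and this column ordering, the total error comes out well under 0.01, matching the "approximately 0" expectation.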


- Find the maximum (or minimum) of a function

http://en.wikipedia.org/wiki/File:Gradient_ascent_%28surface%29.png
http://en.wikipedia.org/wiki/Gradient_descent


- Find the maximum of a function
- Start with arbitrary value x
- Add to this value a small fraction of the slope: x = x + a*f’(x), where a < 1 and f’(x) is the derivative
- Positive slope, x increases
- Negative slope, x decreases

- x will continue towards the maximum, at the maximum of f(x) the slope is zero and x will no longer change
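The update rule can be sketched in Python (this course's examples use Matlab; this is an illustration only, with an assumed test function):

```python
def gradient_ascent(fprime, x, a=0.1, steps=1000):
    # x = x + a*f'(x): positive slope increases x, negative slope decreases it
    for _ in range(steps):
        x = x + a * fprime(x)
    return x

# f(x) = -(x - 3)**2 has its maximum at x = 3, where the slope is zero
x_max = gradient_ascent(lambda x: -2.0 * (x - 3.0), x=0.0)
print(x_max)  # converges to 3, where f'(x) = 0
```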


- Gradient descent is the same algorithm except the slope is subtracted to find the minimum of the function


- Convergence time – how large should ‘a’ be? Too large and we may overshoot the maximum; too small and we may run out of time before the maximum (or minimum) is found
- Local maxima (minima)
- Saddle points
- Newton’s Method: use both the first and second derivatives:
x = x - a( f’(x)/f’’(x) )
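A one-line Python sketch of the Newton update (illustrative; the quadratic test function is an assumption). Dividing by the second derivative rescales the step, so a quadratic is solved in a single step with a = 1:

```python
def newton_step(fprime, fsecond, x, a=1.0):
    # x = x - a * f'(x)/f''(x); the curvature f''(x) rescales the step
    return x - a * fprime(x) / fsecond(x)

# f(x) = -(x - 3)**2: f'(x) = -2*(x - 3), f''(x) = -2
x = newton_step(lambda x: -2.0 * (x - 3.0), lambda x: -2.0, x=10.0)
print(x)  # 3.0 in one step
```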


- Gradient Ascent is a local optimization method
- Instead of starting at only one x, randomly select several x’s and apply gradient ascent at each x allowing a wider exploration of the solution space


Replace x with vector x (underscore indicating a vector).

The slope f’(x) is now the gradient of f(x), ∇f(x).

The gradient is a vector where each element of the vector is the slope of f in that dimension.


- We are making the assumption that we can calculate the first and possibly the second derivative in each dimension.
- What if we can’t?
- We don’t know what the function is

- We can
- Create inputs
- Test the inputs
- Assess the results


- Initialization Procedure: Provide one or more initial candidate solutions
- Assessment Procedure: Assess quality of a solution
- Modification Procedure: Make a copy of a candidate solution and produce a candidate that is slightly randomly different from candidate solution
(Derivatives are not calculated)


- Somehow create an initial candidate solution
- Randomly modify the candidate solution
- If modified candidate is better than initial solution, replace initial candidate with modified candidate
- Continue process until solution found or run out of time
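The four steps above can be sketched in Python (an illustration, not the course's Matlab code; the test function is an assumption):

```python
import random

def hill_climb(f, x, r=0.1, iters=2000, seed=0):
    """Simple hill climbing: randomly modify a copy of the candidate and
    replace the candidate only when the modified copy is better."""
    rng = random.Random(seed)
    for _ in range(iters):
        cand = x + rng.uniform(-r, r)   # slightly randomly different copy
        if f(cand) > f(x):              # better? replace the candidate
            x = cand
    return x

best = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
print(best)  # near the maximum at x = 3
```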


- Make several modifications to the candidate instead of a single modification
- Always keep the modified candidate (don’t compare) but keep a separate variable called ‘best’ that always retains the best discovered solution – at end of program, return ‘best’


- What does the solution candidate look like?
- Single number
- A vector of numbers (or matrix)
- Tree or graph
- Etc.

- Assume fixed length vector containing real numbers


- Assume vector length L and a range of valid entries, low and high
- Use a random number generator (uniform distribution) and scale values to be between low and high:
>> low = 10; high = 20;
>> v = low + (high-low).*rand(5,1)
v =
   18.2415
   12.1823
   10.9964
   16.1951
   11.0381


>> low = 10;
>> high = 20;
>> test = low + (high-low).*rand(10000,1);
>> min(test)
ans =
   10.0012
>> max(test)
ans =
   19.9991


- Add a small amount of uniformly distributed random noise to each component of vector v

u = (2*rand(10000,1))-1;  % u ranges from -1 to +1

Simply scale u to the desired range r. For a desired range of -r to +r, let r = .1, giving -.1 to +.1:

>> u1 = r*u;
>> min(u1)
ans =
   -0.1000
>> max(u1)
ans =
    0.1000

v(i) = v(i) + u1(i); check that v(i) is within the bounds low, high
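The same bounded mutation can be sketched in Python (an illustration; the helper name `mutate_bounded` and the sample vector are assumptions):

```python
import random

def mutate_bounded(v, r=0.1, low=10.0, high=20.0, seed=1):
    """Add uniform noise in [-r, +r] to each component, then keep each
    component within the bounds [low, high]."""
    rng = random.Random(seed)
    out = []
    for vi in v:
        ui = r * (2.0 * rng.random() - 1.0)   # uniform in [-r, +r]
        out.append(min(high, max(low, vi + ui)))
    return out

v = [14.7, 11.7, 17.8, 19.99, 16.2]
m = mutate_bounded(v)
```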


>> u1(1:5)
ans =
   -0.0476
   -0.0297
    0.0526
   -0.0639
   -0.1000

>> low = 10; high = 20;
>> v = low + (high-low).*rand(5,1)
v =
   14.7211
   11.7077
   17.7977
   19.9797
   16.2644

Modified v:
>> v = v + u1(1:5)
v =
   14.6735
   11.6780
   17.8503
   19.9158
   16.1644


- If r is very small, hill climbing will explore only the local region and can potentially get caught in local optima.
- If r is very large, hill climbing will bounce around, and if it is near the peak of the function it may miss it because it may overshoot the peak
- r controls the degree of Exploration (randomly explore the space) versus Exploitation (exploit the local gradient) in the hill climbing algorithm


- Extreme Exploration – random search
- Extreme Exploitation – very small r
- Combination:
- Randomly select starting place x
- Using small r, perform hill climbing for a random amount of time, save result if ‘Best’

- At end of time, randomly select new starting point x
- Using small r, perform hill climbing for a random amount of time, save result if ‘Best’

- Repeat until solution found

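The combination can be sketched in Python (an illustration, not the course's Matlab code; the test function, bounds, and parameter values are assumptions):

```python
import random

def hill_climb_with_restarts(f, low, high, r=0.05, restarts=20,
                             max_time=1000, seed=0):
    """Alternate exploration (random starting place) with exploitation
    (small-r hill climbing for a random amount of time), keeping 'best'."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        x = rng.uniform(low, high)                    # random starting place x
        for _ in range(rng.randrange(1, max_time)):   # random time budget
            cand = x + rng.uniform(-r, r)             # small-r hill climbing
            if f(cand) > f(x):
                x = cand
        if best is None or f(x) > f(best):            # save result if 'Best'
            best = x
    return best

best = hill_climb_with_restarts(lambda x: -(x - 3.0) ** 2, -10.0, 10.0)
```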


- If the random time interval is long, the algorithm effectively becomes a Hill Climbing algorithm
- If the random time interval is short, the algorithm effectively becomes a random search
- The random time interval drives the algorithm from one extreme to the other
- Which is best? It depends….


[Figure: a multimodal landscape annotated with alternating RANDOM SEARCH and HILL CLIMBING phases; one hill-climbing phase leads away from the maximum.]


Previously, we required a bounded uniform distribution. The range of values was specified.

A Gaussian distribution usually generates small numbers, but numbers of any magnitude are possible.

Large numbers result in exploration
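A short Python sketch of Gaussian mutation (illustrative; the helper name, sample vector, and sigma are assumptions). Most draws are small, but draws of any magnitude are possible, so the mutated vector can occasionally jump far from its parent:

```python
import random

def mutate_gaussian(v, sigma=0.1, seed=2):
    # Add zero-mean Gaussian noise to each component; no bounds are
    # imposed, so occasional large draws give exploration
    rng = random.Random(seed)
    return [vi + rng.gauss(0.0, sigma) for vi in v]

m = mutate_gaussian([14.7211, 11.7077, 17.7977, 19.9797, 16.2644])
```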


GAUSSIAN DISTRIBUTION
>> g1(1:5)
ans =
    0.0280
   -0.1634
   -0.1019
    1.0370   (note the occasional larger value)
    0.1884

PREVIOUSLY, UNIFORM
>> u1(1:5)
ans =
   -0.0476
   -0.0297
    0.0526
   -0.0639
   -0.1000

>> low = 10; high = 20;
>> v = low + (high-low).*rand(5,1)
v =
   14.7211
   11.7077
   17.7977
   19.9797
   16.2644

Modified v (uniform noise):
>> v = v + u1(1:5)
v =
   14.6735
   11.6780
   17.8503
   19.9158
   16.1644

Modified v (Gaussian noise):
>> v = v + g1(1:5)
v =
   14.7491
   11.5443
   17.6958
   21.0167   (outside [low, high] – exploration)
   16.4528


- Differs from Hill Climbing in its decision of when to replace the original individual (parent S) with the modified individual (the child R)
- In Hill Climbing, check if the modified individual is better. If it is, replace the original
- In simulated annealing, if the child is better, replace the parent
- If the child is NOT better, still replace the parent with the child with a certain probability P(t,R,S):
- P(t,R,S) = exp( (Quality(R) - Quality(S)) / t )

- Recall, R is worse than S
- First, t ≥ 0
- (Quality(R) – Quality(S)) is negative
- If R is much worse than S, the exponent is a large negative number and the probability is close to 0
- If R is very close to S, the exponent is close to 0, the probability is close to 1, and we will select R with reasonable probability
- t is selectable: for t close to 0, the exponent is a large negative number and the probability is close to 0
- If t is large, the probability is close to 1


- R (child) = 5, S (parent) = 8, t = 2
- P(t,R,S) = exp((R-S)/t) = 0.2231
Raise t, t = 8

- P(t,R,S) = exp((R-S)/t) = 0.6873
- The probability of replacing S with R increases when t increases
- Initially set t high, causing the algorithm to move to the newly created solution even if it is worse than the current position (random walk)
- Slowly decrease t as the algorithm proceeds, eventually to zero (then it’s simple Hill Climbing)
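The acceptance rule and the worked example can be sketched in Python (illustrative; the helper name `accept` is an assumption):

```python
import math
import random

def accept(quality_R, quality_S, t, rng):
    """Simulated-annealing replacement rule: always accept a better child;
    accept a worse child with probability exp((Quality(R)-Quality(S))/t)."""
    if quality_R >= quality_S:
        return True
    return rng.random() < math.exp((quality_R - quality_S) / t)

# The worked example above: R = 5, S = 8
p_low  = math.exp((5 - 8) / 2.0)   # t = 2 -> 0.2231
p_high = math.exp((5 - 8) / 8.0)   # t = 8 -> 0.6873
print(p_low, p_high)
```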


- The rate we decrease t is called the algorithm’s schedule
- The longer the schedule is, the longer the algorithm resembles a random walk and the more exploration it does


- Keep a history of recently considered candidate solutions (the tabu list)
- Do not return to solutions on the tabu list until they are sufficiently far in the past
- Keep a list of previous candidates of length k. After the list is full, remove old candidates as new candidates are added
- Tabu Search operates in discrete spaces
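The fixed-length list behaves like a bounded queue; a minimal Python sketch (the candidate labels and k = 3 are illustrative assumptions):

```python
from collections import deque

# A length-k tabu list: once full, the oldest candidate is dropped, so
# solutions become revisitable after they are sufficiently far in the past.
tabu = deque(maxlen=3)
for candidate in ["A", "B", "C", "D"]:
    if candidate not in tabu:    # tabu candidates are not revisited
        tabu.append(candidate)

oldest_removed = "A" not in tabu  # "A" fell off when "D" was added
```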


- It is unlikely you will visit the same real-valued location twice
- Consider a candidate to be on the list if it is sufficiently similar to a member on the list
- A similarity measure needs to be determined


- Instead of candidate solutions, keep list of changes you made to specific features


- Keep a collection of candidate solutions and not just a single candidate (as in Hill Climbing)
- Candidate solutions interact


Evolutionary Computation (EC) – A Set of Techniques

Based on population biology, genetics and evolution


- Generational algorithms – update entire population once per iteration
- Steady-state algorithms – update the population a few samples at a time
- Common EAs include Genetic Algorithms (GA) and Evolution Strategies (ES)
- Both generational and steady state methods are used


- Individual – a candidate solution
- Child and Parent – a child is a modified copy of the candidate solution (its parent)
- Population – set of candidate solutions
- Fitness – quality of a solution
- Selection – picking individuals based on their fitness
- Mutation – simple modification to an individual
- Recombination or Crossover – takes two parents and swaps sections, resulting in 2 children


- Genotype or genome – individual’s data structure used during breeding
- Chromosome – a genotype
- Gene – a particular position in the chromosome
- Phenotype – how the individual operates during fitness assessment
- Generation – one cycle of fitness assessment, breeding


- First, construct an initial population
- Iterate:
- Assess fitness of individuals in population
- Use fitness function to breed new population of children
- Join parents and children to form new population

- Continue until solution found or time runs out


- Breeding – two parts:
– select parents from the population

– modify them (mutation and/or recombination) to form children

- Join operation – two options:
- Completely replace the parents with the children
- Keep the fit parents and the fit children


- Truncation Selection Method (TSM)
- Uses only mutation
- Simplest ES algorithm is the (µ,λ) algorithm
- Population of λ individuals
- ITERATE:
- Find fitness of all individuals
- Delete all but the µ fittest individuals (TSM)
- Each of the µ fittest individuals gets to produce λ/µ children through mutation, resulting in λ new children
- The children replace all the parents
- Repeat a fixed number of times, or until the goal is met


- µ = 5 and λ = 20
- Find the 5 fittest individuals
- Each individual gets to produce 20/5 children through mutation = 4
- Total number of children, 4*5 = 20
- Replace all parents with the new children


- In the ES(µ,λ) all the parents are replaced with the children in the next generation
- In the ES(µ+λ) algorithm, the next generation consists of the µ parents plus the λ new children
- The parents and children compete
- All successive generations are µ+λ in size


- µ = 5 and λ = 20
- Find the 5 fittest individuals
- Each individual gets to produce 20/5 children through mutation = 4
- Total number of children, 4*5 = 20
- The next generation consists of the 5 parents plus the 20 new children = 25
- All successive generations are 5+20 in size
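Both variants can be sketched in one Python function (an illustration on an assumed one-dimensional test problem; `plus=False` gives (µ,λ), `plus=True` gives (µ+λ)):

```python
import random

def evolution_strategy(f, mu=5, lam=20, gens=50, sigma=0.3, plus=False, seed=0):
    """(mu, lam) ES with truncation selection; with plus=True the mu
    parents are kept and compete with the lam children."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(lam)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)           # fittest first
        parents = pop[:mu]                      # delete all but the mu fittest
        children = [p + rng.gauss(0.0, sigma)   # lam/mu children per parent
                    for p in parents
                    for _ in range(lam // mu)]
        pop = parents + children if plus else children
    return max(pop, key=f)

best_comma = evolution_strategy(lambda x: -(x - 3.0) ** 2)
best_plus = evolution_strategy(lambda x: -(x - 3.0) ** 2, plus=True)
```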


Some General Ideas


- Potential solutions to a problem
- Any point in the search space defines a potential solution
- ‘Search’ – navigating through the search space
- ‘Evolutionary Search’ is inspired by nature


- Evolutionary algorithms consider a large number, or population, of potential solutions at once
- Use the whole population, or a subset of the population, to help navigate through the search space in search of the ‘optimal’ solution
- Making use of previously evaluated solutions


- Perform search by evolving solutions
- Maintain a population of potential solutions
- Breed better solutions in the population
- Keep ‘children’ that are created
- Remove poorer performing solutions

- Evolve solutions for a given number of generations or until an acceptable solution is found


- Optimization
- Knowledge-rich encoding of problem
- Solution is parameterized in minimum number of parameters
- Evolution is used to find the best parameters


f(x,y) = 1.2 + cos(0.1*(x*x + y*y)) / (1 + .01*(x*x + y*y))


- Given 2D function
- We would like to find the (x,y) pair that corresponds to the maximum of the function


Create a population of 100 random potential solutions.
Measure the fitness of each potential solution.
True maximum is F(0,0) = 2.2.
Best solution found in the random population: F(-.8318, -.2905) = 2.1893.
For this function there are many solutions near the maximum.


- Similar to the string problem
- Select the best candidate from the population
- Create a new population by modifying the x and y coordinates of the best candidate
- Modify the coordinates by adding a small random number to the x and y coordinates

- Include the original ‘best candidate’ in the new population
- Select the best candidate from the new population
- Continue until the optimal solution is found


bestCoord =

0.0476 0.0524
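The whole search can be sketched in Python (an illustration, not the original Matlab program; the population size, noise level, and generation count are assumptions). It typically climbs to the central peak near f(0,0) = 2.2:

```python
import math
import random

def f(x, y):
    # The 2D test function; its true maximum is f(0, 0) = 2.2
    r2 = x * x + y * y
    return 1.2 + math.cos(0.1 * r2) / (1.0 + 0.01 * r2)

rng = random.Random(0)
# Population of 100 random potential solutions; keep the best
best = max(((rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(100)),
           key=lambda p: f(*p))
for _ in range(200):
    # New population: small random changes to the best candidate, plus the
    # original best candidate itself
    pop = [(best[0] + rng.gauss(0, 0.2), best[1] + rng.gauss(0, 0.2))
           for _ in range(100)] + [best]
    best = max(pop, key=lambda p: f(*p))
print(best, f(*best))
```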


- Given XOR Architecture
- Find weight matrix using ES algorithm
- Run program 10 times
- Was the program successful?
- Was more than one correct weight matrix found?


- Holland – explained adaptive processes of natural systems and designed artificial systems based on natural systems
- Most widely used evolutionary algorithm
- GAs use two spaces:
- Search space: space of coded solutions
- Coded solutions = genotypes

- Solution space: space of actual solutions
- Actual solutions = phenotypes
- Fitness is evaluated on phenotype solutions


- Maintain a population of individuals
- Each individual consists of a genotype and its phenotype
- Genotypes are coded versions of the parameters
- A coded parameter is a gene
- A collection of genes in one genotype is a chromosome

- Phenotypes are the actual set of parameters


- GAs do not use the representation of the parameter space directly
- The population consists of individuals commonly referred to as chromosomes
- The genotypes are represented as binary strings
- Genetic operators are applied to the binary strings
- The most common operator is crossover


Search Space → Solution Space

GENOTYPES:  11100110   11100000   11011000
PHENOTYPES: 14x+6y     14x+4y     13x-8y

NOTE: Only the numerical values (ai, bi) in ai x + bi y are determined by the genotype.

Evaluate Fitness of Each Solution


- Interpretation and Evaluation
- Selection and Reproduction
- Variation
- Reproduction


- Interpretation and Evaluation
- Decode binary strings into decimal values, such as the x and y coordinates
- The coordinates are evaluated using the objective function


- Interpretation and Evaluation
- Selection and Reproduction
- Select two individuals from the current population
- Many methods are available to implement the selection step of the algorithm


- Variation
- The two selected bit strings are modified by a crossover and a mutation operator
- Crossover – randomly select a position in the binary string. Create the first child by recombining the first section from parent 1 with the second section from parent 2
- The second child is formed by combining the second section of parent 1 with the first section of parent 2
- Mutation is implemented by simply selecting a bit and flipping its value: 0→1, 1→0
- Application of the operators is determined by a probability
- Both classes of operators are biologically inspired


- SELECTION
- Out of an initial population of individuals, how do you select the parents that will be used to breed the next population?
- Randomly – just select two parents
- Based on fitness – the higher an individual’s fitness, the more likely it will be chosen as a parent
- Tournament Selection – randomly select k individuals from the population. Return the best r (r can equal 1)
- NOTE: fitness is a potential issue, especially early on – what does it really mean?


- Crossover: After selecting two chromosomes from the population,
Parent1: ABCDEFG and Parent2: abcdefg

Select a random position (for example, 4), split the two chromosomes at this point, interchange substrings and recombine

ABCDEFG and abcdefg

child1: ABCDefg

child2: abcdEFG


- Mutation: Randomly change the value of one element of the chromosome
ABCDefg

ABKDefg


Crossover:
Parent1: 011011101011
Parent2: 100110110101
Choose crossover point as position 3:
Child1: 011110110101
Child2: 100011101011

Mutation (randomly choose position 8):
Child1: 011110110101 → 011110100101

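The two operators above can be sketched in Python (illustrative helper names; the convention that everything after the crossover point is swapped, and 1-based bit positions, match the example):

```python
def one_point_crossover(p1, p2, point):
    # Swap everything after the crossover point
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, pos):
    # Flip the bit at 1-based position pos
    i = pos - 1
    return bits[:i] + ("0" if bits[i] == "1" else "1") + bits[i + 1:]

c1, c2 = one_point_crossover("011011101011", "100110110101", 3)
m1 = mutate(c1, 8)
print(c1, c2, m1)  # 011110110101 100011101011 011110100101
```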

- v1 v2 v3 v4 v5 v6
- With single-point crossover the probability is high that v1 and v6 will end up in different children. If the pair v1 and v6 is important for high fitness, single-point crossover will be a poor choice of operator
- Also, it is highly likely that v1 and v2 will remain together in the child. There is only a 1 out of 6 possibility that they will be separated


- Parent1: ABCDEFGHIJK
- Parent2: abcdefghijk
- Two point crossover, pt1 = 3, pt2 = 6
- Child1: ABCdefGHIJK
- Child2: abcDEFghijk
- Two point crossover, pt1 = 2, pt2 = 6
- Child1: ABcdefGHIJK
- Child2: abCDEFghijk


- For every position, flip a coin. If heads, swap the genes at that position between the parents; if tails, no change:
- Parent1: ABCDEFGHIJK
- Coins:   HTTTHTHHTHH
- Parent2: abcdefghijk
- Child1: aBCDeFghIjk
- Child2: AbcdEfGHiJK
- You can generate several children by using another probability string
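Uniform crossover with the coin string above can be sketched in Python (the helper name is an assumption; H means swap the genes at that position, T means leave them):

```python
def uniform_crossover(p1, p2, coins):
    """Heads (H) swaps the genes at that position; tails (T) leaves them."""
    c1, c2 = [], []
    for g1, g2, c in zip(p1, p2, coins):
        if c == "H":
            g1, g2 = g2, g1
        c1.append(g1)
        c2.append(g2)
    return "".join(c1), "".join(c2)

c1, c2 = uniform_crossover("ABCDEFGHIJK", "abcdefghijk", "HTTTHTHHTHH")
print(c1, c2)  # aBCDeFghIjk AbcdEfGHiJK
```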


- Instead of using binary values, use real values.
- Crossover
- Parents:
12.1  1.4  16.5  18.1  20.7
6.1  -7  -8.2  9.1  -10.1
- Children:
12.1  1.4  16.5  9.1  -10.1
6.1  -7  -8.2  18.1  20.7


- Instead of swapping values between parents to form children, average the values from the two parents
- Uniform crossover variant: for each heads location, average the corresponding values


- Consider each vector a point in n-dimensional space (n = number of elements)
- Draw a line between the two points. Select points off this line. Allow the line to extend beyond the points

[Figure: two original points marked X on a line, with children X selected along the extended line.]

- Use a Gaussian random generator to generate a small random number. Add the random number to the chosen value.
- The random number can be scaled in both range and magnitude. If numbers are in the -2.0 to +2.0 range, a mutation value of .1 might be reasonable, but if the numbers are in the 1000 – 2000 range, a random value of 10 might be more reasonable


- There are several popular methods to create the new population
- One approach is to simply replace the two parents with the two children
- Another approach is to evaluate both children and only replace the parents if the children are more fit
- Yet another approach is to replace the least fit individuals in the population with the children and also retain the original parents in the population


- The two parameter problem can be represented by two segments of binary strings of equal length
- For example, [0100100110]
- The first 5 bits represent the x coordinate
- The last 5 bits represent the y coordinate
- Must be mapped to phenotypical representation of real numbers


- For the 2D example we restrict
[ xmin , xmax ] = [ ymin , ymax ] = [-10, 10]

x coordinate:

xmin + [ (xmax - xmin) / (2^kx - 1) ] * [ binary to decimal ]

kx is the number of bits representing the x coordinate

-10 + [ (10-(-10)) / (2^5 - 1) ] * [ binary to decimal ]

-10 + [20/31] * [ binary to decimal ]


00000  -10 + [20/31]*[0]  = -10
00001  -10 + [20/31]*[1]  = -9.3548
00010  -10 + [20/31]*[2]  = -8.7097
…..
11111  -10 + [20/31]*[31] = +10

Increasing the number of bits representing the coordinate will increase the accuracy.

For 10 bits:

-10 + [ (10-(-10)) / (2^10 - 1) ] * [ binary to decimal ]

-10 + [20/1023] * [ binary to decimal ]

0000000001  -10 + [20/1023]*[1] = -9.9804
0000000010  -10 + [20/1023]*[2] = -9.9609
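The decoding formula can be sketched in Python (illustrative helper name; the bounds default to the [-10, 10] restriction above):

```python
def decode(bits, lo=-10.0, hi=10.0):
    """Map a binary string to a real number in [lo, hi]:
    lo + (hi - lo)/(2**k - 1) * (binary to decimal), k = number of bits."""
    k = len(bits)
    return lo + (hi - lo) / (2 ** k - 1) * int(bits, 2)

x1 = decode("00001")        # -9.3548...
x2 = decode("11111")        # +10.0
x3 = decode("0000000001")   # -9.9804...
```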


The initial population consists of randomly generated individuals which are strings 20 binary bits long. The first 10 bits represent the x-coordinate, the last 10 bits represent the y-coordinate. Calculate the corresponding fitness of the original population.

Procedure:
Decode the first 10 bits to the x-coordinate
Decode the second 10 bits to the y-coordinate
Calculate the fitness of the individual

At this point, each individual in the population has a fitness associated with it.


[001000101110101100011101]

[111000100110101100011101]

[011010101110101100011100]

[001000101110101100011101]

[011000101101101100011101]

[011000101110101000011100]

[011111101110101100010100]


- Each vector is 24 binary bits long
- Assume each 4 bits represents a decimal number:
0000 = 0

0001 = 1

0010 = 2

…

1111 = 15


- [0010 0010 1110 1011 0001 1101]
- The whole vector is the chromosome
- In this example each 4 bit sequence is a gene
- This chromosome has 6 genes
- In this example the genes represent integer numbers:
- [ 2 2 14 11 1 13]
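Splitting a chromosome into genes can be sketched in Python (the helper name `genes` is an assumption):

```python
def genes(chromosome, width=4):
    """Split a bit-string chromosome into fixed-width genes and
    interpret each gene as an integer."""
    return [int(chromosome[i:i + width], 2)
            for i in range(0, len(chromosome), width)]

g = genes("001000101110101100011101")
print(g)  # [2, 2, 14, 11, 1, 13]
```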


Create a random population.
Evaluate all individuals in the population.
If an individual meets the fitness requirement, DONE.
Create a mating pool – chromosomes with higher fitness values have more copies in the mating pool; highly fit individuals have a better chance of reproducing.
Randomly select two parents from the mating pool.
With a given probability, apply GA operators (crossover and mutation).
Depending on the probability, crossover or mutation operators may not be applied.
Place children into the population.
Continue until all chromosomes in the population are children.
(Completion of a generation)
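One common way to build such a mating pool is fitness-proportional (roulette-wheel) sampling; a Python sketch (the helper name, toy population, and fitness values are assumptions):

```python
import random

def mating_pool(population, fitness, size, seed=0):
    """Fitness-proportional selection: fitter chromosomes get more
    copies in the mating pool (weighted sampling with replacement)."""
    rng = random.Random(seed)
    return rng.choices(population, weights=fitness, k=size)

# "C" has 8x the fitness of "A" or "B", so it dominates the pool
pool = mating_pool(["A", "B", "C"], [1.0, 1.0, 8.0], size=1000)
```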


- GA search is not random, but a directed search
- In the simple GA, the number of times a copy of a parent is placed in the mating pool depends on that parent’s fitness

- The simple GA is just that, simple
- The literature contains examples of more advanced GAs


- Example from: Creative Evolutionary Systems (Bentley and Corne)
- The chromosome represents a ‘program’
- Programs are represented by a tree-like structure
- Genetic Programming evolves a hierarchical tree structure that can be interpreted as a computer program


Repeat 3 times
  [move 3 steps forward
   move 2 steps right
   repeat 2 times
     [move back 1 step]
  ]

[Figure: the same program as a tree – a Repeat node (count 3) over a List of Forward (3), Right (2), and a nested Repeat (count 2) over Back (1).]


[Figure: subtree crossover – swapping subtrees between Parent 1 and Parent 2 produces Child 1 and Child 2.]


- How do we know if the solution provided by a ‘child’ is better?
- We need to define a ‘distance measure’
- How ‘close’ are we to the optimal solution?
