Evolutionary Computation

Evolving Neural Network Topologies

Project Problem
  • There is a class of problems that are not linearly separable
  • The XOR function is a member of this class
  • A multilayer network trained with the BACKPROPAGATION algorithm can represent a variety of these non-linear decision surfaces (a truth-table sketch follows below)
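
As a concrete illustration (this block is my own sketch, not part of the original slides), the XOR truth table in Matlab:

    % XOR truth table (illustrative sketch; variable names are my own)
    X = [0 0; 0 1; 1 0; 1 1];   % the four possible input patterns
    T = [0; 1; 1; 0];           % XOR output for each pattern
    % No single line w1*x1 + w2*x2 = theta puts the two 1-outputs on one side
    % and the two 0-outputs on the other, so one perceptron cannot separate them.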
Project Problem
  • Non-linear decision surfaces can’t be “learned” with a single perceptron
  • Backprop uses a multi-layer network with a layer of input units, a layer of “hidden” units, and the corresponding output units (a minimal training sketch follows below)
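
For concreteness, a minimal base-Matlab sketch of a 2-2-1 backprop network trained on XOR. This is not the Abdi toolbox code or the author's mytestbpg; all names, the sigmoid units, and the sum-of-squares error are assumptions of the sketch:

    % Minimal 2-2-1 backprop sketch for XOR (illustration only)
    X = [0 0; 0 1; 1 0; 1 1];          % input patterns (4 x 2)
    T = [0; 1; 1; 0];                  % XOR targets    (4 x 1)
    nHid = 2; eta = 0.25; nEpochs = 5000;
    W1 = rand(2, nHid) - 0.5; b1 = rand(1, nHid) - 0.5;   % input  -> hidden weights
    W2 = rand(nHid, 1) - 0.5; b2 = rand - 0.5;            % hidden -> output weights
    sig = @(a) 1 ./ (1 + exp(-a));                        % logistic activation
    for n = 1:nEpochs
        H = sig(X * W1 + repmat(b1, 4, 1));   % hidden activations (4 x nHid)
        O = sig(H * W2 + repmat(b2, 4, 1));   % network outputs    (4 x 1)
        dO = (O - T) .* O .* (1 - O);         % output-layer deltas
        dH = (dO * W2') .* H .* (1 - H);      % hidden-layer deltas
        W2 = W2 - eta * H' * dO;  b2 = b2 - eta * sum(dO);
        W1 = W1 - eta * X' * dH;  b1 = b1 - eta * sum(dH, 1);
    end
    E = 0.5 * sum((T - O).^2)   % final sum-of-squares error

Changing nHid and eta reproduces the kind of comparisons described in the later slides.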
Parametric Optimization
  • Parametric Optimization was the goal of this project
  • The parameter to be optimized was the number of hidden units in the hidden layer of a backprop network used to learn the output for the XOR benchmark function
Tool Boxes
  • A neural network toolbox developed by Herve Abdi (available from Matlab Central) was used for the backprop application
  • The Genetic Algorithm Optimization Toolbox (GAOT) was used for the GA application
Graphical Illustrations
  • The next plot shows the randomness associated with successive runs of the backprop algorithm (a repeated-run sketch follows below)
  • xx = mytestbpg(1,5000,0.25)
  • Here, 1 is the number of hidden units, 5000 is the number of training iterations, and 0.25 is the learning rate
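
The slides only show the call signature of mytestbpg, the author's wrapper around the toolbox backprop; assuming it runs one training session per call, successive identical calls differ only in their random weight initialization:

    % Five runs with identical parameters; only the random initial weights differ,
    % so the resulting error curves differ from run to run.
    % (mytestbpg is the author's wrapper; its return value is not described in the
    % slides, so storing it in xx simply mirrors the call shown above.)
    for r = 1:5
        xx = mytestbpg(1, 5000, 0.25);   % 1 hidden unit, 5000 iterations, rate 0.25
    end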
Graphical Illustrations
  • E(n) is the error at time epoch n (an assumed formula follows below)
  • Ti(n) = [0 1 1 0] is the output of the target function for the inputs (0,0), (0,1), (1,0), and (1,1) at time epoch n
  • Oi(n) = [o1 o2 o3 o4] is the output of the backprop network at time epoch n
  • Notice that the training examples cover the entire input space of this function
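
The exact formula for E(n) is not given in the slides; a common choice, and the one assumed in the sketches here, is the sum-of-squares error over the four patterns:

    % Assumed epoch error: E(n) = 1/2 * sum_i (Ti(n) - Oi(n))^2
    T = [0 1 1 0];              % target outputs for (0,0), (0,1), (1,0), (1,1)
    O = [0.1 0.8 0.7 0.2];      % example network outputs at some epoch n (made up)
    E = 0.5 * sum((T - O).^2)   % scalar error for that epoch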
Graphical Illustrations
  • The next few plots show how the number of hidden units affects the rate of error convergence (a hidden-unit sweep sketch follows below)
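
Assuming those plots were produced with the same wrapper, a sweep over the hidden-unit count might look like this (the loop is my sketch, not from the slides):

    % Vary the number of hidden units while holding the other settings fixed.
    for h = 1:5
        xx = mytestbpg(h, 5000, 0.25);   % h hidden units, 5000 iterations, rate 0.25
    end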
Parametric Optimization
  • The GA supplied by the GAOT tool box was used to optimize the number of hidden units needed for the XOR backprop benchmark
  • A real-valued representation (floating-point instead of binary encoding) was used in conjunction with the selection, mutation, and crossover operators (a decoding sketch follows below)
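
The slides do not say how a real-valued gene is turned into a hidden-unit count; a plausible decoding, assumed in the fitness sketch later on, is simple rounding:

    % A chromosome here is one real-valued gene; the backprop network needs an
    % integer hidden-unit count, so the gene is rounded (assumed decoding).
    gene = 2.6374;                 % example real-valued individual
    nHid = max(1, round(gene));    % integer number of hidden units, at least 1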
Parametric Optimization
  • The fitness (evaluation) function is the driving factor for the GA in the GAOT toolbox
  • The fitness function is specific to the problem at hand (a hedged sketch for this project follows below)
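
A hedged sketch of what the fitness function for this project might look like. It assumes GAOT's two-output evaluation convention, [sol, val] = evalFn(sol, options), and assumes that the author's mytestbpg returns the final training error of one run; neither detail is confirmed by the slides, so check the GAOT documentation and the actual project code before relying on either.

    function [sol, val] = xorFitness(sol, options)
    % Fitness of one candidate: a real-valued gene encoding the hidden-unit count.
    % Assumptions: GAOT expects [sol, val] back; mytestbpg returns the final
    % training error of one XOR backprop run (neither is shown in the slides).
    nHid = max(1, round(sol(1)));          % decode gene -> integer hidden units
    err  = mytestbpg(nHid, 5000, 0.25);    % train with the project's fixed settings
    val  = -err - 0.01*nHid;               % higher fitness = lower error; the small
                                           % size penalty is an assumed way to favour
                                           % fewer hidden units

GAOT would then be pointed at this function as its evaluation function for the run.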
Parametric Optimization
  • The authors of the NEAT (NeuroEvolution of Augmenting Topologies) paper report the following results for their neuroevolution implementation
  • Optimum number of hidden nodes (average value): 2.35
  • Average number of generations: 32
Parametric Optimization
  • Results of my experimentation:
  • Optimum number of hidden units (average value): 2.9
  • Convergence of this value after approximately 17 generations
Parametric Optimization
  • GA parameters:
  • Population size of 20
  • Maximum number of generations: 50
  • Backprop fixed parameters: 5000 training iterations; learning rate of 0.25
  • Note: Approximately 20 minutes per run of the GA with these parameter values, running in Matlab on a 2.2 GHz machine with 768 MB of RAM
Conclusion
  • The experimental results are interesting and agree with other researchers' results
  • The GA is a good tool for parameter optimization; however, the results depend on a good fitness function for the task at hand