
Neural Networks

Presentation Transcript


  1. Neural Networks INFO 629 Dr. R. Weber

  2.-5. The evidence (image slides): vision develops through training during roughly the 2nd-5th week.

  6. NN: a model of the brain: inputs and outputs, neurons, synapses, electrical transmissions

  7. Elements • input nodes • output nodes • links • weights

  8. terminology • input and output nodes (or units) connected by links • each link has a numeric weight • weights store information • networks are trained on training sets (examples) and afterwards tested on test sets to assess the networks’ accuracy • learning/training takes place as weights are updated to reflect the input/output behavior

  9. The training: a binary input pattern (grid figure on the slide) is to be classified as mammal (1) or bird (0). Output = Step(Σ_i w_i f_i), where the f_i are the input features (flies, lays eggs, 4 legs; e.g. 0 1 1) and the w_i are the link weights. Learning takes place as the weights are updated to reflect the input/output behavior. Goal: minimize the error between the expected and the actual outcome.
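For readers who want to see the Step computation from slide 9 concretely, here is a minimal Python sketch. The feature order (lays eggs, flies, 4 legs), the equal initial weights, and the 0.5 firing threshold are illustrative assumptions drawn from the later slides, not a definitive implementation.

```python
# Minimal sketch of Output = Step(sum_i w_i * f_i) from slide 9.
# Feature order, weights, and the 0.5 threshold are illustrative assumptions.

def step(x, threshold=0.5):
    """Step activation: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if x > threshold else 0

def output(features, weights, threshold=0.5):
    """Weighted sum of the input features followed by the step function."""
    weighted_sum = sum(w * f for w, f in zip(weights, features))
    return step(weighted_sum, threshold)

# features = (lays eggs, flies, 4 legs), as in the animal example on the slides
print(output([0, 1, 1], [0.5, 0.5, 0.5]))  # weighted sum 1.0 -> fires (1)
print(output([1, 0, 0], [0.5, 0.5, 0.5]))  # weighted sum 0.5 -> does not fire (0)
```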

  10. The concept i (1 = Yes, 0 = No)
      lay eggs  fly  4 legs
      0         1    1        => bird
      1         0    0        => mammal
      1         1    0        => mammal

  11. The concept ii (1 = Yes, 0 = No)
      lay eggs  fly  4 legs
      0         1    1        => bird
      1         0    0        => mammal
      1         1    0        => mammal

  12. The learning i (1 = Yes, 0 = No; initial weights: 0.5, 0.5, 0.5)
      lay eggs  fly  4 legs
      0         1    1        => bird
      1         0    0        => mammal
      1         1    0        => mammal

  13. The learning ii (weights: 0.5, 0.5, 0.5)
      lay eggs  fly  4 legs
      0         1    1        => bird     0*0.5 + 1*0.5 + 1*0.5 = 1
      1         0    0        => mammal   1*0.5 + 0*0.5 + 0*0.5 = 0.5
      1         1    0        => mammal   1*0.5 + 1*0.5 + 0*0.5 = 1
      Goal is to have weights that recognize different representations of mammals and birds as such

  14. The learning iii (weights: 0.5, 0.5, 0.5)
      lay eggs  fly  4 legs
      0         1    1        => bird     0*0.5 + 1*0.5 + 1*0.5 = 1
      1         0    0        => mammal   1*0.5 + 0*0.5 + 0*0.5 = 0.5
      1         1    0        => mammal   1*0.5 + 1*0.5 + 0*0.5 = 1
      Suppose we want bird to score greater than 0.5 and mammal to score equal to or less than 0.5

  15. The learning iv (adjusted weights: 0.25, 0.25, 0.5)
      lay eggs  fly  4 legs
      0         1    1        => bird     0*0.25 + 1*0.25 + 1*0.5 = 0.75
      1         0    0        => mammal   1*0.25 + 0*0.25 + 0*0.5 = 0.25
      1         1    0        => mammal   1*0.25 + 1*0.25 + 0*0.5 = 0.5
      Suppose we want bird to score greater than 0.5 and mammal to score equal to or less than 0.5
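The short Python sketch below reproduces the arithmetic of slides 13-15: with the initial weights (0.5, 0.5, 0.5) the bird and one mammal both score 1, while the adjusted weights (0.25, 0.25, 0.5) leave only the bird above the 0.5 cutoff. It is only a worked restatement of the numbers on the slides, not a training algorithm.

```python
# Worked version of the sums on slides 13-15. Feature order follows the
# slides: (lays eggs, flies, 4 legs); labels and the 0.5 cutoff come from slide 14.

examples = [
    ([0, 1, 1], "bird"),
    ([1, 0, 0], "mammal"),
    ([1, 1, 0], "mammal"),
]

def score(features, weights):
    """Weighted sum of the feature values."""
    return sum(w * f for w, f in zip(weights, features))

for weights in ([0.5, 0.5, 0.5], [0.25, 0.25, 0.5]):
    print(f"weights = {weights}")
    for features, label in examples:
        s = score(features, weights)
        predicted = "bird" if s > 0.5 else "mammal"  # bird iff score > 0.5
        print(f"  {features} -> {s:.2f}  predicted {predicted} (target {label})")
```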

  16. NN demo…..

  17. Characteristics • NN implement inductive learning algorithms (learning through generalization), and therefore require several training examples to learn; • NN do not provide an explanation or a rule telling why a task was performed the way it was; • NN use data rather than explicit knowledge; • Typical tasks: classification (pattern recognition), clustering, diagnosis, optimization, forecasting (prediction), modeling, reconstruction, routing;

  18. NN Parameters • Number of hidden layers • Random initialization of the weights • Error threshold • Adjustment rate for the weights (fixed or variable) • Optimal parameters are determined empirically. A. Abraham and B. Nath, “Hybrid Heuristics for Optimal Design of Neural Nets,” in Proceedings of the Third International Conference on Recent Advances in Soft Computing, R. John and R. Birkenhead, Eds. Germany: Springer Verlag, 2000, pp. 15-22.
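As an illustration of how these parameters appear in practice, the sketch below maps them onto scikit-learn's MLPClassifier; the library choice, the tiny animal dataset, and the specific values are assumptions for demonstration, not part of the slides or of the cited paper.

```python
# Sketch of how the parameters listed above map onto a common library (assumed
# here to be scikit-learn); the data are the toy animal examples from earlier slides.
from sklearn.neural_network import MLPClassifier

X = [[0, 1, 1], [1, 0, 0], [1, 1, 0]]   # (lays eggs, flies, 4 legs)
y = ["bird", "mammal", "mammal"]

net = MLPClassifier(
    hidden_layer_sizes=(4,),   # number and size of hidden layers
    learning_rate_init=0.1,    # adjustment rate for the weights
    tol=1e-3,                  # stop when improvement falls below this error threshold
    random_state=0,            # seed for the random weight initialization
    max_iter=2000,
)
net.fit(X, y)                  # train on the examples
print(net.predict([[0, 1, 1]]))  # classify a new animal description
```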

  19. Network Topology: Feedforward & Feedback • Feedforward • connections between units do not form cycles • produce a response to an input quickly • can be trained using a wide variety of efficient conventional numerical methods • Feedback (recurrent NN) • connections between units form cycles • Takes a long time before it produces a response • more difficult to train than feedforward
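A minimal sketch of the two topologies, assuming small random weight matrices and a tanh activation (none of which come from the slides): the feedforward network produces its response in a single pass, while the recurrent network iterates around its cycle before the response settles.

```python
# Tiny sketch contrasting feedforward (no cycles) and feedback (recurrent) nets;
# sizes and weights are illustrative assumptions, not taken from the slides.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0, 1.0])            # one input pattern

# Feedforward: signal flows input -> hidden -> output, so one pass gives the response.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
feedforward_out = np.tanh(W2 @ np.tanh(W1 @ x))

# Feedback (recurrent): the hidden state feeds back into itself, so the response
# emerges only after repeatedly traversing the cycle.
W_in = rng.normal(size=(4, 3))
W_rec = rng.normal(size=(4, 4))
h = np.zeros(4)
for _ in range(10):                      # several passes around the cycle
    h = np.tanh(W_in @ x + W_rec @ h)
recurrent_out = np.tanh(W2 @ h)

print(feedforward_out, recurrent_out)
```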

  20. Feedforward • Linear, Perceptron, Adaline, Higher Order, Functional Link, MLP: Multilayer Perceptron, Backpropagation, Cascade Correlation, Quickprop, RPROP, Radial Basis Function networks, OLS: Orthogonal Least Squares, CMAC: Cerebellar Model Articulation Controller, Classification only, LVQ: Learning Vector Quantization, Kohonen, PNN: Probabilistic Neural Network, GRNN: General Regression Neural Network

  21. Feedback • BAM: Bidirectional Associative Memory, Boltzmann Machine, Recurrent time series, Backpropagation through time, FIR: Finite Impulse Response, Real-time recurrent network, Recurrent backpropagation, TDNN: Time Delay NN

  22. Where are NN applicable? • Where they can form a model from training data alone; • When there may be an algorithm, but it is not known, or has too many variables; • There are enough examples available • It is easier to let the network learn from examples • Other inductive learning methods may not be as accurate

  23. Applications (i) • predict movement of stocks, currencies, etc., from previous data; • recognize signatures (e.g., in a bank) by comparing them with those stored; • classify medical imaging, e.g., ECG signals, X-rays, MRIs; • classify the state of aircraft engines (by monitoring vibration levels and sound, early warning of engine problems can be given); British Rail have been testing an application to diagnose diesel engines;

  24. Applications (ii) • Pronunciation (rules with many exceptions); • Handwritten character recognition (a network with 200,000 weights would be impossible to train; the final network had 9,760 weights, was trained on 7,300 examples and tested on 2,000, reaching 99% accuracy); • ATR: automated target recognition, distinguishing threatening from non-threatening targets; • Learning brain patterns to control and activate limbs, as in the “Rats control a robot by thought alone” article; • Credit assignment

  25. Applications (iii) • Optimization (max/min) and routing (minimum distance) problems • Modeling, e.g., create a model of the input vs. output behavior of software programs to work as an oracle of predicted output • Reconstruction: produce clean versions of noisy patterns by matching the closest training pattern to the input pattern • Clustering

  26. CMU Driving ALVINN http://www.ri.cmu.edu/projects/project_160.html • learns from human drivers how to steer a vehicle along a single lane on a highway • ALVINN is implemented in two vehicles equipped with computer-controlled steering, acceleration, and braking • the cars can reach 70 mph with ALVINN • programs that consider the entire problem environment reach only 4 mph

  27. Why use NN for the driving task? • there is no good theory of driving, but it is easy to collect training samples • training data are obtained with a human* driving the vehicle • 5 min training, 10 min algorithm runs • driving is continuous and noisy • almost all features contribute useful information *humans are not very good generators of training instances when they behave too regularly, without making mistakes

  28. the neural network • INPUT: a video camera generates a 30x32 grid of input nodes • OUTPUT: a 30-node layer, each node corresponding to a steering direction • the vehicle steers in the direction of the output node with the highest activation
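A sketch of the steering rule described on this slide: compute one activation per output node and steer toward the most active one. The 30x32 input grid and the 30 output nodes follow the slide; the random weights and the linear mapping to a steering angle are placeholders, not ALVINN's actual trained network.

```python
# Steer toward the output node with the highest activation (slide 28).
# Weights here are random placeholders, not ALVINN's trained parameters.
import numpy as np

rng = np.random.default_rng(0)
camera_image = rng.random((30, 32))        # 30x32 grid of input activations
W = rng.normal(size=(30, 30 * 32))         # 30 steering nodes x 960 inputs

activations = W @ camera_image.ravel()     # one activation per steering direction
steer_index = int(np.argmax(activations))  # pick the most active direction
# hypothetical mapping of node index 0..29 onto an angle from hard left to hard right
angle = np.linspace(-1.0, 1.0, 30)[steer_index]
print(steer_index, angle)
```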

  29. Genetic algorithms (i) • learn by experimentation • inspired by genetics, they originate new solutions • representational restrictions • good for improving the quality of other methods, e.g., search algorithms, CBR • part of the broader family of evolutionary algorithms

  30. Genetic algorithms (ii) • requires an evaluation function to guide the process • population of genomes represent possible solutions • operations are applied over these genomes • operations can be mutation, crossover • operations produce new offspring • an evaluation function tests how fit an offspring is • the fittest will survive to mate again
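A minimal Python sketch of the loop described above, assuming bit-string genomes and a toy "count the ones" evaluation function (both assumptions for illustration): the fittest half of the population survives and mates, producing offspring by crossover and mutation.

```python
# Minimal genetic-algorithm sketch: evaluate genomes, keep the fittest,
# produce offspring by crossover and mutation (slide 30). The bit-string
# genomes and the fitness function are illustrative assumptions.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 12, 20, 30

def fitness(genome):
    return sum(genome)                      # evaluation function guiding the search

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]   # the fittest survive to mate again
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print(max(fitness(g) for g in population))  # best fitness found
```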

  31. Genetic Algorithms (iii) • http://ai.bpa.arizona.edu/~mramsey/ga.html (you can change parameters) • http://www.rennard.org/alife/english/gavgb.html (presented by Steven Thompson)
