
CSC 4510 – Machine Learning

CSC 4510 – Machine Learning. 8: Multilayer Neural Networks. Dr. Mary-Angela Papalaskari, Department of Computing Sciences, Villanova University. Course website: www.csc.villanova.edu/~map/4510/.


Presentation Transcript


  1. CSC 4510 – Machine Learning 8: Multilayer Neural Networks Dr. Mary-Angela Papalaskari Department of Computing Sciences Villanova University Course website: www.csc.villanova.edu/~map/4510/ • Some of the slides in this presentation are adapted from: • Prof. Frank Klassner’s ML class at Villanova • Prof. Ray Mooney’s UTexas course http://www.cs.utexas.edu/~mooney/cs391L/ • the University of Manchester ML course http://www.cs.manchester.ac.uk/ugt/COMP24111/ • The Stanford online ML course http://www.ml-class.org/

  2. Reminder: Motivation for Neural Networks • Learning a non-linear function • Take inspiration from the brain

  3. Neural Network Learning • Learning approach based on modeling adaptation in biological neural systems. • Perceptron: initial algorithm for learning simple (single-layer) neural networks, developed in the 1950s. • Backpropagation: more complex algorithm for learning multi-layer neural networks, developed in the 1980s.

  4. Learning Algorithms • Many (>100) algorithms for multilayer neural net learning • Most popular: backpropagation

  5. Backpropagation Training Algorithm • Initialization: Set weights to small random real values. • Activation (Forward propagation): Calculate output for input x1,…, xn • Weight training (Backward propagation): Update the weights by propagating backwards the errors from the output layer • Calculate the error and error gradient for each neuron • Calculate weight corrections and update weights • Iteration: Go back to step 2 and repeat until all training examples produce the correct value (within ε), or mean squared error ceases to decrease, or other termination criterion.
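A minimal sketch of these four steps for a single-hidden-layer network with sigmoid units, written in NumPy. The layer size, learning rate, tolerance, and the names sigmoid and train_backprop are illustrative assumptions, not part of the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, n_hidden=4, lr=0.5, epsilon=1e-3, max_epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialization: small random real-valued weights
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, y.shape[1]))
    for epoch in range(max_epochs):
        # 2. Activation (forward propagation)
        h = sigmoid(X @ W1)                       # hidden-layer outputs
        o = sigmoid(h @ W2)                       # network outputs
        err = y - o
        # 4. Iteration: stop when mean squared error is small enough
        if np.mean(err ** 2) < epsilon:
            break
        # 3. Weight training (backward propagation of errors)
        delta_o = err * o * (1 - o)               # output-layer error gradients
        delta_h = (delta_o @ W2.T) * h * (1 - h)  # hidden-layer error gradients
        W2 += lr * h.T @ delta_o                  # weight corrections
        W1 += lr * X.T @ delta_h
    return W1, W2
```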

  6. Learning in Multi-Layer Nets • In order to apply gradient descent, we need the output of a unit to be a differentiable function of its input and weights. • Standard linear threshold function is not differentiable at the threshold. [Figure: unit output oi as a step from 0 to 1 at threshold Tj, plotted against net input netj]
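In symbols, using the notation from the figure (oi for the unit output, netj for the weighted input sum, Tj for the threshold), the problem is that the step function has no usable gradient:

```latex
o_i =
\begin{cases}
1 & \text{if } net_j \ge T_j \\
0 & \text{otherwise}
\end{cases}
\qquad\Rightarrow\qquad
\frac{\partial o_i}{\partial net_j} = 0 \ \text{everywhere except } net_j = T_j, \ \text{where it is undefined}
```

so the error gradient carries no information about which direction to move the weights.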

  7. Differentiable Output Function • Need non-linear output function to move beyond linear functions. • A multi-layer linear network is still linear. • Standard solution is to use the non-linear, differentiable sigmoidal “logistic” function. [Figure: sigmoid output rising smoothly from 0 to 1 around Tj, plotted against netj] • Can also use tanh or Gaussian output functions.
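The logistic (sigmoid) function and its derivative, a standard identity not shown on the slide, which is what makes the gradient computations in backpropagation possible:

```latex
o_j = \sigma(net_j) = \frac{1}{1 + e^{-net_j}},
\qquad
\frac{\partial o_j}{\partial net_j} = \sigma(net_j)\bigl(1 - \sigma(net_j)\bigr) = o_j\,(1 - o_j).
```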

  8. Comments on Training Algorithm • Not guaranteed to converge to zero training error, may converge to local optima or oscillate indefinitely. • However, in practice, does converge to low error for many large networks on real data. • Many epochs (thousands) may be required, hours or days of training for large networks. • To avoid local-minima problems, run several trials starting with different random weights (random restarts). • Take results of trial with lowest training set error. • Build a committee of results from multiple trials (possibly weighting votes by training set accuracy).
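A sketch of the random-restart strategy described above, reusing the hypothetical sigmoid and train_backprop helpers from the slide-5 sketch; the number of trials is an illustrative choice:

```python
def best_of_restarts(X, y, n_trials=10, **train_kwargs):
    """Run several trials from different random initial weights and
    keep the network with the lowest training-set error."""
    best_err, best_weights = float("inf"), None
    for seed in range(n_trials):
        W1, W2 = train_backprop(X, y, seed=seed, **train_kwargs)
        o = sigmoid(sigmoid(X @ W1) @ W2)
        err = np.mean((y - o) ** 2)
        if err < best_err:
            best_err, best_weights = err, (W1, W2)
    # A committee alternative would keep all trained networks and average
    # their outputs, possibly weighting votes by training-set accuracy.
    return best_weights
```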

  9. Representational Power • Boolean functions: Any boolean function can be represented by a two-layer network with sufficient hidden units. • Continuous functions: Any bounded continuous function can be approximated with arbitrarily small error by a two-layer network. • Sigmoid functions can act as a set of basis functions for composing more complex functions, like sine waves in Fourier analysis. • Arbitrary function: Any function can be approximated to arbitrary accuracy by a three-layer network.
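As a concrete instance of the Boolean-function claim, XOR is not linearly separable (no single-layer perceptron can represent it), but a two-layer network with a couple of hidden units can. The call below reuses the hypothetical train_backprop sketch from slide 5; the hyperparameter values are illustrative, and a run that lands in a local minimum may need a restart:

```python
# XOR truth table: output is 1 exactly when the two inputs differ
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = train_backprop(X, y, n_hidden=2, lr=1.0, max_epochs=50000)
predictions = sigmoid(sigmoid(X @ W1) @ W2)  # should approach [0, 1, 1, 0]
```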

  10. Hidden Unit Representations • Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space. • On many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors. • However, the hidden layer can also become a distributed representation of the input in which each individual unit is not easily interpretable as a meaningful feature.

  11. Over-Training Prevention • Running too many epochs can result in over-fitting. • Keep a hold-out validation set and test accuracy on it after every epoch. Stop training when additional epochs actually increase validation error. • To avoid losing training data for validation: • Use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy. • Train final network on complete training set for this many epochs. [Figure: error vs. # training epochs; error on training data keeps decreasing while error on test data eventually rises]
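A minimal sketch of the hold-out early-stopping rule, continuing the slide-5 sketch (same NumPy import and sigmoid helper); stopping at the first rise in validation error is a simplification of the rule described above:

```python
def train_with_early_stopping(X_tr, y_tr, X_val, y_val,
                              n_hidden=4, lr=0.5, max_epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X_tr.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, y_tr.shape[1]))
    best = (float("inf"), W1.copy(), W2.copy(), 0)
    for epoch in range(max_epochs):
        # one epoch of batch backpropagation on the training set
        h = sigmoid(X_tr @ W1)
        o = sigmoid(h @ W2)
        delta_o = (y_tr - o) * o * (1 - o)
        delta_h = (delta_o @ W2.T) * h * (1 - h)
        W2 += lr * h.T @ delta_o
        W1 += lr * X_tr.T @ delta_h
        # test on the hold-out validation set after every epoch
        val_o = sigmoid(sigmoid(X_val @ W1) @ W2)
        val_err = np.mean((y_val - val_o) ** 2)
        if val_err < best[0]:
            best = (val_err, W1.copy(), W2.copy(), epoch + 1)
        else:
            break  # additional epochs increase validation error: stop
    _, W1, W2, n_epochs = best
    return W1, W2, n_epochs   # n_epochs can be reused to retrain on all data
```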

  12. Determining the Best Number of Hidden Units • Too few hidden units prevents the network from adequately fitting the data. • Too many hidden units can result in over-fitting. • Use internal cross-validation to empirically determine an optimal number of hidden units. [Figure: error vs. # hidden units; error on training data decreases while error on test data rises once there are too many hidden units]
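A sketch of picking the hidden-layer size by internal cross-validation, again reusing the hypothetical helpers from the slide-5 sketch; the candidate sizes and fold count are illustrative:

```python
def choose_n_hidden(X, y, candidates=(2, 4, 8, 16), n_folds=10, seed=0):
    """Return the hidden-layer size with the lowest average validation
    error over n_folds internal cross-validation splits."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    best_h, best_err = None, float("inf")
    for n_hidden in candidates:
        errs = []
        for k in range(n_folds):
            val = folds[k]
            train = np.concatenate(folds[:k] + folds[k + 1:])
            W1, W2 = train_backprop(X[train], y[train], n_hidden=n_hidden)
            o = sigmoid(sigmoid(X[val] @ W1) @ W2)
            errs.append(np.mean((y[val] - o) ** 2))
        if np.mean(errs) < best_err:
            best_h, best_err = n_hidden, np.mean(errs)
    return best_h
```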

  13. Successful Applications • Text to Speech (NetTalk) • Fraud detection • Financial Applications • HNC (eventually bought by Fair Isaac) • Chemical Plant Control • Pavillion Technologies • Automated Vehicles • Game Playing • Neurogammon • Handwriting recognition
