Introduction to Neural Networks

Presentation Transcript


  1. Introduction to Neural Networks. John Paxton, Montana State University, Summer 2003.

  2. Textbook. Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Laurene Fausett. Prentice-Hall, 1994.

  3. Chapter 1: Introduction • Why Neural Networks? Training techniques exist. High-speed digital computers are available. Specialized hardware exists. Neural networks better capture the behavior of biological neural systems.

  4. Who is interested? • Electrical Engineers – signal processing, control theory • Computer Engineers – robotics • Computer Scientists – artificial intelligence, pattern recognition • Mathematicians – modelling tool when explicit relationships are unknown

  5. Characterizations • Architecture – a pattern of connections between neurons • Learning Algorithm – a method of determining the connection weights • Activation Function – a function that maps a neuron's net input to its output signal

  6. Problem Domains • Storing and recalling patterns • Classifying patterns • Mapping inputs onto outputs • Grouping similar patterns • Finding solutions to constrained optimization problems

  7. A Simple Neural Network [Figure: inputs x1 and x2 feed output unit y through weights w1 and w2] • y_in = x1w1 + x2w2 • Output activation is f(y_in)
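
A minimal sketch of this two-input unit in Python; the weight values and the step threshold in the example are illustrative assumptions, not from the slides:

```python
# Two-input neuron: net input y_in = x1*w1 + x2*w2, output f(y_in).
def neuron(x1, x2, w1, w2, f):
    y_in = x1 * w1 + x2 * w2   # weighted sum of the inputs
    return f(y_in)             # apply the activation function

# Example with an assumed binary step activation and threshold 0.5:
step = lambda y_in: 1 if y_in >= 0.5 else 0
print(neuron(1, 0, 0.7, 0.3, step))  # y_in = 0.7 >= 0.5, so prints 1
```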

  8. Biological Neuron • Dendrites receive electrical signals affected by chemical processes • Soma fires at differing frequencies [Figure: neuron with soma, dendrites, and axon labeled]

  9. Observations • A neuron can receive many inputs • Inputs may be modified by weights at the receiving dendrites • A neuron sums its weighted inputs • A neuron can transmit an output signal • The output can go to many other neurons

  10. Features • Information processing is local • Memory is distributed (short term = signals, long term = dendrite weights) • The dendrite weights learn through experience • The weights may be inhibitory or excitatory

  11. Features • Neurons can generalize to novel input stimuli • Neurons are fault tolerant and can sustain damage

  12. Applications • Signal processing, e.g. suppressing noise on a phone line. • Control, e.g. backing up a truck with a trailer. • Pattern recognition, e.g. handwritten characters or identifying sex from face images. • Diagnosis, e.g. arrhythmia classification or mapping symptoms to a medical case.

  13. Applications • Speech production, e.g. NETtalk (Sejnowski and Rosenberg, 1986). • Speech recognition. • Business, e.g. mortgage underwriting (Collins et al., 1988). • Reinforcement learning, e.g. TD-Gammon.

  14. Single Layer Feedforward NN [Figure: inputs x1 … xn fully connected to outputs y1 … ym; weight wij connects input xi to output yj (w11 … wnm)]
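
In matrix form, the net input of output unit j is the weighted sum of the inputs, y_in,j = Σi xi wij. A short NumPy sketch, with layer sizes and random weights assumed for illustration:

```python
import numpy as np

# Single-layer feedforward net: n inputs, m outputs, weight matrix W (n x m).
n, m = 3, 2                      # illustrative sizes
rng = np.random.default_rng(0)
W = rng.normal(size=(n, m))      # w_ij connects input i to output j

x = np.array([1.0, 0.5, -1.0])   # example input vector
y_in = x @ W                     # net input of each output unit
y = 1 / (1 + np.exp(-y_in))      # e.g. a binary sigmoid activation
print(y)
```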

  15. Multilayer Neural Network • More powerful • Harder to train [Figure: inputs x1 … xn, hidden units z1 … zp, outputs y1 … ym]
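
The forward pass simply chains two single-layer steps; a sketch under assumed sizes and sigmoid activations:

```python
import numpy as np

def sigmoid(v):
    return 1 / (1 + np.exp(-v))

# n inputs -> p hidden units z -> m outputs y (sizes are illustrative).
n, p, m = 3, 4, 2
rng = np.random.default_rng(1)
V = rng.normal(size=(n, p))      # input-to-hidden weights
W = rng.normal(size=(p, m))      # hidden-to-output weights

x = np.array([1.0, 0.5, -1.0])
z = sigmoid(x @ V)               # hidden-layer activations
y = sigmoid(z @ W)               # output-layer activations
print(y)
```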

  16. Setting the Weights • Supervised • Unsupervised • Fixed-weight nets

  17. Activation Functions • Identity: f(x) = x • Binary step: f(x) = 1 if x >= θ, f(x) = 0 otherwise • Binary sigmoid: f(x) = 1 / (1 + e^(-σx))

  18. Activation Functions • Bipolar sigmoid: f(x) = -1 + 2 / (1 + e^(-σx)) • Hyperbolic tangent: f(x) = (e^x − e^(-x)) / (e^x + e^(-x))
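
These five functions as a small Python sketch; the default threshold θ and steepness σ values are assumptions:

```python
import numpy as np

def identity(x):
    return x

def binary_step(x, theta=0.0):          # 1 if x >= theta, else 0
    return np.where(x >= theta, 1, 0)

def binary_sigmoid(x, sigma=1.0):       # range (0, 1)
    return 1 / (1 + np.exp(-sigma * x))

def bipolar_sigmoid(x, sigma=1.0):      # range (-1, 1)
    return -1 + 2 / (1 + np.exp(-sigma * x))

def hyperbolic_tangent(x):              # equals the bipolar sigmoid with sigma = 2
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
```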

  19. History • 1943 McCulloch-Pitts neurons • 1949 Hebb’s law • 1958 Perceptron (Rosenblatt) • 1960 Adaline, better learning rule (Widrow, Hoff) • 1969 Limitations (Minsky, Papert) • 1972 Kohonen nets, associative memory

  20. History • 1977 Brain State in a Box (Anderson) • 1982 Hopfield net, constraint satisfaction • 1985 ART (Carpenter, Grossberg) • 1986 Backpropagation (Rumelhart, Hinton, Williams) • 1988 Neocognitron, character recognition (Fukushima)

  21. McCulloch-Pitts Neuron [Figure: inputs x1, x2, x3 feed output unit y] • f(y_in) = 1 if y_in >= θ, 0 otherwise

  22. Exercises • 2-input AND • 2-input OR • 3-input OR • 2-input XOR
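
As a starting point for these exercises, here is a McCulloch-Pitts unit in Python with one worked case; the weight and threshold choices are illustrative. Note that no single unit of this kind can compute 2-input XOR, per the Minsky and Papert limitations cited above.

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches theta."""
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

# Worked case: 2-input AND with weights (1, 1) and threshold 2.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', mp_neuron((x1, x2), (1, 1), theta=2))
# Fires only for (1, 1); OR follows by lowering theta to 1.
```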
