
Neural Networks

Neural Networks. Presented by M. Abbasi. Course lecturer: Dr. Tohidkhah. Neural networks are adaptive statistical models based on an analogy with the structure of the brain.


Presentation Transcript


  1. Neural Networks Presented by M. Abbasi Course lecturer: Dr. Tohidkhah

  2. Neural networks are adaptive statistical models based on an analogy with the structure of the brain.

  3. Basically, neural networks are built from simple units, sometimes called neurons or cells by analogy with the real thing. These units are linked by a set of weighted connections. Learning is usually accomplished by modification of the connection weights. Each unit codes or corresponds to a feature or a characteristic of a pattern that we want to analyze or that we want to use as a predictor.

  4. Biological Analogy

  5. Computational Structure of a Neuron [figure: inputs x1 … xN, weighted by w1 … wN, feed a summation node S whose result passes through a transfer function f to produce the output y]
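The unit in this figure can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the slides; the step transfer function used here is one of the options listed later.

```python
# Minimal sketch of a single unit: inputs x1..xN are multiplied by
# weights w1..wN, summed (the S node), and passed through a transfer
# function f to produce the output y.

def neuron(x, w, f):
    s = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum S
    return f(s)                               # output y = f(S)

def step(s):
    return 1 if s >= 0 else 0                 # threshold at zero

y = neuron([1.0, -2.0], [0.5, 0.25], step)    # S = 0.5 - 0.5 = 0, so y = 1
```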

  6. Multi-Layer Neural Network

  7. The goal of the network is to learn or to discover some association between input and output patterns, or to analyze, or to find the structure of the input patterns.

  8. The learning process is achieved through the modification of the connection weights between units. In statistical terms, this is equivalent to interpreting the value of the connections between units as parameters (e.g., like the values of a and b in the regression equation y = a + bx) to be estimated.
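The regression analogy on this slide can be made concrete: a and b play the role of connection weights to be estimated from data. The sketch below fits y = a + bx by ordinary least squares; the data points are made up for illustration.

```python
# Fit y = a + b*x by closed-form least squares, treating a and b like
# connection weights (parameters) to be estimated from examples.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope b = covariance(x, y) / variance(x)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # points lie on y = 1 + 2x
```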

  9. Any function whose domain is the real numbers can be used as a transfer function.

  10. The most popular ones are: • The linear function • The step function (activation values below a given threshold are set to 0 or to −1, and the other values are set to +1) • The logistic function f(x) = 1/(1 + exp{−x}), which maps the real numbers into the interval [0, 1] and whose derivative, needed for learning, is easily computed: f′(x) = f(x)[1 − f(x)] • The normal or Gaussian function
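The logistic function and the derivative formula given on this slide are short enough to check directly:

```python
import math

# The logistic transfer function and its derivative:
# f(x) = 1 / (1 + exp(-x)),  f'(x) = f(x) * (1 - f(x)).

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_derivative(x):
    fx = logistic(x)
    return fx * (1.0 - fx)

# f(0) = 0.5 and f'(0) = 0.25; large |x| saturates toward 0 or 1.
```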

  11. The architecture (i.e., the pattern of connectivity) of the network, along with the transfer functions used by the neurons and the synaptic weights, completely specify the behavior of the network.

  12. Neural networks are adaptive statistical devices. This means that they can iteratively change the values of their parameters (i.e., the synaptic weights) as a function of their performance.

  13. These changes are made according to learning rules which can be characterized as supervised (when a desired output is known and used to compute an error signal) or unsupervised (when no such error signal is used).

  14. The Widrow-Hoff rule (gradient descent, or the Delta rule) is the most widely known supervised learning rule. It uses the difference between the actual output of the cell and the desired output as an error signal for units in the output layer.

  15. Units in the hidden layers cannot compute their error signal directly, but estimate it as a function (e.g., a weighted average) of the error of the units in the following layer. This adaptation of the Widrow-Hoff learning rule is known as error backpropagation.

  16. This adaptation of the Widrow-Hoff learning rule is known as error backpropagation: • Minimizes the mean squared error using a gradient-descent method • Error is backpropagated into previous layers one layer at a time • Does not guarantee an optimal solution, as it might converge to a local minimum • Takes a long time to train and requires a large amount of training data
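The estimation of hidden-layer errors described on these slides can be sketched for the smallest possible case: one logistic hidden unit feeding one logistic output unit. The weights, learning rate, and single training example are illustrative choices, not taken from the slides.

```python
import math

# Minimal backpropagation sketch. The hidden unit's error signal is
# estimated from the output error, weighted by the outgoing connection
# and scaled by the logistic derivative f'(a) = a * (1 - a).

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hid, w_out):
    h = logistic(w_hid * x)      # hidden activation
    y = logistic(w_out * h)      # output activation
    return h, y

def backprop_step(x, target, w_hid, w_out, eta=0.5):
    h, y = forward(x, w_hid, w_out)
    delta_out = (target - y) * y * (1 - y)       # output-layer error
    delta_hid = delta_out * w_out * h * (1 - h)  # backpropagated error
    return w_hid + eta * delta_hid * x, w_out + eta * delta_out * h

# Gradient descent on the squared error for one example:
w_hid, w_out = 0.1, 0.1
error_before = abs(1.0 - forward(1.0, w_hid, w_out)[1])
for _ in range(1000):
    w_hid, w_out = backprop_step(1.0, 1.0, w_hid, w_out)
error_after = abs(1.0 - forward(1.0, w_hid, w_out)[1])
```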

  17. Alternative Activation Functions • Radial basis functions: square, triangle, Gaussian • (μ, σ) can be varied at each hidden node to guide training

  18. Typical Activation Functions • f(x) = 1 / (1 + e^(−k Σ wixi)) • Shown for k = 0.5, 1 and 10 • Using a nonlinear function which approximates a linear threshold allows a network to approximate nonlinear functions
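The effect of the gain parameter k on this slide is easy to demonstrate: as k grows, the logistic curve approaches a hard threshold.

```python
import math

# Logistic function with gain k: f(x) = 1 / (1 + exp(-k * x)).
# Small k gives a soft, nearly linear response around 0; large k
# approximates a step (linear threshold) function.

def logistic_k(x, k):
    return 1.0 / (1.0 + math.exp(-k * x))

soft = logistic_k(1.0, 0.5)    # gentle slope, ~0.62
hard = logistic_k(1.0, 10.0)   # nearly a step, ~0.99995
```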

  19. The Hebbian rule is the most widely known unsupervised learning rule. It is based on work by the Canadian neuropsychologist Donald Hebb, who theorized that neuronal learning (i.e., synaptic change) is a local phenomenon expressible in terms of the temporal correlation between the activation values of neurons.

  20. Specifically, the synaptic change depends on both presynaptic and postsynaptic activities and states that the change in a synaptic weight is a function of the temporal correlation between the presynaptic and postsynaptic activities.

  21. Specifically, the value of the synaptic weight between two neurons increases whenever they are in the same state and decreases when they are in different states.
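The rule stated on these slides reduces to a one-line update: the weight change is proportional to the product of presynaptic and postsynaptic activity. With ±1 states, same-state pairs strengthen the connection and different-state pairs weaken it. The learning rate eta is an illustrative choice.

```python
# Hebbian update sketch: delta_w = eta * pre * post. Units in the
# same state (+1/+1 or -1/-1) increase the weight; units in
# different states (+1/-1) decrease it.

def hebbian_update(w, pre, post, eta=0.1):
    return w + eta * pre * post

w = 0.0
w = hebbian_update(w, +1, +1)  # same state: weight increases
w = hebbian_update(w, +1, -1)  # different states: weight decreases
```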

  22. What can a Neural Net do? • Compute a known function • Approximate an unknown function • Pattern Recognition • Signal Processing • Learn to do any of the above

  23. The areas where neural nets may be useful: • pattern association • pattern classification • regularity detection • image processing • speech analysis • optimization problems • robot steering • processing of inaccurate or incomplete inputs • quality assurance • stock market forecasting • simulation • …

  24. One of the most popular architectures in neural networks is the multi-layer perceptron.

  25. Hopfield Net structure

  26. Recap – Neural Networks
  • Components (biological plausibility): neurone/node, synapse/weight
  • Feed-forward networks: unidirectional flow of information; good at extracting patterns, generalisation and prediction; distributed representation of data; parallel processing of data; trained by backpropagation; not exact models, but good at demonstrating principles
  • Recurrent networks: multidirectional flow of information; memory/sense of time; complex temporal dynamics (e.g. CPGs); various training methods (Hebbian, evolution); often better biological models than FFNs
