Neural Network


Contents

  • Diagram of a Neuron

  • The Simple Perceptron

  • Multilayer Neural Network

  • What is a Hidden Layer?

  • Why do we Need a Hidden Layer?

  • How do Multilayer Neural Networks Learn?


Diagram of a Neuron

[Figure: a single neuron. Input signals x1, x2, …, xn arrive over connections with weights w1, w2, …, wn; the neuron combines them and produces the output signal Y.]


Example of NN: The Perceptron

[Figure: a perceptron. Inputs x1 and x2, weighted by w1 and w2, feed a linear combiner Σ; its output is compared with the threshold Th and passed through a hard limiter to give the Y-output.]

  • A single neuron with adjustable synaptic weights and a hard limiter.

  • The step and sign activation functions are called hard-limit functions.
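A minimal sketch of such a perceptron in Python (not from the original slides; the AND weights and the threshold value are illustrative choices):

```python
import numpy as np

def hard_limiter(net, threshold):
    """Step activation: fire (1) if the combiner output reaches the threshold."""
    return 1 if net >= threshold else 0

def perceptron_output(x, w, threshold):
    """Single neuron: a linear combiner followed by a hard limiter."""
    net = np.dot(x, w)  # weighted sum of the input signals
    return hard_limiter(net, threshold)

# Hand-picked weights that make the perceptron compute logical AND.
w = np.array([0.4, 0.4])
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", perceptron_output(np.array([x1, x2]), w, threshold=0.5))
```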


Multilayer Neural Network

  • A multilayer Perceptron is a feedforward network with one or more hidden layers.

  • The network consists of:

    • an input layer of source neurons,

    • at least one middle or hidden layer of computation neurons,

    • an output layer of computation neurons.

  • The input signals are propagated in a forward direction on a layer-by-layer basis, as sketched below.
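A minimal sketch of this layer-by-layer forward propagation (illustrative, not from the slides; sigmoid neurons are assumed, matching the activation function introduced later in the deck):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(x, layers):
    """Propagate an input vector through the network one layer at a time.
    `layers` is a list of (W, theta) pairs: weight matrix and neuron thresholds."""
    signal = x
    for W, theta in layers:
        signal = sigmoid(signal @ W - theta)  # each layer consumes the previous layer's output
    return signal

# 2 inputs -> 3 hidden neurons -> 1 output neuron, with random illustrative weights.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 3)), np.zeros(3)),
          (rng.normal(size=(3, 1)), np.zeros(1))]
print(forward(np.array([1.0, 0.0]), layers))
```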


Multilayer Perceptron with two Hidden Layers


What is a Hidden Layer?

  • A hidden layer "hides" its desired output.

  • Neurons in the hidden layer cannot be observed through the input/output behavior of the network.

  • There is no obvious way to know what the desired output of the hidden layer should be.


Why do we Need a Hidden Layer?

  • The input layer accepts input signals from the outside world and redistributes these signals to all neurons in the hidden layer.

  • Neurons in the hidden layer detect the features; the weights of the neurons represent the features hidden in the input patterns.

  • The output layer accepts output signals from the hidden layer and establishes the output pattern of the entire network.


How Do Multilayer Neural Networks Learn?

  • The most popular method of learning is back-propagation.

  • Learning in a multilayer network proceeds the same way as for a Perceptron.

  • A training set of input patterns is presented to the network.

  • The network computes the output pattern.

  • If there is an error, the weights are adjusted to reduce this error.

  • In a multilayer network, there are many weights, each of which contributes to more than one output.


Back-Propagation Neural Network (1/2)

  • A back-propagation network is a multilayer network that has three or four layers.

  • The layers are fully connected, i.e., every neuron in each layer is connected to every neuron in the adjacent forward layer.

  • A neuron determines its output in a manner similar to Rosenblatt’s Perceptron.


Back-Propagation Neural Network (2/2)

  • The net weighted input value is passed through the activation function.

  • Unlike a Perceptron, neurons in a back-propagation network use a sigmoid activation function: Y = 1 / (1 + e^(−X)), where X is the net weighted input to the neuron.
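For reference, a small sketch of the sigmoid and the derivative identity that the learning law below relies on:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: squashes the net weighted input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(y):
    """Derivative written in terms of the output y = sigmoid(x): dy/dx = y * (1 - y)."""
    return y * (1.0 - y)
```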


Three-layer Back-Propagation Neural Network


Learning Law Used in Back-Propagation Network (1/4)

  • In a three-layer network, i, j, and k refer to neurons in the input, hidden, and output layers, respectively.

  • Input signals x1, x2, …, xn are propagated through the network from left to right.

  • Error signals e1, e2, …, en propagate through the network from right to left.

  • The symbol Wij denotes the weight for the connection between neuron i in the input layer and neuron j in the hidden layer.

  • The symbol Wjk denotes the weight for the connection between neuron j in the hidden layer and neuron k in the output layer.


Learning Law Used in Back-Propagation Network (2/4)

  • The error signal at the output of neuron k at iteration p is defined by ek(p) = yd,k(p) − yk(p), where yd,k(p) is the desired output of neuron k.

  • The updated weight at the output layer is defined by Wjk(p+1) = Wjk(p) + ΔWjk(p), where the weight correction is ΔWjk(p) = α × yj(p) × δk(p) and α is the learning rate.


Learning Law Used in Back-Propagation Network (3/4)

  • The error gradient is determined as the derivative of the activation function multiplied by the error at the neuron output: δk(p) = ∂yk(p)/∂xk(p) × ek(p), where yk(p) is the output of neuron k at iteration p and xk(p) is the net weighted input to neuron k.

  • For the sigmoid activation function, this gives δk(p) = yk(p) × [1 − yk(p)] × ek(p).
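As a supporting step (not in the original deck), the one-line derivative behind this expression:

```latex
% For the sigmoid activation y_k = 1 / (1 + e^{-x_k}):
\frac{\partial y_k}{\partial x_k}
  = \frac{e^{-x_k}}{\left(1 + e^{-x_k}\right)^{2}}
  = y_k\,(1 - y_k),
\qquad\text{hence}\qquad
\delta_k(p) = y_k(p)\,[1 - y_k(p)]\, e_k(p).
```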


Learning Law Used in Back-Propagation Network (4/4)

  • The weight correction for the hidden layer is ΔWij(p) = α × xi(p) × δj(p), where the error gradient for hidden neuron j is δj(p) = yj(p) × [1 − yj(p)] × Σk δk(p) × Wjk(p).
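A sketch of one complete weight-update step built from these formulas (variable names mirror the slides' notation; the learning rate alpha is an assumed hyperparameter, and threshold updates are omitted for brevity):

```python
import numpy as np

def backprop_step(x, y_desired, W_ij, theta_j, W_jk, theta_k, alpha=0.1):
    """One iteration of the learning law: forward pass, then weight corrections."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

    # Forward pass (activation step).
    y_j = sigmoid(x @ W_ij - theta_j)    # hidden-layer outputs
    y_k = sigmoid(y_j @ W_jk - theta_k)  # output-layer outputs

    # Error gradients, using the sigmoid derivative y * (1 - y).
    e_k = y_desired - y_k
    delta_k = y_k * (1 - y_k) * e_k
    delta_j = y_j * (1 - y_j) * (W_jk @ delta_k)  # uses pre-update output weights

    # Weight corrections: the output layer uses y_j, the hidden layer uses the inputs x.
    W_jk += alpha * np.outer(y_j, delta_k)
    W_ij += alpha * np.outer(x, delta_j)
    return W_ij, W_jk
```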


Back-Propagation Training Algorithm

  • Initialization: Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range (Haykin, 1994):

    (-2.4/Fi, +2.4/Fi), where Fi is the total number of inputs of neuron i in the network.

  • Activation:

    • Calculate the actual outputs of the neurons in the hidden layer

    • Calculate the actual outputs of the neurons in the output layer

  • Weight Training: Update the weights in the back-propagation network by propagating backward the errors associated with the output neurons.

  • Iteration: Increase iteration p by one, go back to step 2 (Activation), and repeat the process until the selected error criterion is satisfied; the sketch below puts all four steps together.
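Putting the four steps together, a compact illustrative implementation trained on the XOR problem (network size, learning rate, epoch limit, and error criterion are assumptions; initialization follows the Haykin range above):

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# XOR training set: input patterns and desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y_d = np.array([[0], [1], [1], [0]], dtype=float)

n, m, k = 2, 2, 1  # input, hidden, and output layer sizes
alpha = 0.5        # learning rate (illustrative)

# Step 1 - Initialization: uniform in (-2.4/Fi, +2.4/Fi), Fi = fan-in of each neuron.
W_ij = rng.uniform(-2.4 / n, 2.4 / n, size=(n, m))
theta_j = rng.uniform(-2.4 / n, 2.4 / n, size=m)
W_jk = rng.uniform(-2.4 / m, 2.4 / m, size=(m, k))
theta_k = rng.uniform(-2.4 / m, 2.4 / m, size=k)

for p in range(10000):  # Step 4 - Iteration
    sse = 0.0
    for x, y_d in zip(X, Y_d):
        # Step 2 - Activation: hidden layer, then output layer.
        y_j = sigmoid(x @ W_ij - theta_j)
        y_k = sigmoid(y_j @ W_jk - theta_k)
        # Step 3 - Weight training: propagate the errors backward.
        e_k = y_d - y_k
        delta_k = y_k * (1 - y_k) * e_k
        delta_j = y_j * (1 - y_j) * (W_jk @ delta_k)
        W_jk += alpha * np.outer(y_j, delta_k)
        theta_k -= alpha * delta_k  # thresholds learn like weights on a fixed -1 input
        W_ij += alpha * np.outer(x, delta_j)
        theta_j -= alpha * delta_j
        sse += float(e_k @ e_k)
    if sse < 0.001:  # selected error criterion
        break

print("epochs:", p + 1, "sum of squared errors:", round(sse, 5))
```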


Back-propagation: Activation

(A) Calculate the actual outputs of the neurons in the hidden layer: yj(p) = sigmoid( Σi xi(p) × Wij(p) − θj ), where θj is the threshold of hidden neuron j.

(B) Calculate the actual outputs of the neurons in the output layer: yk(p) = sigmoid( Σj yj(p) × Wjk(p) − θk ), where θk is the threshold of output neuron k.


Back-propagation: Weight Training

(A) Calculate the error gradient for the neurons in the output layer: δk(p) = yk(p) × [1 − yk(p)] × ek(p), then correct the output-layer weights: Wjk(p+1) = Wjk(p) + α × yj(p) × δk(p).


Back-propagation: Weight Training

(B) Calculate the error gradient for the neurons in the hidden layer: δj(p) = yj(p) × [1 − yj(p)] × Σk δk(p) × Wjk(p), then correct the hidden-layer weights: Wij(p+1) = Wij(p) + α × xi(p) × δj(p).


Recommended Textbooks

  • [Negnevitsky, 2002] M. Negnevitsky, "Artificial Intelligence: A Guide to Intelligent Systems", Pearson Education Limited, England, 2002.

  • [Russell, 2003] S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Second Edition, Prentice Hall, 2003.

  • [Patterson, 1990] D. W. Patterson, "Introduction to Artificial Intelligence and Expert Systems", Prentice-Hall Inc., Englewood Cliffs, NJ, USA, 1990.

  • [Minsky, 1974] M. Minsky, "A Framework for Representing Knowledge", MIT-AI Laboratory Memo 306, 1974.

