## Derivation of a Learning Rule for Perceptrons

## Single Layer Perceptrons

[Figure: a single neuron with inputs x1, x2, …, xm and weights wk1, wk2, …, wkm]

### Adaline (Adaptive Linear Element)

Widrow [1962]

Goal: adjust the weights wk1, …, wkm so that the neuron output matches the desired output as closely as possible.

### Least Mean Squares (LMS)

The following cost function (error function) should be minimized:

- i : index of the data set (the i-th data set)
- j : index of the input (the j-th input)
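The equation itself appears as an image in the original slides and is missing from the transcript; with the indices defined above, the standard LMS cost presumably intended is:

```latex
E(\mathbf{w}) = \frac{1}{2}\sum_i e_i^2
              = \frac{1}{2}\sum_i \left( d_i - y_i \right)^2 ,
\qquad
y_i = f\!\left( \sum_j w_j\, x_{ij} \right)
```

where $d_i$ is the desired output and $y_i$ the actual output for the i-th data set.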

### Adaline Learning Rule

The slide develops the rule in three steps: with the linear Adaline output, the chain rule gives the gradient of the cost function; then, as already obtained before, gradient descent on E yields the weight modification rule; finally, defining the output error e, we can write the rule in compact form. A reconstruction of these equations is given below.
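The equations on this slide are images missing from the transcript; the standard Adaline (Widrow–Hoff, or delta-rule) derivation they correspond to is:

```latex
y_i = \sum_j w_j\, x_{ij}
\;\Longrightarrow\;
\frac{\partial E}{\partial w_j} = -\sum_i \left( d_i - y_i \right) x_{ij}
```

Weight modification rule (gradient descent with learning rate $\eta$):

```latex
\Delta w_j = -\eta\, \frac{\partial E}{\partial w_j}
           = \eta \sum_i \left( d_i - y_i \right) x_{ij}
           = \eta \sum_i e_i\, x_{ij},
\qquad
e_i \equiv d_i - y_i
```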

### Derivation of Learning Rules

The same gradient-descent derivation is carried out for three activation functions:

- Linear function
- Tangent sigmoid function
- Logarithmic sigmoid function
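The function definitions are images missing from the transcript; the usual forms of these three activations, and the derivatives needed in the learning rule, are:

```latex
\begin{aligned}
\text{linear:} \quad & f(net) = a \cdot net, & f'(net) &= a \\
\text{tangent sigmoid:} \quad & f(net) = \tanh(net), & f'(net) &= 1 - f(net)^2 \\
\text{logarithmic sigmoid:} \quad & f(net) = \frac{1}{1 + e^{-net}}, & f'(net) &= f(net)\,\bigl(1 - f(net)\bigr)
\end{aligned}
```

In each case the weight update becomes $\Delta w_j = \eta \sum_i e_i\, f'(net_i)\, x_{ij}$, which reduces to the Adaline rule when $f$ is linear.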

### Homework 3

[Figure: a single neuron with inputs x1, x2 and weights w11, w12]

Given a neuron with a linear activation function (a = 0.5), write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] matches the output y1 as closely as possible.

- Use initial values w11 = 1 and w12 = 1.5, and η = 0.01.
- Determine the required number of iterations.
- Note: Submit the m-file in hardcopy and softcopy.

- Case 1 (odd-numbered student ID): [x1; x2] = [2; 3], [y1] = [5]
- Case 2 (even-numbered student ID): [x1; x2] = [[2 1]; [3 1]], [y1] = [5 2]
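A minimal sketch of the kind of m-file intended, shown for Case 1; the stopping threshold and iteration cap are assumptions, since the slides do not specify a convergence criterion:

```matlab
% hw3.m -- LMS training of a single linear neuron (Case 1 shown)
a   = 0.5;             % slope of the linear activation, y = a*net
eta = 0.01;            % learning rate
x   = [2; 3];          % input pattern [x1; x2] (Case 1)
d   = 5;               % desired output y1 (Case 1)
w   = [1; 1.5];        % initial weights [w11; w12]

tol     = 1e-6;        % stopping threshold on squared error (an assumption)
maxIter = 100000;      % iteration cap (an assumption)
for n = 1:maxIter
    y = a * (w' * x);           % forward pass: neuron output
    e = d - y;                  % output error
    w = w + eta * a * e * x;    % delta-rule update: dE/dw = -a*e*x
    if 0.5 * e^2 < tol
        break
    end
end
fprintf('w11 = %.4f, w12 = %.4f after %d iterations\n', w(1), w(2), n);
```

For Case 2 the error and the update are summed over the two input/output patterns (the columns of the data matrices), exactly as in the batch form of the LMS rule above.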

### Homework 3A

[Figure: a single neuron with inputs x1, x2 and weights w11, w12]

Given a neuron with a certain activation function, write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] matches the output y1 as closely as possible.

- Use initial values w11 = 0.5 and w12 = –0.5, and η = 0.01.
- Determine the required number of iterations.
- Note: Submit the m-file in hardcopy and softcopy.

Data (three patterns):

- [x1] = [0.2 0.5 0.4]
- [x2] = [0.5 0.8 0.3]
- [y1] = [0.1 0.7 0.9]

Activation function:

- Even student ID: tangent sigmoid function
- Odd student ID: logarithmic sigmoid function
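A minimal sketch along the same lines, shown for the logarithmic sigmoid; the stopping rule (a negligible weight update) and the iteration cap are assumptions:

```matlab
% hw3a.m -- gradient training of a single sigmoid neuron (logsig shown)
eta = 0.01;
X = [0.2 0.5 0.4; ...
     0.5 0.8 0.3];                 % each column is one pattern [x1; x2]
D = [0.1 0.7 0.9];                 % desired outputs y1
w = [0.5; -0.5];                   % initial weights [w11; w12]

f  = @(net) 1 ./ (1 + exp(-net)); % logarithmic sigmoid (odd IDs)
df = @(y) y .* (1 - y);           % its derivative, written via the output
% For even IDs use the tangent sigmoid instead:
%   f = @(net) tanh(net);   df = @(y) 1 - y.^2;

for n = 1:100000                  % iteration cap (an assumption)
    Y  = f(w' * X);               % outputs for all three patterns
    E  = D - Y;                   % errors
    dw = eta * X * (E .* df(Y))'; % batch delta-rule update with f'(net)
    w  = w + dw;
    if norm(dw) < 1e-8            % stop when the update becomes negligible
        break
    end
end
fprintf('w11 = %.4f, w12 = %.4f after %d iterations\n', w(1), w(2), n);
```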

## Multi Layer Perceptrons

### MLP Architecture

[Figure: an MLP with inputs x1, x2, x3 entering the input layer, hidden layers connected by weights wji and wkj, weights wlk into the output layer, and outputs y1, y2]

- Possesses sigmoid activation functions in the neurons to enable modeling of nonlinearity.
- Contains one or more “hidden layers”.
- Trained using the “Backpropagation” algorithm.

### MLP Design Considerations

- What activation functions should be used?
- How many inputs does the network need?
- How many hidden layers does the network need?
- How many hidden neurons per hidden layer?
- How many outputs should the network have?

There is no standard methodology for determining these values. Although some heuristic guidelines exist, the final values are determined by trial and error.


### Advantages of MLP

- An MLP with one hidden layer is a universal approximator: it can approximate any function to within any preset accuracy.
- The condition: the weights and the biases are appropriately assigned through the use of an adequate learning algorithm.
- MLP can be applied directly in the identification and control of dynamic systems with a nonlinear relationship between input and output.
- MLP delivers the best compromise between the number of parameters, structural complexity, and computational cost.

### Learning Algorithm of MLP

[Figure: function signals flow forward through the network; error signals flow backward]

Computations at each neuron j:

- Neuron output yj (forward propagation of the function signal)
- Vector of error gradient ∂E/∂wji (backward propagation of the error signal)

Together, the forward and backward propagation steps form the "Backpropagation Learning Algorithm".

### Backpropagation Learning Algorithm

If node j is an output node:

[Figure: signal-flow graph at output node j: yi(n) enters through weight wji(n) to form netj(n) and yj(n) = f(netj(n)); the error ej(n) is formed from the desired output dj(n) and −yj(n)]
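The update equations on this slide are images missing from the transcript; in standard backpropagation notation they are:

```latex
e_j(n) = d_j(n) - y_j(n), \qquad
\delta_j(n) = e_j(n)\, f'\!\bigl(net_j(n)\bigr), \qquad
\Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n)
```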

If node j is a hidden node:

[Figure: signal-flow graph at hidden node j: yi(n) enters through wji(n) to form netj(n) and yj(n); yj(n) feeds each downstream node k through wkj(n), forming netk(n), yk(n), error ek(n), and local gradient δk(n)]
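Again reconstructing the missing equation: a hidden node has no direct error signal, so its local gradient is accumulated from the nodes k that it feeds:

```latex
\delta_j(n) = f'\!\bigl(net_j(n)\bigr) \sum_k \delta_k(n)\, w_{kj}(n), \qquad
\Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n)
```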

### MLP Training

[Figure: the forward pass sweeps the layers from left to right (i → j → k); the backward pass sweeps them from right to left (k → j → i)]

- Forward pass:
  - Fix wji(n)
  - Compute yj(n)
- Backward pass:
  - Calculate δj(n)
  - Update weights to wji(n+1)
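To make the two passes concrete, here is a minimal MATLAB sketch of batch backpropagation for a small one-hidden-layer network. The layer sizes, the XOR data, the biases, and the epoch count are illustrative assumptions, not taken from the slides:

```matlab
% backprop_sketch.m -- forward/backward passes for a 2-H-1 MLP
% (requires implicit expansion, MATLAB R2016b or later)
eta = 0.5;  H = 3;
X  = [0 0 1 1; 0 1 0 1];          % inputs, one pattern per column
D  = [0 1 1 0];                   % desired outputs (XOR, an assumption)
W1 = 0.5*randn(H, 2);  b1 = zeros(H, 1);  % input  -> hidden weights
W2 = 0.5*randn(1, H);  b2 = 0;            % hidden -> output weights
f  = @(net) 1 ./ (1 + exp(-net)); % logistic sigmoid

for epoch = 1:20000
    % forward pass: fix the weights, compute the outputs
    Yh = f(W1*X + b1);            % hidden-layer outputs
    Y  = f(W2*Yh + b2);           % network outputs
    E  = D - Y;
    % backward pass: local gradients (deltas), then the weight updates
    d2 = E .* Y .* (1 - Y);             % output-node deltas
    d1 = (W2' * d2) .* Yh .* (1 - Yh);  % hidden-node deltas
    W2 = W2 + eta * d2 * Yh';   b2 = b2 + eta * sum(d2);
    W1 = W1 + eta * d1 * X';    b1 = b1 + eta * sum(d1, 2);
end
disp([D; Y])                      % desired vs. learned outputs
```

Note how the hidden-node deltas d1 are built from the downstream deltas d2 through the transposed weights, exactly as in the hidden-node formula above.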
