Neural Networks: Single Layer Perceptrons. Derivation of a Learning Rule for Perceptrons: Adaline (Adaptive Linear Element), Widrow [1962]. Least Mean Squares (LMS).
Neural Networks
Single Layer Perceptrons
[Figure: a single-layer perceptron, neuron k, with inputs x1, x2, …, xm and weights wk1, wk2, …, wkm.]
Adaline (Adaptive Linear Element), Widrow [1962]
Goal: find the weight values that minimize the mean squared error between the actual outputs and the desired outputs (Least Mean Squares, LMS).
i: index of the data set (the ith data set)
j: index of the input (the jth input)
For data set i with desired output $d_i$ and actual output $y_i = \sum_j w_j x_{ij}$, define the error $e_i = d_i - y_i$ and the cost $E = \tfrac{1}{2}\sum_i e_i^2$; then

$$\frac{\partial E}{\partial w_j} = -\sum_i e_i\, x_{ij}.$$

Weight Modification Rule: descending the gradient with learning rate $\eta$, we can write

$$\Delta w_j = \eta \sum_i e_i\, x_{ij}, \qquad w_j \leftarrow w_j + \Delta w_j.$$
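The LMS rule can be sketched in a few lines of code; this is a hedged illustration, not the slides' own implementation, applied sample by sample (the stochastic form of the rule). The learning rate, epoch count, and example data are illustrative choices.

```python
import numpy as np

def lms_train(X, d, eta=0.1, epochs=500):
    """X: (n_samples, n_inputs) inputs, d: (n_samples,) desired outputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(X.shape[0]):   # i: index of the data set
            y = w @ X[i]              # linear (Adaline) output
            e = d[i] - y              # error e_i = d_i - y_i
            w += eta * e * X[i]       # Delta w_j = eta * e_i * x_ij
    return w

# Illustrative data generated from y = 2*x1 + 3*x2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
d = X @ np.array([2.0, 3.0])
w = lms_train(X, d)                   # converges toward [2, 3]
```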
The gradient term in the weight update depends on the activation function used:
Linear function: $f(net) = a\,net$, with $f'(net) = a$.
Tangent sigmoid function: $f(net) = \tanh(net)$, with $f'(net) = 1 - f(net)^2$.
Logarithmic sigmoid function: $f(net) = \dfrac{1}{1 + e^{-net}}$, with $f'(net) = f(net)\,(1 - f(net))$.
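As a quick check, the three activations and their derivatives can be coded directly; this sketch uses the common MATLAB-style names (purelin, tansig, logsig) as an assumption about which variants the slides mean, and the slope parameter a of the linear unit defaults to 1.

```python
import numpy as np

def purelin(x, a=1.0):
    """Linear: f(x) = a*x, f'(x) = a."""
    return a * x, a * np.ones_like(x)

def tansig(x):
    """Tangent sigmoid: f(x) = tanh(x), f'(x) = 1 - f(x)**2."""
    f = np.tanh(x)
    return f, 1.0 - f**2

def logsig(x):
    """Logarithmic sigmoid: f(x) = 1/(1+e^-x), f'(x) = f(x)*(1 - f(x))."""
    f = 1.0 / (1.0 + np.exp(-x))
    return f, f * (1.0 - f)
```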
[Figure: a single neuron with inputs x1, x2 and weights w11, w12.]
Given a neuron with a linear activation function (a = 0.5), write an m-file that calculates the weights w11 and w12 so that the input [x1; x2] best matches the output y1.
Case 1: [x1; x2] = [2; 3], [y1] = [5]
Case 2: [x1; x2] = [[2 1]; [3 1]], [y1] = [5 2]
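The slide asks for an m-file; the following is a hedged NumPy sketch of the same least-squares computation, assuming the linear activation y = a·net with a = 0.5 as stated in the exercise. Case 1 is underdetermined (one sample, two weights), so the pseudoinverse gives the minimum-norm solution; Case 2 is exactly determined.

```python
import numpy as np

a = 0.5                                  # linear activation slope from the slide

# Case 1: one sample -> underdetermined; pinv picks the minimum-norm weights
X1 = np.array([[2.0, 3.0]])              # rows are samples [x1 x2]
y1 = np.array([5.0])
w_case1 = np.linalg.pinv(a * X1) @ y1    # reproduces y1 exactly

# Case 2: two samples -> exactly determined linear system
X2 = np.array([[2.0, 3.0], [1.0, 1.0]])
y2 = np.array([5.0, 2.0])
w_case2 = np.linalg.solve(a * X2, y2)    # -> w11 = 2, w12 = 2
```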
Given a neuron with a certain activation function, write an m-file that calculates the weights w11 and w12 so that the input [x1; x2] best matches the output y1.
[x1] = [0.2 0.5 0.4]
[x2] = [0.5 0.8 0.3]
[y1] = [0.1 0.7 0.9]
Activation function: ?
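The slide leaves the activation function unspecified ("?"); this sketch assumes the logarithmic sigmoid (the targets lie in (0, 1)) and trains w11, w12 by gradient descent on the squared error. The learning rate and epoch count are illustrative choices, and with only two weights and three samples a perfect fit is generally not achievable, only a best fit.

```python
import numpy as np

X = np.array([[0.2, 0.5], [0.5, 0.8], [0.4, 0.3]])   # rows are samples [x1 x2]
d = np.array([0.1, 0.7, 0.9])                        # desired outputs y1

w = np.zeros(2)                                      # [w11, w12]
eta = 0.5
for _ in range(5000):
    net = X @ w
    y = 1.0 / (1.0 + np.exp(-net))                   # logsig output
    e = d - y
    grad = -(e * y * (1.0 - y)) @ X                  # dE/dw for E = 0.5*sum(e^2)
    w -= eta * grad                                  # gradient descent step
```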
Neural Networks
Multi Layer Perceptrons
[Figure: a multilayer perceptron. Inputs x1, x2, x3 enter the input layer; weights wji connect the input layer to the first hidden layer, wkj connect the hidden layers, and wlk connect the last hidden layer to the output layer, which produces outputs y1 and y2.]
[Figure: signal flow in the “Backpropagation Learning Algorithm” — function signals propagate forward through the activation functions f(·), while error signals propagate backward through the network.]
If node j is an output node, the desired response dj(n) is available, so the error can be computed directly (the −1 branch in the signal-flow graph forms the difference):

$$e_j(n) = d_j(n) - y_j(n).$$

The local gradient scales this error by the slope of the activation f(·) at netj(n), and the weight wji(n) is updated using the input signal yi(n):

$$\delta_j(n) = e_j(n)\, f'(net_j(n)), \qquad \Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n).$$
If node j is a hidden node, no desired response exists for it. Its local gradient is instead obtained by propagating backward the gradients δk(n) = ek(n) f′(netk(n)) of all nodes k that receive yj(n), through the connecting weights wkj(n):

$$\delta_j(n) = f'(net_j(n)) \sum_k \delta_k(n)\, w_{kj}(n), \qquad \Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n).$$
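The two delta computations above can be combined into one training step; this is a hedged sketch for a small 2-3-1 network with logsig units, where the layer sizes, sample data, learning rate, and random seed are all illustrative assumptions.

```python
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_ji = rng.normal(size=(3, 2))        # input layer i -> hidden layer j
W_kj = rng.normal(size=(1, 3))        # hidden layer j -> output layer k
x = np.array([0.5, -1.0])             # one input sample (illustrative)
d = np.array([0.8])                   # desired output
eta = 0.1

# Forward propagation of the function signal
net_j = W_ji @ x
y_j = logsig(net_j)
net_k = W_kj @ y_j
y_k = logsig(net_k)

# Backward propagation of the error signal
e_k = d - y_k
delta_k = e_k * y_k * (1 - y_k)                  # output node: dk = ek * f'(netk)
delta_j = y_j * (1 - y_j) * (W_kj.T @ delta_k)   # hidden node: dj = f'(netj) * sum_k dk * wkj

# Weight updates: Delta w = eta * delta * (input signal)
W_kj += eta * np.outer(delta_k, y_j)
W_ji += eta * np.outer(delta_j, x)
```

A single small step along the negative gradient should shrink the output error on this sample.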
[Figure: index convention for the backpropagation equations — neuron i lies in the layer to the left of neuron j, and neuron k lies in the layer to its right.]