Artificial Neural Networks

Outline: Biological Motivation; Perceptron; Gradient Descent; Least Mean Square Error; Multilayer networks; Sigmoid node; Backpropagation

Biological Neural Systems
Neuron switching time: > 10^-3 secs
Number of neurons in the human brain: ~10^10
Artificial Neural Networks
[Figure: perceptron unit — inputs x1, x2, …, xn with weights w1, w2, …, wn, plus a bias input x0 = 1 with weight w0, feeding a summation node Σ_{i=0}^n wi·xi whose thresholded result is the output o.]

o(x1, …, xn) = 1 if Σ_{i=0}^n wi·xi > 0, and −1 otherwise
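As a concrete sketch, the thresholded output above can be written in a few lines of Python (function and variable names here are illustrative, not from the slides):

```python
def perceptron_output(w, x):
    """Threshold unit: 1 if sum_{i=0}^{n} w_i * x_i > 0, else -1.

    w includes the bias weight w0; x excludes the bias input x0 = 1.
    """
    total = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if total > 0 else -1

# With w = [w0, w1, w2] = [0.25, -0.1, 0.5]:
perceptron_output([0.25, -0.1, 0.5], [2, 1])    # 0.25 - 0.2 + 0.5 > 0  ->  1
perceptron_output([0.25, -0.1, 0.5], [-1, -1])  # 0.25 + 0.1 - 0.5 < 0  -> -1
```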
Linearly Separable

[Figure: two scatter plots of + and − examples in the (x1, x2) plane — a labeling that a single line separates, and XOR, which no single line can separate.]
Theorem: for perceptrons over n inputs, VCdim = n+1
S: the training sample
xi: input vector
t = c(x): the target value
o: the perceptron output
η: the learning rate (a small constant), assume η = 1
Update rule: wi ← wi + Δwi, where Δwi = η·(t − o)·xi
[Figure: positive (t = 1) and negative (t = −1) training examples in the (x1, x2) plane, with the decision line redrawn after each update.]

Initial weights w = [0.25, −0.1, 0.5], i.e. decision boundary x2 = 0.2·x1 − 0.5

(x, t) = ([−1, −1], 1): o = sgn(0.25 + 0.1 − 0.5) = −1 ≠ t, so Δw = [0.2, −0.2, −0.2] (here η = 0.1)
(x, t) = ([2, 1], −1): o = sgn(0.45 − 0.6 + 0.3) = 1 ≠ t, so Δw = [−0.2, −0.4, −0.2]
(x, t) = ([1, 1], 1): o = sgn(0.25 − 0.7 + 0.1) = −1 ≠ t, so Δw = [0.2, 0.2, 0.2]
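The perceptron training steps above can be replayed with a short Python sketch of the rule Δwi = η·(t − o)·xi, using η = 0.1 (names are illustrative):

```python
def sgn(z):
    return 1 if z > 0 else -1

def perceptron_update(w, x, t, eta=0.1):
    """One step of w_i <- w_i + eta * (t - o) * x_i, with x0 = 1 prepended."""
    xb = [1] + list(x)
    o = sgn(sum(wi * xi for wi, xi in zip(w, xb)))
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, xb)]

w = [0.25, -0.1, 0.5]
for x, t in [([-1, -1], 1), ([2, 1], -1), ([1, 1], 1)]:
    w = perceptron_update(w, x, t)
# w is now [0.45, -0.5, 0.3] (up to floating-point rounding)
```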
Minimize the squared error E[w] = 1/2 Σ_{d∈S} (td − od)^2, where S is the set of training examples.

[Figure: the error surface over weight space, with a gradient step from (w1, w2) to (w1 + Δw1, w2 + Δw2).]

Gradient: ∇E[w] = [∂E/∂w0, …, ∂E/∂wn]
S = {⟨(1,1),1⟩, ⟨(−1,−1),1⟩, ⟨(1,−1),−1⟩, ⟨(−1,1),−1⟩}
Δw = −η·∇E[w]
Δwi = −η·∂E/∂wi
∂E/∂wi = ∂/∂wi [1/2 Σ_d (td − od)^2]
       = ∂/∂wi [1/2 Σ_d (td − Σ_i wi·xid)^2]
       = Σ_d (td − od)·(−xid)
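The last line of the derivation translates directly into code. A minimal Python sketch for a linear unit (function and parameter names are illustrative):

```python
def error_gradient(w, examples):
    """dE/dw_i = sum_d (t_d - o_d) * (-x_id), for E[w] = 1/2 sum_d (t_d - o_d)^2.

    examples is a list of (x, t) pairs; each x already includes x0 = 1.
    """
    grad = [0.0] * len(w)
    for x, t in examples:
        o = sum(wi * xi for wi, xi in zip(w, x))  # linear unit output
        for i, xi in enumerate(x):
            grad[i] += (t - o) * (-xi)
    return grad
```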
GradientDescent(S: training_examples, η)
Until TERMINATION Do:
  Batch mode: Δw = −η·∇ES[w] over the entire data S, where ES[w] = 1/2 Σ_{d∈S} (td − od)^2
  Incremental mode: Δw = −η·∇Ed[w] over individual training examples d, where Ed[w] = 1/2 (td − od)^2
Incremental Gradient Descent can approximate Batch Gradient Descent arbitrarily closely if η is made small enough
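A side-by-side sketch of the two modes for a linear unit (η and all names are illustrative, not from the slides):

```python
def batch_step(w, S, eta):
    """One batch step: w <- w - eta * grad(E_S[w]), gradient over all of S."""
    grad = [0.0] * len(w)
    for x, t in S:
        o = sum(wi * xi for wi, xi in zip(w, x))
        grad = [g + (o - t) * xi for g, xi in zip(grad, x)]  # dE/dw_i terms
    return [wi - eta * g for wi, g in zip(w, grad)]

def incremental_step(w, S, eta):
    """One pass of incremental steps: update w after each individual example d."""
    for x, t in S:
        o = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w
```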
Perceptron learning rule: guaranteed to succeed if the training examples are linearly separable and η is sufficiently small
Linear unit using Gradient Descent: converges to the minimum squared-error hypothesis even when the training data are not linearly separable
[Figure: feed-forward multilayer network — input layer, hidden layer(s), output layer.]
[Figure: sigmoid unit — inputs x1, x2, …, xn with weights w1, w2, …, wn and bias input x0 = 1 with weight w0, computing z = Σ_{i=0}^n wi·xi and output o = σ(z) = 1/(1 + e^−z).]
σ(z) = 1/(1 + e^−z) is the sigmoid function.
Its derivative: dσ(z)/dz = σ(z)·(1 − σ(z))
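Both σ and its derivative are cheap to compute — the identity dσ/dz = σ(z)(1 − σ(z)) needs no extra exponential — which is one reason backpropagation uses this unit. A Python sketch:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(z):
    """d(sigma)/dz = sigma(z) * (1 - sigma(z)), reusing the forward value."""
    s = sigmoid(z)
    return s * (1.0 - s)
```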
Weight update with momentum: Δwi,j(n) = η·δj·xi + α·Δwi,j(n−1), where α is the momentum constant
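One step of a momentum-style update — Δw(n) = η·δ·x + α·Δw(n−1) — can be sketched as follows (η = 0.1 and α = 0.9 are illustrative defaults, and all names are hypothetical):

```python
def momentum_update(w, delta_j, x, prev_dw, eta=0.1, alpha=0.9):
    """delta_w_i(n) = eta * delta_j * x_i + alpha * delta_w_i(n-1).

    prev_dw holds the previous step's weight changes; returns (new_w, new_dw).
    """
    dw = [eta * delta_j * xi + alpha * p for xi, p in zip(x, prev_dw)]
    w = [wi + d for wi, d in zip(w, dw)]
    return w, dw
```

The α·Δw(n−1) term keeps the update moving in its previous direction, which can speed descent through flat regions of the error surface.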
Boolean functions
Continuous functions
Theorem: VCdim(F(C,G)) < 2ds log (es)