
CS621: Artificial Intelligence


Presentation Transcript


  1. CS621: Artificial Intelligence Pushpak Bhattacharyya, CSE Dept., IIT Bombay Lecture 43 – Perceptron Capacity; Perceptron Training; Convergence 8th Nov, 2010

  2. The Perceptron Model A perceptron is a computing element with input lines having associated weights and the cell having a threshold value. The perceptron model is motivated by the biological neuron. [Figure: perceptron with inputs x1, …, xn, weights w1, …, wn, threshold θ, and output y]

  3. Step function / Threshold function: y = 1 for Σ wi xi ≥ θ; y = 0 otherwise. [Figure: y plotted against Σ wi xi, stepping from 0 to 1 at θ]
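A minimal sketch of this computing element in Python (the function name and the example weights are illustrative, not from the slides):

```python
def perceptron_output(weights, inputs, theta):
    """Step / threshold activation: output 1 iff the weighted sum reaches theta."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= theta else 0

# Illustrative OR gate: w1 = w2 = 1, theta = 0.5
print(perceptron_output([1, 1], [0, 1], 0.5))  # 1
print(perceptron_output([1, 1], [0, 0], 0.5))  # 0
```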

  4. Features of Perceptron • Input-output behavior is discontinuous and the derivative does not exist at Σ wi xi = θ • Σ wi xi - θ is the net input, denoted as net • Referred to as a linear threshold element - linear because x appears only with power 1 • y = f(net): the relation between y and net is non-linear

  5. Recap on Capacity

  6. Fundamental Observation • The number of threshold functions (TFs) computable by a perceptron is equal to the number of regions produced by the 2^n hyperplanes obtained by plugging the values <x1, x2, x3, …, xn> into the equation ∑i=1..n wi xi = θ (one hyperplane per Boolean input vector, in the space of the parameters w1, …, wn, θ).

  7. The number of regions formed by n hyperplanes in d-dim passing through the origin, Rn,d, is given by the recurrence relation Rn,d = Rn-1,d + Rn-1,d-1; we use a generating function as the operating tool. Boundary conditions: 1 hyperplane in d-dim gives 2 regions (R1,d = 2); n hyperplanes in 1-dim reduce to n points through the origin, again giving 2 regions (Rn,1 = 2). The generating function is f(x,y) = ∑n ∑d Rn,d x^n y^d.
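A small computational sketch of this recurrence, assuming the boundary conditions just stated (the function name regions is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def regions(n, d):
    """Number of regions R(n, d) formed by n hyperplanes through the origin in d dimensions."""
    if n == 1 or d == 1:
        return 2                      # boundary conditions: R(1, d) = R(n, 1) = 2
    return regions(n - 1, d) + regions(n - 1, d - 1)

# e.g. 4 hyperplanes through the origin in 3 dimensions (general position)
print(regions(4, 3))  # 14
```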

  8. From the recurrence relation we have: Rn-1,d corresponds to 'shifting' n by 1 place ⇒ multiplication by x; Rn-1,d-1 corresponds to 'shifting' n and d by 1 place ⇒ multiplication by xy. On expanding f(x,y) we get:

  9. After all this expansion, since the other two terms become zero:

  10. This implies, and we also have: comparing the coefficients of each term on the RHS, we get:

  11. Comparing coefficients we get:
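The expressions on slides 9–11 were images and did not survive transcription; as a sketch of where the coefficient comparison leads, the standard counting result consistent with the recurrence and boundary conditions above is:

```latex
R_{n,d} \;=\; 2\sum_{i=0}^{d-1}\binom{n-1}{i}
% sanity check against the recurrence:
% R_{4,3} = 2\,[\binom{3}{0}+\binom{3}{1}+\binom{3}{2}] = 14
```

Applied to the capacity question of slide 6: the 2^n hyperplanes pass through the origin of the (n+1)-dimensional space of <w1, …, wn, θ>, so the number of threshold functions of n variables is at most R_{2^n, n+1} = 2 ∑i=0..n C(2^n - 1, i).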

  12. Perceptron Training Algorithm (PTA) Preprocessing: 1. The computation law is modified to y = 1 if ∑ wi xi > θ, and y = 0 if ∑ wi xi < θ. [Figure: perceptron units before and after the modification of the threshold test]

  13. PTA – preprocessing cont… 2. Absorb θ as a weight: add an extra input x0 = -1 with weight w0 = θ, so the test becomes ∑i=0..n wi xi > 0. 3. Negate all the zero-class examples. [Figure: perceptron redrawn with the extra input x0 = -1 carrying weight w0 = θ]

  14. Example to demonstrate preprocessing • OR perceptron: 1-class <1,1>, <1,0>, <0,1>; 0-class <0,0>. • Augmented x vectors: 1-class <-1,1,1>, <-1,1,0>, <-1,0,1>; 0-class <-1,0,0>. • Negate 0-class: <1,0,0>

  15. Example to demonstrate preprocessing cont… Now the vectors are:
        x0   x1   x2
   X1   -1    0    1
   X2   -1    1    0
   X3   -1    1    1
   X4    1    0    0
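A minimal sketch of these two preprocessing steps applied to the OR example (the function and variable names are illustrative):

```python
def preprocess(one_class, zero_class):
    """Absorb theta via an extra x0 = -1 input, then negate the zero-class examples."""
    augmented_one = [[-1] + list(x) for x in one_class]
    augmented_zero = [[-1] + list(x) for x in zero_class]
    negated_zero = [[-xi for xi in x] for x in augmented_zero]
    return augmented_one + negated_zero

# OR example from the slides
vectors = preprocess(one_class=[(1, 1), (1, 0), (0, 1)], zero_class=[(0, 0)])
print(vectors)  # [[-1, 1, 1], [-1, 1, 0], [-1, 0, 1], [1, 0, 0]]
```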

  16. Perceptron Training Algorithm 1. Start with a random value of w, e.g. <0,0,0,…>. 2. Test for w·xi > 0; if the test succeeds for i = 1, 2, …, n then return w. 3. Otherwise modify w: wnext = wprev + xfail, and repeat from step 2.
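A runnable sketch of the algorithm on the preprocessed vectors; pta, dot, and the iteration cap are illustrative additions, not part of the lecture:

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def pta(vectors, max_iters=10000):
    """Perceptron Training Algorithm on preprocessed vectors (all should satisfy w.x > 0)."""
    w = [0] * len(vectors[0])            # start from the zero vector, as in the slides
    for _ in range(max_iters):
        failed = next((x for x in vectors if dot(w, x) <= 0), None)
        if failed is None:               # every vector passes the test w.x > 0
            return w
        w = [wi + xi for wi, xi in zip(w, failed)]   # w_next = w_prev + x_fail
    raise RuntimeError("did not converge (data may not be linearly separable)")

# OR example, preprocessed as on the previous slides
print(pta([[-1, 0, 1], [-1, 1, 0], [-1, 1, 1], [1, 0, 0]]))  # [1, 2, 2]
```

On the OR vectors this reaches <1, 2, 2>, the same separating weight vector that the trace on the next slide ends with.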

  17. Tracing PTA on OR-example
w = <0,0,0>; w·X1 fails → w = <-1,0,1>
w·X4 fails → w = <0,0,1>
w·X2 fails → w = <-1,1,1>
w·X4 fails → w = <0,1,1>
w·X4 fails → w = <1,1,1>
w·X1 fails → w = <0,1,2>
w·X4 fails → w = <1,1,2>
w·X2 fails → w = <0,2,2>
w·X4 fails → w = <1,2,2>
success: w = <1,2,2> passes the test for all four vectors

  18. PTA convergence

  19. Statement of Convergence of PTA • Statement: Whatever be the initial choice of weights and whatever be the vector chosen for testing, PTA converges if the vectors are from a linearly separable function.

  20. Proof of Convergence of PTA • Suppose wn is the weight vector at the nth step of the algorithm. • At the beginning, the weight vector is w0. • Go from wi to wi+1 when a vector Xj fails the test wi · Xj > 0, and update wi as wi+1 = wi + Xj. • Since the Xjs are from a linearly separable function, ∃ w* s.t. w* · Xj > 0 ∀ j.

  21. Proof of Convergence of PTA (contd.) • Consider the expression G(wn) = (wn · w*) / |wn|, where wn = weight at the nth iteration. • G(wn) = |wn| · |w*| · cos φ / |wn|, where φ = angle between wn and w*. • G(wn) = |w*| · cos φ • G(wn) ≤ |w*| (as -1 ≤ cos φ ≤ 1)

  22. Behavior of Numerator of G
wn · w* = (wn-1 + Xn-1fail) · w*
= wn-1 · w* + Xn-1fail · w*
= (wn-2 + Xn-2fail) · w* + Xn-1fail · w*
= …
= w0 · w* + (X0fail + X1fail + … + Xn-1fail) · w*
w* · Xifail is always positive: note carefully.
• Suppose |Xj| ≥ δmin, where δmin is the minimum magnitude.
• Numerator of G ≥ |w0 · w*| + n · δmin · |w*|
• So, the numerator of G grows with n.

  23. Behavior of Denominator of G
• |wn| = √(wn · wn)
= √((wn-1 + Xn-1fail) · (wn-1 + Xn-1fail))
= √((wn-1)² + 2 · wn-1 · Xn-1fail + (Xn-1fail)²)
≤ √((wn-1)² + (Xn-1fail)²)   (as wn-1 · Xn-1fail ≤ 0, since Xn-1fail failed the test)
≤ √((w0)² + (X0fail)² + (X1fail)² + … + (Xn-1fail)²)
• |Xj| ≤ δmax (maximum magnitude)
• So, Denominator ≤ √((w0)² + n · δmax²)

  24. Some Observations • Numerator of G grows as n • Denominator of G grows as √n ⇒ Numerator grows faster than denominator • If PTA does not terminate, the G(wn) values will become unbounded.
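Putting the two growth rates together (a sketch; δmin and δmax are the placeholder names used above for symbols lost in transcription):

```latex
G(w_n) \;=\; \frac{w_n \cdot w^*}{\lvert w_n \rvert}
       \;\ge\; \frac{w_0 \cdot w^* + n\,\delta_{\min}\,\lvert w^* \rvert}
                     {\sqrt{\lvert w_0 \rvert^{2} + n\,\delta_{\max}^{2}}}
       \;\sim\; \sqrt{n} \;\longrightarrow\; \infty
% if PTA never terminated, contradicting G(w_n) = |w^*| cos(phi) <= |w^*| from slide 21,
% so the algorithm must terminate.
```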

  25. Some Observations contd. • But, as |G(wn)| ≤ |w*|, which is finite, this is impossible! • Hence, PTA has to converge. • The proof is due to Marvin Minsky.

  26. Convergence of PTA proved • Whatever be the initial choice of weights and whatever be the vector chosen for testing, PTA converges if the vectors are from a linearly separable function.
