
Perceptron Learning Rule


Presentation Transcript


  1. Perceptron Learning Rule

  2. Learning Rules
  • Supervised Learning: the network is provided with a set of examples of proper network behavior (inputs/targets).
  • Reinforcement Learning: the network is only provided with a grade, or score, which indicates network performance.
  • Unsupervised Learning: only the network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.

  3. Perceptron Architecture
  The weight matrix collects one row of weights per neuron:

  W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & \vdots & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}

  Writing the i-th row as the vector {}_i\mathbf{w}:

  {}_i\mathbf{w} = \begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix}, \qquad W = \begin{bmatrix} {}_1\mathbf{w}^T \\ {}_2\mathbf{w}^T \\ \vdots \\ {}_S\mathbf{w}^T \end{bmatrix}
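A minimal NumPy sketch of this architecture. The output rule a = hardlim(Wp + b) is the standard perceptron forward pass; the specific weights, bias, and input below are only illustrative.

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 where n >= 0, else 0 (elementwise)."""
    return (n >= 0).astype(int)

def perceptron_forward(W, b, p):
    """Output of an S-neuron perceptron with R inputs: a = hardlim(W p + b)."""
    return hardlim(W @ p + b)

# Illustrative values: S = 2 neurons, R = 3 inputs.
W = np.array([[0.5, -1.0,  0.5],
              [1.0,  0.0, -0.5]])
b = np.array([0.5, -0.5])
p = np.array([1.0, -1.0, 1.0])
print(perceptron_forward(W, b, p))   # [1 1]
```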

  4. Single-Neuron Perceptron
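For a single neuron (S = 1), the model takes its standard form

a = \mathrm{hardlim}({}_1\mathbf{w}^T\mathbf{p} + b) = \begin{cases} 1, & {}_1\mathbf{w}^T\mathbf{p} + b \ge 0 \\ 0, & \text{otherwise.} \end{cases}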

  5. Decision Boundary
  The decision boundary consists of all input vectors p for which {}_1\mathbf{w}^T\mathbf{p} + b = 0.
  • All points on the decision boundary have the same inner product with the weight vector (namely {}_1\mathbf{w}^T\mathbf{p} = -b).
  • Therefore they all have the same projection onto the weight vector, so they must lie on a line orthogonal to the weight vector.
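A quick numeric check of this claim; the weight vector, bias, and boundary points below are chosen only for illustration.

```python
import numpy as np

w = np.array([2.0, 1.0])   # illustrative weight vector
b = -2.0                   # illustrative bias

# Two different points on the boundary w.p + b = 0, i.e. 2*x1 + x2 = 2.
p1 = np.array([1.0, 0.0])
p2 = np.array([0.0, 2.0])

print(w @ p1, w @ p2)      # both 2.0 = -b: same inner product with w
print(w @ (p2 - p1))       # 0.0: the direction along the boundary is orthogonal to w
```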

  6. Example - OR

  7. OR Solution The weight vector should be orthogonal to the decision boundary; pick a point on the decision boundary to find the bias.
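A short sketch of this procedure for the OR problem. The particular weight vector and boundary point below are one reasonable choice, not necessarily the values used on the original slide.

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

# One choice of weight vector orthogonal to a boundary that
# separates [0, 0] from the other three OR inputs.
w = np.array([0.5, 0.5])

# Pick a point on the intended decision boundary and solve w.p + b = 0 for b.
p_boundary = np.array([0.0, 0.5])
b = -w @ p_boundary            # b = -0.25

# Verify the OR truth table.
for p, t in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]:
    a = hardlim(w @ np.array(p, dtype=float) + b)
    print(p, t, a)             # target and output agree for all four inputs
```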

  8. Multiple-Neuron Perceptron Each neuron has its own decision boundary. A single neuron can classify input vectors into two categories; a perceptron with S neurons can classify input vectors into up to 2^S categories.

  9. Learning Rule Test Problem

  10. Starting Point Start from a random initial weight vector; presenting p_1 to the network gives an incorrect classification.

  11. Tentative Learning Rule
  • Setting {}_1\mathbf{w} equal to \mathbf{p}_1 would work, but the rule is not stable.
  • Instead, add \mathbf{p}_1 to {}_1\mathbf{w}.
  Tentative rule: if t = 1 and a = 0, then {}_1\mathbf{w}^{new} = {}_1\mathbf{w}^{old} + \mathbf{p}.

  12. Second Input Vector (Incorrect Classification) Modification to the rule: if t = 0 and a = 1, then {}_1\mathbf{w}^{new} = {}_1\mathbf{w}^{old} - \mathbf{p}.

  13. Third Input Vector (Incorrect Classification) After applying the rule once more, all patterns are correctly classified. The remaining case, t = a, requires no change: {}_1\mathbf{w}^{new} = {}_1\mathbf{w}^{old}.

  14. Unified Learning Rule A bias is a weight with an input of 1.
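Written out in its standard form (consistent with the error e = t - a used in the example on slide 16), the unified rule updates the weights and the bias with the same expression:

e = t - a, \qquad {}_1\mathbf{w}^{new} = {}_1\mathbf{w}^{old} + e\,\mathbf{p}, \qquad b^{new} = b^{old} + e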

  15. Multiple-Neuron Perceptrons
  To update the i-th row of the weight matrix: {}_i\mathbf{w}^{new} = {}_i\mathbf{w}^{old} + e_i\,\mathbf{p}, \quad b_i^{new} = b_i^{old} + e_i.
  Matrix form: W^{new} = W^{old} + \mathbf{e}\,\mathbf{p}^T, \quad \mathbf{b}^{new} = \mathbf{b}^{old} + \mathbf{e}.
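A sketch of one update step in this matrix form (hardlim is the same hard-limit function as in the earlier sketch; the specific values are illustrative).

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(int)

def perceptron_update(W, b, p, t):
    """One step of the perceptron rule in matrix form:
    e = t - a, W <- W + e p^T, b <- b + e."""
    a = hardlim(W @ p + b)
    e = t - a
    return W + np.outer(e, p), b + e

# Illustrative call: 2 neurons, 3 inputs.
W = np.zeros((2, 3))
b = np.zeros(2)
p = np.array([1.0, -1.0, 1.0])
t = np.array([1, 0])
W, b = perceptron_update(W, b, p, t)
```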

  16. Apple/Banana Example Training Set, Initial Weights, First Iteration: the first pattern is misclassified, so the error is e = t_1 - a = 1 - 0 = 1.

  17. Second Iteration

  18. Check
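The iterations and the final check follow the loop sketched below. The two training patterns and the initial weights here are placeholders, since the original slide values are not reproduced in this transcript.

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

# Hypothetical two-pattern training set (stand-ins for the
# apple/banana feature vectors on the original slides).
patterns = [(np.array([-1.0, 1.0, -1.0]), 1),
            (np.array([ 1.0, 1.0, -1.0]), 0)]

w = np.array([0.5, -1.0, -0.5])   # illustrative initial weights
b = 0.5                           # illustrative initial bias

for _ in range(2):                # two passes, mirroring slides 16-17
    for p, t in patterns:
        e = t - hardlim(w @ p + b)
        w = w + e * p             # perceptron rule
        b = b + e

# Check: every pattern should now be classified correctly.
print(all(hardlim(w @ p + b) == t for p, t in patterns))   # True
```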

  19. Perceptron Rule Capability The perceptron rule will always converge, in a finite number of iterations, to weights that accomplish the desired classification, provided that such weights exist.
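A compact illustration of this guarantee: training until no pattern is misclassified on a linearly separable problem (the OR problem from slides 6-7) terminates after finitely many updates.

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

# Linearly separable training set: the OR problem.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = np.zeros(2)
b = 0.0

converged = False
while not converged:
    converged = True
    for p, t in data:
        p = np.asarray(p, dtype=float)
        e = t - hardlim(w @ p + b)
        if e != 0:
            w, b = w + e * p, b + e
            converged = False

print(w, b)   # weights and bias that classify OR correctly
```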

  20. Perceptron Limitations The perceptron can form only a linear decision boundary, so it cannot solve linearly inseparable problems (the classic example being XOR).
