
An Illustrative Example



  1. An Illustrative Example

  2. Apple/Banana Sorter

  3. Prototype Vectors

  4. McCulloch-Pitts Perceptron

  5. Perceptron Training • How can we train a perceptron for a classification task? • We try to find suitable values for the weights in such a way that the training examples are correctly classified. • Geometrically, we try to find a hyper-plane that separates the examples of the two classes.

  6. Perceptron Geometric View The equation below describes a (hyper-)plane in the input space consisting of real-valued m-dimensional vectors. The plane splits the input space into two regions, each of them describing one class. In the two-input case, the decision boundary is the line w1·p1 + w2·p2 + b = 0, and the decision region for C1 is the half-plane where w1·p1 + w2·p2 + b ≥ 0; the remaining half-plane is the decision region for C2. [Figure: the two-dimensional input plane with the decision boundary separating regions C1 and C2.]
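As a minimal illustration of this decision rule, here is a short Python sketch; the weight and bias values are made up for the example and do not come from the slides:

```python
# Hypothetical two-input perceptron decision rule: the values of
# w1, w2, and b are arbitrary illustrations, not from the slides.
def classify(p1, p2, w1=1.0, w2=1.0, b=-0.5):
    """Return 'C1' if w1*p1 + w2*p2 + b >= 0 (positive side of the
    decision boundary), otherwise 'C2'."""
    return "C1" if w1 * p1 + w2 * p2 + b >= 0 else "C2"

print(classify(1.0, 1.0))    # C1: 1.0 + 1.0 - 0.5 = 1.5 >= 0
print(classify(-1.0, -1.0))  # C2: -1.0 - 1.0 - 0.5 = -2.5 < 0
```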

  7. Two-Input Case

  8. Apple/Banana Example

  9. Testing the Network

  10. XOR problem A typical example of a non-linearly separable function is the XOR. This function takes two input arguments with values in {-1,1} and returns one output in {-1,1}, as specified in the following table:

  x1    x2    XOR(x1, x2)
  -1    -1    -1
  -1     1     1
   1    -1     1
   1     1    -1

  If we think of -1 and 1 as encodings of the truth values false and true, respectively, then XOR computes the logical exclusive or, which yields true if and only if the two inputs have different truth values.

  11. XOR problem [Figure: the four XOR input pairs plotted in the (x1, x2) plane, separated by two blue lines.] In this graph of the XOR, input pairs giving output equal to 1 and -1 are depicted with green and red circles, respectively. These two classes (green and red) cannot be separated using a single line; we have to use two lines, like those depicted in blue. The following NN with two hidden nodes realizes this non-linear separation, where each hidden node describes one of the two blue lines (a code sketch follows below).
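The slide's network weights are not reproduced in this transcript, so the following Python sketch uses one hand-picked solution: two sign-activation hidden units implement the lines x1 + x2 + 1 = 0 and x1 + x2 - 1 = 0 (two parallel lines of the kind the slide describes), and the output unit fires only for points between them. Treat the specific weights as an assumption, not as the ones in the slides:

```python
def sign(x):
    # Bipolar hard limiter used by every unit in this sketch.
    return 1 if x >= 0 else -1

def xor_net(x1, x2):
    h1 = sign(x1 + x2 + 1)    # hidden unit for the line x1 + x2 + 1 = 0 (acts as OR)
    h2 = sign(x1 + x2 - 1)    # hidden unit for the line x1 + x2 - 1 = 0 (acts as AND)
    return sign(h1 - h2 - 1)  # fires (=1) only between the two lines

for x1 in (-1, 1):
    for x2 in (-1, 1):
        print(x1, x2, "->", xor_net(x1, x2))
# -1 -1 -> -1 | -1 1 -> 1 | 1 -1 -> 1 | 1 1 -> -1
```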

  12. Multilayer Network

  13. Abbreviated Notation

  14. Recurrent Network

  15. Hamming Network

  16. Feedforward Layer

  17. Recurrent Layer

  18. Hamming Operation

  19. Hamming Operation

  20. Hopfield Network

  21. Apple/Banana Problem

  22. Summary
  • Perceptron
    • Feedforward Network
    • Linear Decision Boundary
    • One Neuron for Each Decision
  • Hamming Network
    • Competitive Network
    • First Layer – Pattern Matching (Inner Product)
    • Second Layer – Competition (Winner-Take-All)
    • # Neurons = # Prototype Patterns
  • Hopfield Network
    • Dynamic Associative Memory Network
    • Network Output Converges to a Prototype Pattern
    • # Neurons = # Elements in each Prototype Pattern
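As a rough sketch of the Hamming network summarized above, the Python below scores each prototype with an inner product in the first layer, then runs a winner-take-all competition in the second. The bipolar prototype vectors and the inhibition constant eps are made-up placeholders, not values from the slides:

```python
def hamming(p, prototypes, eps=0.3, max_iter=100):
    """Return the index of the prototype the network settles on."""
    R = len(p)
    # Feedforward layer: inner product of the input with each
    # prototype, shifted by R so every score starts non-negative.
    a = [sum(wi * pi for wi, pi in zip(w, p)) + R for w in prototypes]
    # Recurrent layer: each neuron keeps its own score but is
    # inhibited by the sum of its rivals (winner-take-all).
    for _ in range(max_iter):
        new_a = [max(0.0, ai - eps * (sum(a) - ai)) for ai in a]
        if new_a == a:   # competition has settled
            break
        a = new_a
    return max(range(len(a)), key=lambda i: a[i])

prototypes = [[1, -1, -1], [1, 1, -1]]     # hypothetical prototypes
print(hamming([-1, -1, -1], prototypes))   # 0: closer to prototype 0
```

A design note: with S competing neurons, eps is conventionally kept below 1/(S-1) so that a single winner survives the competition.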

  23. Learning Rules • Supervised learning: the network is provided with a set of examples of proper network behavior (inputs/targets). • Reinforcement learning: the network is only provided with a grade, or score, which indicates network performance. • Unsupervised learning: only the network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.

  24. Early Learning Rules • These learning rules are designed for single-layer neural networks. • They are generally more limited in their applicability. • Some of the early algorithms are: • Perceptron learning • LMS learning • Grossberg learning

  25. Perceptron Architecture AGAIN!!!!

  26. Single-Neuron Perceptron

  27. Decision Boundary

  28. Example - OR

  29. OR Solution

  30. Multiple-Neuron Perceptron Each neuron will have its own decision boundary. A single neuron can classify input vectors into two categories. A multi-neuron perceptron with S neurons can classify input vectors into up to 2^S categories (see the sketch below).
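A small sketch of why S neurons yield up to 2^S categories: each neuron contributes one bit (which side of its boundary the input falls on), and the S-bit output pattern indexes a category. The weights below are arbitrary illustrations:

```python
def multi_neuron(p, W, b):
    # One bit per neuron: 1 if the input is on the positive side
    # of that neuron's decision boundary, else 0.
    return [1 if sum(wi * pi for wi, pi in zip(w, p)) + bi >= 0 else 0
            for w, bi in zip(W, b)]

W = [[1.0, 0.0], [0.0, 1.0]]   # S = 2 neurons, hypothetical weights
b = [0.0, 0.0]
bits = multi_neuron([0.5, -0.5], W, b)
category = sum(bit << i for i, bit in enumerate(bits))
print(bits, "-> category", category)   # one of 2**2 = 4 categories
```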

  31. Learning Rule Test Problem

  32. Starting Point

  33. Tentative Learning Rule

  34. Second Input Vector

  35. Third Input Vector

  36. Unified Learning Rule

  37. Multiple-Neuron Perceptrons

  38. Apple/Banana Example

  39. Second Iteration

  40. Check

  41. Perceptron Rule Capability The perceptron rule will always converge to weights which accomplish the desired classification, assuming that such weights exist, i.e., that the classes are linearly separable.

  42. Rosenblatt’s single-layer perceptron is trained as follows: • Randomly initialize all the network’s weights. • Apply the inputs and find the outputs (feedforward). • Compute the errors e = t − a. • Update each weight as w_new = w_old + η·e·p (the perceptron rule). • Repeat steps 2 to 4 until the errors reach a satisfactory level. A sketch of this loop follows below.
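A minimal Python sketch of this loop, assuming the update rule w_new = w_old + eta*e*p with error e = t - a; the OR dataset and the value of eta are made up for illustration:

```python
def train_perceptron(data, eta=0.5, epochs=100):
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0   # step 1: initialize the weights
                            # (zeros here; the slide says random)
    for _ in range(epochs):
        total_error = 0
        for p, t in data:
            # step 2: feedforward with a hard-limit activation
            a = 1 if sum(wi * pi for wi, pi in zip(w, p)) + b >= 0 else 0
            e = t - a                          # step 3: the error
            # step 4: perceptron rule (the bias updates like a
            # weight whose input is always 1)
            w = [wi + eta * e * pi for wi, pi in zip(w, p)]
            b += eta * e
            total_error += abs(e)
        if total_error == 0:   # errors have reached a satisfactory level
            break
    return w, b

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR
print(train_perceptron(data))  # converges: OR is linearly separable
```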

  43. What is η? • Name: learning rate. • Range: usually between 0 and 1. • Its value can change during learning. • It can be defined separately for each parameter.

  44. Perceptron Limitations

  45. You can find your first homework here: http://saba.kntu.ac.ir/eecd/People/aliyari/ (NEXT WEEK)
