
An Illustrative Example



Presentation Transcript


  1. An Illustrative Example

  2. Apple/Orange Sorter
     [Diagram: fruit passes the Sensors, the Neural Network classifies it, and the Sorter routes it to the Apple or Orange bin.]
     Shape: {1 : round ; -1 : elliptical}
     Texture: {1 : smooth ; -1 : rough}
     Weight: {1 : > 1 lb. ; -1 : < 1 lb.}

  3. Prototype Vectors
     Sensor encodings:
     Shape: {1 : round ; -1 : elliptical}
     Texture: {1 : smooth ; -1 : rough}
     Weight: {1 : > 1 lb. ; -1 : < 1 lb.}
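
The transcript does not reproduce the prototype vectors themselves. A minimal sketch of what they would look like under the sensor encoding above, consistent with the Hamming-distance example on slide 10 (the specific values are an assumption):

```python
# Sensor order: [shape, texture, weight], each encoded as +1 / -1 per slide 3.
# These specific prototypes are an assumption; the transcribed slide only
# lists the sensor encodings.
p_orange = [1, -1, -1]   # round, rough, lighter than 1 lb.
p_apple  = [1,  1, -1]   # round, smooth, lighter than 1 lb.
```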

  4. Perceptron
     hardlims: a = -1 if n < 0; a = +1 if n ≥ 0

  5. Perceptron (cont.)
     i.e. W = [-1, 1]^T
     (p1, p2) = (-1, 2): n = 2, a = hardlims(2) = 1
     (p1, p2) = (1, -3): n = -5, a = hardlims(-5) = -1
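
A small sketch of slide 5's arithmetic follows. The slide gives W = [-1, 1] and the resulting n values but does not show a bias; the stated n = 2 and n = -5 come out only if a bias of b = -1 is assumed, so that assumption is made explicit below.

```python
def hardlims(n):
    # Symmetric hard limit from slide 4: -1 if n < 0, +1 if n >= 0.
    return 1 if n >= 0 else -1

W = [-1, 1]   # weights from slide 5
b = -1        # assumed bias (not shown in the transcript)

def perceptron(p):
    n = W[0] * p[0] + W[1] * p[1] + b   # n = Wp + b
    return n, hardlims(n)

print(perceptron((-1, 2)))   # (2, 1)
print(perceptron((1, -3)))   # (-5, -1)
```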

  6. Apple/Orange Example

  7. Apple/Orange Example (result: orange)
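
Slides 6 and 7 apply a single perceptron to the sorting problem; only the final label ("orange") survives in the transcript. A sketch under the assumption that the weights single out the texture sensor (a natural choice given the prototypes, but the slides' actual W and b are not visible here):

```python
def hardlims(n):
    return 1 if n >= 0 else -1

# Assumed parameters: only the texture sensor matters, so smooth (+1) -> apple
# and rough (-1) -> orange. The transcribed slides do not give W or b.
W = [0, 1, 0]
b = 0

def classify(p):                 # p = [shape, texture, weight]
    n = sum(w * x for w, x in zip(W, p)) + b
    return "apple" if hardlims(n) == 1 else "orange"

print(classify([-1, -1, -1]))    # an oblong, rough, light fruit -> "orange"
```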

  8. Hamming Network

  9. Feedforward Layer for Orange/Apple Recognition: S = 2; purelin: a = n

  10. Feedforward Layer (cont.) Why is it called Hamming? The Hamming distance between two vectors equals the number of elements in which they differ. E.g., the Hamming distance between [1, -1, -1] and [1, 1, 1] is 2, and the Hamming distance between [1, 1, -1] and [1, 1, 1] is 1.
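
The connection between the feedforward layer's inner products and the Hamming distance can be checked directly: for ±1 vectors of length R, the inner product equals R minus twice the Hamming distance, so the prototype with the largest inner product is the one with the fewest differing elements. A small sketch:

```python
def hamming(x, y):
    # Number of positions in which the two vectors differ (slide 10).
    return sum(a != b for a, b in zip(x, y))

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

# For +/-1 vectors of length R: inner(x, y) = R - 2 * hamming(x, y).
print(hamming([1, -1, -1], [1, 1, 1]), inner([1, -1, -1], [1, 1, 1]))  # 2 -1
print(hamming([1,  1, -1], [1, 1, 1]), inner([1,  1, -1], [1, 1, 1]))  # 1  1
```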

  11. Recurrent Layer
     poslin: a = 0 if n < 0; a = n if n ≥ 0

  12. Hamming Operation, First Layer: input
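
The first-layer computation itself is not legible in the transcript. In the standard Hamming-network construction, the rows of W1 are the prototype vectors and each bias equals R (here 3); a sketch under that assumption, using the prototypes from slide 3 and an oblong test fruit as input (both assumptions):

```python
W1 = [[1, -1, -1],    # orange prototype (assumed, see slide 3)
      [1,  1, -1]]    # apple prototype (assumed, see slide 3)
b1 = [3, 3]           # bias = R, the number of sensors
p  = [-1, -1, -1]     # assumed test input: oblong, rough, light fruit

# a1 = W1 p + b1; each entry is 2 x (number of elements agreeing with that prototype).
a1 = [sum(w * x for w, x in zip(row, p)) + b for row, b in zip(W1, b1)]
print(a1)             # [4, 2] -> closer to the orange prototype
```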

  13. Hamming Operation, Second Layer: ε = 0.5 (result: orange)
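
The second layer then competes: a2(t+1) = poslin(W2 a2(t)) with W2 = [[1, -ε], [-ε, 1]] and ε = 0.5, starting from the first-layer output. A sketch continuing the assumed numbers from the first-layer sketch above:

```python
def poslin(n):
    # Positive linear transfer function from slide 11: 0 if n < 0, n otherwise.
    return n if n >= 0 else 0.0

eps = 0.5
W2 = [[1.0, -eps],
      [-eps, 1.0]]

a = [4.0, 2.0]        # first-layer output from the sketch after slide 12
for _ in range(10):   # iterate a2(t+1) = poslin(W2 a2(t)) until it settles
    a_next = [poslin(sum(w * x for w, x in zip(row, a))) for row in W2]
    if a_next == a:
        break
    a = a_next

print(a)              # [3.0, 0.0] -> neuron 1 (orange) wins the competition
```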

  14. Hopfield Network

  15. Apple/Orange Problem
     satlins: a = -1 if n < -1; a = n if -1 ≤ n ≤ 1; a = 1 if n > 1

  16. Apple/Orange Problem Test: orange
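
The Hopfield weights and biases are not visible in the transcript. One parameter choice that solves this problem (an assumption here) keeps small positive feedback on the shape and weight entries, biases them toward the values shared by both prototypes, and lets the texture entry decide; iterating a(t+1) = satlins(W a(t) + b) from the oblong test fruit then converges to the orange prototype, matching the slide's result:

```python
def satlins(n):
    # Symmetric saturating linear function from slide 15.
    return -1.0 if n < -1 else (1.0 if n > 1 else n)

# Assumed parameters (not shown in the transcript).
W = [[0.2, 0.0, 0.0],
     [0.0, 1.2, 0.0],
     [0.0, 0.0, 0.2]]
b = [0.9, 0.0, -0.9]

a = [-1.0, -1.0, -1.0]   # assumed test input: oblong, rough, light fruit
for _ in range(20):      # a(t+1) = satlins(W a(t) + b)
    a_next = [satlins(sum(w * x for w, x in zip(row, a)) + bi)
              for row, bi in zip(W, b)]
    if a_next == a:
        break
    a = a_next

print(a)                 # converges to [1.0, -1.0, -1.0]: the orange prototype
```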

  17. Summary
     • Perceptron
       - Feedforward Network
       - Linear Decision Boundary
       - One Neuron for Each Decision
     • Hamming Network
       - Competitive Network
       - First Layer: Pattern Matching (Inner Product)
       - Second Layer: Competition (Winner-Take-All)
       - # Neurons = # Prototype Patterns
     • Hopfield Network
       - Dynamic Associative Memory Network
       - Network Output Converges to a Prototype Pattern
       - # Neurons = # Elements in Each Prototype Pattern
