
Neural Networks Chapter 9




  1. Neural Networks, Chapter 9 Joost N. Kok, Universiteit Leiden

  2. Unsupervised Competitive Learning • Competitive learning • Winner-take-all units • Cluster/Categorize input data • Feature mapping

  3. Unsupervised Competitive Learning [figure: three competing output units, labeled 1, 2, 3]

  4. Unsupervised Competitive Learning [figure: n-dimensional input layer feeding an output layer; the winning output unit is highlighted]

  5. Simple Competitive Learning • Winner: the output unit i* whose weight vector lies closest to the input, |w_i* − ξ| ≤ |w_i − ξ| for all i (for normalized weights, equivalently the largest net input w_i*·ξ) • Lateral inhibition between the output units enforces this winner-take-all behaviour

  6. Simple Competitive Learning • Update weights for winning neuron: Δw_i* = η (ξ − w_i*), i.e. move the winner's weight vector towards the input

  7. Simple Competitive Learning • Update rule for all neurons: Δw_i = η δ_i,i* (ξ − w_i), where the Kronecker delta δ_i,i* = 1 only for the winner
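A minimal NumPy sketch of this winner-take-all rule (not from the slides: the unit count, learning rate, epoch count, and function name are assumptions; initializing prototypes to input samples anticipates one of the dead-unit remedies on slide 9):

    import numpy as np

    def competitive_learning(X, n_units=3, eta=0.1, epochs=50, seed=0):
        # Hard competitive learning: only the winning unit moves.
        rng = np.random.default_rng(seed)
        # Initialize prototypes to randomly chosen input samples.
        W = X[rng.choice(len(X), size=n_units, replace=False)].copy()
        for _ in range(epochs):
            for x in rng.permutation(X):
                winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest prototype
                W[winner] += eta * (x - W[winner])                 # Δw = η(ξ − w)
        return W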

  8. Graph Bipartitioning • Patterns: edges = dipole stimuli • Two output units

  9. Simple Competitive Learning • Dead Unit Problem Solutions • Initialize weights to samples from the input • Leaky learning: also update the weights of the losers, but with a smaller learning rate η (a sketch follows this list) • Arrange neurons in a geometrical way: also update neighbors • Turn on input patterns gradually • Conscience mechanism • Add noise to input patterns
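A sketch of the leaky-learning remedy from this list, under the same assumptions as the previous snippet; eta_lose is a hypothetical name for the losers' smaller rate:

    import numpy as np

    def leaky_competitive_learning(X, n_units=3, eta_win=0.1, eta_lose=0.001,
                                   epochs=50, seed=0):
        # Leaky learning: every unit moves towards each input, but the
        # winner moves much faster, so no unit can stay dead forever.
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_units, X.shape[1]))
        for _ in range(epochs):
            for x in rng.permutation(X):
                winner = np.argmin(np.linalg.norm(W - x, axis=1))
                rates = np.full(n_units, eta_lose)   # small pull for the losers
                rates[winner] = eta_win              # large pull for the winner
                W += rates[:, None] * (x - W)
        return W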

  10. Vector Quantization • Classes are represented by prototype vectors • Voronoi tessellation

  11. Learning Vector Quantization • Labelled sample data • Update rule depends on current classification: move the nearest prototype towards the sample if their labels agree, away from it if they disagree
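A sketch of the simplest such rule, LVQ1 (the function name, the prototype-labels array, and η are assumptions):

    import numpy as np

    def lvq1_step(W, proto_labels, x, y, eta=0.05):
        # One LVQ1 update: attract the nearest prototype if its label
        # matches the sample's label y, repel it otherwise.
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        sign = 1.0 if proto_labels[winner] == y else -1.0
        W[winner] += sign * eta * (x - W[winner])
        return W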

  12. Adaptive Resonance Theory • Stability-Plasticity Dilemma • Keep a supply of uncommitted neurons; use one only when it is needed • Notion of “sufficiently similar” (the vigilance parameter)

  13. Adaptive Resonance Theory • Start with all weights = 1 • Enable all output units • Find winner among enabled units • Test match • Update weights
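A rough ART1-style sketch of this loop for binary inputs; the choice function, vigilance test, and intersection ("fast learning") update follow the usual ART1 recipe, but every name and constant here is an assumption:

    import numpy as np

    def art1(X, rho=0.7, max_units=10):
        # rho is the vigilance threshold: the notion of "sufficiently similar".
        W = []                               # committed prototypes (uncommitted units
        assignments = []                     # behave as if their weights were all 1)
        for x in X.astype(bool):
            enabled = list(range(len(W)))    # enable all committed output units
            winner = None
            while enabled:
                # Find the winner among enabled units: largest normalized overlap.
                scores = [(W[j] & x).sum() / (0.5 + W[j].sum()) for j in enabled]
                j = enabled[int(np.argmax(scores))]
                # Test match: does the prototype cover enough of the input?
                if (W[j] & x).sum() / max(x.sum(), 1) >= rho:
                    winner = j
                    W[j] &= x                # update weights (fast learning)
                    break
                enabled.remove(j)            # disable this unit, try the next
            if winner is None and len(W) < max_units:
                W.append(x.copy())           # commit a fresh all-ones unit to x
                winner = len(W) - 1
            assignments.append(winner)
        return W, assignments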

  14. Feature Mapping • Geometrical arrangement of output units • Nearby outputs correspond to nearby input patterns • Feature Map • Topology preserving map

  15. Self Organizing Map [figure: weight vector w and input i, before and after learning] • Determine the winner (the neuron whose weight vector has the smallest distance to the input vector) • Move the weight vector w of the winning neuron towards the input i

  16. Self Organizing Map • Impose a topological order onto the competitive neurons (e.g., rectangular map) • Let neighbors of the winner share the “prize” (The “postcode lottery” principle) • After learning, neurons with similar weights tend to cluster on the map

  17. Self Organizing Map

  18. Self Organizing Map

  19. Self Organizing Map • Input: uniformly randomly distributed points • Output: map of 202 neurons • Training: start with a large learning rate and neighborhood size, then gradually decrease both to facilitate convergence
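A compact training sketch along these lines (the grid size, linear decay schedules, and Gaussian neighborhood are assumptions, not the slide's exact settings):

    import numpy as np

    def train_som(X, grid=(20, 20), epochs=100, eta0=0.5, sigma0=10.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        W = rng.random((rows * cols, X.shape[1]))     # one weight vector per node
        # Grid position of each node, for neighborhood distances on the map.
        pos = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        for t in range(epochs):
            eta = eta0 * (1 - t / epochs)             # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighborhood
            for x in rng.permutation(X):
                winner = np.argmin(np.linalg.norm(W - x, axis=1))
                d2 = ((pos - pos[winner]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))    # neighbors share the prize
                W += eta * h[:, None] * (x - W)
        return W.reshape(rows, cols, -1)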

  20. Self Organizing Map

  21. Self Organizing Map

  22. Self Organizing Map

  23. Self Organizing Map

  24. Self Organizing Map

  25. Feature Mapping • Retinotopic Map • Somatosensory Map • Tonotopic Map

  26. Feature Mapping

  27. Feature Mapping

  28. Feature Mapping

  29. Feature Mapping

  30. Kohonen’s Algorithm
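The equation on this slide did not survive extraction; the standard form of Kohonen's rule, a neighborhood-weighted version of the competitive update from slide 7, is

    \Delta w_i = \eta \, h(i, i^*) \, (\xi - w_i), \qquad
    h(i, i^*) = \exp\!\left( -\frac{\lVert r_i - r_{i^*} \rVert^2}{2\sigma^2} \right)

where i^* is the winning neuron, r_i is the grid position of neuron i, and σ sets the neighborhood width.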

  31. Travelling Salesman Problem

  32. Hybrid Learning Schemes [figure: network combining an unsupervised first layer with a supervised second layer]

  33. Counterpropagation • First layer uses standard competitive learning • Second (output) layer is trained using delta rule
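A sketch of this two-stage scheme (layer sizes and rates are assumptions; because the competitive layer's output is one-hot, the delta rule reduces to updating only the winner's outgoing weights):

    import numpy as np

    def train_counterprop(X, Y, n_hidden=10, eta_w=0.1, eta_v=0.1,
                          epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        W = X[rng.choice(len(X), n_hidden, replace=False)].copy()  # competitive layer
        V = np.zeros((n_hidden, Y.shape[1]))                       # output layer
        for _ in range(epochs):
            for x, y in zip(X, Y):
                j = np.argmin(np.linalg.norm(W - x, axis=1))  # winner in layer 1
                W[j] += eta_w * (x - W[j])   # standard competitive update
                V[j] += eta_v * (y - V[j])   # delta rule; prediction for x is V[j]
        return W, V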

  34. Radial Basis Functions • First layer with normalized Gaussian activation functions
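A sketch of such a first layer (the centers and the shared width σ are assumptions; in practice the centers are often found by competitive learning, as in the previous slide):

    import numpy as np

    def rbf_layer(X, centers, sigma=1.0):
        # Normalized Gaussian activations: each row sums to 1.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        G = np.exp(-d2 / (2 * sigma ** 2))
        return G / (G.sum(axis=1, keepdims=True) + 1e-12)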
