
Neural Networks: Chapter 9

Joost N. Kok

Universiteit Leiden

Unsupervised Competitive Learning
  • Competitive learning
  • Winner-take-all units
  • Cluster/Categorize input data
  • Feature mapping
Unsupervised Competitive Learning

[Diagram: winner-take-all network; an n-dimensional input layer feeds a layer of output units, and the winning output unit is highlighted]
Simple Competitive Learning
  • Winner: the output unit i* whose weight vector is closest to the input, i.e. |w_i* − ξ| ≤ |w_i − ξ| for all i
  • Lateral inhibition: the winner suppresses the activity of the other output units
Simple Competitive Learning
  • Update weights for the winning neuron only: Δw_i*j = η(ξ_j − w_i*j)
Simple Competitive Learning
  • Update rule for all neurons: Δw_ij = η δ_i,i* (ξ_j − w_ij), where δ_i,i* = 1 for the winner and 0 otherwise
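A minimal sketch of this rule in Python with NumPy (function and parameter names are illustrative): the winner is the unit with the smallest Euclidean distance to the input, and only its weight vector moves.

```python
import numpy as np

def competitive_step(weights, x, eta=0.1):
    """One step of simple competitive learning.

    weights: (n_units, n_features) array, modified in place.
    x: (n_features,) input pattern.
    Returns the index of the winning unit.
    """
    # Winner: unit whose weight vector is closest to the input.
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Only the winner moves toward the input (delta_{i,i*} = 1 for i = i*).
    weights[winner] += eta * (x - weights[winner])
    return winner
```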
Graph Bipartitioning
  • Patterns: edges = dipole stimuli
  • Two output units
Simple Competitive Learning
  • Dead Unit Problem Solutions
    • Initialize weights to samples from the input
    • Leaky learning: also update the weights of the losers (but with a smaller learning rate η)
    • Arrange neurons in a geometrical way: update also neighbors
    • Turn on input patterns gradually
    • Conscience mechanism
    • Add noise to input patterns
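The leaky-learning remedy from the list above can be sketched as follows (function and rate names are assumptions): losing units also drift toward the input, just much more slowly, so no unit can stay "dead" forever.

```python
import numpy as np

def leaky_competitive_step(weights, x, eta_win=0.1, eta_lose=0.01):
    """Leaky learning: losers also move toward the input,
    but with a much smaller learning rate."""
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    rates = np.full(len(weights), eta_lose)
    rates[winner] = eta_win            # winner gets the full rate
    weights += rates[:, None] * (x - weights)
    return winner
```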
Vector Quantization
  • Classes are represented by prototype vectors
  • Voronoi tessellation
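In code, vector quantization classification is just nearest-prototype lookup (a sketch, names illustrative); the set of inputs mapped to each prototype is its cell in the Voronoi tessellation.

```python
import numpy as np

def vq_classify(prototypes, x):
    """Return the index of the nearest prototype vector;
    each prototype's preimage is one Voronoi cell."""
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
```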
Learning Vector Quantization
  • Labelled sample data
  • Update rule depends on current classification
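A sketch of the classic LVQ1 form of this rule (assuming labelled prototypes; names illustrative): the nearest prototype is attracted if its label matches the sample's label, and repelled otherwise.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, label, eta=0.05):
    """One LVQ1 update on the nearest prototype (in place)."""
    winner = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    # Attract on a correct classification, repel on an incorrect one.
    sign = 1.0 if proto_labels[winner] == label else -1.0
    prototypes[winner] += sign * eta * (x - prototypes[winner])
    return winner
```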
Adaptive Resonance Theory
  • Stability-Plasticity Dilemma
  • Supply of neurons, only use them if needed
  • Notion of “sufficiently similar”
Adaptive Resonance Theory
  • Start with all weights = 1
  • Enable all output units
  • Find winner among enabled units
  • Test match
  • Update weights
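The steps above can be sketched as a simplified ART-1-style loop for binary patterns (the vigilance threshold and helper names are assumptions, not the full theory): weights start at all ones, the best enabled unit is tested for a "sufficiently similar" match, and on resonance its weights are pruned toward the input.

```python
import numpy as np

def art_present(weights, x, vigilance=0.7):
    """Present binary pattern x; return the resonating unit's index,
    or None if no enabled unit passes the vigilance test.

    weights: (n_units, n_features) boolean array, initially all True.
    """
    x = np.asarray(x, dtype=bool)
    enabled = [True] * len(weights)
    while any(enabled):
        # Find the winner among the enabled units (largest overlap).
        cands = [i for i, on in enumerate(enabled) if on]
        winner = max(cands, key=lambda i: int(np.sum(weights[i] & x)))
        # Vigilance test: is the stored pattern "sufficiently similar"?
        match = np.sum(weights[winner] & x) / max(np.sum(x), 1)
        if match >= vigilance:
            weights[winner] &= x       # update weights (logical AND)
            return winner
        enabled[winner] = False        # disable this unit, search again
    return None
```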
Feature Mapping
  • Geometrical arrangement of output units
  • Nearby outputs correspond to nearby input patterns
  • Feature Map
  • Topology preserving map
Self Organizing Map

[Figure: the network's weight vectors before learning and after learning]
  • Determine the winner (the neuron of which the weight vector has the smallest distance to the input vector)
  • Move the weight vector w of the winning neuron towards the input vector
Self Organizing Map
  • Impose a topological order onto the competitive neurons (e.g., rectangular map)
  • Let neighbors of the winner share the “prize” (The “postcode lottery” principle)
  • After learning, neurons with similar weights tend to cluster on the map
Self Organizing Map
  • Input: uniformly randomly distributed points
  • Output: Map of 20×20 neurons
  • Training
    • Starting with a large learning rate and neighborhood size, both are gradually decreased to facilitate convergence
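The whole procedure can be sketched compactly (decay schedules and constants are illustrative assumptions): each step finds the winner, then moves the winner and its grid neighbours toward the input with a Gaussian neighbourhood; both the learning rate and the neighbourhood radius are gradually decreased.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, epochs=20,
              eta0=0.5, sigma0=5.0, seed=0):
    """Train a rectangular SOM on data of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]   # grid coordinates of the neurons
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps               # training progress in [0, 1)
            eta = eta0 * 0.01 ** t           # learning rate decays...
            sigma = sigma0 * 0.1 ** t        # ...and the neighbourhood shrinks
            # Winner: neuron whose weight vector is closest to the input.
            d = np.linalg.norm(weights - x, axis=2)
            wy, wx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the winner, measured on the grid.
            h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * sigma ** 2))
            weights += eta * h[:, :, None] * (x - weights)
            step += 1
    return weights
```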
Feature Mapping
  • Retinotopic Map
  • Somatosensory Map
  • Tonotopic Map
Hybrid Learning Schemes
  • First layer uses standard competitive learning
  • Second (output) layer is trained using delta rule
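A sketch of such a two-phase network (dataset, sizes, and rates below are illustrative assumptions): phase 1 learns the first-layer prototypes with standard competitive learning; phase 2 trains the linear output layer with the delta rule on the winner-take-all hidden activations.

```python
import numpy as np

def train_hybrid(data, targets, n_hidden=2, epochs=100,
                 eta_c=0.1, eta_d=0.2, seed=0):
    """Phase 1: competitive learning of hidden prototypes.
    Phase 2: delta rule on the one-hot hidden activations."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes to samples from the input (avoids dead units).
    protos = data[rng.choice(len(data), n_hidden, replace=False)].copy()
    for _ in range(epochs):                        # phase 1
        for x in data:
            k = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
            protos[k] += eta_c * (x - protos[k])
    W = np.zeros((targets.shape[1], n_hidden))
    for _ in range(epochs):                        # phase 2
        for x, tgt in zip(data, targets):
            h = np.zeros(n_hidden)
            h[int(np.argmin(np.linalg.norm(protos - x, axis=1)))] = 1.0
            W += eta_d * np.outer(tgt - W @ h, h)  # delta rule
    return protos, W
```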
Radial Basis Functions
  • First layer with normalized Gaussian activation functions
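A minimal sketch of such a first layer (widths and names are illustrative): each hidden unit responds with a Gaussian of its distance to the input, and the activations are normalized so they sum to one.

```python
import numpy as np

def normalized_gaussian_layer(x, centers, width=1.0):
    """Normalized Gaussian (RBF) activations for input x."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers
    g = np.exp(-d2 / (2 * width ** 2))
    return g / np.sum(g)                      # activations sum to 1
```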