
CHAPTER 14


Presentation Transcript


  1. CHAPTER 14 Competitive Networks

  2. Objectives • Discuss networks that are very similar in structure and operation to the Hamming network. • They use an associative learning rule to adaptively learn to classify patterns. • Three such networks are introduced in this chapter: the competitive network, the self-organizing feature map (SOFM) network, and the learning vector quantization (LVQ) network.

  3. Hamming Network • The first layer performs a correlation between the input vector and the prototype vectors. • The second layer performs a competition to determine which of the prototype vectors is closest to the input vector.

  4. Layer 1: Correlation • The prototype vectors: {p1, p2, …, pQ}. • The weight matrix and the bias vector for Layer 1: $\mathbf{W}^1 = [\mathbf{p}_1 \; \mathbf{p}_2 \; \cdots \; \mathbf{p}_Q]^T$ and $\mathbf{b}^1 = [R \; R \; \cdots \; R]^T$, where R is the number of elements in the input vector. • The output of the first layer: $\mathbf{a}^1 = \mathbf{W}^1\mathbf{p} + \mathbf{b}^1$, so $a^1_q = \mathbf{p}_q^T\mathbf{p} + R$. These inner products indicate how close each of the prototype patterns is to the input pattern.
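A minimal numeric sketch of Layer 1 in Python; the two prototype vectors and the input pattern are made-up values for illustration, not taken from the slides:

```python
import numpy as np

# Illustrative prototype vectors (assumed values), each with R = 2 elements.
p1 = np.array([1.0, -1.0])
p2 = np.array([-1.0, 1.0])

R = p1.size
W1 = np.vstack([p1, p2])      # each row of W1 is one prototype vector
b1 = np.full(W1.shape[0], R)  # every bias equals R

p = np.array([1.0, -0.5])     # input pattern (assumed)
a1 = W1 @ p + b1              # inner products plus R
print(a1)                     # [3.5 0.5]: p is far closer to p1 than to p2
```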

  5. Layer 2: Competition • The second layer is initialized with the output of the first layer: $\mathbf{a}^2(0) = \mathbf{a}^1$. • The neurons compete with each other by iterating $\mathbf{a}^2(t+1) = \mathrm{poslin}(\mathbf{W}^2\mathbf{a}^2(t))$; the neuron with the largest initial value wins the competition → winner-take-all. • Lateral inhibition: each neuron excites itself and inhibits all the other neurons.
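A sketch of this Layer-2 recurrence; the inhibition strength eps = 0.3 is an assumed value (the standard analysis only requires 0 < eps < 1/(Q-1)):

```python
import numpy as np

def poslin(n):
    """Positive-linear transfer function: max(0, n), elementwise."""
    return np.maximum(0.0, n)

Q = 2                           # number of prototype neurons
eps = 0.3                       # lateral inhibition strength (assumed)
W2 = (1 + eps) * np.eye(Q) - eps * np.ones((Q, Q))  # 1 on diagonal, -eps elsewhere

a = np.array([3.5, 0.5])        # initialized with the Layer-1 output from above
while np.count_nonzero(a) > 1:  # iterate until a single neuron remains active
    a = poslin(W2 @ a)
print(a)                        # only the winning neuron stays nonzero
```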

  6. Competitive Layer • A recurrent competitive layer can be summarized by the competitive transfer function $\mathbf{a} = \mathrm{compet}(\mathbf{W}\mathbf{p})$, which outputs 1 for the neuron with the largest net input and 0 for all others (assuming the vectors have a normalized length of L).
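A minimal sketch of the competitive transfer function, which collapses the recurrent competition into a single step (breaking ties toward the lowest index is an implementation choice):

```python
import numpy as np

def compet(n):
    """Winner-take-all: 1 for the neuron with the largest net input, 0 elsewhere."""
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0     # np.argmax breaks ties by lowest index
    return a

W = np.array([[0.6, 0.8],     # rows: normalized prototype vectors (assumed values)
              [1.0, 0.0]])
p = np.array([0.8, 0.6])
print(compet(W @ p))          # [1. 0.]: the first prototype is closest
```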

  7. Competitive Learning • Instar rule: ${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\,a_i(q)\,(\mathbf{p}(q) - {}_i\mathbf{w}(q-1))$. • It trains the weights of a competitive network without knowing the prototype vectors in advance. • For the competitive network, $a_i$ is nonzero (i.e., 1) only for the winning neuron ($i = i^*$). • Kohonen rule (the instar rule applied to the winner): ${}_{i^*}\mathbf{w}(q) = (1-\alpha)\,{}_{i^*}\mathbf{w}(q-1) + \alpha\,\mathbf{p}(q)$; see the sketch below.
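A sketch of one Kohonen update for a competitive network; alpha = 0.5 is an illustrative learning rate, not a value from the slides:

```python
import numpy as np

def kohonen_step(W, p, alpha=0.5):
    """One competitive-learning step: only the winner's row moves toward p."""
    i_star = np.argmax(W @ p)                        # winning neuron
    W[i_star] = (1 - alpha) * W[i_star] + alpha * p  # Kohonen rule
    return W, i_star

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
p = np.array([0.8, 0.6])
W, i = kohonen_step(W, p)
print(i, W[i])   # neuron 1 wins and moves halfway toward p: [0.9 0.3]
```

Repeated presentations move each weight row toward the center of one input cluster, which is the behavior the next slide illustrates.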

  8. Example: Hamming Network • (Figures: input vectors; weights after p2 is presented; final and initial weights.) • Each weight vector will point at a different cluster of input vectors and become a prototype for that cluster. • Once the network has learned to cluster the input vectors, it will classify new vectors accordingly.

  9. Prob. with Compet. Layer The First Problem: • The choice of learning rate forces a trade-off between the speed of learning and the stability of the final weight vectors. • Initial training can be done with a large learning rate for fast learning; the learning rate can then be decreased as training progresses, to achieve stable prototype vectors.

  10. Prob. with Compet. Layer The Second Problem: • A more serious stability problem occurs when clusters are close together. • Input vectors: blue stars; order of diagrams: (a) → (b) → (c). • The two input vectors in (a) are presented several times; the resulting final weight vectors are shown in (b), and so on. • The result is unstable learning: weight vectors can be pulled back and forth between nearby clusters.

  11. Prob. with Compet. Layer The Third Problem: • A neuron's initial weight vector may be located so far from any input vector that it never wins the competition, and therefore never learns. • The result is a "dead" neuron, which does nothing useful.

  12. Prob. with Compet. Layer The Fourth Problem: • When the number of clusters is not known in advance, the competitive layer may not be acceptable for an application, since the number of neurons fixes the number of clusters. The Fifth Problem: • Competitive layers cannot form classes with nonconvex regions, or classes that are the union of unconnected regions.

  13. Compet. Layers in Biology On-center/off-surround connection • The weights in Layer 2 of the Hamming network: $w_{ij} = 1$ if $i = j$, and $w_{ij} = -\varepsilon$ if $i \ne j$. • In terms of the distances between neurons: $w_{ij} = 1$ if $d_{ij} = 0$, and $w_{ij} = -\varepsilon$ if $d_{ij} > 0$. • Each neuron reinforces itself (center), while inhibiting all other neurons (surround).

  14. Mexican-Hat Function • In biology, a neuron reinforces not only itself, but also those neurons close to it. • Typically, the transition from reinforcement to inhibition occurs smoothly as the distance between neurons increases.
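A sketch contrasting the hard on-center/off-surround weights with a smooth Mexican-hat profile; the difference-of-Gaussians form and all of its constants are assumptions for illustration (the slide gives no formula):

```python
import numpy as np

def on_center_off_surround(d, eps=0.2):
    """Hard version: reinforce at distance 0, inhibit everywhere else."""
    return np.where(d == 0, 1.0, -eps)

def mexican_hat(d, sigma1=1.0, sigma2=2.0):
    """Smooth version (difference of Gaussians, assumed): reinforcement
    fades gradually into inhibition as the distance between neurons grows."""
    return np.exp(-d**2 / (2 * sigma1**2)) - 0.5 * np.exp(-d**2 / (2 * sigma2**2))

d = np.arange(6)
print(on_center_off_surround(d))  # [ 1.  -0.2 -0.2 -0.2 -0.2 -0.2]
print(mexican_hat(d))             # positive near 0, negative farther out
```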

  15. Neighborhood • The neighborhood $N_i(d)$ contains the indices of all the neurons that lie within a radius d of neuron i: $N_i(d) = \{j : d_{ij} \le d\}$.
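A minimal sketch of this neighborhood for neurons arranged on a 2-D grid; the 5x5 grid shape and the Euclidean grid distance are assumptions:

```python
import numpy as np

def neighborhood(i, d, grid_shape=(5, 5)):
    """Indices of all neurons within radius d of neuron i on a 2-D grid.
    Neurons are numbered row by row; distances are measured on the grid."""
    rows, cols = grid_shape
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
    dist = np.linalg.norm(coords - coords[i], axis=1)
    return np.flatnonzero(dist <= d)

print(neighborhood(12, 1))  # [ 7 11 12 13 17]: the center neuron and its 4 neighbors
```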

  16. Self-Organizing Feature Map • (Figures: 2-D topology; initial weight vectors.) • Determine the winning neuron $i^*$ using the same procedure as the competitive layer. • Update the weight vectors of the winner and of all neurons in its neighborhood using the Kohonen rule: ${}_i\mathbf{w}(q) = (1-\alpha)\,{}_i\mathbf{w}(q-1) + \alpha\,\mathbf{p}(q)$ for $i \in N_{i^*}(d)$; see the sketch below.
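A sketch of one SOFM training step under these rules; the grid, the learning rate alpha = 0.3, and the radius d = 1 are illustrative assumptions:

```python
import numpy as np

def sofm_step(W, p, coords, alpha=0.3, d=1.0):
    """One SOFM step: the winner and every neuron within grid radius d
    of it move toward the input p via the Kohonen rule."""
    i_star = np.argmax(W @ p)                             # winning neuron
    dist = np.linalg.norm(coords - coords[i_star], axis=1)
    for i in np.flatnonzero(dist <= d):                   # neighborhood of winner
        W[i] = (1 - alpha) * W[i] + alpha * p
    return W

rng = np.random.default_rng(0)
coords = np.array([(r, c) for r in range(3) for c in range(3)], dtype=float)
W = rng.normal(size=(9, 2))                # 3x3 grid, random initial weights
W = sofm_step(W, np.array([1.0, 0.0]), coords)
```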

  17. Feature Map Training • (Figures: snapshots of the feature map during training, 250 iterations per diagram.)

  18. Other Training Examples

  19. Improving Feature Maps To speed up the self-organizing process and to make it more reliable: • Gradually reduce the size of the neighborhoods during training, until each neighborhood covers only the winning neuron. • Gradually decrease the learning rate asymptotically toward 0 during training. • Let the winning neuron use a larger learning rate than the neighboring neurons. (A sketch of the decaying schedules follows below.)
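A hedged sketch of the decaying schedules this slide suggests; the exponential form and every constant are assumptions, not values from the slides:

```python
import math

def alpha_schedule(q, alpha0=0.9, tau=200.0):
    """Learning rate decays asymptotically toward 0 as training progresses."""
    return alpha0 * math.exp(-q / tau)

def radius_schedule(q, d0=3.0, tau=200.0):
    """Neighborhood radius shrinks toward 0; once it drops below 1, the
    neighborhood covers only the winning neuron itself."""
    return d0 * math.exp(-q / tau)
```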

  20. Learning Vector Quantization • The LVQ network is a hybrid network: it uses both unsupervised and supervised learning to form classifications.

  21. LVQ: Competitive Layer • The net input of the 1st layer is the negative of the distance between the input vector and each weight vector: $n^1_i = -\lVert {}_i\mathbf{w}^1 - \mathbf{p} \rVert$. • Each neuron in the 1st layer is assigned to a class, with several neurons often assigned to the same class. • Each class is then assigned to one neuron in the 2nd layer.

  22. Subclass • In the competitive network, the neuron with the nonzero output indicates which class the input vector belongs to. • For the LVQ network, the winning neuron in the first layer indicates a subclass, rather than a class. • There may be several different neurons (subclasses) that make up each class.

  23. LVQ: Linear Layer • The 2nd layer of the LVQ network is used to combine subclasses into a single class. • The columns of $\mathbf{W}^2$ represent subclasses, and the rows represent classes. • $\mathbf{W}^2$ has a single 1 in each column, with the other elements set to zero: $(w^2)_{ki} = 1$ means subclass i is a part of class k. • Example (see the sketch below): subclasses 1 & 3 belong to class 1, subclasses 2 & 5 belong to class 2, and subclass 4 belongs to class 3.
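A sketch of the W2 described on this slide, using its own subclass-to-class assignment (3 classes, 5 subclasses):

```python
import numpy as np

# Rows = classes, columns = subclasses; a single 1 per column.
# Subclasses 1 & 3 -> class 1, subclasses 2 & 5 -> class 2, subclass 4 -> class 3.
W2 = np.array([[1, 0, 1, 0, 0],
               [0, 1, 0, 0, 1],
               [0, 0, 0, 1, 0]])

a1 = np.array([0, 0, 1, 0, 0])  # 1st layer: subclass 3 won the competition
print(W2 @ a1)                  # [1 0 0]: the input is assigned to class 1
```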

  24. Complex/Convex Boundary • A standard competitive layer has the limitation that it can only create decision regions that are convex. • The process of combining subclasses to form a class allows the LVQ network to create complex class boundaries.

  25. LVQ Learning • The learning in the LVQ network combines competitive learning with supervision. • Assignment of W2: if hidden neuron i is to be assigned to class k, then set $(w^2)_{ki} = 1$, with the rest of column i set to zero. • If p is classified correctly, move the weights of the winning hidden neuron toward p. • If p is classified incorrectly, move the weights of the winning hidden neuron away from p. (A sketch of one step follows below.)
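A sketch of one LVQ1 training step under these rules; alpha = 0.1 is an illustrative learning rate:

```python
import numpy as np

def lvq1_step(W1, W2, p, target_class, alpha=0.1):
    """One LVQ1 step. Rows of W1 are subclass prototypes; W2 maps
    subclasses to classes as on the previous slide."""
    i_star = np.argmin(np.linalg.norm(W1 - p, axis=1))  # closest prototype wins
    k_star = np.argmax(W2[:, i_star])                   # class of the winner
    if k_star == target_class:
        W1[i_star] += alpha * (p - W1[i_star])          # correct: move toward p
    else:
        W1[i_star] -= alpha * (p - W1[i_star])          # wrong: move away from p
    return W1
```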

  26. Example: LVQ • Classification problem (figure not included in the transcript).

  27. Training Process • Present p3 to the network (figure not included in the transcript).

  28. After 1st & Many Iterations • (Figures not included in the transcript.)

  29. Improving LVQ Networks • First, as with competitive layers, a hidden neuron may occasionally be a dead neuron that never wins the competition. • Second, depending on how the initial weight vectors are arranged, a neuron's weight vector may have to travel through a region of a class that it does not represent in order to reach a region that it does represent. • The second problem can be solved by applying a modification to the Kohonen rule, known as LVQ2.

  30. LVQ2 • When the network correctly classifies an input vector, the weights of only one neuron are moved toward the input vector. • If the input vector is classified incorrectly, the weights of two neurons are updated: the weight vector of the wrong winner is moved away from the input vector, and the weight vector of the nearest neuron of the correct class is moved toward it. (A sketch follows below.)
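A hedged sketch of the LVQ2-style double update described above; alpha = 0.1 and the selection of the second neuron as the nearest prototype of the correct class are illustrative choices:

```python
import numpy as np

def lvq2_step(W1, W2, p, target_class, alpha=0.1):
    """One LVQ2-style step: on a mistake, two prototypes are updated."""
    order = np.argsort(np.linalg.norm(W1 - p, axis=1))  # prototypes, nearest first
    i_star = order[0]                                   # winning (nearest) neuron
    if np.argmax(W2[:, i_star]) == target_class:
        W1[i_star] += alpha * (p - W1[i_star])          # correct: single update
    else:
        W1[i_star] -= alpha * (p - W1[i_star])          # move wrong winner away
        for i in order[1:]:                             # find nearest prototype
            if np.argmax(W2[:, i]) == target_class:     # of the correct class
                W1[i] += alpha * (p - W1[i])            # move it toward p
                break
    return W1
```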
