IE 585


Presentation Transcript


  1. IE 585 Competitive Network – Learning Vector Quantization & Counterpropagation

  2. LVQ • Supervised version of SOM • Same architecture as SOM • Used for pattern classification • Weight vector for an output neuron is a “reference” vector for the class that the neuron represents

  3. LVQ – cont. • A set of training patterns with known classifications is provided, along with an initial distribution of reference vectors (each of which represents a known classification) • After training, LVQ classifies an input vector by assigning it to the same class as the output neuron that has its weight vector closest to the input vector.
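
A minimal sketch of this nearest-reference classification rule, in Python with NumPy; classify, ref_vectors, and ref_labels are illustrative names (the reference vectors are assumed to be stored as the rows of a matrix):

    import numpy as np

    def classify(x, ref_vectors, ref_labels):
        """Assign x to the class of the closest reference (weight) vector."""
        distances = np.linalg.norm(ref_vectors - x, axis=1)  # Euclidean distance to each reference
        return ref_labels[np.argmin(distances)]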

  4. Procedure of LVQ • Same as in SOM, except that if the winning neuron is “correct,” use the same weight update: wnew = wold + α(x − wold) • if the winning neuron is “incorrect,” use: wnew = wold − α(x − wold)
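
As a sketch, the full LVQ1 training loop could then look like the following, building on classify above; the alpha, epochs, and decay values and the per-epoch learning-rate decay are assumptions, not something the slide specifies:

    def lvq_train(X, y, ref_vectors, ref_labels, alpha=0.1, epochs=20, decay=0.95):
        """LVQ1: pull the winning reference toward correctly classified
        inputs, push it away from misclassified ones."""
        ref_vectors = ref_vectors.astype(float)
        for _ in range(epochs):
            for x, label in zip(X, y):
                winner = np.argmin(np.linalg.norm(ref_vectors - x, axis=1))
                if ref_labels[winner] == label:
                    ref_vectors[winner] += alpha * (x - ref_vectors[winner])  # wnew = wold + a*(x - wold)
                else:
                    ref_vectors[winner] -= alpha * (x - ref_vectors[winner])  # wnew = wold - a*(x - wold)
            alpha *= decay  # shrink the learning rate each epoch (an assumption)
        return ref_vectors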

  5. Ways to Initialize the Weight Vectors • Take the first m training vectors and use them as weight vectors; the remaining vectors are used for training • Assign the initial weights and classifications randomly • Determine the initial weights by k-means or a SOM
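
A sketch of the first option; init_from_training is a hypothetical helper name, and m would be the number of output neurons:

    def init_from_training(X, y, m):
        """Use the first m labeled training vectors as the initial
        references; return the remaining vectors for training."""
        return X[:m].astype(float), y[:m], X[m:], y[m:]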

  6. Example of LVQ

  7. Variations of LVQ • LVQ2 & LVQ3 [Kohonen, 1990] • Allow two vectors to learn (the winner and a runner-up) • LVQ3 applies momentum to the learning rate to prevent the reference vectors from moving away from their optimal placement
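
A rough sketch of an LVQ2.1-style step, assuming NumPy as above; the window width w = 0.3 and the name lvq2_step are illustrative, and Kohonen (1990) gives the exact formulation. Both of the two closest references learn, but only when the input lands in a window around their decision boundary and exactly one of them has the correct class:

    def lvq2_step(x, label, ref_vectors, ref_labels, alpha=0.1, w=0.3):
        """One LVQ2.1-style update of the winner and the runner-up."""
        d = np.linalg.norm(ref_vectors - x, axis=1)
        i, j = np.argsort(d)[:2]          # winner and runner-up
        s = (1 - w) / (1 + w)             # window threshold
        in_window = min(d[i] / d[j], d[j] / d[i]) > s
        one_correct = (ref_labels[i] == label) != (ref_labels[j] == label)
        if in_window and one_correct:
            good, bad = (i, j) if ref_labels[i] == label else (j, i)
            ref_vectors[good] += alpha * (x - ref_vectors[good])  # pull the correct one in
            ref_vectors[bad] -= alpha * (x - ref_vectors[bad])    # push the wrong one away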

  8. Counterpropagation • Developed by Hecht-Nielsen, 1987 • Used for compressing data, approximating functions or associating patterns • Two types – full and forward only

  9. Architecture of Forward-Only Counterpropagation [Diagram: input layer x → cluster (Kohonen) layer y via weight matrix v → output layer z via weight matrix w]

  10. Procedure of the Counterpropagation Net • Two phases of learning • Phase 1 – SOM (unsupervised learning) • Phase 2 – Grossberg outstar learning (supervised learning)
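
A compact sketch of both phases for the forward-only net, reusing the labels from the architecture slide (v: input-to-cluster weights, w: cluster-to-output weights); the cluster count, the learning rates a and b, and the initialization are illustrative choices:

    def train_counterprop(X, Y, n_clusters=10, a=0.1, b=0.1, epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        v = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)  # cluster-layer weights
        w = np.zeros((n_clusters, Y.shape[1]))                              # output-layer weights

        # Phase 1: unsupervised winner-take-all clustering of the inputs (SOM-style)
        for _ in range(epochs):
            for x in X:
                k = np.argmin(np.linalg.norm(v - x, axis=1))
                v[k] += a * (x - v[k])

        # Phase 2: supervised Grossberg outstar learning of the target outputs
        for _ in range(epochs):
            for x, t in zip(X, Y):
                k = np.argmin(np.linalg.norm(v - x, axis=1))
                w[k] += b * (t - w[k])
        return v, w

    def predict(x, v, w):
        """The net's output is the outstar weight vector of the winning cluster unit."""
        return w[np.argmin(np.linalg.norm(v - x, axis=1))]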

  11. Counterpropagation Net Example

  12. Iris Data Set A data set with 150 random samples of flowers from the iris species setosa, versicolor, and virginica. For each species there are 50 observations of sepal length, sepal width, petal length, and petal width, in cm. Fisher (1936) used this data set to introduce the linear-discriminant-function technique. http://www.stat.sc.edu/~bradley/Data.html has some cool statistical plotting routines using this data.
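
The same data set ships with scikit-learn and could be fed straight into the LVQ sketches from the earlier slides; the shuffle and the choice of m = 9 initial references are illustrative:

    import numpy as np
    from sklearn.datasets import load_iris

    iris = load_iris()
    X = iris.data    # 150 samples x 4 features (sepal/petal length and width, in cm)
    y = iris.target  # 0 = setosa, 1 = versicolor, 2 = virginica

    order = np.random.default_rng(0).permutation(len(X))  # shuffle before splitting
    X, y = X[order], y[order]

    refs, ref_labels, X_train, y_train = init_from_training(X, y, m=9)
    refs = lvq_train(X_train, y_train, refs, ref_labels)
    print(classify(X[0], refs, ref_labels))  # predicted species label for one sample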

  13. Cool Web Sites • http://gepasi.dbs.aber.ac.uk/roy/koho/kohonen.htm (nice write up with some good figures on self organizing maps) • http://odur.let.rug.nl/~kleiweg/kohonen/kohonen.html#lit (brief write up with downloadable software) • http://www.cs.may.ie/~trenaman/nnets/SOFM/index.htm (series of slides on the nets) • http://www.cis.hut.fi/research/refs/ (a bibliography of over 4000 papers using SOM)

  14. More Sites • http://www.patol.com/java/TSP/ (super cool simulation - we will cover this in the optimization lecture) • http://rfhs8012.fh-regensburg.de/~saj39122/jfroehl/diplom/e-index.html (another cool simulation like above but this time in 3D!) • http://rana.usc.edu:8376/~yuri/kohonen/kohonen.html (pretty cool simulation that you change parameters with)
