
A principal components analysis self-organizing map


Presentation Transcript


  1. A principal components analysis self-organizing map Ezequiel Lopez-Rubio, Jose Munoz-Perez, Jose Antonio Gomez-Ruiz Advisor: Dr. Hsu Student: Sheng-Hsuan Wang Department of Information Management Neural Networks 17 (2004) 261-270

  2. Outline • Motivation • Objective • The ASSOM network • The PCASOM model • Experiments • Conclusion

  3. Motivation • The adaptive subspace self-organizing map (ASSOM) is an alternative to the standard principal component analysis (PCA) algorithm. • It looks for the most relevant features of the input data. • However, its training equations are complex. • Classical PCA and the ASSOM differ in their ability to separate input classes.

  4. Objective • This paper proposes a new self-organizing neural model that performs principal components analysis. • It is like the ASSOM, but has a broader capability to represent the input distribution.

  5. The ASSOM network • The ASSOM network uses subspaces in each node rather than just single weight vectors. • The ASSOM network is trained not on single samples but on sets of slightly translated, rotated and/or scaled signal or image samples, called episodes. • Each neuron of an ASSOM network represents a subset of the input data with a vector basis, which is adapted so that the local geometry of the input data is captured.

  6. The ASSOM network (Figure: orthogonal projection of an input vector and the corresponding projection error.)

  7. The ASSOM network • Orthogonal projection • Given a vector x and an orthonormal vector basis B = {b_h | h = 1, …, K}, the vector x can be decomposed into two vectors: the orthogonal projection x̂ = Σ_{h=1}^{K} (xᵀ b_h) b_h and the projection error x̃ = x − x̂ (a sketch follows below).
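A minimal NumPy sketch of this decomposition, assuming B is stored as a matrix with orthonormal columns (the function name `decompose` is ours, for illustration):

```python
import numpy as np

def decompose(x, B):
    """Split x into its orthogonal projection on span(B) and the error.

    B is a (d, K) matrix whose columns are the orthonormal basis vectors
    b_1, ..., b_K.
    """
    x_hat = B @ (B.T @ x)   # orthogonal projection: sum_h (x . b_h) b_h
    x_err = x - x_hat       # projection error, orthogonal to span(B)
    return x_hat, x_err
```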

  8. The ASSOM network • The input vectors are grouped into episodes in order to be presented to the network. • An episode S(t) contains several time instants t_p ∈ S(t), each with an input vector x(t_p). • Episodes are sets of slightly translated, rotated or scaled samples (see the sketch below).
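As an illustration of what an episode looks like, here is a hedged sketch that builds one from slightly translated windows of a 1-D signal (translation only; rotations and scalings, which the paper also allows, are analogous):

```python
import numpy as np

def make_episode(signal, start, width, shifts=(-2, -1, 0, 1, 2)):
    """Build an episode S(t): a set of slightly translated signal windows.

    Each shift s yields one input vector x(t_p); the episode is the set
    of all of them.
    """
    return [np.asarray(signal[start + s : start + s + width]) for s in shifts]
```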

  9. The ASSOM network • Winner lookup • Learning: basis vectors rotation • Dissipation: eliminates instability • Orthonormalization: every basis is orthonormalized for good performance. (A single-step sketch follows.)
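A hedged sketch of one ASSOM learning step for a single neuron, combining the rotation, dissipation, and orthonormalization steps above. The rotation operator follows Kohonen's general form; the dissipation rule shown here is one common variant, and the constants are assumptions:

```python
import numpy as np

def assom_update_basis(B, episode, lr, h_ci, eps=1e-4):
    """One ASSOM learning step for a single neuron (sketch).

    B       : (d, K) matrix whose columns form an orthonormal basis.
    episode : iterable of input vectors x(t_p).
    lr      : learning rate lambda(t).
    h_ci    : neighborhood value of this neuron w.r.t. the winner c.
    """
    for x in episode:
        x_hat = B @ (B.T @ x)                      # projection on the subspace
        denom = np.linalg.norm(x_hat) * np.linalg.norm(x)
        if denom > 0:
            # basis vectors rotation (Kohonen's rotation operator)
            B = B + lr * h_ci * (np.outer(x, x) @ B) / denom
    # dissipation: shrink each component slightly to damp instability
    B = np.sign(B) * np.maximum(np.abs(B) - eps, 0.0)
    # orthonormalization (Gram-Schmidt via QR) for good performance
    Q, _ = np.linalg.qr(B)
    return Q
```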

  10. The ASSOM network • The objective function is the average expected spatially weighted normalized squared projection error over the episodes. • Robbins-Monro stochastic approximation is used to minimize this objective function, which leads to the basis vectors rotation rule (the general scheme is sketched below).
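For context, the general Robbins-Monro scheme iterates the parameter with a decaying step size whose sum diverges while the sum of squares converges. A generic sketch of the scheme, not the paper's specific rotation derivation:

```python
def robbins_monro(grad_sample, theta, steps=1000):
    """Generic Robbins-Monro stochastic approximation.

    grad_sample(theta) returns a noisy estimate of the objective's
    gradient. The step sizes 1/(t+1) satisfy the classical conditions:
    sum lr(t) = inf and sum lr(t)^2 < inf.
    """
    for t in range(steps):
        theta = theta - (1.0 / (t + 1)) * grad_sample(theta)
    return theta
```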

  11. The PCASOM network • Neuron weights updating • The covariance matrix is used to store the information. • The covariance matrix of an input vector x is defined as R = E[(x − E[x])(x − E[x])ᵀ], and it is estimated from M input samples (see the sketch below).
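A direct NumPy sketch of estimating the mean vector e and the covariance matrix R from M input samples (we use the population-style 1/M normalization for simplicity; the unbiased estimator divides by M − 1):

```python
import numpy as np

def mean_and_cov(X):
    """Sample mean e and covariance R from the M input samples in X's rows."""
    M = len(X)
    e = X.mean(axis=0)
    D = X - e
    R = D.T @ D / M          # R = (1/M) sum_i (x_i - e)(x_i - e)^T
    return e, R
```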

  12. The PCASOM network • The best approximation of the mean and covariance from the M samples is the sample estimate: it is an unbiased estimator with minimum variance. • If we obtain N new input samples, the stored estimates can be updated incrementally (a sketch of such a merge follows).
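One way to realize this incremental update is the standard exact merge of two sets of sample statistics. A sketch under that assumption; the paper's rule has the same flavor, with the neighborhood weighting added later:

```python
import numpy as np

def merge_estimates(e_old, R_old, M, X_new):
    """Fold N new input samples into running estimates of e and R.

    Uses the exact two-set merge (Chan-Golub-LeVeque style) with 1/M
    normalization, matching mean_and_cov above.
    """
    N = len(X_new)
    e_new = X_new.mean(axis=0)
    D = X_new - e_new
    R_new = D.T @ D / N
    T = M + N
    e = (M * e_old + N * e_new) / T
    d = e_old - e_new
    R = (M * R_old + N * R_new) / T + (M * N / T**2) * np.outer(d, d)
    return e, R, T
```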

  13. The PCASOM network

  14. The PCASOM network The outputs of the algorithm are the new approximations of the covariance matrix R and the mean vector e.

  15. The PCASOM network

  16. The PCASOM network • Competition among neurons • The neuron c that has the minimum sum of projection errors is the winner, where Orth(x, B) is the orthogonal projection of vector x on basis B (see the sketch below).
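A sketch of the competition step, assuming each unit i stores a mean vector e_i and a basis B_i of leading eigenvectors of R_i, and that the projection error is measured on the centered input x − e_i:

```python
import numpy as np

def projection_error(x, e_i, B_i):
    """Norm of the part of (x - e_i) not captured by Orth(., B_i)."""
    z = x - e_i
    return np.linalg.norm(z - B_i @ (B_i.T @ z))

def winner(x, units):
    """Return the index c of the unit with minimum projection error.

    units is a list of (e_i, B_i) pairs.
    """
    return min(range(len(units)), key=lambda i: projection_error(x, *units[i]))
```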

  17. The PCASOM network • Network topology • A neighborhood function determines how strongly each unit i updates its vector e_i and its matrix R_i (a common Gaussian choice is sketched below).
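A Gaussian neighborhood function is the usual choice in SOM-type models; a minimal sketch, without claiming it is the paper's exact form or schedule:

```python
import numpy as np

def neighborhood(pos_i, pos_c, sigma):
    """Gaussian neighborhood strength between unit i and the winner c,
    based on their squared distance on the map lattice."""
    d2 = np.sum((np.asarray(pos_i, float) - np.asarray(pos_c, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```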

  18. The PCASOM network • Summary • For every unit i, obtain the initial covariance matrix R_i(0). • For every unit i, build the vector e_i(0) by using small random values. • At time instant t, select the input vector x(t) and compute the winning neuron c. • For every unit i, update the vector e_i and the matrix R_i. • Repeat until the convergence condition is met. (An end-to-end sketch follows.)
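Putting the pieces together, a hedged end-to-end sketch of this summary, reusing `mean_and_cov`, `merge_estimates`, `neighborhood`, and `winner` from the earlier sketches. The 1-D map topology, the neighborhood gating, and all constants are our assumptions, and a proper convergence test is omitted:

```python
import numpy as np

def train_pcasom(X, n_units=5, K=2, epochs=10, sigma=1.0, seed=0):
    """Simplified PCASOM training loop (sketch, not the paper's exact rule).

    Each unit keeps a mean e_i, covariance R_i, and sample count M_i; its
    basis B_i consists of the K leading eigenvectors of R_i.
    """
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    e = [rng.normal(scale=1e-3, size=d) for _ in range(n_units)]  # small random e_i(0)
    R = [np.eye(d) for _ in range(n_units)]                       # initial R_i(0)
    M = [1] * n_units
    for _ in range(epochs):                   # stand-in for a convergence test
        for x in X:
            # bases from current covariances (eigh sorts eigenvalues ascending)
            B = [np.linalg.eigh(Ri)[1][:, -K:] for Ri in R]
            c = winner(x, list(zip(e, B)))    # step 3: winning neuron
            for i in range(n_units):
                # step 4: update e_i and R_i for units near the winner;
                # we gate by the neighborhood value instead of weighting
                if neighborhood(i, c, sigma) > 0.1:
                    e[i], R[i], M[i] = merge_estimates(e[i], R[i], M[i], x[None, :])
    return e, R
```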

  19. Comparison with ASSOM • Solidly rooted in statistics. • Its update equations are more stable: they are matrix sums rather than rotations. • It does not need episodes. • It has a wider capability to represent the input distribution.

  20. Drawback of the classical PCA

  21. Experiments • Convergence speed experiment • The relative error for an input vector x is the norm of the projection error for the best matching unit (BMU) divided by the norm of the input vector (see the sketch below).
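A sketch of this relative error measurement, assuming the BMU's projection error is computed on the centered input as in the competition step:

```python
import numpy as np

def relative_error(x, e_c, B_c):
    """||projection error of x for the BMU|| / ||x||."""
    z = x - e_c
    err = z - B_c @ (B_c.T @ z)
    return np.linalg.norm(err) / np.linalg.norm(x)
```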

  22. Experiments • Separation capability experiment

  23. Experiments • UCI benchmark databases experiment

  24. Experiments • UCI benchmark databases experiment

  25. Conclusions • A new self-organizing network that performs PCA • Related to the ASSOM • Its training equations are much simpler • Its input representation capability is broader • Experiments show that the new model has better performance than the ASSOM network.

  26. Personal opinion • Valuable idea • SOM based on PCA • Contribution • Input data • Cluster shape • Performance • Drawback • Hard to implement.
