
Discussion Ying Nian Wu UCLA Department of Statistics JSM 2011



Presentation Transcript


  1. Discussion. Ying Nian Wu, UCLA Department of Statistics, JSM 2011. Population value decomposition.

  2. Latent variable models: hidden variables and observed data. Learning: estimating parameters from examples. Inference: recovering the hidden variables.

  3. Latent variable models: mixture model, factor analysis.
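The learning/inference split on the previous slide can be made concrete with a mixture model. Below is a minimal sketch (not from the slides) of EM for a two-component 1-D Gaussian mixture: the E-step is inference over the hidden component labels, the M-step is learning of the parameters. Function name, initialization scheme, and iteration count are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch).

    Initialization from the data extremes is an assumption for simplicity.
    """
    mu = np.array([x.min(), x.max()])          # component means
    sigma = np.array([x.std(), x.std()])       # component std devs
    pi = np.array([0.5, 0.5])                  # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        # (inference over the hidden labels Z)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters given responsibilities (learning)
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi
```

Factor analysis admits an analogous EM scheme, with continuous Gaussian hidden factors in place of discrete labels.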

  4. Computational neural science. Hidden and observed: Z, internal representation by neurons; Y, sensory data from the outside environment; connection weights. Hierarchical extension: modeling Z by another layer of hidden variables explaining Y instead of Z. Inference / explaining away.

  5. Visual cortex: layered hierarchical architecture, bottom-up/top-down. V1: primary visual cortex, simple cells and complex cells. Source: Scientific American, 1999.

  6. Independent Component Analysis. Bell and Sejnowski, 1996. Laplacian/Cauchy.

  7. Hyvärinen, 2000.

  8. Sparse coding. Olshausen and Field, 1996. Laplacian/Cauchy/mixture of Gaussians.

  9. Sparse coding / variable selection. Inference: sparsification, non-linear; lasso / basis pursuit / matching pursuit; mode and uncertainty of p(C|I); explaining away, lateral inhibition. Learning: a dictionary of representational elements (regressors).
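Of the inference schemes the slide lists, matching pursuit is the simplest to sketch. The greedy loop below (an illustrative implementation, not the authors' code) shows the explaining-away step directly: once an element is selected, its contribution is subtracted from the residual, suppressing correlated elements. The dictionary is assumed to have unit-norm columns.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit sketch: select n_atoms dictionary columns.

    dictionary: 2-D array whose columns are unit-norm representational elements.
    """
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        # pick the element most correlated with the current residual
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        # explaining away: remove the selected element's contribution
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual
```

Lasso/basis pursuit replace this greedy selection with a convex L1-penalized objective, but the sparse-selection outcome is analogous.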

  10. Olshausen and Field, 1996

  11. Restricted Boltzmann Machine. Hinton, Osindero and Teh, 2006. Hidden (binary), visible. P(H|V): factorized, no explaining away. P(V|H).
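The factorized posterior is the key contrast with sparse coding: in an RBM, P(H|V) splits into independent Bernoulli terms, one per hidden unit, so inference needs no explaining away. A minimal sketch (parameter names W and b are assumptions, not from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_posterior(v, W, b_hidden):
    """P(H|V) in an RBM: factorized over hidden units (sketch).

    Each hidden unit's activation probability depends only on the visible
    vector v, never on the other hidden units, so inference is a single
    elementwise sigmoid rather than an iterative explaining-away loop.
    """
    return sigmoid(v @ W + b_hidden)   # vector of independent Bernoulli probs
```

P(V|H) has the same factorized form with the transposed weights, which is what makes block Gibbs sampling between the two layers cheap.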

  12. Visual cortex: layered hierarchical architecture, bottom-up/top-down. What is beyond V1? Hierarchical model? Source: Scientific American, 1999.

  13. Hierarchical RBM. Hinton, Osindero and Teh, 2006. V'. Unfolding, untying, re-learning. H, I, V. P(H); P(V', H); P(V, H) = P(H) P(V|H). Discriminative correction by back-propagation.

  14. Hierarchical sparse coding. Attributed sparse coding elements: transformation group, topological neighborhood system. Layer above: further coding of the attributes of the selected sparse coding elements.

  15. Active basis model. Wu, Si, Gong, Zhu, 2010; Zhu, Guo, Wang, Xu, 2005. n-stroke template, n = 40 to 60, box = 100×100.

  16. Learning and inference. Finding n strokes to sketch M images simultaneously, n = 60, M = 9. Scan over multiple resolutions.

  17. Scan over multiple resolutions and orientations (rotating template)
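At the core of this scanning step is sliding a learned template over the image and scoring each location. The sketch below shows the single-resolution, single-orientation case with a plain correlation score (an illustrative stand-in; the active basis model scores with its log-likelihood). Scanning over resolutions and orientations repeats this loop over rescaled images and rotated templates.

```python
import numpy as np

def scan_template(image, template):
    """Slide a template over a grayscale 2-D image; return the best location.

    Correlation score is an assumption for illustration; the active basis
    model uses a template log-likelihood score instead.
    """
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = image[i:i + h, j:j + w]
            score = np.sum(patch * template)   # correlation with the template
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```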

  18. Learning active basis models from non-aligned images. EM-type maximum likelihood learning, initialized by single-image learning.

  19. Learning active basis models from non-aligned images.

  20. Learning active basis models from non-aligned images.

  21. Hierarchical active basis. High log-likelihood vs. low log-likelihood.

  22. Model-based clustering. MNIST, 500 total.
