Lightseminar (Learned Representation in AI): An Introduction to Locally Linear Embedding

Presentation Transcript


  1. Lightseminar (Learned Representation in AI): An Introduction to Locally Linear Embedding. Lawrence K. Saul and Sam T. Roweis. Presented by Chan-Su Lee.

  2. Introduction • Preprocessing • Obtain a more useful representation of the information • Useful for subsequent operations such as classification and pattern recognition • Unsupervised learning • Automatic methods to recover hidden structure • Density estimation • Dimensionality reduction

  3. PCA & MDS • Principle Component Analysis:PCA • Linear projections of greatest variance from the top eigenvectors of the data covariance matrix • Multidimensional Scaling:MDS • Low dimensional embedding that best preserves pairwise distances between data points • Modeling of linear variabilities in high dimensional data • No local minima • Find linear subspace and cannot deal properly with data lying on nonlinear manifolds

  4. Problems with PCA and MDS • Distort the local and global geometry of a nonlinear manifold • Can map faraway data points to nearby points in the embedding

  5. LLE (Locally Linear Embedding) • Property • Preserves the local configurations of nearest neighbors • LLE • Local: only neighbors contribute to each reconstruction • Linear: reconstructions are confined to linear subspaces • Assumption • Data are well sampled, so each point and its neighbors lie on a locally linear patch of the manifold • A d-dimensional manifold needs roughly 2d neighbors per point

  6. LLE Algorithm • Step 1: Neighborhood search • Compute the neighbors of each data point • K nearest neighbors per data point
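
A brute-force NumPy sketch of this step (illustrative; practical implementations often use a k-d tree or ball tree rather than the full O(DN²) distance matrix):

```python
import numpy as np

def nearest_neighbors(X, K):
    """Return the indices of the K nearest neighbors of each row of X."""
    sq = np.sum(X**2, axis=1)
    # Squared Euclidean distances between all pairs of points
    d2 = sq[:, None] + sq[None, :] - 2 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)            # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :K]    # N x K matrix of neighbor indices
```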

  7. LLE Algorithm • Step 2: Constrained least-squares fits • Reconstruction error: $\varepsilon(W) = \sum_i \left| \vec{X}_i - \sum_j W_{ij} \vec{X}_j \right|^2$ • Constraints: $W_{ij} = 0$ if $\vec{X}_j$ is not a neighbor of $\vec{X}_i$, and $\sum_j W_{ij} = 1$ -> the optimal weights are invariant to rotations, rescalings, and translations of that point and its neighbors
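
A sketch of the constrained least-squares fit, following the recipe published by Saul and Roweis (the regularization constant `reg` is an assumption, needed when K > D and the local Gram matrix is singular):

```python
import numpy as np

def reconstruction_weights(X, neighbors, reg=1e-3):
    """Solve Step 2: find W minimizing sum_i |X_i - sum_j W_ij X_j|^2
    subject to W_ij = 0 for non-neighbors and sum_j W_ij = 1.
    """
    N, K = neighbors.shape
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[neighbors[i]] - X[i]          # shift the i-th point to the origin
        C = Z @ Z.T                         # K x K local Gram matrix
        C += reg * np.trace(C) * np.eye(K)  # regularize in case C is singular
        w = np.linalg.solve(C, np.ones(K))  # solve C w = 1
        W[i, neighbors[i]] = w / w.sum()    # enforce the sum-to-one constraint
    return W
```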

  8. LLE Algorithm • Step 3: Eigenvalue problem • Embedding cost: $\Phi(Y) = \sum_i \left| \vec{Y}_i - \sum_j W_{ij} \vec{Y}_j \right|^2$ with the weights $W_{ij}$ held fixed • Constraint $\sum_i \vec{Y}_i = 0$ -> embedding centered at the origin • Constraint $\frac{1}{N} \sum_i \vec{Y}_i \vec{Y}_i^{\top} = I$ -> avoids degenerate solutions • Solution: the bottom $d+1$ eigenvectors of $M = (I - W)^{\top}(I - W)$, discarding the constant one
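
A sketch of this eigenvalue problem with a dense solver (illustrative; a real implementation would exploit the sparsity of W, e.g. with scipy.sparse.linalg.eigsh):

```python
import numpy as np

def embed(W, d):
    """Solve Step 3: minimize sum_i |Y_i - sum_j W_ij Y_j|^2 under the
    centering and unit-covariance constraints.

    The minimizers are the bottom d+1 eigenvectors of M = (I-W)^T (I-W);
    the very bottom (constant) eigenvector is discarded, which centers
    the embedding at the origin.
    """
    N = W.shape[0]
    I = np.eye(N)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    return eigvecs[:, 1:d + 1]              # N x d embedding coordinates
```

Chaining the three sketches, `embed(reconstruction_weights(X, nearest_neighbors(X, K)), d)` maps the N x D inputs to N x d outputs.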

  9. Face images map to the corners of their two-dimensional embedding • Embedding by LLE

  10. Examples

  11. LLE advantages • Can discover nonlinear manifolds of arbitrary dimension • Non-iterative • Globally optimal • Few parameters: K, d • O(DN²) time and space efficient, thanks to sparse matrices

  12. LLE disadvantages • Requires a smooth, non-closed, densely sampled manifold • Quality of the manifold characterization depends on the choice of neighborhood • Cannot estimate the intrinsic dimensionality d • Sensitive to outliers

  13. Comparisons: PCA vs. LLE vs. Isomap • PCA: finds the linear subspace projection vectors that best fit the data • LLE: finds embedding coordinate vectors that best preserve local neighborhood relationships • Isomap: finds embedding coordinate vectors that preserve geodesic (shortest-path) distances
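
To see the contrast in practice, all three methods ship with scikit-learn; a short sketch on the classic swiss-roll data set (parameter values are illustrative):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # N x 3 nonlinear manifold

Y_pca = PCA(n_components=2).fit_transform(X)
Y_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
Y_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
# PCA flattens the roll onto a plane; LLE and Isomap unroll it.
```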
