
Generalized Principal Component Analysis (GPCA)



Presentation Transcript


  1. Generalized Principal Component Analysis (GPCA) René Vidal, Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University

  2. Principal Component Analysis (PCA) • Applications: data compression, regression, image analysis (eigenfaces), pattern recognition • Identify a linear subspace S, and a basis for S, from sample points
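A minimal numerical sketch of this step (sizes and names are illustrative, not from the slides): for a linear subspace through the origin, the PCA basis is given by the top left singular vectors of the data matrix.

```python
import numpy as np

# Illustrative sketch: recover a 2-D linear subspace S of R^5 from noisy
# samples. Since S passes through the origin, no mean subtraction is needed.
rng = np.random.default_rng(0)
basis_true = np.linalg.qr(rng.standard_normal((5, 2)))[0]  # orthonormal basis of S
X = basis_true @ rng.standard_normal((2, 200))             # 200 samples on S
X += 0.01 * rng.standard_normal(X.shape)                   # small Gaussian noise

U, s, _ = np.linalg.svd(X, full_matrices=False)
basis_est = U[:, :2]  # PCA basis for S: top-2 left singular vectors

# Cosines of the principal angles between the true and estimated subspaces
# should be close to 1 if the recovery succeeded.
overlap = np.linalg.svd(basis_est.T @ basis_true, compute_uv=False)
print(np.allclose(overlap, 1.0, atol=1e-2))  # → True
```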

  3. Extensions of PCA • Probabilistic PCA (Tipping-Bishop ’99) • Identify a subspace from noisy data • Gaussian noise: standard PCA • Noise in the exponential family (Collins et al. ’01) • Nonlinear PCA (Schölkopf-Smola-Müller ’98) • Identify a nonlinear manifold from sample points • Embed the data in a higher-dimensional space and apply standard PCA • What embedding should be used? • Mixtures of PCA (Tipping-Bishop ’99) • Identify a collection of subspaces from sample points • Generalized PCA (GPCA)

  4. Some possible applications of GPCA • Geometry • Vanishing points • Segmentation • Intensity (black-white) • Texture • Motion (2-D, 3-D) • Scene (host-guest) • Recognition • Faces (Eigenfaces) • Man - Woman • Dynamic Textures • Water-steam • Image/data compression

  5. Generalized Principal Component Analysis • Given points on multiple subspaces, identify • The number of subspaces and their dimensions • A basis for each subspace • The segmentation of the data points • “Chicken-and-egg” problem • Given segmentation, estimate subspaces • Given subspaces, segment the data • Prior work • Iterative algorithms: K-subspace (Ho et al. ’03), RANSAC, subspace selection and growing (Leonardis et al. ’02) • Probabilistic approaches learn the parameters of a mixture model using e.g. EM (Tipping-Bishop ‘99): • Initialization? • Geometric approaches: 2 planes in R3 (Shizawa-Maze ’91) • Factorization approaches: independent, orthogonal subspaces of equal dimension (Boult-Brown ‘91, Costeira-Kanade ‘98, Kanatani ’01)

  6. Our approach to segmentation (GPCA) • We consider the most general case • Subspaces of unknown and possibly different dimensions • Subspaces may intersect arbitrarily (not only at the origin) • Subspaces do not need to be orthogonal • Multiple subspaces can be represented with multiple homogeneous polynomials • Number of subspaces = degree of a polynomial • Subspace basis = derivatives of a polynomial • Subspace clustering is algebraically equivalent to • Polynomial fitting • Polynomial differentiation • Polynomial division

  7. Representing one subspace • One plane: {x : b^T x = 0} • One line: {x : b1^T x = 0 and b2^T x = 0} • One subspace can be represented with • A set of linear equations • A set of polynomials of degree 1

  8. Representing n subspaces • Two planes: (b1^T x)(b2^T x) = 0 • One plane and one line • Plane: b1^T x = 0 • Line: b2^T x = 0 and b3^T x = 0 • A union of n subspaces can be represented with a set of homogeneous polynomials of degree n (De Morgan’s rule: a point lies in the union iff a product of the linear forms vanishes)
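The De Morgan construction above can be checked numerically; the normals b1 and b2 below are arbitrary illustrative choices, not values from the slides.

```python
import numpy as np

# Arbitrary illustrative normals for two planes in R^3.
b1 = np.array([1.0, 0.0, 0.0])   # plane S1: x1 = 0
b2 = np.array([0.0, 1.0, -1.0])  # plane S2: x2 = x3

# Degree-2 homogeneous polynomial vanishing exactly on S1 ∪ S2:
# x in S1 or x in S2  <=>  (b1·x)(b2·x) = 0
p = lambda x: (b1 @ x) * (b2 @ x)

x_on_S1 = np.array([0.0, 7.0, 3.0])  # b1·x = 0
x_on_S2 = np.array([5.0, 2.0, 2.0])  # b2·x = 0
x_off   = np.array([1.0, 1.0, 0.0])  # on neither plane

print(p(x_on_S1), p(x_on_S2), p(x_off))  # → 0.0 0.0 1.0
```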

  9. Fitting polynomials to data points (Veronese map) • The polynomials can be written linearly in terms of the vector of coefficients c by using the polynomial (Veronese) embedding • The coefficients can then be computed from the nullspace of the embedded data matrix • Solve the resulting homogeneous linear system using least squares (SVD) • N = #data points
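A hedged sketch of this fitting step for two lines in R^2; the `veronese` helper is an illustrative implementation of the degree-n embedding, not the authors' code.

```python
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical helper: the degree-n Veronese map, i.e. all monomials of
# degree n in the coordinates of each data point (one row per point).
def veronese(X, n):
    idx = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, list(i)], axis=1) for i in idx])

# N = 40 points on the union of the two lines x1 = 0 and x2 = 0 in R^2
t = np.linspace(1, 4, 20)
X = np.vstack([np.column_stack([t, 0 * t]),   # points on the line x2 = 0
               np.column_stack([0 * t, t])])  # points on the line x1 = 0

Ln = veronese(X, n=2)            # columns: x1^2, x1*x2, x2^2
_, _, Vt = np.linalg.svd(Ln)
c = Vt[-1]                       # least-squares nullspace vector = coefficients

# Up to scale and sign, c should be (0, 1, 0): the vanishing polynomial
# of degree n = 2 is p(x) = x1 * x2.
print(np.round(np.abs(c) / np.abs(c).max(), 6))  # → [0. 1. 0.]
```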

  10. Minimum number of points / dealing with high-dimensional data • K = dimension of the ambient space, n = number of subspaces • In practice the dimension of each subspace ki is much smaller than K • The number and dimensions of the subspaces are preserved by a generic linear projection onto a lower-dimensional subspace • Can remove outliers by robustly fitting the subspace • Open problem: how to choose the projection? PCA?

  11. Finding a basis for each subspace: Polynomial Factorization Algorithm (PFA) [CVPR 2003] • Case of hyperplanes: • There is only one polynomial • The basis vectors are the normal vectors • Find the roots of a polynomial of degree n in one variable • Solve linear systems in n variables • Solution obtained in closed form for n ≤ 4 • Problems • Computing roots may be sensitive to noise • The estimated polynomial may not factor perfectly with noisy data • Cannot be applied to subspaces of different dimensions • Polynomials are estimated up to a change of basis, hence they may not factor, even with perfect data

  12. Finding a basis for each subspace • Theorem (GPCA by differentiation): the derivative of a vanishing polynomial, evaluated at a point on one of the subspaces, gives a vector normal to that subspace • To learn a mixture of subspaces we just need one positive example per class

  13. Choosing points by polynomial division • With noise and outliers • The polynomials may not define a perfect union of subspaces • Normals can be estimated correctly by choosing points optimally • How to compute the distance to the closest subspace without knowing the segmentation?

  14. GPCA: subspaces of different dimensions • There are multiple polynomials fitting the data • The derivative of each polynomial gives a different normal vector • Can obtain a basis for the subspace by applying PCA to the normal vectors
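The differentiation idea can be illustrated on the two-plane example (normals again arbitrary illustrative choices): the gradient of the vanishing polynomial at a sample point recovers that point's subspace normal without any factorization.

```python
import numpy as np

# Illustrative example: for the degree-2 polynomial p(x) = (b1·x)(b2·x)
# vanishing on two planes, the gradient is grad p(x) = (b2·x) b1 + (b1·x) b2.
# At a point y on plane 1 (b1·y = 0) it reduces to (b2·y) b1, which is
# proportional to that plane's normal. No root-finding or factoring needed.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, -1.0])
grad_p = lambda x: (b2 @ x) * b1 + (b1 @ x) * b2

y = np.array([0.0, 5.0, 2.0])  # lies on plane 1
normal = grad_p(y)
normal /= np.linalg.norm(normal)
print(normal)  # → [1. 0. 0.], i.e. the unit normal b1
```

For subspaces of higher codimension, the gradients of several fitted polynomials at the same point span the normal space, which is why PCA on the stacked normal vectors yields a basis.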

  15. GPCA: Algorithm summary • Apply polynomial embedding to the projected data • Obtain the multiple-subspace model by polynomial fitting • Solve the linear system on the embedded data to obtain the polynomial coefficients • Need to know the number of subspaces • Obtain bases & dimensions by polynomial differentiation • Optimally choose one point per subspace using the distance to the subspaces (polynomial division)
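The summary above can be run end-to-end on a toy problem of n = 2 lines through the origin in R^2. Everything below (data, threshold, variable names) is an illustrative assumption, not the authors' implementation; the segmentation step simply groups points whose gradients are (anti)parallel.

```python
import numpy as np

# End-to-end toy run of the algorithm summary for n = 2 lines in R^2.
rng = np.random.default_rng(1)
t = rng.uniform(1, 3, 25)
s = rng.uniform(1, 3, 25)
X = np.vstack([np.column_stack([t, 0 * t]),  # line 1: x2 = 0
               np.column_stack([s, s])])     # line 2: x2 = x1
X += 0.001 * rng.standard_normal(X.shape)    # mild noise

# (1)-(2) polynomial embedding + fitting: p(x) = c0 x1^2 + c1 x1 x2 + c2 x2^2
Ln = np.column_stack([X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
_, _, Vt = np.linalg.svd(Ln)
c = Vt[-1]

# (3) polynomial differentiation: gradient of p at each sample point
G = np.column_stack([2 * c[0] * X[:, 0] + c[1] * X[:, 1],
                     c[1] * X[:, 0] + 2 * c[2] * X[:, 1]])
G /= np.linalg.norm(G, axis=1, keepdims=True)

# (4) segmentation: points on the same line have (anti)parallel normals.
# The 0.9 threshold separates cos(0) = 1 from cos(45 deg) ~ 0.71 here.
ref = G[0]
labels = (np.abs(G @ ref) > 0.9).astype(int)
print(labels[:25].sum(), labels[25:].sum())  # → 25 0
```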

  16. Face clustering under varying illumination • Yale Face Database B: n = 3 faces, N = 64 views per face, K = 30x40 pixels per face • Apply PCA to obtain 3 principal components • Apply GPCA in homogeneous coordinates: n = 3 subspaces of dimensions 3, 2 and 2

  17. News video segmentation • Segmenting N = 30 frames of a sequence containing n = 3 scenes: host, guest, both • Image intensities are the output of a linear dynamical system: x_{t+1} = A x_t + v_t (dynamics), y_t = C x_t + w_t (images/appearance) • Apply GPCA to fit n = 3 observability subspaces

  18. News video segmentation • Segmenting N = 60 frames of a sequence containing n = 3 scenes: burning wheel, burnt car with people, burning car • Image intensities are the output of a linear dynamical system: x_{t+1} = A x_t + v_t (dynamics), y_t = C x_t + w_t (images/appearance) • Apply GPCA to fit n = 3 observability subspaces

  19. 3-D motion segmentation problem • Given a set of point correspondences in multiple views, determine • Number of motion models • Motion models: affine, projective, fundamental matrices, etc. • Segmentation: model to which each pixel belongs • Mathematics of the problem depends on • Number of frames (2, 3, multiple) • Projection model (affine, perspective) • Motion model (affine, translational, planar motion, rigid motion) • 3-D structure (planar or not)

  20. Single-body factorization • Affine camera model: structure = 3-D surface, motion = camera position and orientation • p = point index, f = frame index, P = #points, F = #frames • The motion of one rigid body lives in a 4-D subspace (Boult and Brown ’91, Tomasi and Kanade ’92)
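The 4-D subspace fact can be verified numerically. Under the affine camera model, the 2F x P matrix W of stacked image coordinates factors as W = M S, with motion M (2F x 4) and homogeneous structure S (4 x P), so rank(W) ≤ 4; the sizes below are illustrative.

```python
import numpy as np

# Illustrative sketch: affine-camera measurements of one rigid body have
# rank at most 4, because W = M S with M (2F x 4) and S (4 x P).
rng = np.random.default_rng(2)
P, F = 50, 10                                # P = #points, F = #frames
S = np.vstack([rng.standard_normal((3, P)),  # 3-D points
               np.ones((1, P))])             # homogeneous row
M = rng.standard_normal((2 * F, 4))          # stacked 2x4 affine cameras
W = M @ S                                    # 2F x P image measurements

print(np.linalg.matrix_rank(W))  # → 4
```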

  21. Multiframe 3-D motion segmentation • Results: percentage of correct classification on sequences A, B and C

  22. Segmentation of dynamic textures • Dynamic textures: output of a linear dynamical model x_{t+1} = A x_t + v_t (dynamics), y_t = C x_t + w_t (images/appearance) (Soatto et al. 2001) • Dynamic texture segmentation using level-sets (Doretto et al. 2003)

  23. Segmentation of dynamic textures • One dynamic texture lives in the observability subspace • Multiple textures live in multiple subspaces • Apply PCA to image intensities • Apply GPCA to projected data
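A sketch of why one texture occupies one subspace (all sizes and matrices below are made-up assumptions): for the noise-free model x_{t+1} = A x_t, y_t = C x_t, a stacked output sequence lies in the column span of the extended observability matrix O = [C; CA; ...; CA^{T-1}], whose rank is at most the state dimension.

```python
import numpy as np

# Illustrative sketch: one dynamic texture lives in the observability subspace.
rng = np.random.default_rng(3)
n_state, n_pix, T = 3, 20, 6
A = 0.9 * np.linalg.qr(rng.standard_normal((n_state, n_state)))[0]  # stable dynamics
C = rng.standard_normal((n_pix, n_state))                           # appearance

# Extended observability matrix O = [C; CA; ...; CA^{T-1}]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(T)])

# Simulate one stacked output sequence and verify it lies in span(O):
# the least-squares residual of projecting it onto O is numerically zero.
x0 = rng.standard_normal(n_state)
y = np.concatenate([C @ np.linalg.matrix_power(A, k) @ x0 for k in range(T)])
resid = np.linalg.lstsq(O, y, rcond=None)[1][0]

print(O.shape, np.linalg.matrix_rank(O))  # → (120, 3) 3
```

Multiple textures then give multiple such subspaces, which is exactly the GPCA setting after projecting the intensities with PCA.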

  24. Optical flow of moving dynamic textures • Time-varying model: segmentation of multiple moving subspaces • Apply PCA to image intensities in a moving time window • Apply GPCA to projected data

  25. Optical flow of moving dynamic textures • Static textures: optical flow is computed from the brightness constancy constraint (BCC) • Dynamic textures: optical flow is computed from the dynamic texture constancy constraint (DTCC)

  26. Conclusions and ongoing work • Algebraic/geometric approach to simultaneous model estimation and data segmentation for • Mixtures of subspaces: linear constraints • Mixtures of surfaces: bilinear/trilinear constraints • Solution based on • Fitting and differentiating polynomials • Solution is in closed form if the number of subspaces n ≤ 4 • Applications • Computer vision: representation, segmentation, recognition • Hybrid systems: identification • A geometric/statistical theory of segmentation? • Model selection + robust statistics + GPCA (CVPR ’04) • Connections with spectral clustering and kernel PCA

  27. Thanks to collaborators • University of Illinois: Yi Ma, Kun Huang • Johns Hopkins University: Jacopo Piazzi • Australian National University: Richard Hartley • UCLA: Stefano Soatto • UC Berkeley: Shankar Sastry

  28. Texture-based image segmentation: GPCA vs. human

  29. Texture-based image segmentation

  30. Computation time: GPCA, K-means and EM
