
Feature Generation: Linear Transforms



Presentation Transcript


  1. Feature Generation: Linear Transforms. By Zhang Hongxin, State Key Lab of CAD&CG, 2004-03-24

  2. Outline • Introduction • PCA and SVD • ICA • Other transforms

  3. Introduction • Goal: choose suitable transforms so as to obtain high “information packing”. • Raw data -> meaningful features. • Unsupervised/automatic methods. • Exploit and remove information redundancies via transforms.

  4. Basis Vectors and Images • Input samples: vectors x of dimension N. • Unitary NxN matrix A and transformed vector y = A^H x. • Basis vector representation: x = A y = Σ_i y(i) a_i, where the a_i are the columns of A.
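A minimal numerical sketch of this unitary-transform picture (the matrix A below is an arbitrary random unitary matrix built for illustration, not one prescribed by the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# Build an arbitrary unitary N x N matrix A via QR decomposition of a random complex matrix.
A, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # input sample
y = A.conj().T @ x                                          # forward transform: y = A^H x

# Basis-vector representation: x = A y = sum_i y(i) a_i, a_i = columns of A
x_rec = sum(y[i] * A[:, i] for i in range(N))
print(np.allclose(x_rec, x))   # True: the unitary transform is perfectly invertible
```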

  5. Basis Vectors and Images (cont’) • When X is an NxN image, treating it as a vector of length N^2 makes A an N^2 x N^2 matrix, far too large to work with. • An alternative possibility: let U and V be two unitary matrices, and Y = U^H X V. • U and V can then be chosen so that Y is diagonal.

  6. The Karhunen-Loeve Transform • Goal: to generate features that are optimally uncorrelated, that is, E[y(i)y(j)] = 0 for i ≠ j. • Correlation matrix: R_x = E[x x^T]. • R_x is symmetric, so A is chosen so that its columns are the orthonormal eigenvectors of R_x; the transform y = A^T x then has a diagonal correlation matrix.
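A small check of the decorrelation property, using synthetic zero-mean data so that the correlation matrix E[x x^T] can be estimated directly from samples (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_samples = 4, 10000

# Correlated zero-mean data: x = M s with independent entries of s
M = rng.standard_normal((N, N))
X = M @ rng.standard_normal((N, n_samples))

Rx = X @ X.T / n_samples                 # sample correlation matrix R_x = E[x x^T]
eigvals, A = np.linalg.eigh(Rx)          # columns of A: orthonormal eigenvectors of R_x

Y = A.T @ X                              # KL transform: y = A^T x
Ry = Y @ Y.T / n_samples
print(np.round(Ry, 3))                   # diagonal: E[y(i)y(j)] = 0 for i != j
```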

  7. Properties of the KL transform • Mean square error approximation: keep only m of the N components, x_hat = Σ_{i=0..m-1} y(i) a_i. • Error estimation: E[||x - x_hat||^2] = Σ_{i=m..N-1} λ_i, the sum of the discarded eigenvalues; x_hat is an approximation, not an exact representation.
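The error estimate can be verified numerically; the sketch below (synthetic data, eigenvalues sorted in descending order) compares the empirical mean square reconstruction error with the sum of the discarded eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, n_samples = 6, 3, 50000

M = rng.standard_normal((N, N))
X = M @ rng.standard_normal((N, n_samples))          # zero-mean correlated samples

Rx = X @ X.T / n_samples
eigvals, A = np.linalg.eigh(Rx)
idx = np.argsort(eigvals)[::-1]                      # eigenvalues in descending order
eigvals, A = eigvals[idx], A[:, idx]

X_hat = A[:, :m] @ (A[:, :m].T @ X)                  # keep only the m leading components
mse = np.mean(np.sum((X - X_hat) ** 2, axis=0))
print(mse, eigvals[m:].sum())                        # the two numbers agree closely
```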

  8. Principal Component Analysis • Choose the eigenvectors corresponding to the m largest eigenvalues of the correlation matrix to obtain the minimal error. • This is also the minimum MSE, compared with any other approximation of x by an m-dimensional vector. • A different form: compute A from the eigen-decomposition of the covariance matrix (i.e., after subtracting the mean) instead of the correlation matrix.
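A compact PCA sketch in the covariance form mentioned in the last bullet: the mean is subtracted first, and the fraction of total variance retained by the m leading components is reported (the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, n_samples = 5, 2, 5000

# Data with a nonzero mean, so the covariance form matters
X = rng.standard_normal((N, N)) @ rng.standard_normal((N, n_samples)) + 10.0

mu = X.mean(axis=1, keepdims=True)
C = (X - mu) @ (X - mu).T / n_samples        # covariance matrix instead of correlation matrix

eigvals, A = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, A = eigvals[order], A[:, order]

Y = A[:, :m].T @ (X - mu)                    # the m principal components
explained = eigvals[:m].sum() / eigvals.sum()
print(f"fraction of total variance kept by {m} components: {explained:.3f}")
```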

  9. Remarks on PCA • Total variance: Σ_i E[y(i)^2] = Σ_i λ_i. • Among all possible sets of m features obtained via any orthogonal linear transformation on x, the KL features have the largest sum of variances. • Entropy: for a zero-mean Gaussian vector, H_y = (1/2) ln|R_y| + (N/2) ln(2π) + N/2.
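The "largest sum of variances" claim can be illustrated by comparing the variance captured by the m leading eigenvectors with that captured by random m-dimensional orthonormal projections (a sketch on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, n_samples = 8, 3, 20000

X = rng.standard_normal((N, N)) @ rng.standard_normal((N, n_samples))
X -= X.mean(axis=1, keepdims=True)
C = X @ X.T / n_samples

eigvals = np.linalg.eigvalsh(C)
pca_var = np.sort(eigvals)[::-1][:m].sum()        # variance kept by the m leading eigenvectors

# Variance kept by 100 random m-dimensional orthonormal projections
rand_vars = []
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((N, m)))
    rand_vars.append(np.trace(Q.T @ C @ Q))
print(pca_var, max(rand_vars))                    # PCA variance >= any random projection's
```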

  10. Geometric interpretation • If the data points form an ellipsoidally shaped cloud, • the eigenvectors are the principal axes of this hyper-ellipsoid, • and the first principal axis is the line that passes through its greatest dimension.

  11. Singular value decomposition • SVD of X: X = U D V^H, where U and V are unitary matrices and D is diagonal, holding the singular values of X (the square roots of the eigenvalues of X^H X).
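A short check with NumPy's SVD routine, confirming the factorization and the relation between the singular values and the eigenvalues of X^H X (here X^T X, since X is real):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))

U, s, Vh = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V^H
print(np.allclose(U @ np.diag(s) @ Vh, X))         # True

# Singular values are the square roots of the eigenvalues of X^T X
eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]        # descending order
print(np.allclose(s ** 2, eigvals))                # True
```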

  12. An example: Eigenfaces • G. D. Finlayson, B. Schiele & J. Crowley. Comprehensive colour image normalisation. ECCV 98 pp. 475~490.
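A hedged eigenfaces sketch: random arrays stand in for a real face database, and the "eigenfaces" are simply the leading principal directions of the vectorized, mean-centred images:

```python
import numpy as np

rng = np.random.default_rng(6)
n_faces, h, w = 40, 16, 16             # synthetic stand-ins for a face database
faces = rng.random((n_faces, h * w))   # each row: one vectorized image

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces: leading principal directions of the centred image set
U, s, Vh = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vh[:10]                   # top-10 eigenfaces (each row reshapable to h x w)

# Project one face onto eigenface space and reconstruct it
coeffs = eigenfaces @ centered[0]
recon = mean_face + coeffs @ eigenfaces
print(np.linalg.norm(faces[0] - recon) / np.linalg.norm(faces[0]))  # relative error
```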

  13. Problem of PCA • PCA only removes second-order dependencies: uncorrelated components need not be independent, so for non-Gaussian data significant structure can remain after the transform.

  14. Independent component analysis • Goal: find independence rather than mere uncorrelatedness of the data. • Given the set of input samples X, determine an NxN invertible matrix W such that the entries y(i) of the transformed vector y = W x are mutually independent. • ICA is meaningful only when the involved random variables are non-Gaussian.
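An illustration of this goal (not the specific algorithm of these slides): two independent uniform sources are linearly mixed, and scikit-learn's FastICA is used as a stand-in ICA estimator to recover them:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n = 20000

# Two independent non-Gaussian (uniform) sources, mixed by a non-orthogonal matrix
S = rng.uniform(-1, 1, size=(n, 2))
Amix = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ Amix.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                    # estimated independent components

# Correlation between estimated and true sources (recovery is up to permutation/scale)
print(np.round(np.corrcoef(S_est.T, S.T)[:2, 2:], 2))
```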

  15. ICA based on second- and fourth-order cumulants • Hint: require the second- and fourth-order cross-cumulants to be zero. • Step 1: Perform a PCA on the input data (this zeroes the second-order cross-cumulants). • Step 2: Compute another unitary matrix so that the fourth-order cross-cumulants of the rotated components are zero; this is equivalent to a matrix diagonalization problem. • Finally, the independent components are given by the combined transform of the two steps.
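A minimal two-step sketch in two dimensions, assuming the standard equivalence between zeroing the fourth-order cross-cumulants and maximizing the squared kurtoses of the rotated components; the brute-force angle search below stands in for the matrix diagonalization step:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50000

# Independent non-Gaussian sources, linearly mixed
S = rng.uniform(-1, 1, size=(2, n))
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ S

# Step 1: PCA whitening (second-order cross-cumulants -> identity covariance)
X -= X.mean(axis=1, keepdims=True)
eigvals, E = np.linalg.eigh(X @ X.T / n)
Z = np.diag(eigvals ** -0.5) @ E.T @ X          # whitened data

# Step 2: only a rotation remains in 2-D; zeroing the fourth-order cross-cumulants
# is equivalent to maximizing the sum of squared kurtoses of the rotated components.
def kurt(v):
    return np.mean(v ** 4) - 3 * np.mean(v ** 2) ** 2

def rot(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

best_theta = max(
    np.linspace(0, np.pi / 2, 500),
    key=lambda t: sum(kurt(y) ** 2 for y in rot(t) @ Z),
)
Y = rot(best_theta) @ Z                          # estimated independent components
print(np.round(np.corrcoef(Y, S)[:2, 2:], 2))    # close to a signed permutation matrix
```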

  16. ICA based on mutual information • Minimize the mutual information among the transformed components; this leads to an iterative method.
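One well-known iterative scheme of this kind is the natural-gradient (relative-gradient) update; the sketch below assumes super-Gaussian sources and uses tanh as the nonlinearity, which may differ from the specific method in the original slides:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 20000

# Super-Gaussian (Laplacian) sources, linearly mixed
S = rng.laplace(size=(2, n))
X = np.array([[1.0, 0.7], [0.2, 1.0]]) @ S
X -= X.mean(axis=1, keepdims=True)

W = np.eye(2)
lr = 0.1
for _ in range(500):
    Y = W @ X
    phi = np.tanh(Y)                              # score-like nonlinearity for super-Gaussian sources
    # Natural-gradient update: W <- W + lr * (I - E[phi(y) y^T]) W
    W += lr * (np.eye(2) - phi @ Y.T / n) @ W

Y = W @ X
print(np.round(np.corrcoef(Y, S)[:2, 2:], 2))     # rows near a signed permutation
```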

  17. Other transforms • Discrete Fourier Transform • Discrete Wavelet Transform • Please think about the relationship among these linear transforms.
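The DFT fits the same y = A^H x picture; a quick check, assuming the orthonormal ("ortho") scaling, that the explicit DFT matrix is unitary and matches NumPy's FFT:

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT matrix with orthonormal (unitary) scaling
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

print(np.allclose(F.conj().T @ F, np.eye(N)))           # F is unitary
x = np.random.default_rng(10).standard_normal(N)
print(np.allclose(F @ x, np.fft.fft(x, norm="ortho")))  # same as NumPy's orthonormal FFT
```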
