
PCA (Principal Component Analysis) by Zongqiao Liu & Wei Zhou



  1. PCA (Principal Component Analysis) by Zongqiao Liu & Wei Zhou

  2. What is PCA? • PCA (Principal Component Analysis) is a data analysis method. According to Wikipedia's explanation, principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. • Key words: transformation, correlated, uncorrelated

  3. What can PCA do? • Apply PCA to a 2*5 matrix X = [1 1 2 4 2; 1 3 3 4 4]. • After PCA, we get a new 1*5 matrix Y = [-3/sqrt(2) -1/sqrt(2) 0 3/sqrt(2) 1/sqrt(2)], which has a lower dimension. • 2 dimensions => 1 dimension.
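The 2*5 example above can be checked numerically. This is a minimal NumPy sketch (hypothetical Python; the deck's own code is MATLAB). Note the sign of a principal direction is arbitrary, so the projection may come out negated.

```python
import numpy as np

# The 2x5 example from the slide: rows are variables, columns are observations.
X = np.array([[1., 1., 2., 4., 2.],
              [1., 3., 3., 4., 4.]])

Xc = X - X.mean(axis=1, keepdims=True)   # center each variable
C = Xc @ Xc.T / X.shape[1]               # 2x2 covariance matrix
vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
p = vecs[:, -1]                          # top principal direction

Y = p @ Xc                               # projection onto the first component
# Y is ±[-3, -1, 0, 3, 1] / sqrt(2), up to the sign of p
```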

  4. PCA algorithm • Form an n*m matrix X from the original data set; • Find the empirical mean of each variable and subtract it to get the deviations from the mean; • Find the covariance matrix; • Find the eigenvectors and eigenvalues of the covariance matrix; • Sort the eigenvectors by eigenvalue and keep the top k as the rows of a k*n matrix P; • Y = P*X is the reduced data we need.
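The steps above can be sketched as a single NumPy function (hypothetical Python, not the deck's MATLAB); X is laid out as n variables by m observations, matching the 2*5 example.

```python
import numpy as np

def pca(X, k):
    """PCA following the slide's steps: X is n x m (variables x observations)."""
    mean = X.mean(axis=1, keepdims=True)   # empirical mean of each variable
    Xc = X - mean                          # deviations from the mean
    C = Xc @ Xc.T / X.shape[1]             # covariance matrix
    vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]         # sort descending by eigenvalue
    P = vecs[:, order[:k]].T               # top-k eigenvectors as rows of P
    return P @ Xc                          # Y = P * X (on centered data)

X = np.array([[1., 1., 2., 4., 2.],
              [1., 3., 3., 4., 4.]])
Y = pca(X, 1)                              # 2 dimensions reduced to 1
```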

  5. Pattern recognition • Given a series of images of human faces, PCA can be used to measure their similarity. • Mean-face: • Using PCA, we get 19 feature-faces corresponding to the 19 principal components: • Test the following faces:

  6. Pattern recognition • Reconstruct the testing faces using the principal components • Analyze the error between the testing faces and the reconstructed faces, which is [1.4195e+003 1.9564e+003 4.7337e+003 7.0103e+003] • The first one has the lowest error, so it is the nearest approximation of the original face

  7. Information compression • Compressing images with PCA is also called the Hotelling algorithm, or the Karhunen-Loève (KL) transform. The Hotelling algorithm uses PCA to extract the principal components from the original images. The components are then sorted so that the secondary ones can be pruned before transforming back to the original coordinate system. In this way the information in the original images is greatly compressed while the "most important" information is preserved.
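As a rough illustration of the compress-then-restore idea (not the deck's code), one can keep only the top principal components of an image matrix and transform back; a random matrix stands in for an image here.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                       # stand-in for an 8x8 image

mean = img.mean(axis=1, keepdims=True)
imgc = img - mean                              # center each row
C = imgc @ imgc.T / img.shape[1]
vals, vecs = np.linalg.eigh(C)
P = vecs[:, np.argsort(vals)[::-1][:3]].T      # keep the top 3 principal components

compressed = P @ imgc                          # 3x8: the compressed representation
restored = P.T @ compressed + mean             # transform back to original coordinates
err = np.linalg.norm(img - restored)           # information lost by pruning
```

The residual error is strictly smaller than the energy of the centered image, because the top components capture the directions of largest variance.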

  8. Code in MATLAB
     clear;
     load hald;
     disp(ingredients);
     m = mean(ingredients,2);
     disp(m);
     column = size(ingredients,2);
     A = [];
     for i = 1:column
         temp = ingredients(:,i) - m;
         A = [A temp];
     end
     disp(A);

  9. Code in MATLAB
     L = A'*A/size(A,2);
     [vector, value] = eigs(L, column);
     for i = 1:column
         UL(:,i) = A*vector(:,i)/sqrt(value(i,i));
     end
     disp(UL);
     PCA = UL'*A;  % project the centered data onto the components
     disp(PCA);
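For readers without MATLAB, here is a hypothetical NumPy translation of the two listings above. A random matrix stands in for the `hald` ingredients data, and only the top 3 components are kept: centering along the rows leaves the 13*4 matrix with rank at most 3, so the smallest eigenvalue is ~0 and would break the division by sqrt(value).

```python
import numpy as np

rng = np.random.default_rng(1)
ingredients = rng.random((13, 4))              # stand-in for MATLAB's hald data

m = ingredients.mean(axis=1, keepdims=True)    # mean(ingredients, 2)
A = ingredients - m                            # the deviation matrix built by the loop

L = A.T @ A / A.shape[1]                       # small 4x4 Gram matrix
value, vector = np.linalg.eigh(L)              # ascending eigenvalues
order = np.argsort(value)[::-1]                # reorder to descending
value, vector = value[order], vector[:, order]

k = 3                                          # rank of A is at most 3 after centering
UL = A @ vector[:, :k] / np.sqrt(value[:k])    # eigenvectors of the big covariance
PCA = UL.T @ A                                 # component scores of each observation
```

This "snapshot" trick (diagonalize the small A'A instead of the large AA') is what makes eigenfaces computationally feasible when images are large.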

  10. CNN (Convolutional Neural Network) by Zongqiao Liu & Wei Zhou

  11. Brain David Hubel and Torsten Wiesel discovered a kind of neuronal cell called the "orientation-selective cell" in 1958. This finding led to new thinking about the human nervous system: the pathway from nerve cells through the central nervous system to the brain may be a process of iteration and abstraction.

  12. Brain From eye to brain is a process of iteration and abstraction. The human's logical mind is the result of higher-level visual abstraction.

  13. Brain

  14. Local Connectivity Reduces the number of parameters & shares the weights

  15. Local Connectivity Reduces the number of parameters & shares the weights

  16. CNN Architecture

  17. Convolutional Layer The receptive field is a 3*3 region of the input. The 3*3 matrix of weights is called a 'filter' or 'kernel'. Every entry of the filter matrix is a weight corresponding to one entry of the receptive field. The convolution operation slides the filter over the input; the output matrix is called the Convolved Feature or Feature Map.
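The operation described above can be sketched in a few lines of NumPy (hypothetical Python; the deck itself shows no CNN code). This is "valid" convolution with stride 1, i.e. the cross-correlation that CNNs usually implement.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel (filter) over the image; each output entry is the
    weighted sum of the receptive field under the kernel (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            receptive_field = image[i:i+kh, j:j+kw]
            feature_map[i, j] = np.sum(receptive_field * kernel)
    return feature_map

image = np.arange(25.).reshape(5, 5)
kernel = np.array([[0., 0., 0.],
                   [0., 1., 0.],
                   [0., 0., 0.]])   # identity filter: output copies the center pixel
out = convolve2d(image, kernel)      # a 3x3 feature map
```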

  18. Convolutional Layer - Effect of Filter Applying the filter to the image pixels under it gives the Convolved Result = w1*x1 + w2*x2 + ... + w9*x9

  19. Convolutional Layer - Effect of Filter We see that different filters extract different features. In practice, we learn the values of these filters during the training process. (We still need to choose the number of filters and the filter size.)

  20. Convolutional Layer - Effect of Filter Extract Feature Two kinds of filters slide over the input image separately to produce two feature maps.

  21. Convolutional Layer – Training Filter • Train the values of the filter: H^k_ij = S((W^k * x)_ij + b^k), where S is the activation function • Adjust the weights: use the least-squares error to adjust the weights of every filter in every convolutional layer (gradient descent).
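A minimal sketch of the forward-pass formula H^k_ij = S((W^k * x)_ij + b^k), assuming S is the sigmoid (the slide does not say which activation is used, so this choice is an assumption).

```python
import numpy as np

def sigmoid(z):
    # Assumed activation S; the slide leaves S unspecified.
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer_forward(x, W, b):
    """H_ij = S((W * x)_ij + b): valid convolution followed by a nonlinearity."""
    kh, kw = W.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    H = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            H[i, j] = sigmoid(np.sum(x[i:i+kh, j:j+kw] * W) + b)
    return H

rng = np.random.default_rng(0)
x = rng.random((5, 5))          # input patch
W = rng.random((3, 3))          # one filter's weights (learned in practice)
H = conv_layer_forward(x, W, b=0.1)
```

During training, gradient descent would update W and b to reduce the least-squares error, as the slide describes.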

  22. Sub-sampling Layer Max Pooling

  23. Sub-sampling Layer Makes the input representations smaller and more manageable Reduce the number of parameters and computation in the network
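Max pooling as described on the two slides above can be sketched as follows (hypothetical Python): each window keeps only its maximum, shrinking the representation.

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Keep the maximum of each size x size window, reducing the map's size."""
    h = (feature_map.shape[0] - size) // stride + 1
    w = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = feature_map[i*stride:i*stride+size,
                                    j*stride:j*stride+size].max()
    return out

fm = np.array([[1., 3., 2., 4.],
               [5., 6., 1., 0.],
               [1., 2., 9., 8.],
               [0., 3., 4., 7.]])
pooled = max_pool(fm)   # [[6, 4], [3, 9]]: 4x4 reduced to 2x2
```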

  24. Fully connected Layer Every neuron of the preceding layer is connected to every neuron in the fully connected layer.
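A fully connected layer is just a matrix multiply over the flattened input; a minimal sketch (hypothetical names and sizes):

```python
import numpy as np

def fully_connected(x, W, b):
    """Every input neuron connects to every output neuron: y = W x + b."""
    return W @ x.ravel() + b

rng = np.random.default_rng(0)
features = rng.random((2, 2))   # e.g. a pooled feature map, flattened to 4 inputs
W = rng.random((3, 4))          # 3 output neurons, each seeing all 4 inputs
b = np.zeros(3)
y = fully_connected(features, W, b)
```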

  25. Reference • https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/ • https://docs.gimp.org/en/plug-in-convmatrix.html • http://blog.csdn.net/zouxy09/article/details/8781543

  26. Thank you!
