
  1. A novel supervised feature extraction and classification framework for land cover recognition of the off-land scenario Yan Cui 2013.1.16

  2. Outline: 1. The related work 2. The integration algorithm framework 3. Experiments

  3. The related work • Locally linear embedding • Sparse representation-based classifier • K-SVD dictionary learning

  4. Locally linear embedding LLE is an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs.

  5. Specifically, we expect each data point and its neighbors to lie on or close to a locally linear patch of the manifold; the local reconstruction errors of these patches are measured by the cost $\varepsilon(W) = \sum_i \| x_i - \sum_j W_{ij} x_j \|^2$, where the weight $W_{ij}$ summarizes the contribution of the $j$-th point to the reconstruction of the $i$-th and the weights for each point sum to one.
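As a concrete illustration (not from the slides), here is a minimal NumPy sketch of the LLE weight-fitting step: each point is reconstructed from its k nearest neighbors under a sum-to-one constraint, so the squared residual is exactly the error above. The function name, the neighborhood size, and the regularization constant are illustrative choices.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """Fit LLE reconstruction weights W so that each row x_i of X
    is approximated by a weighted combination of its k nearest
    neighbors; X has shape (n_samples, n_features)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # indices of the k nearest neighbors of x_i (excluding itself)
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]
        # local Gram matrix of the centered neighborhood
        Z = X[nbrs] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)  # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()            # enforce sum_j W_ij = 1
    return W

# the reconstruction error above: sum_i || x_i - sum_j W_ij x_j ||^2
# err = np.sum((X - W @ X) ** 2)
```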

  6. Sparse representation-based classifier The sparse representation-based classifier can be considered a generalization of nearest neighbor (NN) and nearest subspace (NS): it adaptively chooses the minimal number of training samples needed to represent each test sample.

  7. The sparse code of a test sample $y$ over the training matrix $X$ is obtained by $\ell_1$-minimization: $\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1$ subject to $\|y - X\alpha\|_2 \le \varepsilon$ (4)

  8. The label is then assigned by the minimal class-wise residual: $\text{identity}(y) = \arg\min_p r_p(y)$, where $r_p(y) = \|y - X\delta_p(\hat{\alpha})\|_2$ and $\delta_p(\hat{\alpha})$ keeps only the coefficients associated with class $p$ (5)
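A hedged sketch of this two-step procedure follows: scikit-learn's Lasso stands in for the ℓ1-constrained problem in Eq. (4) (an unconstrained Lagrangian form rather than the exact constraint), and the class with the smallest residual from Eq. (5) wins. The function name and the regularization weight are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X, labels, y, alpha=0.01):
    """Sparse representation-based classification sketch.
    X: (n_features, n_train) training matrix with l2-normalized columns,
    labels: (n_train,) class labels, y: (n_features,) test sample."""
    # sparse coding: l1-regularized least squares as a proxy for Eq. (4)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(X, y)
    coef = lasso.coef_
    # classification: minimal class-wise residual, Eq. (5)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        delta = np.where(labels == c, coef, 0.0)  # keep class-c coefficients
        r = np.linalg.norm(y - X @ delta)
        if r < best_r:
            best, best_r = c, r
    return best
```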

  9. K-SVD dictionary learning • The original training samples contain much redundancy, as well as noise and trivial information that can hurt recognition. • If the training set is large, computing the sparse representation is time-consuming, so an optimal dictionary is needed for sparse representation and classification.

  10. The K-SVD algorithm

  11. The dictionary update stage:
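The algorithm listing on these two slides is an image and is not reproduced in the transcript. Below is a minimal sketch of one iteration in the standard K-SVD formulation: OMP for the sparse coding stage, then an SVD of the restricted error matrix $E_p^R$ (cf. slide 14) to refresh each atom in turn. Function names and the sparsity level are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_iteration(Y, D, n_nonzero=5):
    """One K-SVD iteration. Y: (n_features, n_samples) training data,
    D: (n_features, n_atoms) dictionary with unit-norm columns."""
    # sparse coding stage: OMP codes for the current dictionary
    X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)
    # dictionary update stage: refresh each atom and its coefficients
    for p in range(D.shape[1]):
        omega = np.nonzero(X[p])[0]        # samples that actually use atom p
        if omega.size == 0:
            continue
        X[p, omega] = 0.0
        # restricted error E_p^R: residual on omega without atom p
        E = Y[:, omega] - D @ X[:, omega]
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, p] = U[:, 0]                  # new atom: leading left singular vector
        X[p, omega] = s[0] * Vt[0]         # matching coefficient row
    return D, X
```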

  12. The integration algorithm for supervised learning Let $X = [X_1, X_2, \ldots, X_c]$ be the training data matrix, where $X_p$ is the $p$-th class training-sample matrix; a test sample $y$ can be well approximated by a linear combination of the training data, i.e. $y \approx X\alpha$.

  13. Let $\alpha_p$ be the representation coefficient vector with respect to the $p$-th class. To make SRC achieve good performance on all training samples, we expect the within-class residual to be minimized and the between-class residual to be maximized simultaneously. We therefore define the following optimization problem: (15)
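Eq. (15) itself is an image and is lost from this transcript. Purely as a hedged illustration of an objective with the stated structure (within-class residual penalized, between-class residual rewarded, sparsity enforced), such a problem could take a form like

$$\min_{\alpha}\; \big\| x - X\,\delta_{p}(\alpha) \big\|_2^2 \;-\; \lambda \sum_{q \neq p} \big\| x - X\,\delta_{q}(\alpha) \big\|_2^2 \;+\; \gamma \|\alpha\|_1,$$

where $x$ is a training sample of class $p$, $\delta_q(\alpha)$ keeps only the coefficients associated with class $q$, and $\lambda, \gamma$ trade off the three terms; the slide's actual Eq. (15) may differ.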

  14. Restrict $E_p$ by choosing only the columns corresponding to $w_p$, obtaining $E_p^R$ (16)

  15. Let $\alpha_p$ denote the representation coefficient vector with respect to the $p$-th class; the optimization problem in Eq. (16) then becomes (17)

  16. In order to obtain the sparse representation coefficients, we want to learn an embedding map that reduces the dimensionality of the data while preserving the sparse reconstruction. The optimization problem in Eq. (17) then becomes

  17. For a given test set, we can adaptively learn the embedding map, the optimal dictionary, and the sparse reconstruction coefficients through the following optimization problem

  18. The feature extraction and classification algorithm
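The listing on this slide is also an image. As a rough, assumed sketch of the alternating structure the framework describes, the loop below initializes the embedding map with PCA (the paper learns it jointly), codes the embedded data with OMP, and uses a MOD-style least-squares refresh in place of the full K-SVD atom update. Every name and parameter is illustrative, not the authors' implementation; classification of an embedded test sample would then proceed as in the SRC sketch above.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def train_framework(X, n_dims=20, n_atoms=50, n_nonzero=5, n_iter=10, seed=0):
    """Alternating training sketch. X: (n_features, n_samples).
    Returns the embedding map P (n_dims, n_features) and the
    dictionary D (n_dims, n_atoms) for sparse coding in the
    reduced space."""
    rng = np.random.default_rng(seed)
    # embedding map: top principal directions of the centered data
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    P = U[:, :n_dims].T
    # dictionary: random embedded training samples, unit-norm columns
    D = P @ X[:, rng.choice(X.shape[1], n_atoms, replace=False)]
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    Z = P @ X                                  # embedded training data
    for _ in range(n_iter):
        # sparse coding of the embedded data over the current dictionary
        A = orthogonal_mp(D, Z, n_nonzero_coefs=n_nonzero)
        # MOD-style dictionary refresh (K-SVD would update atom by atom)
        D = Z @ np.linalg.pinv(A)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return P, D
```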

  19. Experiments for unsupervised learning • The effect of dictionary selection • Compare with pure feature extraction

  20. Database descriptions UCI databases: the Gas Sensor Array Drift Data Set and the Synthetic Control Chart Time Series Data Set.

  21. The effect of dictionary selection

  22. Compare with pure feature extraction

  23. Experiments • The effect of dictionary selection • Compare with pure classification • Compare with pure feature extraction

  24. Database descriptions

  25. The effect of dictionary selection

  26. Compare with pure classification

  27. Compare with pure feature extraction

  28. Thanks!

  29. Questions & suggestions?
