
A Novel Method for Sparse Classification


Presentation Transcript


  1. A Novel Method for Sparse Classification Sujeeth Bharadwaj Mark Hasegawa-Johnson

  2. Overview • Sparse Classification • Compressed Sensing • New Approach • Theoretical Bounds • Simulation Results • Future Work

  3. Sparse Classification

  4. Compressed Sensing Given an incoming signal and our “database” of exemplars, can we reconstruct the sparse vector s? Is the solution unique? What is the computational complexity?
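As an illustration of the reconstruction question, the sketch below recovers a sparse s from y = As using Orthogonal Matching Pursuit, a standard greedy alternative to the l1 solvers discussed on the following slides. The matrix sizes, seed, and support are illustrative choices, not values from the slides.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: repeatedly pick the dictionary column
    most correlated with the residual, then refit on the chosen support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

# toy example: a 3-sparse vector recovered from 20 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
s_true = np.zeros(40)
s_true[[3, 17, 31]] = [1.0, -2.0, 0.5]
s_hat = omp(A, A @ s_true, k=3)
print(np.allclose(s_hat, s_true, atol=1e-6))
```

With a random Gaussian dictionary and few enough non-zeros, greedy recovery succeeds with high probability, which is the empirical face of the uniqueness conditions on the next slide.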

  5. L1 Relaxation Consider the l1 relaxation of the reconstruction problem. Its solution is unique and equals the solution of the l0-norm problem when a coherence condition on the dictionary holds, where s is k-sparse. [The slide's equations were images and are not preserved in the transcript.]
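The equations on this slide were embedded as images; the standard formulation they most likely showed (cf. Donoho [3] and Bruckstein, Donoho, and Elad [5]) is, as a hedged reconstruction:

```latex
\hat{s} = \arg\min_{s} \|s\|_1
\quad \text{subject to} \quad y = A s ,
```

whose solution is unique and coincides with the $\ell_0$ solution whenever $s$ is $k$-sparse with

```latex
k < \frac{1}{2}\left(1 + \frac{1}{\mu(A)}\right),
\qquad
\mu(A) = \max_{i \neq j}
  \frac{|a_i^{\top} a_j|}{\|a_i\|_2 \, \|a_j\|_2},
```

where $\mu(A)$ is the mutual coherence of the dictionary $A$.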

  6. New Approach The l1 norm finds the sparsest vector, but its non-zero entries need not all belong to the same class. Can we do better? We would like only one, or a few, columns of S to be non-zero. What if each Theta (the per-class dictionary) spans the entire feature space?

  7. New Approach The resulting problem has a 1-sparse solution, but minimizing the l0 norm makes it computationally intractable.
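The formulation on this slide was also an image. One plausible reading, consistent with the surrounding slides (C classes with per-class dictionaries Theta_i, coefficient vectors stacked as the columns of S), is to minimize the number of non-zero columns:

```latex
\min_{s_1, \dots, s_C} \;
\bigl|\{\, i : s_i \neq 0 \,\}\bigr|
\quad \text{subject to} \quad
y = \sum_{i=1}^{C} \Theta_i s_i .
```

Because each $\Theta_i$ spans the entire feature space, a 1-sparse solution (a single non-zero column) always exists, but this column-counting l0 objective is combinatorial, hence intractable.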

  8. L1p Relaxation Yields the same solution as the original problem under a condition involving L, the dimensionality of the feature space. [The slide's equations were images and are not preserved in the transcript.]
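The relaxation formula itself is likewise lost from the transcript. A natural convex surrogate, consistent with the name "L1p", replaces the column count with a mixed l1/lp norm over the class blocks; this is a hedged reconstruction, not the slide's verbatim formula:

```latex
\min_{s_1, \dots, s_C} \;
\sum_{i=1}^{C} \|s_i\|_p
\quad \text{subject to} \quad
y = \sum_{i=1}^{C} \Theta_i s_i ,
\qquad p \ge 1 .
```

For $p \ge 1$ this is convex, and (as with the group lasso for $p = 2$) the sum of block norms encourages the energy to concentrate in few blocks, approximating the column-sparse solution.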

  9. Tests on Gaussian Vectors [Figure: classification accuracy for unit Gaussians.] p = 1 refers to the traditional CS (l1) minimization; p = 2, 3, 4 refer to the new method introduced here.

  10. Discussion • L1 minimization works best when the classes are farther apart. • Higher norms are needed for classes in between, e.g. classes 2 and 3. • Use a fusion algorithm? [Figure: accuracy for a fusion algorithm using majority rule.]
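The fusion step suggested above can be sketched as a simple majority rule over the labels produced by the classifiers for p = 1, ..., 4. The function name and tie-break rule are illustrative assumptions, not details from the slide:

```python
from collections import Counter

def majority_vote(labels):
    """Fuse class labels from several classifiers (e.g. p = 1..4) by
    majority rule; ties go to the label that appears first in the list."""
    counts = Counter(labels)
    best = max(counts.values())
    for label in labels:          # first occurrence wins on a tie
        if counts[label] == best:
            return label

# labels from four classifiers (p = 1, 2, 3, 4) for one test vector
print(majority_vote([2, 3, 2, 2]))   # three of four vote for class 2
```

Since different values of p win in different class configurations, voting across them hedges against any single norm's blind spot.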

  11. Future Work • Tighter necessary conditions • Generalized problem of minimizing the matrix L0 norm • Extend to other applications such as audio source localization/scene analysis • Faster algorithms

  12. References
  [1] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," to appear in IEEE Trans. on Pattern Analysis and Machine Intelligence, available at http://perception.csl.illinois.edu/recognition/Files/PAMIFace.pdf
  [2] J. F. Gemmeke and B. Cranen, "Noise reduction through compressed sensing," Interspeech 2008, Brisbane, Australia, September 2008.
  [3] D. L. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, 52(4):1289-1306, April 2006.
  [4] R. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, 24(4):118-121, July 2007.
  [5] A. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, 51(1):34-81, February 2009.
