Discover how atomic representations like wavelets can be used to analyze patterns with applications in denoising, compression, and pattern recognition. Explore template learning through Penalized Maximum Likelihood Estimation (PMLE) and the TEMPLAR algorithm for effective pattern analysis.
Template Learning from Atomic Representations: A Wavelet-based Approach to Pattern Analysis
Clay Scott and Rob Nowak
Electrical and Computer Engineering, Rice University
www.dsp.rice.edu
Supported by ARO, DARPA, NSF, and ONR
The Discrete Wavelet Transform
• prediction errors = wavelet (detail) coefficients
• most wavelet coefficients are zero ⟹ sparse representation
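The sparsity claim can be made concrete with a minimal sketch (pure NumPy; the Haar wavelet and the piecewise-constant test signal are illustrative choices, not taken from the talk):

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the 1D Haar transform: running averages (approximation)
    and differences (the prediction errors / detail coefficients)."""
    x = x.reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_dwt(x):
    """Full multi-level Haar DWT (length of x must be a power of two)."""
    coeffs = []
    approx = np.asarray(x, dtype=float)
    while len(approx) > 1:
        approx, detail = haar_dwt_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    # Coarsest coefficients first: [approx, coarsest detail, ..., finest detail]
    return np.concatenate(coeffs[::-1])

# A piecewise-constant signal: smooth regions yield zero detail coefficients.
signal = np.array([4.0] * 8 + [1.0] * 8)
w = haar_dwt(signal)
n_zero = int(np.sum(np.isclose(w, 0.0)))
print(f"{n_zero} of {len(w)} wavelet coefficients are (near) zero")
```

Only the coefficients straddling the jump (and the overall average) are nonzero; everything else vanishes, which is the sparsity that later makes low-dimensional templates possible.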
Wavelets as Atomic Representations
• Atomic representations attempt to decompose images into fundamental units or "atoms" (examples: wavelets, curvelets, wedgelets, DCT)
• Successes: denoising and compression
• Drawback: not transformation invariant ⟹ poor features for pattern recognition
Pattern Recognition
[Figure: example patterns from Class 1, Class 2, and Class 3]
Hierarchical Framework
• Realization from wavelet-domain statistical model
• Pattern template in spatial domain
• Random transformation of pattern
• Noisy observation of transformed pattern
Wavelet-domain statistical model
• Sparsity divides wavelet coefficients into significant and insignificant coefficients
• Model the wavelet coefficients as independent Gaussian mixtures: θ_k ~ N(μ_k, σ_k²), where the state s_k ∈ {0, 1} indicates whether coefficient k is significant
• Constraints: s_k = 0 ⟹ μ_k = 0 and σ_k² = σ_0² (a small variance shared by all insignificant coefficients)
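A sketch of sampling one realization from this two-state model (the dimensions and parameter values below are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16
# Significance map s: a few significant coefficients reflect sparsity.
s = np.zeros(N, dtype=int)
s[:4] = 1

# Constraints: mu_k = 0 and a small shared variance sigma_0^2 when s_k = 0.
mu = np.where(s == 1, rng.normal(0.0, 5.0, N), 0.0)
sigma2 = np.where(s == 1, 1.0, 0.01)

# One realization of the wavelet coefficients: theta_k ~ N(mu_k, sigma_k^2).
theta = rng.normal(mu, np.sqrt(sigma2))
print(theta.round(2))
```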
Model Parameters
• Template parameters: θ = (μ, σ², s), where μ_k and σ_k² are the mean and variance of coefficient k, and s_k its significance state
• Finite set of pre-selected transformations {γ_1, …, γ_T} models variability in location and orientation
Pattern Synthesis
1. Generate a random template (a realization from the wavelet-domain model)
2. Transform to spatial domain
3. Apply random transformation
4. Add observation noise
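The four synthesis steps above can be sketched in 1D (a Haar inverse transform and a circular shift stand in for the 2D wavelets and spatial transformations of the talk; all parameter values are illustrative):

```python
import numpy as np

def haar_idwt(w):
    """Inverse 1D Haar DWT; w is ordered [approx, coarsest detail, ..., finest]."""
    n = len(w)
    approx = w[:1].copy()
    k = 1
    while k < n:
        detail = w[k:2 * k]
        x = np.empty(2 * k)
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        approx = x
        k *= 2
    return approx

rng = np.random.default_rng(1)
N = 16

# 1. Random template: sparse wavelet coefficients with noise on the means.
w = np.zeros(N)
w[0] = 10.0 + rng.normal()
w[1] = 6.0 + rng.normal()

# 2. Transform to the spatial domain.
x_clean = haar_idwt(w)

# 3. Random transformation: a circular shift models unknown translation.
shift = int(rng.integers(0, N))
x_shifted = np.roll(x_clean, shift)

# 4. Additive white Gaussian observation noise.
y = x_shifted + rng.normal(0.0, 0.1, N)
```

Because the Haar transform is orthonormal, step 2 preserves the energy of the template exactly.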
Template Learning
Given: independent observations x_1, …, x_n of the same pattern, arising from (unknown) transformations
Goal: find θ, s, and the transformation assignments that "best describe" the observations
Approach: penalized maximum likelihood estimation (PMLE)
PMLE of θ, s, and the transformations
• PMLE: maximize F = (log-likelihood of the observations) − (complexity penalty)
• Complexity penalty function grows with K, the number of significant coefficients: a minimum description length (MDL) criterion
• Complexity regularization ⟹ find a low-dimensional template that captures the essential structure of the pattern
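A toy illustration of why an MDL-style penalty favors sparse significance maps. The objective below is a simplified stand-in for F (significant coefficients fit their observed value exactly; insignificant ones are modeled as N(0, σ_0²)), not the paper's exact likelihood:

```python
import numpy as np

def penalized_loglik(w, s, sigma0=0.1, n_obs=20):
    """Simplified PMLE objective: Gaussian log-likelihood of wavelet
    coefficients under the two-state model, minus an MDL-style penalty
    proportional to the number of significant coefficients."""
    # Insignificant coefficients leave a residual explained only by N(0, sigma0^2).
    resid = np.where(s == 1, 0.0, w)
    loglik = -0.5 * np.sum(resid ** 2) / sigma0 ** 2
    penalty = 0.5 * np.sum(s) * np.log(n_obs)  # MDL codelength per parameter
    return loglik - penalty

# Two large coefficients, two near-zero ones.
w = np.array([5.0, -3.0, 0.05, -0.02])
s_sparse = np.array([1, 1, 0, 0])  # mark only the large coefficients significant
s_full = np.array([1, 1, 1, 1])    # mark everything significant

print(penalized_loglik(w, s_sparse) > penalized_loglik(w, s_full))
```

Marking the near-zero coefficients significant buys almost no likelihood but pays a full penalty per coefficient, so the sparse map wins.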
TEMPLAR: Template Learning from Atomic Representations
• Simultaneously maximizing F over θ, s, and the transformations is intractable
• Maximize F with an alternating-maximization algorithm:
- Non-decreasing sequence of penalized likelihood values
- Each step is simple, with O(NLT) complexity
- Converges to a fixed point (no cycling)
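The alternating-maximization idea can be sketched in 1D: hold the template fixed and pick each observation's best transformation (here, a circular shift found by correlation), then hold the shifts fixed and re-estimate the template by averaging the aligned observations. This is a simplified analogue of TEMPLAR, not the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# Ground-truth spatial pattern: a single bump, observed under random shifts.
true = np.zeros(N)
true[8:12] = 5.0
obs = [np.roll(true, int(rng.integers(0, N))) + rng.normal(0.0, 0.2, N)
       for _ in range(10)]

template = obs[0].copy()  # initialize the template from one observation
for _ in range(5):        # alternating maximization
    # Step 1: template fixed -> best shift for each observation (by correlation).
    shifts = []
    for y in obs:
        corrs = [np.dot(np.roll(y, -k), template) for k in range(N)]
        shifts.append(int(np.argmax(corrs)))
    # Step 2: shifts fixed -> re-estimate template by averaging aligned data.
    template = np.mean([np.roll(y, -k) for y, k in zip(obs, shifts)], axis=0)

print(f"template peak after alignment: {template.max():.2f}")
```

Each step can only improve the fit, which mirrors the non-decreasing penalized-likelihood property claimed above.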
Airplane Experiment
[Photo: gathering the airplane data]
Airplane Experiment
• 853 significant coefficients out of 16,384
• 7 iterations
Face Experiment
[Figure: training data for one subject, plus the sequence of template convergence]
Why Does TEMPLAR Work? • Wavelet-domain model for template is low-dimensional (from MDL penalty and inherent sparseness of wavelets) • Low-dimensional template allows for improved pattern matching by giving more weight to distinguishing features
Classification
Given: templates for several patterns and an unlabeled observation x
Classify: assign x to the class whose template maximizes the likelihood of x over all transformations
• Invariant to unknown transformations
• O(NT) complexity
• Sparsity ⟹ low-dimensional subspace classifier
• Robust to background clutter
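A sketch of the transformation-invariant nearest-template idea, simplified to 1D circular shifts with correlation scores in place of the model likelihood (templates and data below are illustrative):

```python
import numpy as np

def classify(x, templates):
    """Score each class template at every circular shift and return the
    class achieving the best score: O(N * T) scores for T templates."""
    best_class, best_score = None, -np.inf
    N = len(x)
    for label, t in templates.items():
        for k in range(N):
            score = np.dot(np.roll(x, -k), t)
            if score > best_score:
                best_class, best_score = label, score
    return best_class

N = 16
t_a = np.zeros(N); t_a[2:5] = 1.0           # class "A": one wide bump
t_b = np.zeros(N); t_b[2] = 1.0; t_b[8] = 1.0  # class "B": two narrow bumps
templates = {"A": t_a, "B": t_b}

# An observation: class-A pattern at an unknown shift, plus light noise.
x = np.roll(t_a, 7) + 0.05 * np.random.default_rng(3).normal(size=N)
print(classify(x, templates))
```

The search over shifts is what buys invariance to the unknown transformation; sparsity keeps each per-shift score cheap.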
Face Recognition
[Figure: results of the Yale face test]
Image Registration
Conclusion
• Wavelet-based framework for representing pattern observations with unknown rotation and translation
• TEMPLAR: linear-time algorithm for automatically learning low-dimensional templates using an MDL criterion
• Low-dimensional subspace classifiers that are invariant to spatial transformations and background clutter