A Weighted Average of Sparse Representations is Better than the Sparsest One Alone


SIAM Conference on Imaging Science ’08

Presented by Dehong Liu

ECE, Duke University

July 24, 2009

Outline
• Motivation
• A mixture of sparse representations
• Experiments and results
• Analysis
• Conclusion
Motivation
• Noise removal problem

y = x + v, in which y is the measured signal, x is the clean signal, and v is zero-mean i.i.d. Gaussian noise.

• Sparse representation

x=D, in which DRnm, n<m,  is a sparse vector.

• Compressive sensing problem
• Orthogonal Matching Pursuit (OMP)

Sparsest representation
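As a concrete reference, OMP can be sketched in a few lines (a minimal NumPy sketch, not the presenters' code; the slides stop on a noise threshold T, while this version simply runs for a fixed number of iterations k):

```python
import numpy as np

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: repeatedly pick the atom most
    correlated with the residual, then re-fit the coefficients by
    least squares on the selected support."""
    n, m = D.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        # atom with the largest absolute correlation to the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit of y on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(m)
    alpha[support] = coef
    return alpha
```

With an orthonormal dictionary and a noiseless measurement, this greedy rule recovers the true sparse vector exactly.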

• Question:

“Does this mean that other competitive and slightly inferior sparse representations are meaningless?”

A mixture of sparse representations
• How to generate a set of sparse representations?
• Randomized OMP
• How to fuse these sparse representations?
• A plain averaging
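One way to realize these two steps (a hedged sketch: the actual RandOMP of Elad and Yavneh draws atoms with probabilities derived from the signal and noise variances, while this simplification draws them with probability proportional to the squared correlation with the residual):

```python
import numpy as np

def rand_omp(D, y, k, rng):
    """RandOMP-style sketch: instead of the single best atom, draw the
    next atom at random, favoring atoms highly correlated with the
    residual (simplified selection rule, not the paper's exact one)."""
    n, m = D.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        corr2 = (D.T @ residual) ** 2
        p = corr2 / corr2.sum()
        j = int(rng.choice(m, p=p))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(m)
    alpha[support] = coef
    return alpha

def averaged_representation(D, y, k, runs, seed=0):
    """Plain average of many randomized sparse representations."""
    rng = np.random.default_rng(seed)
    return np.mean([rand_omp(D, y, k, rng) for _ in range(runs)], axis=0)
```

Because each run lands on a different support, the average spreads mass over many atoms, which is why the fused representation is no longer sparse.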
Experiments and results
• Model:
• y = x + v = Dα + v
• D: 100×200 random dictionary with entries drawn from N(0,1), and then with columns normalized;
• α: a random representation with k = 10 non-zeros chosen at random and with values drawn from N(0,1);
• v: white Gaussian noise with entries drawn from N(0,1);
• Noise threshold in the OMP algorithm T = 100 (??);
• Run OMP once, and RandOMP 1000 times.
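The data model above can be generated in a few lines (a sketch of the setup as described on the slides; the seed and RNG choice are mine, not from the presentation):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 100, 200, 10

# random dictionary with N(0,1) entries, columns normalized to unit norm
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# sparse representation: k positions chosen at random, N(0,1) values
alpha = np.zeros(m)
alpha[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

x = D @ alpha                    # clean signal
v = rng.standard_normal(n)       # white Gaussian noise, sigma = 1
y = x + v                        # noisy measurement
```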
Sparse vector reconstruction

The average representation over 1000 RandOMP representations is not sparse at all.

Denoising factor = ‖x̂ − x‖² / ‖y − x‖²

Denoising factor based on 1000 experiments

Run RandOMP 100 times for each experiment.
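With x̂ denoting the reconstructed signal, the figure of merit can be computed as follows (a sketch assuming the denoising factor is the estimation-error energy divided by the noise energy, so values below 1 mean the estimate improved on the raw measurement):

```python
import numpy as np

def denoising_factor(x_hat, x, y):
    """Energy of the estimation error relative to the energy of the
    input noise (y - x); below 1 means x_hat is closer to the clean
    signal x than the noisy measurement y was."""
    return np.sum((x_hat - x) ** 2) / np.sum((y - x) ** 2)
```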

Analysis

The RandOMP is an approximation of the Minimum-Mean-Squared-Error (MMSE) estimate.
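This claim can be made concrete: under a signal model with a random support, the MMSE estimate is itself a weighted average over candidate supports (a sketch of the standard conditioning identity, with notation mine rather than the slides'):

```latex
% Conditioning on the (unknown) support S of \alpha and averaging:
\hat{x}_{\mathrm{MMSE}}
  = \mathbb{E}\,[\,x \mid y\,]
  = \sum_{S} P(S \mid y)\; \mathbb{E}\,[\,x \mid y, S\,]
% Each candidate sparse representation contributes with weight
% P(S | y); RandOMP samples supports roughly according to these
% weights, so the plain average approximates the sum.
```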

[Figure: Relative Mean-Squared-Error versus σ for eight estimators: 1. Emp. Oracle, 2. Theor. Oracle, 3. Emp. MMSE, 4. Theor. MMSE, 5. Emp. MAP, 6. Theor. MAP, 7. OMP, 8. RandOMP; y-axis from 0 to 0.5, x-axis from 0 to 2.]

Comparison

The above results correspond to a 20×30 dictionary. Parameters: true support size = 3, σ_x = 1, averaged over 1000 experiments.

Conclusion
• The paper shows that averaging several sparse representations of a signal leads to better denoising than using the sparsest one alone, as the average approximates the MMSE estimator.