
A Weighted Average of Sparse Representations is Better than the Sparsest One Alone

SIAM Conference on Imaging Science ’08

Michael Elad and Irad Yavneh

Presented by Dehong Liu

ECE, Duke University

July 24, 2009

Outline
  • Motivation
  • A mixture of sparse representations
  • Experiments and results
  • Analysis
  • Conclusion
Motivation
  • Noise removal problem

y = x + v, in which y is the measurement signal, x is the clean signal, and v is zero-mean i.i.d. Gaussian noise.

  • Sparse representation

x = Dα, in which D ∈ ℝ^{n×m} with n < m, and α is a sparse vector.

  • Compressive sensing problem
  • Orthogonal Matching Pursuit (OMP): a greedy algorithm that seeks the sparsest representation (a minimal sketch follows this slide)

  • Question:

“Does this mean that other competitive and slightly inferior sparse representations are meaningless?”
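
To make the greedy baseline concrete, here is a minimal NumPy sketch of standard OMP; the function name and structure are my own, and the residual-energy stopping rule matches the threshold T used in the experiments below:

    import numpy as np

    def omp(D, y, noise_threshold):
        """Greedy OMP: repeatedly add the atom most correlated with the
        residual, refit by least squares on the chosen support, and stop
        once the residual energy ||r||^2 falls below noise_threshold."""
        n, m = D.shape
        support, r = [], y.copy()
        alpha = np.zeros(m)
        while r @ r > noise_threshold and len(support) < n:
            j = int(np.argmax(np.abs(D.T @ r)))           # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            alpha = np.zeros(m)
            alpha[support] = coef
            r = y - D[:, support] @ coef                  # update residual
        return alpha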

A mixture of sparse representations
  • How to generate a set of sparse representations?
    • Randomized OMP
  • How to fuse these sparse representations?
    • Plain averaging (sketched below)
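
The slide names Randomized OMP but does not spell out the selection law; the paper derives specific probabilities involving the signal and noise variances. The sketch below uses a simplified stand-in (probability proportional to the exponential of the squared correlation), which preserves the key behavior: repeated runs return different competitive sparse representations.

    import numpy as np

    def rand_omp(D, y, noise_threshold, rng):
        """Like OMP, but the next atom is drawn at random with probability
        increasing in its correlation with the residual."""
        n, m = D.shape
        support, r = [], y.copy()
        alpha = np.zeros(m)
        while r @ r > noise_threshold and len(support) < n:
            c2 = (D.T @ r) ** 2
            w = np.exp(c2 - c2.max())         # shift before exp for stability
            w[support] = 0.0                  # never re-pick a chosen atom
            j = int(rng.choice(m, p=w / w.sum()))
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            alpha = np.zeros(m)
            alpha[support] = coef
            r = y - D[:, support] @ coef
        return alpha

    # Fusing is then literal averaging of the returned vectors:
    # alpha_avg = np.mean([rand_omp(D, y, T, rng) for _ in range(1000)], axis=0)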
Experiments and results
  • Model: y = x + v = Dα + v
  • D: 100×200 random dictionary with entries drawn from N(0,1), columns then normalized;
  • α: a random representation with k=10 non-zeros chosen at random, with values drawn from N(0,1);
  • v: white Gaussian noise with entries drawn from N(0,1);
  • Noise threshold in the OMP algorithm: T=100 (matching the expected noise energy n·σ² = 100·1);
  • Run OMP once, and RandOMP 1000 times (see the sketch after this list).
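
A sketch wiring these parameters to the omp and rand_omp functions from the earlier sketches (the seed and variable names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 100, 200, 10

    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)            # normalize columns

    alpha = np.zeros(m)                       # k=10 random non-zeros ~ N(0,1)
    alpha[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

    x = D @ alpha
    y = x + rng.standard_normal(n)            # unit-variance white noise
    T = 100.0                                 # noise threshold (= n * sigma^2)

    alpha_omp = omp(D, y, T)                  # one OMP run
    alpha_avg = np.mean([rand_omp(D, y, T, rng) for _ in range(1000)], axis=0)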
Sparse vector reconstruction

The average representation over 1000 RandOMP representations is not sparse at all.


Denoising factor = ‖x̂ − x‖² / ‖y − x‖² (the relative mean-squared-error; values below 1 indicate noise reduction)

Denoising factor based on 1000 experiments

Run RandOMP 100 times for each experiment.
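
Continuing the setup sketch above (variable names carried over), the denoising factor for each estimate x̂ = Dα̂ could be computed as follows:

    def denoising_factor(x_hat, x, y):
        """Reconstruction error relative to the input noise energy;
        below 1 means the estimate is closer to x than y is."""
        return np.sum((x_hat - x) ** 2) / np.sum((y - x) ** 2)

    print(denoising_factor(D @ alpha_omp, x, y))   # sparsest (OMP) alone
    print(denoising_factor(D @ alpha_avg, x, y))   # averaged RandOMP: typically lower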

Analysis

RandOMP is an approximation of the Minimum Mean-Squared-Error (MMSE) estimate.
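
Loosely, the reason is the standard conditional-expectation identity (the paper derives the exact posterior weights):

x̂_MMSE = E[x | y] = Σ_S P(S | y) · E[x | y, S],

where S ranges over candidate supports. A single sparsest representation keeps only one term of this sum, while RandOMP draws supports with probabilities that roughly track P(S | y), so plainly averaging its outputs approximates the full weighted sum.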

Comparison

[Figure: relative mean-squared-error vs. the noise level σ (0 to 2), comparing eight estimators: 1. Emp. Oracle, 2. Theor. Oracle, 3. Emp. MMSE, 4. Theor. MMSE, 5. Emp. MAP, 6. Theor. MAP, 7. OMP, 8. RandOMP.]

The above results correspond to a 20×30 dictionary. Parameters: true support = 3, σ_x = 1, averaged over 1000 experiments.

Conclusion
  • The paper shows that averaging several sparse representations of a signal leads to better denoising, as the average approximates the MMSE estimator.