Sparse Representations of Signals: Theory and Applications *

Michael Elad

The CS Department

The Technion – Israel Institute of Technology

Haifa 32000, Israel

IPAM – MGA Program

September 20th, 2004

* Joint work with: Alfred M. Bruckstein – CS, Technion

David L. Donoho – Statistics, Stanford

Vladimir Temlyakov – Math, University of South Carolina

Jean-Luc Starck – CEA - Service d’Astrophysique, CEA-Saclay, France

Collaborators

  • Freddy Bruckstein – CS, Technion
  • Dave Donoho – Statistics, Stanford
  • Vladimir Temlyakov – Math, USC
  • Jean-Luc Starck – CEA-Saclay, France

Agenda

  • 1. Introduction
    • Sparse & overcomplete representations, pursuit algorithms
  • 2. Success of BP/MP as Forward Transforms
    • Uniqueness, equivalence of BP and MP
  • 3. Success of BP/MP for Inverse Problems
    • Uniqueness, stability of BP and MP
  • 4. Applications
    • Image separation and inpainting

Problem Setting – Linear Algebra

Our dream – solve a linear system of equations of the form Φα = x, where x is known and the dictionary Φ (of size N×L) satisfies:

  • L > N,
  • Φ is full rank, and
  • its columns are normalized.
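As a concrete reference for this setting, here is a minimal numpy sketch of such a Φ – overcomplete (L > N), full rank with overwhelming probability, and with unit-norm columns. The sizes, the Gaussian draw, and the name make_dictionary are illustrative choices of this write-up, not the talk's.

```python
import numpy as np

def make_dictionary(N=64, L=128, seed=0):
    """An N-by-L dictionary with L > N and unit l2-norm columns."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((N, L))        # random Gaussian atoms
    Phi /= np.linalg.norm(Phi, axis=0)       # normalize every column
    return Phi

Phi = make_dictionary()
print(Phi.shape, np.linalg.matrix_rank(Phi))   # (64, 128) 64, i.e. full rank
```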

Can We Solve This?

Generally NO* (* unless additional information is introduced).

Our assumption for today: the sparsest possible solution is preferred.

Great … But,

  • Why look at this problem at all? What is it good for? Why sparseness?
  • Is the problem now well defined? Does it lead to a unique solution?
  • How shall we numerically solve this problem?

These and related questions will be discussed in today's talk.

Addressing the First Question

We will use the linear relation x = Φα as the core idea for modeling signals.

Signals' Origin in Sparse-Land

We shall assume that our signals of interest emerge from a random signal generator machine M: M → x.

Signals' Origin in Sparse-Land

  • Instead of defining the model over the signals x directly, we define it over "their representations" α:
    • Draw the number of non-zeros (s) in α with probability P(s),
    • Draw the s locations out of the L possible ones independently,
    • Draw the weights in these s locations independently (Gaussian/Laplacian).

The obtained vectors are very simple to generate or describe.

Signals' Origin in Sparse-Land

Multiplying the sparse α by Φ gives the signal:

  • Every generated signal is built as a linear combination of few columns (atoms) from our dictionary: x = Φα.
  • The obtained signals are a special type of mixture of Gaussians (or Laplacians) – every column participates as a principal direction in the construction of many Gaussians.
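The generator M described above is easy to code. A hedged sketch, assuming Gaussian weights and a user-supplied distribution P(s); the names sparse_land_signal and p_s are mine, and make_dictionary is the helper sketched earlier:

```python
import numpy as np

def sparse_land_signal(Phi, p_s, rng):
    """Draw one Sparse-Land signal:
    1) draw the number of non-zeros s with probability P(s) = p_s[s-1],
    2) draw the s locations out of the L columns uniformly,
    3) draw the s weights i.i.d. Gaussian, then synthesize x = Phi @ alpha."""
    L = Phi.shape[1]
    s = int(rng.choice(len(p_s), p=p_s)) + 1           # support size, s >= 1
    support = rng.choice(L, size=s, replace=False)     # atom locations
    alpha = np.zeros(L)
    alpha[support] = rng.standard_normal(s)            # Gaussian weights
    return Phi @ alpha, alpha

rng = np.random.default_rng(0)
Phi = make_dictionary()
x, alpha = sparse_land_signal(Phi, p_s=[0.5, 0.3, 0.2], rng=rng)  # s in {1,2,3}
```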

Why This Model?

For a square system (L = N) with non-singular Φ, there is no need for a sparsity assumption: α = Φ⁻¹x.

  • Such systems are commonly used (DFT, DCT, wavelet, …).
  • Still, we are taught to prefer 'sparse' representations over such systems (N-term approximation, …).
  • We often use signal models defined via the transform coefficients, assumed to have a simple structure (e.g., independence).

Why This Model?

Going over-complete has also been considered in past work, in an attempt to strengthen the sparseness potential.

  • Such approaches generally use L2-norm regularization to go from x to α – the Method Of Frames (MOF).
  • Bottom line: the model presented here is in line with these attempts, trying to address the desire for sparsity directly, while assuming independent coefficients in the 'transform domain'.
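The contrast with MOF is easy to see numerically: the minimum ℓ2-norm representation (the pseudo-inverse solution) spreads energy over all atoms, while the generating representation is exactly sparse. A small sketch reusing Phi, x, and alpha from the snippets above (illustrative, not from the talk):

```python
# MOF / L2 regularization picks the minimum-energy representation of x.
alpha_mof = np.linalg.pinv(Phi) @ x         # minimum l2-norm solution of Phi a = x
print(np.sum(np.abs(alpha) > 1e-8))         # the true alpha: very few non-zeros
print(np.sum(np.abs(alpha_mof) > 1e-8))     # MOF: typically all L entries non-zero
```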

What's to Do With Such a Model?

  • Signal Transform: Given the signal, its sparsest (over-complete) representation α is its forward transform. Consider this for compression, feature extraction, analysis/synthesis of signals, …
  • Signal Prior: In inverse problems, seek a solution that has a sparse representation over a predetermined dictionary, and this way regularize the problem (just as TV, bilateral, Beltrami-flow, wavelet, and other priors are used).

Signal's Transform

Multiply the sparse α by Φ to obtain x. The forward transform recovers the representation by solving

  min ‖α‖₀ s.t. Φα = x   –   NP-Hard !!

  • Is the recovered α̂ equal to the original α? Under which conditions?
  • Are there practical ways to get α̂?
  • How effective are those ways?

Practical Pursuit Algorithms

  • Basis Pursuit [Chen, Donoho, Saunders ('95)]: replace the NP-hard ℓ₀ problem by its convex relaxation,  min ‖α‖₁ s.t. Φα = x.
  • Matching Pursuit [Mallat & Zhang ('93)]: greedily minimize ‖Φα − x‖₂, selecting one atom at a time.

These algorithms work well in many cases (but not always).
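A minimal sketch of the greedy track, in its Orthogonal Matching Pursuit (OMP) variant, assuming numpy; k (the maximal number of atoms) and tol (a residual threshold) are illustrative parameters of this sketch. The ℓ₁ track can likewise be handled by any linear-programming solver (e.g., scipy.optimize.linprog).

```python
import numpy as np

def omp(Phi, x, k, tol=1e-12):
    """Greedily build a k-sparse alpha with Phi @ alpha ≈ x."""
    L = Phi.shape[1]
    residual = x.copy()
    support = []
    alpha = np.zeros(L)
    for _ in range(k):
        if np.linalg.norm(residual) <= tol:
            break
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # re-fit the coefficients on the current support (least squares)
        coef, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
        residual = x - Phi[:, support] @ coef
        alpha[:] = 0.0
        alpha[support] = coef
    return alpha
```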

Signal Prior

  • Assume that x is known to emerge from M, i.e. there is a sparse α such that x = Φα.
  • Suppose we observe y = x + v, a noisy version of x with ‖v‖₂ ≤ ε.
  • We denoise the signal by solving  min ‖α‖₀ s.t. ‖y − Φα‖₂ ≤ ε.
  • This way we see that sparse representations can serve in inverse problems (denoising is the simplest example).
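A quick denoising sketch along these lines: pursue atoms until the residual reaches the assumed noise level ε, reusing the make_dictionary, sparse_land_signal, and omp helpers sketched above (all names are mine, not the talk's).

```python
rng = np.random.default_rng(1)
Phi = make_dictionary()
x, alpha = sparse_land_signal(Phi, p_s=[0.4, 0.3, 0.3], rng=rng)
eps = 0.1
v = rng.standard_normal(x.size)
v *= eps / np.linalg.norm(v)               # bounded noise, ||v||_2 = eps
y = x + v
alpha_hat = omp(Phi, y, k=10, tol=eps)     # stop once ||y - Phi @ alpha||_2 <= eps
x_hat = Phi @ alpha_hat                    # the denoised signal
print(np.linalg.norm(x_hat - x))           # reconstruction error
```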

To summarize …

  • Given a dictionary Φ and a signal x, we want to find the sparsest "atom decomposition" of the signal by solving either
      min ‖α‖₀ s.t. Φα = x   or   min ‖α‖₀ s.t. ‖Φα − x‖₂ ≤ ε.
  • Basis/Matching Pursuit algorithms propose alternative, tractable methods to compute the desired solution.
  • Our focus today:
    • Why should this work?
    • Under what conditions could we claim success of BP/MP?
    • What can we do with such results?

Due to the Time Limit …

(and the speaker's limited knowledge) we will NOT discuss today:

  • Proofs (and there are beautiful and painful proofs).
  • Exotic results (e.g. ℓp-norm results, amalgams of ortho-bases, uncertainty principles).
  • Numerical considerations in the pursuit algorithms.
  • Average (probabilistic) performance bounds.
  • How to train on data to obtain the best dictionary Φ.
  • Relation to other fields (Machine Learning, ICA, …).

Agenda

  • 1. Introduction
    • Sparse & overcomplete representations, pursuit algorithms
  • 2. Success of BP/MP as Forward Transforms
    • Uniqueness, equivalence of BP and MP
  • 3. Success of BP/MP for Inverse Problems
    • Uniqueness, stability of BP and MP
  • 4. Applications
    • Image separation and inpainting

Problem Setting

The Dictionary: Φ of size N×L (L > N), where every column is normalized to have unit ℓ2 norm.

Our dream – solve:  min ‖α‖₀ s.t. Φα = x,  with x known.

Uniqueness – Basics

Given a unit-norm signal x, assume we hold two different representations for it using Φ:

  Φα₁ = Φα₂ = x  ⇒  Φ(α₁ − α₂) = Φv = 0.

  • What are the limits that these two representations must obey?
  • The equation Φv = 0 implies a linear combination of columns from Φ that are linearly dependent. What is the smallest such group?

Uniqueness – Matrix "Spark"

Definition: Given a matrix Φ, σ = Spark{Φ} is the smallest number of columns from Φ that are linearly dependent.*

* Kruskal rank (1977) is defined similarly – used for decomposition of tensors (an extension of the SVD).

Properties:

  • By definition, if Φv = 0 then ‖v‖₀ ≥ σ.
  • For any pair of representations of x we have ‖α₁‖₀ + ‖α₂‖₀ ≥ σ.
  • Generally: 2 ≤ σ = Spark{Φ} ≤ Rank{Φ} + 1.
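Computing the spark is itself combinatorial; for tiny dictionaries it can be brute-forced, which is handy for sanity-checking the bounds above. A toy sketch (the function name and tolerance are illustrative):

```python
import numpy as np
from itertools import combinations

def spark(Phi, tol=1e-10):
    """Smallest number of linearly dependent columns (exponential search --
    a toy check for small matrices only)."""
    N, L = Phi.shape
    for k in range(2, N + 2):              # 2 <= spark <= rank + 1 <= N + 1
        for cols in combinations(range(L), k):
            # k columns are dependent iff their submatrix is rank-deficient
            if np.linalg.matrix_rank(Phi[:, cols], tol=tol) < k:
                return k
    return L + 1                           # no dependent subset found
```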

Uniqueness Rule – 1

Uncertainty rule: Any two different representations of the same x cannot be jointly too sparse – the bound depends on the properties of the dictionary.

Result 1 [Donoho & E ('02), Gribonval & Nielsen ('03), Malioutov et al. ('04)]: If we found a representation that satisfies ‖α‖₀ < σ/2, then necessarily it is unique (the sparsest).

Surprising result! In general optimization tasks, the best we can do is detect and guarantee a local minimum.

Evaluating the "Spark"

  • Define the "Mutual Incoherence" as  μ = max_{i≠j} |φᵢᵀφⱼ|.
  • We can show (based on Gershgorin's disk theorem) that a lower bound on the spark is obtained by  σ ≥ 1 + 1/μ. This lower bound was obtained by Thomas Strohmer (2003).
  • Non-tight lower bound – too pessimistic! (For example, for [I, F_N] the lower bound is 1 + √N instead of 2√N.)
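Unlike the spark, the mutual incoherence is cheap to compute, and it immediately yields the spark bound and the uniqueness threshold of the next slide. A sketch reusing make_dictionary from above (illustrative):

```python
import numpy as np

def mutual_coherence(Phi):
    """mu = max_{i != j} |phi_i^T phi_j|, for unit-norm columns."""
    G = np.abs(Phi.T @ Phi)            # Gram matrix of the atoms
    np.fill_diagonal(G, 0.0)           # ignore the unit diagonal
    return G.max()

Phi = make_dictionary()
mu = mutual_coherence(Phi)
print("mu      =", mu)
print("spark  >=", 1 + 1 / mu)                       # Gershgorin-based bound
print("unique if ||alpha||_0 <", (1 + 1 / mu) / 2)   # threshold of Result 2
```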

Uniqueness Rule – 2

Result 2 [Donoho & E ('02), Gribonval & Nielsen ('03), Malioutov et al. ('04)]: If we found a representation that satisfies ‖α‖₀ < (1 + 1/μ)/2, then necessarily it is unique (the sparsest).

This is a direct extension of the previous uncertainty result with the Spark, and the use of the bound on it.

Uniqueness Implication

  • We are interested in solving  min ‖α‖₀ s.t. Φα = x  (P0).
  • Somehow we obtain a candidate solution α̂.
  • The uniqueness theorem tells us that a simple test on ‖α̂‖₀ could tell us if it is the solution of P0 (deterministically!!!!).
  • However:
    • If the test is negative, it says nothing.
    • This does not help in solving P0.
    • This does not explain why BP/MP may be good replacements.

Uniqueness in Probability

Result 3 [E ('04), Candes, Romberg & Tao ('04)]: A representation drawn from M that satisfies the sparsity bound is unique (the sparsest) among all representations with probability 1.

More info:

1. In fact, even representations with more non-zero entries lead to uniqueness with near-1 probability.
2. The analysis here uses the smallest singular value of random matrices. There is also a relation to matroids.
3. The "Signature" of the dictionary – an extension of the "Spark".

BP Equivalence

In order for BP to succeed, we have to show that sparse enough solutions are also the smallest in ℓ₁-norm. Using duality in linear programming one can show the following:

Result 4 [Donoho & E ('02), Gribonval & Nielsen ('03), Malioutov et al. ('04)]: Given a signal x with a representation x = Φα, assuming that ‖α‖₀ < (1 + 1/μ)/2, P1 (BP) is guaranteed to find the sparsest solution.*

* Is it a tight result? What is the role of the "Spark" in dictating equivalence?

MP Equivalence

As it turns out, the analysis of MP is even simpler! After the results on BP were presented, both Tropp and Temlyakov showed the following:

Result 5 [Tropp ('03), Temlyakov ('03)]: Given a signal x with a representation x = Φα, assuming that ‖α‖₀ < (1 + 1/μ)/2, MP is guaranteed to find the sparsest solution.

SAME RESULTS!? Are these algorithms really comparable?

To Summarize so far …

  • The forward transform? Transforming signals from Sparse-Land can be done by seeking their original representation – use pursuit algorithms.
  • Why does it work so well? We explained (uniqueness and equivalence) and gave bounds on performance.
  • Implications? (a) Design of dictionaries via (μ, σ), (b) testing a candidate solution for optimality, (c) use in applications as a forward transform.

Agenda

  • 1. Introduction
    • Sparse & overcomplete representations, pursuit algorithms
  • 2. Success of BP/MP as Forward Transforms
    • Uniqueness, equivalence of BP and MP
  • 3. Success of BP/MP for Inverse Problems
    • Uniqueness, stability of BP and MP
  • 4. Applications
    • Image separation and inpainting

The Simplest Inverse Problem – Denoising

A sparse α is multiplied by Φ and noise is added:  y = Φα + v. Two pursuit options:

  • Basis Pursuit (the exact ℓ₀ version is NP-Hard):  min ‖α‖₁ s.t. ‖y − Φα‖₂ ≤ ε.
  • Matching Pursuit: greedily remove another atom until the residual error drops below ε.

Questions We Should Ask

  • Reconstruction of the signal:
    • What is the relation between this and other Bayesian alternative methods [e.g. TV, wavelet denoising, …]?
    • What is the role of over-completeness and sparsity here?
    • How about other, more general inverse problems?

These are topics of our current research with P. Milanfar, D.L. Donoho, and R. Rubinstein.

  • Reconstruction of the representation:
    • Why does the denoising work with P0(ε)?
    • Why should the pursuit algorithms succeed?

These questions are generalizations of the previous treatment.

2D Example

(Figure: the solutions obtained with the ℓp ball for 0 ≤ p < 1, p = 1, and p > 1.)

Intuition gained:

  • Exact recovery is unlikely even for an exhaustive P0 solution.
  • A sparse α can be recovered well, both in terms of support and proximity, for p ≤ 1.

Uniqueness? Generalizing the Spark

Definition: Spark_η{Φ} is the smallest number of columns from Φ that give a smallest singular value ≤ η.

Properties: Spark_η{Φ} is monotonically non-increasing in η, and at η = 0 it coincides with the Spark: Spark₀{Φ} = σ.

Generalized Uncertainty Rule

Assume two feasible & different representations of y:  ‖y − Φα₁‖₂ ≤ ε and ‖y − Φα₂‖₂ ≤ ε, with d = ‖α₁ − α₂‖₂.

Result 6 [Donoho, E, & Temlyakov ('04)]:  ‖α₁‖₀ + ‖α₂‖₀ ≥ Spark_η{Φ}  for η = 2ε/d.

The further the candidate alternative is from α₁, the denser it must be.

Uniqueness Rule

Result 7 [Donoho, E, & Temlyakov ('04)]: If we found a representation that satisfies ‖α‖₀ < Spark_η{Φ}/2, then necessarily it is unique (the sparsest) among all representations that are AT LEAST 2ε/η away (in the ℓ₂ sense).

Implications:

1. This result becomes stronger if we are willing to consider substantially different representations.
2. Put differently, if you found two very sparse approximate representations of the same signal, they must be close to each other.

Are the Pursuit Algorithms Stable?

A sparse α is multiplied by Φ and noise is added, y = Φα + v; Basis Pursuit and Matching Pursuit (removing another atom at a time) return the approximations α̂_BP and α̂_MP.

Stability: Under which conditions on the original representation α could we guarantee that ‖α̂_BP − α‖₂ and ‖α̂_MP − α‖₂ are small?

BP Stability

Result 8 [Donoho, E, & Temlyakov ('04), Tropp ('04), Donoho & E ('04)]: Given a signal y = Φα + v with a representation satisfying ‖α‖₀ < (1 + 1/μ)/4 and bounded noise ‖v‖₂ ≤ ε, BP will give stability, i.e.,

  ‖α̂_BP − α‖₂² ≤ 4ε² / (1 − μ(4‖α‖₀ − 1)).

Observations:

1. ε = 0 gives a weaker version of the previous result.
2. Surprising – the error bound is independent of the SNR, and
3. the result is useless for assessing denoising performance.

MP Stability

Result 9 [Donoho, E, & Temlyakov ('04), Tropp ('04)]: Given a signal y = Φα + v with bounded noise ‖v‖₂ ≤ ε and a sufficiently sparse representation (whose non-zeros are large enough relative to the noise), MP will give stability, i.e., ‖α̂_MP − α‖₂ is bounded by a constant (depending on μ and ‖α‖₀) times ε.

Observations:

1. ε = 0 leads to the results shown already,
2. here the error bound does depend on the SNR, and
3. there are additional results on the sparsity pattern.

To Summarize This Part …

  • We have seen how BP/MP can serve as a forward transform. What about noise? Relax the equality constraint.
  • Is it still theoretically sound? We showed uncertainty, uniqueness and stability results for the noisy setting.
  • Where next?
    • Denoising performance?
    • Relation to other methods?
    • More general inverse problems?
    • Role of over-completeness?
    • Average study? (Candes & Romberg HW)

Agenda

  • 1. Introduction
    • Sparse & overcomplete representations, pursuit algorithms
  • 2. Success of BP/MP as Forward Transforms
    • Uniqueness, equivalence of BP and MP
  • 3. Success of BP/MP for Inverse Problems
    • Uniqueness, stability of BP and MP
  • 4. Applications
    • Image separation and inpainting

Decomposition of Images

Our assumption: the given image s is a mixture of a piece from the family of cartoon images and a piece from the family of texture images.

Our inverse problem: given s, find its building parts and the mixture weights.

Use of Sparsity

  • Φx (of size N×L) is chosen such that the representations of one content family (the X's) are sparse, while the representations of the other family (the Y's) are non-sparse.
  • We similarly construct Φy to sparsify the Y's while being inefficient in representing the X's.

Choice of Dictionaries

  • Educated guess: texture could be represented by local overlapped DCT or Gabor, and cartoon could be built by Curvelets/Ridgelets/Wavelets (depending on the content); alternatively, the dictionaries could be obtained by training on examples.
  • Note that if we desire to enable partial support and/or different scales, the dictionaries must have multiscale and locality properties in them.

Decomposition via Sparsity

Solve  min ‖α‖₁ + ‖β‖₁ s.t. ‖s − Φxα − Φyβ‖₂ ≤ ε,  and take x̂ = Φxα̂, ŷ = Φyβ̂.

  • The idea – if there is a sparse solution, it stands for the separation.
  • This formulation removes noise as a by-product of the separation.
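A hedged sketch of this separation, replacing the constrained program above by its common unconstrained Lagrangian proxy and solving it with plain iterative soft-thresholding (ISTA); the solver choice, lam, and n_iter are illustrative, and the talk's actual implementation (in the analysis domain) differs.

```python
import numpy as np

def separate(s, Phi_x, Phi_y, lam=0.05, n_iter=500):
    """Minimize 0.5*||s - Phi_x a - Phi_y b||_2^2 + lam*(||a||_1 + ||b||_1)."""
    Phi = np.hstack([Phi_x, Phi_y])              # concatenated dictionary
    Lx = Phi_x.shape[1]
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant
    coef = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ coef - s)          # gradient of the data term
        z = coef - step * grad
        coef = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    a, b = coef[:Lx], coef[Lx:]
    return Phi_x @ a, Phi_y @ b                  # the two separated parts
```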

Theoretical Justification

Several layers of study:

1. Uniqueness/stability as shown above apply directly, but are ineffective in handling the realistic scenario where there are many non-zero coefficients.
2. Average performance analysis (Candes & Romberg HW) could remove this shortcoming.
3. Our numerical implementation is done in the "analysis domain" – Donoho's results apply here.
4. All is built on a model for images as sparse combinations Φxα + Φyβ.

What About This Model?

  • Coifman's dream – the concept of combining transforms to represent different signal contents efficiently was advocated by R. Coifman already in the early 90's.
  • Compression – compression algorithms based on separate transforms for cartoon and texture were proposed by F. Meyer et al. (2002) and Wakin et al. (2002).
  • Variational attempts – modeling texture and cartoon and variational-based separation algorithms: Yves Meyer (2002), Vese & Osher (2003), Aujol et al. (2003, 2004).
  • Sketchability – a recent work by Guo, Zhu, and Wu (2003) – MP and MRF modeling for sketch images.

Results – Synthetic + Noise

(Figure: the original image, composed as a combination of texture, cartoon, and additive Gaussian noise; the separated texture, spanned by global DCT functions; the separated cartoon, spanned by 5-layer Curvelet functions + LPF; and the residual, being the identified noise.)

Results on 'Barbara'

(Figure: the original 'Barbara' image; the separated texture, using local overlapped DCT (32×32 blocks); the separated cartoon, using Curvelets (5 resolution layers).)

Results – 'Barbara' Zoomed In

(Figure: zoom-in on the texture and cartoon parts of the result shown in the previous slide, next to the same parts taken from Vese et al.)

We should note that the Vese–Osher algorithm is much faster than ours, due to our use of the curvelet transform.

Inpainting

What if some values in s are unknown (with known locations!!!)? For the separation, solve the same problem while measuring the error only over the known samples – noise removal, inpainting, and decomposition in one. The image Φxα̂ + Φyβ̂ will be the inpainted outcome. Interesting comparison to Bertalmio et al. ('02).
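A hedged sketch of inpainting-by-separation along these lines: the same ISTA iteration as in `separate` above, with a 0/1 mask restricting the data term to the known samples (the helper name and parameters are mine, and this is only a proxy for the talk's implementation).

```python
import numpy as np

def inpaint(s, mask, Phi_x, Phi_y, lam=0.05, n_iter=500):
    """mask[i] = 1 where s[i] is known, 0 where it is missing."""
    Phi = np.hstack([Phi_x, Phi_y])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    coef = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        residual = mask * (Phi @ coef - s)       # error counted on known samples only
        z = coef - step * (Phi.T @ residual)
        coef = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return Phi @ coef                            # inpainted image Phi_x a + Phi_y b
```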

Results – Inpainting (1)

(Figure: source; the separated texture part; the separated cartoon part; and the inpainted outcome.)

Results – Inpainting (2)

(Figure: source; texture part; cartoon part; and outcome.)

Results – Inpainting (3)

(Figure: source and outcome.)

There are still artifacts – these are just preliminary results.

Today We Have Discussed

  • 1. Introduction
    • Sparse & overcomplete representations, pursuit algorithms
  • 2. Success of BP/MP as Forward Transforms
    • Uniqueness, equivalence of BP and MP
  • 3. Success of BP/MP for Inverse Problems
    • Uniqueness, stability of BP and MP
  • 4. Applications
    • Image separation and inpainting

Summary

  • Pursuit algorithms are successful as:
    • a forward transform – we shed light on this behavior;
    • a regularization scheme in inverse problems – we have shown that the noiseless results extend nicely to treat this case as well.
  • The dream: the over-completeness and sparseness ideas are highly effective, and should replace existing methods in signal representations and inverse problems.
  • We would like to contribute to this change by
    • supplying clear(er) explanations of the BP/MP behavior,
    • improving the involved numerical tools, and then
    • deploying them in applications.

Future Work

  • Many intriguing questions:
    • What dictionary to use? Relation to learning? SVM?
    • Improved bounds – average performance assessments?
    • A relaxed notion of sparsity? When is zero really zero?
    • How to speed up the BP solver (accurate/approximate)?
    • Applications – coding? restoration? …
  • More information (including these slides) is found at http://www.cs.technion.ac.il/~elad

Some of the People Involved

Donoho (Stanford), Mallat (Paris), Coifman (Yale), Daubechies (Princeton), Temlyakov (USC), Gribonval (INRIA), Nielsen (Aalborg), Gilbert (Michigan), Tropp (Michigan), Strohmer (UC-Davis), Candes (Caltech), Romberg (Caltech), Tao (UCLA), Huo (GaTech), Rao (UCSD), Saunders (Stanford), Starck (Paris), Zibulevsky (Technion), Nemirovski (Technion), Feuer (Technion), Bruckstein (Technion)