Pixel Recovery via ℓ1 Minimization in the Wavelet Domain
Pixel Recovery via ℓ1 Minimization in the Wavelet Domain

Ivan W. Selesnick*, Richard Van Slyke*, and Onur G. Guleryuz# (presenting author)

*: Polytechnic University, Brooklyn, NY
#: DoCoMo Communications Laboratories USA, Inc., San Jose, CA


Overview

  • Problem statement: Estimation/Recovery of missing data.

  • Formulation as a linear expansion over overcomplete basis.

  • Expansions that minimize the ℓ1 norm.

  • Why do this?

    • Connections to adaptive linear estimators and sparsity.

    • Connections to recent results and to statistics.

  • Simulation results and comparisons to our earlier work.

  • Why not to do this: Analysis of what is going on.

  • Conclusion and ways of modifying the solutions for better results.

(The presentation is much more detailed than the paper.)

(Some software is available; please check the paper.)


Problem Statement

Image = available pixels + lost pixels (assume zero mean).

1. Original image.
2. Lost block.
3. Derive the predicted block from the available pixels.


Formulation

1. Take an N x M matrix H whose columns form an overcomplete basis (M > N).
2. Write y in terms of the basis: y = Hc, consistent with the available-data projection.
3. Find the expansion coefficients c (two ways).


Find the expansion coefficients to minimize the ℓ1 norm:

  min ||c||_1  subject to  (Hc matches the available pixels)

The ℓ1 norm of the expansion coefficients acts as the regularization; the constraint enforces agreement with the available data.
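The constrained ℓ1 problem above is a linear program and can be solved with off-the-shelf tools. A minimal sketch under illustrative assumptions (hypothetical sizes, a random Gaussian matrix standing in for the talk's dual-tree wavelet H): split c = u - v with u, v >= 0, so that ||c||_1 becomes a linear objective.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical sizes; a random matrix stands in for the overcomplete basis H.
N, M = 8, 16
H = rng.standard_normal((N, M))

# A sparse coefficient vector and the signal it synthesizes.
c_true = np.zeros(M)
c_true[[2, 9]] = [1.5, -2.0]
x = H @ c_true

# Suppose the last 3 pixels are lost: keep only the available rows of H.
avail = np.arange(N - 3)
A, b = H[avail], x[avail]

# min ||c||_1  s.t.  A c = b,  via the split c = u - v with u, v >= 0:
# the objective sum(u) + sum(v) equals ||c||_1 at the optimum.
cost = np.ones(2 * M)
A_eq = np.hstack([A, -A])
res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * M))
c_hat = res.x[:M] - res.x[M:]

# The available pixels are matched; the lost pixels are read off Hc.
print(np.allclose(A @ c_hat, b, atol=1e-5))   # True
x_rec = H @ c_hat
```

When H is sufficiently "decoherent" and c_true sparse enough, c_hat coincides with c_true and the lost pixels are recovered exactly; in general only the constraint is guaranteed.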


Why minimize the ℓ1 norm?

  min ||c||_1  subject to  (available data constraint)

Bogus reason: "Under an i.i.d. Laplacian model for the coefficient probabilities, min ||c||_1 yields the maximum a posteriori estimate."

Real reason: sparse decompositions.
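The "bogus reason" can be stated precisely. Under an i.i.d. Laplacian prior on the coefficients, maximizing the posterior over the expansions that agree with the data reduces to exactly the ℓ1 problem:

```latex
\hat{c}
= \arg\max_{c\,:\,Hc\ \text{consistent with data}} \ \prod_{i=1}^{M} \frac{1}{2b}\, e^{-|c_i|/b}
= \arg\min_{c\,:\,Hc\ \text{consistent with data}} \ \sum_{i=1}^{M} |c_i|
= \arg\min_{c} \ \|c\|_1 \ \text{subject to the available data.}
```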


What does sparsity have to do with estimation/recovery?

1. Any such decomposition builds an adaptive linear estimate.
2. In fact, "any" estimate can be written in this form.

Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part I - Theory," IEEE Tr. on IP, in review.
http://eeweb.poly.edu/~onur (google: onur guleryuz).


The recovered signal must be sparse

3. The recovered signal becomes y = Hc, where H has a null space of dimension M - N; among the infinitely many consistent expansions, y has to be sparse for the recovery to be meaningful.

Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part I - Theory," IEEE Tr. on IP, in review.
http://eeweb.poly.edu/~onur (google: onur guleryuz).
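The dimension count is easy to check numerically. A small sketch (hypothetical sizes, with a random matrix standing in for the overcomplete H): any vector in the null space of H can be added to the coefficients without changing the synthesized signal, which is why infinitely many expansions exist and a sparsity criterion is needed to pick one.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 16                      # hypothetical sizes; M = 2N as in the talk
H = rng.standard_normal((N, M))   # stand-in for the overcomplete basis

# Full row rank N implies a null space of dimension M - N.
rank = np.linalg.matrix_rank(H)
print(rank, M - rank)             # 8 8

# Rows N..M-1 of Vt from the (full) SVD span the null space of H.
_, _, Vt = np.linalg.svd(H)
n0 = Vt[-1]
print(np.allclose(H @ n0, 0))     # True: H(c + n0) = Hc for any c
```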


Who cares about y, what about the original x?

If successful prediction is possible, x also has to be ~sparse; i.e., if the prediction error is small, then x is ~sparse.

1. Predictable implies sparse.
2. Sparsity of x is not a bad leap of faith to make in estimation: if x is not sparse, we cannot estimate it well anyway.
   (caveat: the data may be sparse, but not in the given basis)

Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part I - Theory," IEEE Tr. on IP, in review.
http://eeweb.poly.edu/~onur (google: onur guleryuz).


Why minimize the ℓ1 norm?

Under certain conditions the ℓ1 problem

  min ||c||_1  subject to  (available data constraint)

gives the solution to the ℓ0 problem

  min ||c||_0  subject to  (available data constraint)

i.e., it finds the "most predictable"/sparsest expansion that agrees with the data (by solving a convex problem, not a combinatorial one).

D. Donoho, M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise."
http://www-stat.stanford.edu/~donoho/reports.html


Why minimize the ℓ1 norm?

Experience from the statistics literature: the "lasso" is known to generate sparse expansions.

  min ||y - Hc||^2  subject to  ||c||_1 <= t

R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Royal Statist. Soc. B, Vol. 58, No. 1, pp. 267-288, 1996.
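In its penalized form the lasso can be solved with plain iterative soft thresholding (ISTA). A minimal sketch under illustrative assumptions (hypothetical sizes, a random stand-in for H; the penalty weight `lam` and iteration count are illustrative choices, not values from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 20, 40
H = rng.standard_normal((N, M))
c_true = np.zeros(M)
c_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
y = H @ c_true

def ista(H, y, lam, n_iter=500):
    """Minimize 0.5*||y - Hc||^2 + lam*||c||_1 by iterative soft thresholding."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = c - (H.T @ (H @ c - y)) / L      # gradient step on the quadratic
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

c_hat = ista(H, y, lam=0.1)
print(np.flatnonzero(np.abs(c_hat) > 0.1))   # sparse support; ideally [3 17 30]
```

ISTA converges to the lasso solution; with a small `lam` the recovered support typically matches the true sparse support, though that is not guaranteed in general.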


Simulation Results

  min ||c||_1  subject to  (available data constraint)

H: two-times expansive (M = 2N), real, dual-tree DWT; the real part of:

N. G. Kingsbury, "Complex wavelets for shift invariant analysis and filtering of signals," Appl. Comput. Harmon. Anal., 10(3):234-253, May 2001.

vs. Iterated Denoising (ID) with no layering and no selective thresholding:

Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part II - Adaptive Algorithms," IEEE Tr. on IP, in review.
http://eeweb.poly.edu/~onur (google: onur guleryuz).


Simulation Results


Sparse Modeling Generates Non-Convex Problems

(Figure: a "two pixel" image with one available pixel and one missing pixel, shown in pixel coordinates and in transform coordinates; the available-pixel constraint is a line in this space.)


"Sparse = non-convex", who cares. What about reality, natural images?

(Figure: a natural image written as the sum of component images.)


Geometry

(Figure: the ℓ1 ball meeting the available-data constraint in three configurations: Case 1, Case 2, and Case 3. In Case 3 the solution is not sparse: the "bogus reason" at work.)


Why not to minimize the ℓ1 norm

Overwhelming noise: the error due to missing data plus the modeling error.

What about all the optimality/sparsest results?

Results such as D. Donoho et al., "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise," are very impressive, but they are tied closely to H providing the sparsest decomposition for x.


Why not to minimize the ℓ1 norm

  min ||c||_1  subject to  (available data constraint)

(the problem arises from cropping H to the available pixels)

"Nice" basis: "decoherent".
"Not nice" basis (due to cropping): may become very "coherent".


Examples

Orthonormal basis: coherency = 0.
Cropped basis: the unnormalized coherency is large; normalized coherency = 1 (the worst possible).

1. The optimal ℓ1 solution sometimes tries to make the coefficients of the scaling functions zero.
2. The ℓ1 solution never sees the actual ℓ0 problem.
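The coherency statements can be made concrete. A sketch using an orthonormal DCT basis as a stand-in for the wavelet basis (an assumption for illustration): mutual coherence, the largest normalized inner product between distinct columns, is 0 for the full orthonormal basis but grows once the rows corresponding to lost pixels are cropped away.

```python
import numpy as np

def mutual_coherence(H):
    """Largest normalized inner product between distinct columns of H."""
    Hn = H / np.linalg.norm(H, axis=0)
    G = np.abs(Hn.T @ Hn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Orthonormal DCT-II basis (columns are the basis vectors).
N = 8
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[:, 0] /= np.sqrt(2.0)
print(round(mutual_coherence(D), 6))   # 0.0: orthonormal, fully "decoherent"

# Cropping to the available pixels keeps only some rows: the cropped columns
# live in a lower-dimensional space and can become very "coherent".
crop = D[:2, :]                        # pretend only 2 pixels are available
print(mutual_coherence(crop) > 0.9)    # True: the cropped basis is coherent
```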


What does ID do?

  • Decomposes the big problem into many progressions.
  • Arrives at the final complex problem by solving much simpler problems.
  • ℓ1 minimization is conceptually a single-step, greedy version of ID.

Progression 1:
Progression 2:
...

Uses correctly modeled components to reduce the overwhelming errors/"noise".
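The idea can be sketched end to end. A toy version of the ID recipe (my reconstruction for illustration, not the authors' code), assuming a separable DCT as the denoising transform and an illustrative threshold schedule: denoise the current estimate by hard thresholding in the transform domain, re-impose the available pixels, and lower the threshold on each pass.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth toy "image" (compressible in the DCT domain) with a lost block.
n = 16
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
x = np.cos(2 * np.pi * i / n) + np.cos(2 * np.pi * j / n)
mask = np.ones((n, n), dtype=bool)
mask[6:10, 6:10] = False          # False = lost pixel

est = np.where(mask, x, 0.0)      # start with the lost block zero-filled
for T in np.linspace(1.0, 0.05, 50):     # decreasing threshold schedule
    coef = dctn(est, norm="ortho")
    coef[np.abs(coef) < T] = 0.0         # hard threshold (not soft!)
    est = idctn(coef, norm="ortho")
    est[mask] = x[mask]                  # re-impose the available pixels

err_fill = np.abs(np.where(mask, x, 0.0) - x)[~mask].mean()
err_id = np.abs(est - x)[~mask].mean()
print(err_id < err_fill)          # the iterations should beat the zero fill
```

Each threshold level plays the role of one "progression": the coarse structure recovered at high thresholds stabilizes the harder, finer-scale problems solved at lower ones.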


ID is all about robustly selecting sparsity

  • Tries to be sparse, not the sparsest.
  • Robust to model failures.
  • Other constraints are easy to incorporate.


Conclusion

1. Have to be more agnostic than smoothest, sharpest, smallest, sparsest, *est: minimum MSE is not necessarily the sparsest.

2. Have to be more robust to modeling errors. When a convex approximation to the underlying non-convex problem is possible, great; but have to make sure the assumptions are not violated.

For lasso/ℓ1 fans:

3. Is it still possible to use ℓ1, but with ID principles? Yes.


Pixel recovery via ℓ1 minimization in the wavelet domain

  min ||c||_1  subject to  (available data constraint)
  min ||c||_1  subject to  (available data + further constraints)
  ...

But must ensure no Case 3 problems arise (ID stays away from those).

1. It's not about the lasso or how you tighten the lasso; it's about what (plural) you tighten the lasso to. Do you think you reduced the MSE? No: you shouldn't have done this. Yes: do it again.

2. This is not "LASSO", "LARS", ...; this is Iterated Denoising (use hard thresholding!).

