(1) A probability model respecting those covariance observations: Gaussian
  • Maximum entropy probability distribution for a given covariance observation (shown zero mean for notational convenience):

P(x) = (1/Z) exp( -x^T C^-1 x / 2 )

where x is the vector of image pixels and C^-1 is the inverse covariance matrix.

  • If we rotate coordinates to the Fourier basis, the covariance matrix in that basis will be diagonal. So in that model, each Fourier transform coefficient is an independent Gaussian random variable whose variance is given by the power spectrum at that frequency (a small numerical check of this follows below).
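The diagonalization claim can be checked numerically. The sketch below (an illustration, not from the slides) builds a stationary, circulant covariance matrix for a 1-D signal and rotates it into the Fourier basis with the unitary DFT matrix; the result is diagonal, so the Fourier coefficients are independent Gaussians whose variances form the power spectrum.

```python
import numpy as np

# Stationary (translation-invariant) covariance on a 1-D signal of N "pixels":
# C[i, j] depends only on the circular distance between i and j, so C is circulant.
N = 64
lags = np.minimum(np.arange(N), N - np.arange(N))      # circular distance from pixel 0
row = np.exp(-lags / 5.0)                              # illustrative covariance function
C = np.array([np.roll(row, i) for i in range(N)])      # circulant covariance matrix

# Rotate the covariance into the Fourier basis with the unitary DFT matrix.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
C_fourier = F @ C @ F.conj().T

# The off-diagonal entries vanish (up to round-off): each Fourier coefficient is an
# independent Gaussian whose variance is a diagonal entry, i.e. the power spectrum.
off_diag = C_fourier - np.diag(np.diag(C_fourier))
print("max off-diagonal magnitude:", np.abs(off_diag).max())
print("first few variances (power spectrum):", np.real(np.diag(C_fourier))[:5])
```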

Power spectra of typical images

Experimentally, the power spectrum as a function of Fourier frequency is observed to follow a power law.

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf
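A minimal sketch (not from the slides) of how that observation can be checked: compute the radially averaged power spectrum of a grayscale image and fit a power law on log-log axes. The image here is only a placeholder; with a real photograph the fitted exponent is typically close to 2.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2-D grayscale image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt(xx ** 2 + yy ** 2).astype(int).ravel()
    sums = np.bincount(radius, weights=power.ravel())
    counts = np.bincount(radius)
    spectrum = sums / np.maximum(counts, 1)
    return spectrum[1:min(h, w) // 2]       # skip DC, stop below the corner frequencies

# img should be a real grayscale photograph as a float array; a random image is only a
# stand-in here and shows a flat spectrum instead of a power law.
img = np.random.rand(256, 256)
spec = radial_power_spectrum(img)
freqs = np.arange(1, len(spec) + 1)
slope, _ = np.polyfit(np.log(freqs), np.log(spec), 1)
print("estimated power-law exponent:", -slope)   # near 2 for typical natural images
```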

Random draw from Gaussian spectral model

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf
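A draw of this sort can be reproduced with a short sketch: sample independent complex Gaussian Fourier coefficients whose amplitudes follow an assumed 1/f falloff (i.e. a 1/f^2 power spectrum) and invert the transform. Taking the real part instead of enforcing conjugate symmetry is a common shortcut; both choices are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Frequency magnitude for each 2-D Fourier coefficient.
fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx ** 2 + fy ** 2)
f[0, 0] = 1.0                                   # avoid dividing by zero at DC

# Independent complex Gaussian coefficients with amplitude ~ 1/f
# (an assumed 1/f^2 power spectrum), then back to the pixel domain.
coeffs = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / f
sample = np.real(np.fft.ifft2(coeffs))          # cloud-like texture, no sharp edges
```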

Noise removal (in frequency domain), under Gaussian assumption

Quantities in the model:

  • Y: the observed Fourier component
  • X: the estimated Fourier component
  • a power-law prior probability on the estimated Fourier component
  • the variance of the white, Gaussian additive noise
  • the posterior probability for X

Setting to zero the derivative of the log probability of X gives an analytic form for the optimal estimate of X (or just complete the square); a sketch of that estimate follows.
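The analytic estimate itself was an equation image in the slides. Under the stated assumptions (a zero-mean Gaussian prior whose variance at frequency f is the power-law spectrum S(f), plus white Gaussian noise of variance sigma_n^2), the standard result of that derivation is the per-frequency shrinkage X_hat = Y * S(f) / (S(f) + sigma_n^2). A sketch applying it to an image, with the power-law parameters as illustrative assumptions:

```python
import numpy as np

def denoise_gaussian_spectral(noisy, noise_var, spectrum_gain=1.0, exponent=2.0):
    """Per-frequency MAP (Wiener-style) estimate under a Gaussian power-law prior
    and white Gaussian additive noise: X_hat = Y * S(f) / (S(f) + noise_var).
    spectrum_gain and exponent parameterize the assumed prior S(f) = gain / f**exponent."""
    h, w = noisy.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = np.min(f[f > 0])                 # keep the DC term finite
    S = spectrum_gain / f ** exponent          # assumed prior variance per frequency
    Y = np.fft.fft2(noisy)                     # observed Fourier components
    X_hat = Y * S / (S + noise_var)            # shrink each coefficient toward zero
    return np.real(np.fft.ifft2(X_hat))

# Example matching the slides' noise level (std. dev. 21.4):
# denoised = denoise_gaussian_spectral(noisy_image, noise_var=21.4 ** 2)
```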

Noise removal, under Gaussian assumption

[Figure: original; with Gaussian noise of std. dev. 21.4 added, giving PSNR=22.06; (1) denoised with Gaussian model, PSNR=27.87. (Try to ignore JPEG compression artifacts from the PDF file.)]

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

(2) The wavelet marginal model

Histogram of wavelet coefficients, c, for various images.

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

The model's density over the wavelet coefficient value, c, has two parameters: one determining the peakiness of the distribution and one determining its width (a sketch of this density is given below).
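A common parametric form with exactly these two parameters, used in this line of work, is the generalized Laplacian p(c) proportional to exp(-|c/s|^p). The particular parameter values below are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def wavelet_marginal_log_prob(c, s=10.0, p=0.7):
    """Unnormalized log probability of a wavelet coefficient value c under
    p(c) ~ exp(-|c/s|**p).  s sets the width of the distribution; p sets its
    peakiness (p < 1 is much peakier and heavier-tailed than a Gaussian, p = 2)."""
    return -np.abs(c / s) ** p

# Compared with a Gaussian of similar scale, this density concentrates far more mass
# near zero while keeping heavy tails, matching the measured histograms.
c = np.linspace(-100.0, 100.0, 1001)
log_p = wavelet_marginal_log_prob(c)
```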

Random draw from the wavelet marginal model

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

Bayes theorem

By definition of conditional probability:

P(x, y) = P(x|y) P(y)

Using that twice (once with the roles of x and y swapped):

P(x|y) P(y) = P(x, y) = P(y|x) P(x)

and so

P(x|y) = P(y|x) P(x) / P(y)

Here x is what you want to estimate (the parameters), y is what you observe, P(y|x) is the likelihood function, P(x) is the prior probability, and P(y) is constant with respect to the parameters x.
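A tiny numerical check of the identity (an illustration with a made-up joint table): compute P(x|y) directly from the joint distribution and confirm that it matches P(y|x) P(x) / P(y).

```python
import numpy as np

# Made-up joint distribution P(x, y) over two binary variables (rows: x, columns: y).
P_xy = np.array([[0.10, 0.30],
                 [0.20, 0.40]])

P_x = P_xy.sum(axis=1)              # prior P(x)
P_y = P_xy.sum(axis=0)              # evidence P(y), constant w.r.t. x
P_x_given_y = P_xy / P_y            # definition: P(x|y) = P(x, y) / P(y)
P_y_given_x = P_xy / P_x[:, None]   # likelihood: P(y|x) = P(x, y) / P(x)

# Bayes theorem: P(x|y) = P(y|x) P(x) / P(y)
via_bayes = P_y_given_x * P_x[:, None] / P_y
print(np.allclose(P_x_given_y, via_bayes))   # True
```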

Bayesian MAP estimator for clean bandpass coefficient values

Let x = bandpassed image value before adding noise.

Let y = noise-corrupted observation.

By Bayes theorem:

P(x|y) = k P(y|x) P(x)

[Figure: prior P(x), likelihood P(y|x), and posterior P(x|y) for an observation y = 25]
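A grid-search sketch of this MAP estimate. The Gaussian noise standard deviation and the heavy-tailed prior parameters below are illustrative assumptions, not the slides' exact settings.

```python
import numpy as np

def map_estimate(y, noise_std=20.0, s=10.0, p=0.7):
    """MAP estimate of the clean bandpass coefficient x from observation y,
    assuming Gaussian noise and the heavy-tailed prior p(x) ~ exp(-|x/s|**p)."""
    x = np.linspace(-300.0, 300.0, 6001)                 # grid of candidate clean values
    log_likelihood = -0.5 * ((y - x) / noise_std) ** 2   # log P(y|x) + const
    log_prior = -np.abs(x / s) ** p                      # log P(x) + const
    return x[np.argmax(log_likelihood + log_prior)]      # argmax of log P(x|y)

print(map_estimate(25.0))   # a small observation is pulled strongly toward 0
```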

Bayesian MAP estimator

Let x = bandpassed image value before adding noise.

Let y = noise-corrupted observation.

By Bayes theorem:

P(x|y) = k P(y|x) P(x)

[Figure: likelihood P(y|x) and posterior P(x|y) for an observation y = 50]

Bayesian MAP estimator

Let x = bandpassed image value before adding noise.

Let y = noise-corrupted observation.

By Bayes theorem:

P(x|y) = k P(y|x) P(x)

[Figure: likelihood P(y|x) and posterior P(x|y) for an observation y = 115]

[Figure: prior P(x), likelihood P(y|x), and posterior P(x|y) compared for observations y = 25 and y = 115]

For small y: it is probably due to noise, and the estimate should be set to 0.

For large y: it is probably due to an image edge, and it should be kept untouched.

MAP estimate, x̂(y), as a function of the observed coefficient value, y

Simoncelli and Adelson, Noise Removal via Bayesian Wavelet Coring

http://www-bcs.mit.edu/people/adelson/pub_pdfs/simoncelli_noise.pdf
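Sweeping the same grid-search MAP computation over a range of observations traces the coring curve in that figure: small y values are shrunk to (near) zero, large ones pass through almost unchanged. The parameter values remain illustrative assumptions.

```python
import numpy as np

# Coring curve: MAP estimate x_hat as a function of the observed coefficient value y,
# with a Gaussian likelihood and the heavy-tailed prior sketched earlier.
noise_std, s, p = 20.0, 10.0, 0.7
x = np.linspace(-300.0, 300.0, 6001)                    # candidate clean values
ys = np.linspace(0.0, 150.0, 151)                       # observations to sweep

log_prior = -np.abs(x / s) ** p
log_post = -0.5 * ((ys[:, None] - x[None, :]) / noise_std) ** 2 + log_prior
x_hat = x[np.argmax(log_post, axis=1)]                  # MAP estimate for every y

# Small observations are cored out; large ones are kept nearly untouched.
print(x_hat[ys == 25.0], x_hat[ys == 115.0])
```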

[Figure: original; with Gaussian noise of std. dev. 21.4 added, giving PSNR=22.06; (1) denoised with Gaussian model, PSNR=27.87; (2) denoised with wavelet marginal model, PSNR=29.24]

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

M. F. Tappen, B. C. Russell, and W. T. Freeman, "Efficient graphical models for processing images," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Washington, DC, 2004.
Motivation for wavelet joint models

Note correlations between the amplitudes of each wavelet subband.

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

Statistics of pairs of wavelet coefficients

Contour plots of the joint histogram of various wavelet coefficient pairs

Conditional distributions of the corresponding wavelet pairs

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf

(3) Gaussian scale mixtures

[Figure: Gaussian scale mixture model simulation vs. observed wavelet coefficient histograms]

The wavelet coefficient probability is modeled as a mixture of Gaussians of scaled covariances.

z is a spatially varying hidden variable that can be used to

(a) create the non-Gaussian histograms from a mixture of Gaussian densities, and (b) model correlations between the neighboring wavelet coefficients.
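A minimal sampling sketch of the mixture idea: draw x = sqrt(z) * u with u Gaussian and z a positive hidden multiplier, then check that the result is far more kurtotic than a Gaussian. The log-normal distribution chosen for z here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Gaussian scale mixture: x = sqrt(z) * u with u ~ N(0, 1) and z a positive hidden
# multiplier.  Mixing Gaussians of scaled variances produces the peaky, heavy-tailed
# histograms observed for wavelet coefficients.
z = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # illustrative choice of multiplier density
u = rng.standard_normal(n)
x = np.sqrt(z) * u

# Excess kurtosis well above 0 confirms the non-Gaussian shape; sharing one z across a
# neighborhood of coefficients also models correlations between neighboring amplitudes.
excess_kurtosis = np.mean((x - x.mean()) ** 4) / x.var() ** 2 - 3.0
print("excess kurtosis:", excess_kurtosis)
```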

[Figure: original; with Gaussian noise of std. dev. 21.4 added, giving PSNR=22.06; (1) denoised with Gaussian model, PSNR=27.87; (2) denoised with wavelet marginal model, PSNR=29.24; (3) denoised with Gaussian scale mixture model, PSNR=30.86]

http://www.cns.nyu.edu/pub/eero/simoncelli05a-preprint.pdf
