Optimizing and Learning for Super-resolution - PowerPoint PPT Presentation
Presentation Transcript

Optimizing and Learning for Super-resolution

Lyndsey C. Pickup, Stephen J. Roberts & Andrew Zisserman

Robotics Research Group, University of Oxford

The Super-resolution Problem

Given a number of low-resolution images differing in:

  • geometric transformations
  • lighting (photometric) transformations
  • camera blur (point-spread function)
  • image quantization and noise.

Estimate a high-resolution image.

Generative Model

[Diagram: the high-resolution image x is mapped through W1, W2, W3 and W4 to the low-resolution images y1, y2, y3 and y4; the mappings W comprise registrations, lighting and blur.]

Generative Model

We have:

  • Set of low-resolution input images, y.

We don't have:

  • Geometric registrations
  • Point-spread function
  • Photometric registrations
Maximum a Posteriori (MAP) Solution

[Diagram: the SR image x linked through mappings W1, W2, W3, W4 to the low-resolution images y1, y2, y3, y4.]

  • Standard method:
    • Compute registrations from low-res images.
    • Solve for SR image, x, using gradient descent.

[Irani & Peleg ‘90, Capel ’01, Baker & Kanade ’02, Borman ‘04]
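The standard method can be sketched as a toy 1-D gradient-descent solver. Everything below is an illustrative assumption: the function name, the simple subsampling matrices standing in for the W_k, and a quadratic smoothness prior in place of the Huber prior the talk actually uses.

```python
import numpy as np

def map_super_resolve(ys, Ws, lam=0.01, step=0.1, iters=500):
    """Gradient descent on a toy MAP objective:
       sum_k ||y_k - W_k x||^2 + lam * ||diff(x)||^2
    (quadratic prior for simplicity; the talk uses a Huber prior)."""
    n = Ws[0].shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        # data-term gradient: sum_k 2 W_k^T (W_k x - y_k)
        g = sum(2.0 * W.T @ (W @ x - y) for W, y in zip(Ws, ys))
        # prior gradient: 2 lam D^T D x, with D a first-difference operator
        d = np.diff(x)
        g[:-1] -= 2.0 * lam * d
        g[1:] += 2.0 * lam * d
        x -= step * g
    return x
```

With two complementary subsamplings of a signal, the solver recovers the high-res signal up to a small prior-induced bias.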

What's new

[Diagram: the SR image x linked through mappings W1, W2, W3, W4 to the low-resolution images y1, y2, y3, y4.]
  • We solve for registrations and SR image jointly.
  • We also find appropriate values for parameters in the prior term at the same time.

  • Hardie et al. ’97: adjust registration while optimizing super-resolution estimate.
      • Exhaustive search limits them to translation only.
      • Simple smoothness prior softens image edges.

i.e. given the low-res images, y, we solve for the SR image x and the mappings, W, simultaneously.
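In symbols, the joint estimate can be written as follows (the notation is assumed here, not taken verbatim from the slides):

```latex
\hat{\mathbf{x}}, \hat{\mathbf{W}}
  = \arg\max_{\mathbf{x},\, \mathbf{W}}
    \; p(\mathbf{x}) \prod_{k} p(\mathbf{y}_k \mid \mathbf{x}, \mathbf{W}_k)
```

where the prior p(x) carries the parameters that are also adapted during the optimization.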

Overview of rest of talk
  • Simultaneous Approach
    • Model details
    • Initialisation technique
    • Optimization loop
  • Learning values for the prior’s parameters
  • Results on real images
Maximum a Posteriori (MAP) Solution

[Diagram: each low-resolution image y is generated from the image x by a warp with parameters Φ, blur by the point-spread function, decimation by the zoom factor, and corruption with additive Gaussian noise.]
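The generation chain (warp, blur, decimate, add noise) can be sketched in 1-D. The shift, kernel and zoom values below are illustrative assumptions, not the talk's settings, and the integer-pixel warp is a crude stand-in for a general geometric registration.

```python
import numpy as np

def generate_low_res(x, shift, psf, zoom, noise_std, rng):
    """Simulate one low-res frame from high-res x:
    warp -> blur by PSF -> decimate by zoom factor -> add Gaussian noise."""
    warped = np.roll(x, shift)                        # crude integer-pixel warp
    blurred = np.convolve(warped, psf, mode="same")   # blur by point-spread function
    decimated = blurred[::zoom]                       # decimate by zoom factor
    return decimated + rng.normal(0.0, noise_std, decimated.shape)
```

For example, a 32-sample signal with zoom factor 2 yields a 16-sample low-res frame.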

Details of Huber Prior

The Huber function ρ(z,α) is quadratic in the middle and linear in the tails. The corresponding probability distribution p(z|α,v) is like a heavy-tailed Gaussian.

[Plot: ρ(z,α) and p(z|α,v); red curves show large α, blue curves small α.]

This is applied to image gradients in the SR image estimate.
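One common parameterization of the Huber function, quadratic inside |z| ≤ α and linear beyond, is sketched below; the slides state only its qualitative shape, so the exact constants here are an assumption.

```python
import numpy as np

def huber(z, alpha):
    """Huber penalty rho(z, alpha): quadratic for |z| <= alpha, linear beyond,
    with value and slope continuous at |z| = alpha."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= alpha,
                    z ** 2,
                    2.0 * alpha * np.abs(z) - alpha ** 2)
```

The linear tails are what make the prior edge-preserving: large image gradients are penalized far less than under a Gaussian (purely quadratic) prior.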

Details of Huber Prior

Advantages: simple, edge-preserving, leads to a convex form for the MAP equations.

[Figure: solutions as α and v vary, compared with the ground truth: α=0.05, v=0.05; α=0.01, v=0.01; α=0.01, v=0.005; α=0.1, v=0.4. The panels illustrate sharper edges, too much smoothing, and too little smoothing at different settings.]

Advantages of Simultaneous Approach
  • Learn from lessons of Bundle Adjustment: improve results by optimizing the scene estimate and the registration together.
  • Registration can be guided by the super-resolution model, not by errors measured between warped, noisy low-resolution images.
  • Use a non-Gaussian prior which helps to preserve edges in the super-resolution image.
Overview of Simultaneous Approach
  • Start from a feature-based RANSAC-like registration between low-res frames.
  • Select blur kernel, then use average image method to initialise registrations and SR image.
  • Iterative loop:
    • Update prior values
    • Update SR estimate
    • Update registration estimate
Initialisation

  • Use average image as an estimate of the super-resolution image (see paper).
  • Minimize the error between the average image and the low-resolution image set.
  • Use an early-stopped iterative ML estimate of the SR image to sharpen up this initial estimate.

[Figure: average image vs. ML-sharpened estimate.]
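The average-image step can be sketched in 1-D, assuming known integer-pixel shifts: un-warp each low-res frame onto the high-res grid and average the samples that land on each pixel. The function below is an illustrative stand-in for the method described in the paper, not its implementation.

```python
import numpy as np

def average_image(ys, shifts, zoom):
    """Initialise the SR estimate by placing each low-res frame's samples
    onto the high-res grid (assuming integer shifts) and averaging."""
    n = len(ys[0]) * zoom
    acc = np.zeros(n)
    count = np.zeros(n)
    for y, s in zip(ys, shifts):
        idx = np.arange(len(y)) * zoom + s   # where each LR sample lands
        idx = np.clip(idx, 0, n - 1)
        np.add.at(acc, idx, y)               # accumulate samples per HR pixel
        np.add.at(count, idx, 1.0)
    count[count == 0] = 1.0                  # avoid divide-by-zero in gaps
    return acc / count
```

Two frames offset by one pixel at zoom 2 fill the whole high-res grid between them.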

Optimization Loop
  • Update prior’s parameter values (next section)
  • Update estimate of SR image
  • Update estimate of registration and lighting values, which parameterize W
  • Repeat till converged.
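Structurally, the loop alternates between the image estimate and the mapping parameters. A toy illustration, where each W_k is reduced to a single lighting gain w_k so that both updates are closed-form least squares; this is a sketch of the alternation only, not the talk's solver, which also updates geometric registrations and the prior's parameters.

```python
import numpy as np

def joint_estimate(ys, iters=50):
    """Alternate between updating the image estimate x and per-frame
    lighting gains w_k (toy stand-ins for the full mappings W_k)."""
    x = np.mean(ys, axis=0)        # average-image initialisation
    ws = np.ones(len(ys))
    for _ in range(iters):
        # update x given the gains: minimise sum_k ||y_k - w_k x||^2
        x = sum(w * y for w, y in zip(ws, ys)) / np.sum(ws ** 2)
        # update each gain given x
        ws = np.array([y @ x / (x @ x) for y in ys])
    return x, ws
```

Note that x and the gains are only determined up to a shared scale, so the check is that each product w_k * x reproduces the corresponding frame.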
Joint MAP Results

[Figure: results with registration fixed vs. joint MAP, shown with decreasing prior strength.]
Learning Prior Parameters α, ν

  • Split the low-res images into two sets: use the first set to obtain an SR image, then find the error on the validation set.
  • But what if one of the validation images is mis-registered?
Learning Prior Parameters α, ν

  • Instead, we select pixels from across all images, choosing differently at each iteration.
  • We evaluate an SR estimate using the unmarked pixels, then use the forward model to compare the estimate to the held-out (red) pixels.
Learning Prior Parameters α, ν

  • To update the prior parameters:
    • Re-select a cross-validation pixel set.
    • Run the super-resolution image MAP solver for a small number of iterations, starting from the current SR estimate.
    • Predict the low-resolution pixels of the validation set, and measure the error.
    • Use gradient descent to minimise the error with respect to the prior parameters.
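The update above can be sketched on a toy 1-D denoising analogue of the problem: hold out a random pixel set, fit with each candidate prior strength on the rest, and score by predicting the held-out pixels. Grid search over `lams` stands in for the talk's gradient descent on α and ν; all names and values are illustrative.

```python
import numpy as np

def fit_smooth(y, mask, lam, iters=400, step=0.1):
    """Fit x to the observed pixels (mask True) under a quadratic
    smoothness prior; held-out pixels are filled in by the prior alone."""
    x = np.full_like(y, y[mask].mean())   # init without peeking at held-out pixels
    for _ in range(iters):
        g = 2.0 * mask * (x - y)          # data term on observed pixels only
        d = np.diff(x)
        g[:-1] -= 2.0 * lam * d           # gradient of lam * sum(diff(x)^2)
        g[1:] += 2.0 * lam * d
        x -= step * g
    return x

def choose_lam(y, lams, holdout_frac=0.3, seed=0):
    """Pick the prior strength whose fit best predicts the held-out pixels."""
    rng = np.random.default_rng(seed)
    held = rng.random(y.shape) < holdout_frac   # re-selected validation pixels
    mask = ~held
    errs = [np.mean((fit_smooth(y, mask, lam)[held] - y[held]) ** 2)
            for lam in lams]
    return lams[int(np.argmin(errs))]
```

On a smooth signal, a non-zero prior strength predicts held-out pixels far better than no smoothing at all, which is the signal the parameter update follows.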
Results: Eye Chart

[Figure: MAP version (fixing registrations, then super-resolving) vs. joint MAP version with adaptation of the prior's parameter values.]
Results: Groundhog Day

  • The blur estimate can still be altered to change the SR result. More ringing and artefacts appear in the regular MAP version.

[Figure: regular MAP vs. simultaneous results, for blur radius 1, 1.4 and 1.8.]
Conclusions
  • Joint MAP solution: better results by optimizing SR image and registration parameters simultaneously.
  • Learning prior values: preserve image edges without having to estimate image statistics in advance.
  • DVDs: Automatically zoom in on regions, with registrations up to a projective transform and an affine lighting model.
  • Further work: marginalize over the registration – see NIPS 2006.