
Deducing Local Influence Neighbourhoods in Images Using Graph Cuts

This paper proposes a new image structure, the local influence neighbourhood (LIN), that allows probing the intermediate structure of local features in images. LINs were developed initially for image processing tasks such as denoising and interpolation, but they have wide applications as local image features.


Presentation Transcript


  1. Deducing Local Influence Neighbourhoods in Images Using Graph Cuts Ashish Raj, Karl Young and Kailash Thakur Assistant Professor of Radiology, University of California at San Francisco, and Center for Imaging of Neurodegenerative Diseases (CIND), San Francisco VA Medical Center email: ashish.raj@ucsf.edu Webpage: http://www.cs.cornell.edu/~rdz/SENSE.htm http://www.vacind.org/faculty

  2. San Francisco, CA

  3. Overview • We propose a new image structure called local influence neighbourhoods (LINs) • LINs are locally adaptive neighbourhoods around every voxel in an image • Like “superpixels” • The idea of a LIN is not new, but this is the first principled cost-minimization approach • LINs thus allow us to probe the intermediate structure of local features at various scales • LINs were developed initially to address image processing tasks like denoising and interpolation • But as local image features they have wide applications

  4. Local neighbourhoods as intermediate image structures • Pixel-level (low level): too cumbersome, computationally expensive, not suited for pattern recognition • Region-level (high level): prone to error propagation, but great for graph theoretic and pattern recognition • Neighbourhood-level: a good intermediary between the low and high levels?

  5. Outline • Intro to Local Influence Neighbourhoods • How to compute LINs? • Use GRAPH CUT energy minimization • Some examples of LINs in image filtering and denoising • Other Applications: • Segmentation • Using LINs for Fractal Dimension estimation • Use as features for tracking, registration

  6. Local Influence Neighbourhoods • A local neighbourhood around a voxel (x0, y0) is the set of voxels “close” to it • closeness in geometric space • closeness in intensity • First attempt: use a “space-intensity box” • The definitions of the spatial and intensity thresholds (e, d) are arbitrary • Produces disjoint, non-contiguous, “holey”, noisy neighbourhoods! • Need to introduce prior expectations about contiguity • We develop a principled probabilistic approach, using likelihood and prior distributions
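To make the “space-intensity box” concrete, here is a minimal NumPy sketch; the threshold values eps (pixels) and delta (intensity units) are hypothetical stand-ins for the arbitrary thresholds mentioned on the slide.

    import numpy as np

    def box_neighbourhood(img, x0, y0, eps=3, delta=20):
        # Naive space-intensity box: voxels within eps pixels of (x0, y0)
        # whose intensity is within delta of img[x0, y0]. As the slide notes,
        # on noisy images this gives disjoint, "holey" neighbourhoods.
        xs, ys = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
        close_in_space = (np.abs(xs - x0) <= eps) & (np.abs(ys - y0) <= eps)
        close_in_intensity = np.abs(img.astype(float) - float(img[x0, y0])) <= delta
        return close_in_space & close_in_intensity  # boolean mask over the image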

  7. Example: Binary image denoising (figure: input image and output image) • Suppose we receive a noisy fax: • Some black pixels in the original image were flipped to white, and some white pixels were flipped to black • We want to recover the original

  8. Problem constraints (figure: original image, a bad labeling violating constraint 1, a bad labeling violating constraint 2, a good labeling) • Our constraints: • If a pixel is black (white) in the original image, it is more likely to get the black (white) label (the likelihood) • Black-labeled pixels tend to group together, and white-labeled pixels tend to group together (the prior)
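As a sketch of how these two constraints become an energy over binary labels x (1 = black, 0 = white), assuming a simple agreement data term and a Potts smoothness term with an illustrative weight lam:

    import numpy as np

    def denoising_energy(x, observed, lam=0.5):
        # Likelihood term: penalize labels that disagree with the noisy observation.
        data = np.sum(x != observed)
        # Prior term: Potts penalty for each 4-connected neighbour pair with different labels.
        smooth = np.sum(x[1:, :] != x[:-1, :]) + np.sum(x[:, 1:] != x[:, :-1])
        return data + lam * smooth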

  9. Example of box vs. smoothness

  10. Example of box vs. smoothness

  11. A Better neighbourhood criterion • Incorporate closeness, contiguity and smoothness assumptions • Set up as a minimization problem • Solve using everyone’s favourite minimization algorithm • Simulated Annealing • (just kidding) - Graph Cuts! • A) Closeness: let’s assume neighbourhoods follow Gaussian shapes around a voxel

  12. A) Closeness criterion in action

  13. B) Contiguity and smoothness • Define a binary field Fp around voxel p s.t. 0 means a voxel is not in the LIN and 1 means it is in the LIN • Contiguity/smoothness is encoded via penalty terms between all neighbouring voxel pairs p, q: G(x) = Σ_{p,q} V(xp, xq), where V(xp, xq) is a distance metric • Bayesian interpretation: together with the closeness term, this is the log-prior for LINs
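Putting A) and B) together, a sketch of what the per-voxel LIN energy might look like: a Gaussian closeness cost for every voxel included in the binary field F, a constant cost for leaving a voxel out, and a Potts contiguity penalty on neighbouring pairs. The scales sx, sn, the exclusion cost and the weight lam are placeholder values, not the authors' settings.

    import numpy as np

    def lin_energy(F, img, p, sx=3.0, sn=20.0, exclusion_cost=1.0, lam=1.0):
        # F is a binary field over the image: 1 = voxel is in the LIN of p, 0 = not.
        xs, ys = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
        d_space = ((xs - p[0]) ** 2 + (ys - p[1]) ** 2) / (2 * sx ** 2)
        d_inten = (img.astype(float) - float(img[p])) ** 2 / (2 * sn ** 2)
        # Closeness: cheap to include voxels that are near p in space and intensity.
        closeness = np.sum(np.where(F == 1, d_space + d_inten, exclusion_cost))
        # Contiguity/smoothness: Potts penalty V(Fq, Fr) = [Fq != Fr] on neighbour pairs.
        smooth = np.sum(F[1:, :] != F[:-1, :]) + np.sum(F[:, 1:] != F[:, :-1])
        return closeness + lam * smooth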

  14. Markov Random Field Priors • Impose spatial coherence (neighbouring pixels p, q are similar): G(x) = Σ_{p,q} V(xp, xq) • V(xp, xq) = distance metric • The potential function is discontinuous, non-convex • The Potts metric is GOOD but very hard to minimize

  15. Bottom line • Maximizing the LIN prior corresponds to minimizing E(x) = Ecloseness(x) + Esmoothness(x) • MRF priors encode general spatial coherence properties of images • E(x) can be minimized using ANY available minimization algorithm • Graph Cuts can speedily solve cost functions involving MRFs, sometimes with a guaranteed global optimum.

  16. Graph Cut based Energy Minimization

  17. How to minimize E? • Graph cuts have proven to be a very powerful tool for minimizing energy functions like this one • First developed for stereo matching • Most of the top-performing algorithms for stereo rely on graph cuts • The method builds a graph whose nodes are image pixels, and whose edges have weights obtained from the energy terms in E(x) • Minimization of E(x) is then reduced to finding the minimum cut of this graph

  18. Minimum cut problem (figure: a graph with two terminals, source S and sink T, and a cut C) • Mincut/maxflow problem: • Find the cheapest way to cut the edges so that the “source” is separated from the “sink” • Cut edges going from the source side to the sink side • Edge weights now represent cutting “costs”
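A toy illustration of solving such a two-terminal mincut with networkx; the nodes and capacities below are made up purely to show the call.

    import networkx as nx

    G = nx.DiGraph()
    # t-link style edges from the source "s" and to the sink "t"
    G.add_edge("s", "p", capacity=4.0)
    G.add_edge("s", "q", capacity=1.0)
    G.add_edge("p", "t", capacity=2.0)
    G.add_edge("q", "t", capacity=5.0)
    # an n-link style edge between the two pixel nodes, in both directions
    G.add_edge("p", "q", capacity=1.5)
    G.add_edge("q", "p", capacity=1.5)

    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    print(cut_value, source_side, sink_side)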

  19. Graph construction • Links correspond to terms in the energy function • Single-pixel terms are called t-links • Pixel-pair terms are called n-links • A mincut is equivalent to a binary segmentation • i.e., the mincut minimizes a binary energy function
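A sketch of this construction for the earlier binary-denoising example, again with networkx: t-link capacities come from the data term, n-link capacities from the Potts term, and the labeling is read off the mincut partition. The capacities are illustrative, not the entries of Table 1.

    import networkx as nx
    import numpy as np

    def binary_mincut_labeling(observed, lam=0.5):
        H, W = observed.shape
        G = nx.DiGraph()
        for i in range(H):
            for j in range(W):
                p = (i, j)
                # t-links: capacity of s->p is the cost of labeling p as 0,
                # capacity of p->t is the cost of labeling p as 1.
                G.add_edge("s", p, capacity=float(observed[i, j] == 1))
                G.add_edge(p, "t", capacity=float(observed[i, j] == 0))
                # n-links: Potts penalty between 4-connected neighbours.
                for q in [(i + 1, j), (i, j + 1)]:
                    if q[0] < H and q[1] < W:
                        G.add_edge(p, q, capacity=lam)
                        G.add_edge(q, p, capacity=lam)
        _, (source_side, _) = nx.minimum_cut(G, "s", "t")
        labels = np.zeros((H, W), dtype=int)
        for node in source_side:
            if node != "s":
                labels[node] = 1   # nodes left with the source get label 1 (black)
        return labels

For a fax image corrupted by a few flipped pixels, binary_mincut_labeling(noisy) flips isolated disagreements back; raising lam smooths more aggressively.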

  20. Table 1: Edge costs of the induced graph (t-links to the source s and the sink t; n-links between neighbouring pixels)

  21. Graph Algorithm • Repeat graph mincut for each voxel p
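Slide 21 compresses the procedure into one line; a hedged sketch of that outer loop, where lin_for_voxel stands in for whatever per-voxel graph-cut solver is used:

    def all_lins(img, lin_for_voxel):
        # Run the graph mincut once per voxel p and collect the binary LIN masks.
        # lin_for_voxel(img, p) is assumed to return a boolean mask of p's LIN.
        lins = {}
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                lins[(i, j)] = lin_for_voxel(img, (i, j))
        return lins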

  22. Examples of Detected LINs

  23. Results: Most Popular LINs

  24. Filtering with LINs • Use LINs to restrict the effect of a filter • Convolutional filters: • Rank order filter:
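As an example of the idea, a LIN-restricted median filter: the statistic at each voxel is taken over the voxels of its LIN rather than over a fixed box. Here lins is assumed to map each voxel to a boolean mask, as in the loop sketched earlier.

    import numpy as np

    def lin_median_filter(img, lins):
        out = np.empty(img.shape, dtype=float)
        for (i, j), mask in lins.items():
            # Support of the filter at (i, j) is the LIN mask instead of a box window.
            values = img[mask] if mask.any() else img[i:i + 1, j:j + 1]
            out[i, j] = np.median(values)
        return out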

  25. Maximum filter using LINs

  26. Median filter using LINs

  27. EM-style Denoising algorithm • Noise model: O = I + n • Likelihood for i.i.d. Gaussian noise • Image prior • Maximize the posterior
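The formulas themselves appear only as images on the slide; for i.i.d. Gaussian noise under the stated model O = I + n, the standard negative log-likelihood is ||O - I||^2 / (2 sigma^2) up to an additive constant, which in code reads:

    import numpy as np

    def neg_log_likelihood(O, I, sigma):
        # -log Pr(O | I) for i.i.d. Gaussian noise n with std sigma, up to a constant.
        return np.sum((np.asarray(O, dtype=float) - np.asarray(I, dtype=float)) ** 2) / (2 * sigma ** 2)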

  28. Bayesian (Maximum a Posteriori) Estimate • Bayes theorem: Pr(x|y) = Pr(y|x) · Pr(x) / Pr(y) • Here x is the LIN, y is the observed image • Bayesian methods maximize the posterior probability: Pr(x|y) ∝ Pr(y|x) · Pr(x), i.e. posterior ∝ likelihood × prior

  29. EM-style image denoising • Joint maximization is challenging • We propose an EM-style approach: • Start with an initial estimate • Iterate the update steps • We show that … (the initialisation, update and convergence equations appear only as figures on the slide)
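The update equations are not in this transcript; the loop below is only a generic alternating scheme consistent with the description (estimate LINs with the image held fixed, then re-estimate the image by LIN-restricted filtering), not the authors' exact updates.

    def em_style_denoise(observed, compute_lins, lin_filter, n_iter=5):
        image = observed.copy()                  # start from the observed image
        for _ in range(n_iter):
            lins = compute_lins(image)           # e.g. the per-voxel graph-cut loop above
            image = lin_filter(observed, lins)   # e.g. the LIN median filter above
        return image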

  30. Results: LIN-based Image Denoising

  31. Results: Bike image

  32. Table 1: Denoising Results

  33. Other Applications of LINs • LINs can be used to probe the scale-space of image data by varying the scale parameters sx and sn • Measuring fractal dimensions of brain images • Hierarchical segmentation – “superpixel” concept • Use LINs as feature vectors for image registration • Object recognition • Tracking

  34. Hierarchical segmentation • Begin with LINs at fine scale • Hierarchically fuse finer LINs to obtain coarser LINs → segmentation
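A rough sketch of the fusion step under one hypothetical rule (merge two fine-scale LINs when they overlap substantially); the actual fusion criterion is not spelled out in the transcript.

    import numpy as np

    def fuse_lins(lins, min_overlap=0.5):
        # Greedily merge fine-scale LIN masks whose overlap exceeds min_overlap
        # (relative to the smaller mask), yielding coarser regions.
        regions = [np.asarray(m, dtype=bool) for m in lins.values()]
        merged = True
        while merged:
            merged = False
            for a in range(len(regions)):
                for b in range(a + 1, len(regions)):
                    overlap = np.logical_and(regions[a], regions[b]).sum()
                    smaller = min(regions[a].sum(), regions[b].sum())
                    if smaller and overlap / smaller >= min_overlap:
                        regions[a] = np.logical_or(regions[a], regions[b])
                        del regions[b]
                        merged = True
                        break
                if merged:
                    break
        return regions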

  35. How to measure Fractal Dimension using LINs? • How LINs vary with changing sx and sn depends on local image complexity • Fractal dimension is a stable measure of the complexity of multidimensional structures • Thus LINs can be used to probe the multi-scale structure of image data

  36. FD using LINs (figure: plot of ln N vs. ln sx, with critical points CP1 and CP2 marking phase transitions) • For each voxel p, for each value of sx, sn: count the number N of voxels included in Bp • Slope of each segment = local fractal dimension • Extend to the (sx, sn) plane
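A sketch of the slope estimate described above: record the LIN size N at several spatial scales sx and regress ln N on ln sx. Here lin_size(img, p, sx, sn) is a hypothetical helper returning the voxel count of the LIN at that scale, and a single least-squares fit replaces the piecewise fit between critical points.

    import numpy as np

    def local_fractal_dimension(img, p, sx_values, sn, lin_size):
        log_sx = np.log(np.asarray(sx_values, dtype=float))
        log_N = np.log([lin_size(img, p, sx, sn) for sx in sx_values])
        slope, intercept = np.polyfit(log_sx, log_N, 1)
        return slope   # local fractal dimension estimate at voxel p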

  37. Possible advantages of LIN over current techniques • LINs provide FD for each voxel • Captures the FD of local regions as well as global • Ideal for directional structures and oriented features at various scales • Far less susceptible to noise • (due to explicit intensity scale sn which can be tuned to the noise level) • Enables the probing of phase transitions

  38. Possible Discriminators of Neurodegeneration • Fractal measures may provide better discriminators of neurodegeneration (Alzheimer’s Disease, Frontotemporal Dementia, Mild Cognitive Disorder, Normal Aging, etc) • Possibilities: • Mean (overall) FD -- D(0) • Critical points, phase transitions in the (sx, sn) plane • More general Renyi dimensions D(q) for q ≥ 1 • Summary image feature f(α) ↔ D(q) • Phase transitions in f(α) • Fractal structures can be characterized by the dimensions D(q), the summary f(α) and various associated critical points • These quantities may be efficiently probed by the Graph Cut-based local influence neighbourhoods • These fractal quantities may provide greater discriminability between normal, AD, FTD, etc.

  39. Summary • We proposed a general method of estimating local influence neighbourhoods • Based on an “optimal” energy minimization approach • LINs are intermediaries between purely pixel-based and region-based methods • Applications include segmentation, denoising, filtering, recognition, fractal dimension estimation, … • … in other words, Best Thing Since Sliced Bread

  40. Deducing Local Influence Neighbourhoods in Images Using Graph Cuts Ashish Raj CIND, UCSF email: ashish.raj@ucsf.edu Webpage: http://www.cs.cornell.edu/~rdz/SENSE.htm http://www.vacind.org/faculty
