
Jul 21, 2014 Jason Su


Presentation Transcript


  1. Jul 21, 2014 Jason Su

  2. Motivation • Visualization of multiple image modalities or contrasts is difficult • Side-by-side comparisons are often not precise • Flipping back and forth helps accentuate changes, but some modalities may have no structural landmarks • Beginning to collect time-resolved “MP-nRAGE” with view-sharing methods • What is the best way to visualize such data, especially for thalamic nuclei? • Can we do something other than fitting a T1 map?

  3. Goal: Image Fusion • The combination of multiple images into one while preserving the important information from each • Common examples: • fMRI overlays on structural images • Segmentation overlays • Nuclear medicine overlays • HDR photography • Compared to quantitative imaging, the goal is to achieve a pleasing effect to the eye instead of fitting to a model • Thus there are many possible algorithms and no necessarily “correct” way to do things

  4. Background: Types of data fusion • Signal level, pixel level • Image fusion, e.g. averaging, SOS, MIP • Region-based: consider neighborhood around current pixel • Feature level • Label fusion segmentation: combine multiple candidate labels to identify features • Decision level • Image biomarkers

  5. Gaussian and Laplacian Pyramid • Pyramids are multiresolution decompositions of images • Each level is subsampled by a factor of 2, i.e. each level is an octave • GP: Successively blurred and downsampled versions of the image • Gives scale of features in the image • LP: take differences between Gaussian levels • Gives information about edges of varying widths
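The pyramid construction above can be sketched in a few lines of NumPy (a minimal illustration, not any particular paper's code: a 2x2 box average stands in for the usual Gaussian kernel, and even, power-of-two image dimensions are assumed):

```python
import numpy as np

def gaussian_pyramid(img, levels=3):
    """Gaussian pyramid: each level is blurred and downsampled by 2
    (one octave). A 2x2 box average stands in for the Gaussian blur."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        cur = pyr[-1]
        # average disjoint 2x2 blocks: blur + subsample in one step
        down = (cur[0::2, 0::2] + cur[1::2, 0::2] +
                cur[0::2, 1::2] + cur[1::2, 1::2]) / 4.0
        pyr.append(down)
    return pyr

def laplacian_pyramid(img, levels=3):
    """Laplacian pyramid: differences between successive Gaussian
    levels (band-pass edge detail), plus the coarsest Gaussian level."""
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        # nearest-neighbour upsample of the next-coarser level
        up = np.kron(gp[i + 1], np.ones((2, 2)))
        lp.append(gp[i] - up)
    lp.append(gp[-1])  # residual lowpass thumbnail
    return lp
```

Adding each Laplacian level back to the upsampled next-coarser level recovers the original image exactly, whatever blur is used, which is what makes the pyramid a usable fusion domain.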

  6. ROLP/Contrast Pyramid • Ratio of low-pass pyramid: take ratios between Gaussian levels • Contrast = (L-Lb)/Lb • R = L/Lb = C + 1
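The ROLP relationship R = L/Lb = C + 1 can be made concrete with a small sketch (the function name and the `eps` guard against division by zero are assumptions; the input is any list of Gaussian-pyramid levels):

```python
import numpy as np

def contrast_pyramid(gp, eps=1e-6):
    """Ratio-of-low-pass (ROLP) pyramid: each level is the ratio
    R = L/Lb of a Gaussian level L to the upsampled next-coarser
    level Lb, so R = C + 1 where C = (L - Lb)/Lb is local contrast."""
    cp = []
    for i in range(len(gp) - 1):
        # upsample the coarser level to match the finer one
        lb = np.kron(gp[i + 1], np.ones((2, 2)))
        cp.append(gp[i] / (lb + eps))
    cp.append(gp[-1])  # coarsest level is kept as-is
    return cp
```

Where the finer level adds no detail beyond the coarser one, the ratio is 1 (contrast 0), so the pyramid directly encodes where each scale contributes information.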

  7. Background: The -lets • Discontinuities destroy the sparsity of a Fourier series (the Gibbs phenomenon) • Wavelets are localized and multi-scale • Perform well in 1D, but have a poor sense of orientation in 2D • Only horizontal, vertical, or diagonal • How to better represent a 2D image? • Want multiresolution, localization, critical sampling, directionality, anisotropy • Curvelets – Candès et al. • Developed in the continuous domain, then adapted to discrete • Optimally sparse representation for smooth 2D functions except for a discontinuity along a curve • Models wave propagation • Contourlets – Do and Vetterli • Developed in the discrete domain • Pointillism-like

  8. Background: Curvelet Decomposition • Directional filter bank. (a) Frequency partitioning where l = 3 and there are 2^3 = 8 real wedge-shaped frequency bands.

  9. Methods: Algorithm • Take 2 input images, how to combine them? • Take the contourlet transform of each • Each level appears to gain a factor of 2 in angular resolution (Yang’s decomposition) • How does this affect the quality?

  10. Algorithm: Lowpass Subband • Treat the lowest level of the pyramid differently • This is a tiny thumbnail of the original information • Higher-level detail is added to this to reconstruct the whole image • 2 modes of operation: selection or averaging • Choose based on a threshold criterion: salience • If the correlation between the input windows in a 3x3 patch in curvelet space is above a threshold -> weighted averaging • Else choose the one with more energy (sum of squares over the window) • Averaging is only done at a fixed alpha blend amount • Not variable depending on the data • A bit ad hoc in that there are many unspecified preset tunable parameters: thresholds, blend factors • They can be optimized for nuclei
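The selection/averaging rule on this slide follows the classic Burt–Kolczynski pattern; a minimal sketch, with the match threshold and blend factor standing in for the unspecified tunable parameters the slide mentions (function name and default values are hypothetical):

```python
import numpy as np

def fuse_lowpass(a, b, match_thresh=0.75, alpha=0.5):
    """Fuse two lowpass subbands. Per pixel, over a 3x3 window:
      - salience = sum of squared coefficients (local energy)
      - match    = normalised correlation of the two windows
    If match > match_thresh the inputs agree -> fixed alpha blend;
    otherwise keep the coefficient with the larger salience."""
    def window_sum(x):
        # 3x3 box sum via zero-padding and 9 shifted copies
        p = np.pad(x, 1)
        return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3))
    ea = window_sum(a * a)            # salience of a
    eb = window_sum(b * b)            # salience of b
    eab = window_sum(a * b)
    match = 2.0 * eab / (ea + eb + 1e-12)
    return np.where(match > match_thresh,
                    alpha * a + (1 - alpha) * b,   # averaging mode
                    np.where(ea >= eb, a, b))      # selection mode
```

The fixed `alpha` reproduces the slide's point that averaging is not data-dependent; making it a function of `match` would be one way to remove that ad hoc choice.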

  11. Highpass Subband Algorithm • Contrast = (L-Lb)/Lb = Lh/Lb • Ratio of a high-level curvelet coefficient to the lowest level • Compute contrast as above; Lb comes from the pixels in the lowest level that contribute to the highest level • Blur this to get a weighted neighborhood contrast • Select coefficients from the image that has the higher value on this metric, i.e. the one with more local contrast • “Using contourlet contrast, more dominant features can be preserved precisely at all the resolution levels”
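A sketch of the contrast-driven selection for one highpass subband (names and the simple 3x3 mean blur are assumptions; the exact neighborhood weighting is not specified on the slide):

```python
import numpy as np

def fuse_highpass(ha, hb, lb_a, lb_b, eps=1e-6):
    """Contrast-based coefficient selection. ha/hb are the two
    images' highpass coefficients; lb_a/lb_b are the matching
    lowpass values (already upsampled to the same shape).
    Contrast C = |Lh|/|Lb| is blurred with a 3x3 mean to get a
    neighborhood measure; the larger local contrast wins."""
    def box_mean(x):
        # 3x3 mean filter with edge-replicated borders
        p = np.pad(x, 1, mode="edge")
        return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    ca = box_mean(np.abs(ha) / (np.abs(lb_a) + eps))
    cb = box_mean(np.abs(hb) / (np.abs(lb_b) + eps))
    return np.where(ca >= cb, ha, hb)
```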

  12. Reconstruction • Take the inverse curvelet transform of the blended pyramid

  13. Methods • Test cases • CT-MR • Gd and T2w • PD and T1w • Compared against existing methods: average, PCA, wavelet maximum • Metrics • Standard deviation – image variability • Entropy – how much information is in the image • Overall cross entropy – how close are the distributions; is information preserved in the fused result? • Spatial frequency – amount of energy in high frequencies • Only looks at horizontal and vertical frequencies • Correlation – how similar is the fusion to the inputs
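Two of the listed metrics are easy to make concrete (a sketch; the bin count and the first-difference definition of spatial frequency are conventional choices, not taken from the paper):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (bits/pixel):
    higher suggests the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """Spatial frequency: RMS of horizontal and vertical first
    differences -- note it only looks at row and column
    frequencies, not diagonals."""
    rf = np.diff(img.astype(float), axis=1)  # horizontal differences
    cf = np.diff(img.astype(float), axis=0)  # vertical differences
    return float(np.sqrt((rf ** 2).mean() + (cf ** 2).mean()))
```

A constant image scores 0 on both, which is why these metrics reward fusions that keep variability and edge energy from the inputs.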

  14. Results: Metrics • The proposed algorithm generally shows more variability and captures more information from the inputs

  15. Notes • PCA table values seem off? • There is a Matlab implementation of curvelets by the creators • How to handle multiple image fusion?
