
Some thoughts on regularization for vector-valued inverse problems


Presentation Transcript


  1. Some thoughts on regularization for vector-valued inverse problems
     Eric Miller, Dept. of ECE, Northeastern University

  2. Outline
  • Caveats
  • Motivating examples
    • Sensor fusion: multiple sensors, multiple objects
    • Sensor diffusion: single modality, multiple objects
  • Problem formulation
  • Regularization ideas
    • Markov random fields
    • Mutual information
    • Gradient correlation
  • Examples
  • Conclusions

  3. Caveats
  • My objective here is to examine some initial ideas regarding multi-parameter inverse problems
  • Models will be kept simple
    • Linear and 2D
  • Consider two unknowns
    • Case of 3 or more can wait
  • Regularization parameters chosen by hand
  • Results numerical
    • Whatever theory there may be can wait for later

  4. Motivating Applications
  • Sensor fusion
    • Multiple modalities, each looking at the same region of interest
    • Each modality sensitive to a different physical property of the medium
  • Sensor diffusion
    • Single modality influenced by multiple physical properties of the medium

  5. Sensor Fusion Example
  [Figure: GE tomosynthesis unit and optical imager; the optical measurement is done under mammographic compression]
  • Multi-modal breast imaging
  • Limited-view CT
    • Sensitive to attenuation
    • High resolution, limited data
  • Diffuse optical tomography (DOT)
    • Sensitive to many things: optical absorption and scattering, or chromophore concentrations
    • Here assume just absorption is of interest
    • Low resolution, fairly dense data
  • Electrical impedance tomography coming on line

  6. Linear Physical Models
  [Figure: tomosynthesis and diffuse optical geometries, each showing source, region of interest, and detector]

  7. Sensor Fusion (cont.)
  • Overall model relating data to objects (see the sketch below)
  • Assume uncorrelated, additive Gaussian noise, possibly with different variances for different modalities
  • All sorts of caveats
    • DOT is really nonlinear
    • Tomosynthesis is really Poisson
    • Everything is really 3D
    • Deal with these later
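Since the slide's model equation did not survive extraction, here is a minimal sketch of the stacked linear model the bullets describe, with y_i, K_i, and n_i assumed as names for the data, forward operator, and noise of modality i:

    y_1 = K_1 f_1 + n_1, \qquad y_2 = K_2 f_2 + n_2, \qquad n_i \sim \mathcal{N}(0, \sigma_i^2 I)

    \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} =
    \begin{bmatrix} K_1 & 0 \\ 0 & K_2 \end{bmatrix}
    \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} +
    \begin{bmatrix} n_1 \\ n_2 \end{bmatrix}

The data terms decouple across modalities; any coupling between f_1 and f_2 must come from the prior, which is the point of the talk.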

  8. De-Mosaicing
  [Figure: Bayer pattern]
  • Color cameras sub-sample red, green, and blue on different pixels in the image
  • Issue: filling in all of the pixels with all three colors
  • y_red = observed red pixels over the sub-sampled grid (a 9-vector in the example)
  • f_red = red pixel values over all pixels in the image (a 30-vector in the example)
  • K_red = selection matrix with a single "1" in each row, all others 0 (a 9x30 matrix in the example; see the sketch below)
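A minimal Python sketch of such a selection matrix, assuming a standard RGGB Bayer layout with red on the even rows and columns (the layout and grid sizes here are illustrative, not the talk's 9x30 example):

    import numpy as np

    def red_selection_matrix(rows, cols):
        """Build the selection matrix K_red mapping a vectorized full
        image (rows*cols,) to the observed red samples, assuming red
        occupies the even rows and columns of an RGGB Bayer pattern."""
        linear = np.arange(rows * cols).reshape(rows, cols)
        red_idx = linear[0::2, 0::2].ravel()       # linear indices of red pixels
        K = np.zeros((red_idx.size, rows * cols))
        K[np.arange(red_idx.size), red_idx] = 1.0  # a single "1" per row
        return K

    # Usage: y_red = K_red @ f_red extracts the observed red samples.
    K_red = red_selection_matrix(6, 6)
    f_red = np.random.rand(36)
    y_red = K_red @ f_red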

  9. Sensor Diffusion Example
  • Diagnostic ultrasound guidance for hyperthermia cancer treatment
    • Use high-intensity focused ultrasound to cook tissue
    • Need to monitor treatment progress
  • MRI is the state of the art, but it is expensive
  • Ultrasound is a possibility
    • Absorption monotonic with temperature
    • Also sensitive to sound-speed variations
  • Traditional SAR-type processing cannot resolve regions of interest
  • Try a physics-based approach
  (Thanks to Prof. Ron Roy of BU)

  10. Ultrasound model
  • As with diffuse optical, the exact model is based on a Helmholtz-type equation and is nonlinear
  • Here we use a Born approximation even in practice because the problem size is quite large (tens of wavelengths on a side)
  • Model (see the sketch below):
    • f1 = sound speed
    • f2 = absorption
    • Frequency-dependent "filters" for each parameter
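The model equation itself did not survive extraction; a plausible reconstruction, with the operator names K_{1,\omega} and K_{2,\omega} assumed rather than taken from the talk, is a linear (Born) model at each probing frequency \omega:

    y_\omega = K_{1,\omega} f_1 + K_{2,\omega} f_2 + n_\omega

A single data channel thus mixes contributions from both sound speed and absorption, which is what makes this a sensor diffusion problem.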

  11. Estimation of parameters
  • Variational formulation / penalized-likelihood approach: a Gaussian log-likelihood data term plus a prior term (see the sketch below)
  • The issue of interest here is the prior, i.e., the regularizer
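A minimal sketch of the cost function being described, assuming the standard penalized-likelihood form; "Gaussian log likelihood" and "prior information, regularizer" are the surviving annotations from the slide:

    \hat{f} = \arg\min_f \underbrace{\|y - Kf\|_2^2}_{\text{Gaussian log-likelihood}} + \underbrace{R(f)}_{\text{prior information, regularizer}}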

  12. Prior Models
  • Typical priors are based on smoothness of the functions (see the sketch below)
  • α = regularization parameter
  • p = 1 gives total variation reconstruction with edges well preserved
  • p = 2 gives smooth reconstructions
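A sketch of the smoothness prior in question, assuming the usual gradient p-norm form; the symbol \alpha stands in for the regularization parameter whose glyph was lost in extraction:

    R(f) = \alpha \, \|\nabla f\|_p^p = \alpha \sum_i |(\nabla f)_i|^p

with p = 1 the edge-preserving total variation case and p = 2 the smooth quadratic case, as the bullets state.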

  13. Priors (cont.)
  • What about co-variations between f1 and f2?
  • Physically, these quantities are not independent
    • Tumors, lesions, etc. should appear in all unknowns
    • Speculate that spatial variations in one correlate with such variations in the other
  • Looking to supplement the existing prior with a mathematical measure of similarity between the two functions or their gradients
  • Three possibilities examined today

  14. Option 1: Gauss-Markov Random Field-Type Prior
  • Natural generalization of the smoothness prior that correlates the two functions
  [Figure: four-nearest-neighbor stencils (pixels (i+1,j), (i-1,j), (i,j+1), (i,j-1) around pixel (i,j)) for f1 and f2, coupled by a weight w1]

  15. GMRF (cont.)
  • Matrix form: the GMRF regularizer (see the sketch below)
  • Implies a particular covariance for f; what does this "look" like?
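The matrix form did not survive extraction, but a minimal sketch consistent with the bullets is a quadratic form in a difference operator L acting on the stacked unknowns, with the weight w_1 from the previous slide coupling the two fields:

    R_{\mathrm{GMRF}}(f) = \|L f\|_2^2 = f^T L^T L f, \qquad f = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}

Reading \exp(-R_{\mathrm{GMRF}}(f)/2) as a Gaussian prior implies a covariance for f proportional to (L^T L)^{-1}, which is what the next slide visualizes.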

  16. GMRF: Middle Pixel Correlation
  [Figure: correlation of the middle pixel plotted against lag x and lag y]

  17. GMRF: Comments
  • Motivated by / similar to the use of such models in hyperspectral processing
  • Lots of things one could do
    • On-line parameter estimation
    • Appropriate neighborhood structures
    • Generalized GMRFs a la Bouman and Sauer
    • More than two functions

  18. Option 2: Mutual Information
  • An information-theoretic measure of similarity between distributions
  • Great success as a cost function for image registration (Viola and Wells)
  • Try a variant of it here to express similarity between f1 and f2

  19. Mutual Information: Details
  • Suppose we had two probability distributions p(x) and p(y)
  • Mutual information is defined below
  • Maximization of mutual information (basically) minimizes the joint entropy H(x,y), which enters the definition with a minus sign, while also accounting for the structure of the marginals
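The formula did not survive extraction; the standard definition referred to here is

    I(x;y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)} = H(x) + H(y) - H(x,y)

so maximizing I drives the joint entropy H(x,y) down while rewarding informative marginals.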

  20. Mutual Information: Details (cont.)
  • Mutual information registration uses not the images themselves but their histograms
  • Estimate p(x) using simple kernel density methods, and similarly for p(y) and p(x,y); a sketch follows
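A minimal Python sketch of this kind of kernel density estimate of MI; the Gaussian kernel, bandwidth, and bin count here are illustrative assumptions, not values from the talk:

    import numpy as np

    def kde_mutual_information(f1, f2, n_bins=32, bw=0.1):
        """Estimate I(f1; f2) between two images from a Parzen
        (Gaussian-kernel) estimate of the joint histogram."""
        x, y = f1.ravel().astype(float), f2.ravel().astype(float)
        # Normalize both images to [0, 1] so one bandwidth serves both.
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        y = (y - y.min()) / (y.max() - y.min() + 1e-12)
        centers = np.linspace(0.0, 1.0, n_bins)
        # Kernel response of every pixel to every bin center.
        Kx = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / bw) ** 2)
        Ky = np.exp(-0.5 * ((y[:, None] - centers[None, :]) / bw) ** 2)
        pxy = Kx.T @ Ky                      # smoothed joint histogram
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        mask = pxy > 0
        outer = px[:, None] * py[None, :]
        return float(np.sum(pxy[mask] * np.log(pxy[mask] / outer[mask])))

    # Usage: MI is large when f2 is a (possibly nonlinearly) related copy of f1.
    f1 = np.random.rand(32, 32)
    print(kde_mutual_information(f1, f1 ** 2))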

  21. Mutual Information: Example
  [Figure: f1(x,y) and a shifted copy f2(x,y) = f1(x+δ,y); mutual information plotted against the shift δ peaks when the overlap is perfect]

  22. Mutual Information: Regularizer
  • For simplicity, we use a decreasing function of MI as a regularizer (see the sketch below)
  • The larger the MI, the smaller the cost
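The exact decreasing function used in the talk is not recoverable from the transcript; two plausible choices, shown purely as illustrations with \beta an assumed weight, are

    R_{\mathrm{MI}}(f_1, f_2) = \beta \, e^{-I(f_1; f_2)} \qquad \text{or} \qquad R_{\mathrm{MI}}(f_1, f_2) = -\beta \, I(f_1; f_2)

Either choice makes the cost fall as the mutual information between the two reconstructions grows.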

  23. Gradient Correlation
  • The idea is simple: gradients should be similar
  • Certainly where there are physical edges, one would expect jumps in both f1 and f2
  • Also, one would think that monotonic trends would be similar
  [Figure: example profile pairs labeled "OK", "OK", and "Not OK"]

  24. A Correlative Approach
  • A correlation-coefficient-based metric (see the sketch below)
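The metric's equation did not survive extraction; one plausible correlation-coefficient-based form, written here as an assumption with \gamma an assumed weight, penalizes low normalized correlation between the gradient fields:

    R_{\mathrm{corr}}(f_1, f_2) = -\gamma \, \frac{\langle \nabla f_1, \nabla f_2 \rangle^2}{\|\nabla f_1\|_2^2 \, \|\nabla f_2\|_2^2}

This is smallest when the gradient fields are proportional and is insensitive to overall amplitude, matching the OK / Not OK intuition on the previous slide.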

  25. Let's See How They Behave
  [Figure: behavior of the three regularizers on a test pair f1(x,y) and shifted copy f2(x,y) = f1(x+δ,y)]

  26. Example 1: Sensor Fusion
  [Figure: 5 cm x 6 cm imaging geometry with X-ray source, X-ray detector, DOT source/detector, and DOT detectors]
  • Noisy, high-resolution X-ray (15 dB)
  • Cleaner, low-resolution DOT (35 dB)

  27. DOT Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  28. X-Ray Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  29. DOT Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  30. X-Ray Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  31. Mean Normalized Square Error
  [Figure: mean normalized square error for each method]

  32. Example 2: Sensor Diffusion
  [Figure: 5 cm x 6 cm geometry with source and receiver]
  • Ultrasound problem
  • Tissue-like properties
  • 5 frequencies between 5 kHz and 100 kHz
    • Wavelengths between 1 cm and 30 cm
  • Image sound speed and attenuation
  • High SNR (70 dB), but sound speed is about 20x absorption and both are in cluttered backgrounds

  33. Sound Speed Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  34. Absorption Reconstructions
  [Figure: Truth, Tikhonov, GMRF, Corr. Coeff., and MI reconstructions]

  35. Sound Speed Reconstructions
  [Figure: Truth, Tikhonov, GMRF, and Corr. Coeff. reconstructions]

  36. Absorption Reconstructions
  [Figure: Truth, Tikhonov, GMRF, and Corr. Coeff. reconstructions]

  37. Mean Normalized Square Error
  [Figure: mean normalized square error for each method]

  38. Demosaicing

  39. Eye Region: Red
  [Figure: Original, Tikhonov, and Corr. Coeff. reconstructions]

  40. Eye Region: Green
  [Figure: Original, Tikhonov, and Corr. Coeff. reconstructions]

  41. Chair Region: Red
  [Figure: Original, Tikhonov, and Corr. Coeff. reconstructions]

  42. Chair Region: Green
  [Figure: Original, Tikhonov, and Corr. Coeff. reconstructions]

  43. Normalized Square Error
  [Figure: normalized square error for each method]

  44. Conclusions etc.
  • Examined a number of methods for building similarity into inverse problems involving multiple unknowns
  • Lots of things that could be done
    • Objective performance analysis (the uniform CRB, perhaps)
    • Parameter selection, parameter selection, parameter selection
    • 3+ unknowns
    • Other measures of similarity
