
Presentation Transcript


  1. Aug 18, 2014 Jason Su

  2. Motivation • Traditional fitting methods for exponentials have pros and cons • Nonlinear LS (Levenberg-Marquardt) – slow, may converge to local minimum • Log-Linear – fast but sensitive to noise • Can we improve upon them? • Surprisingly, yes!

  3. Background: Numerical Integration • Approximating the value of a definite integral • Trapezoidal Rule: the area under a 2-pt linear interpolation of the interval • Simpson’s Rule: the area under a 3-pt quadratic interpolation of the interval • Newton-Cotes formulas generalize these to the area under an n-pt polynomial interpolation of equally spaced samples
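
As a concrete illustration (not from the slides), here is a minimal Python sketch comparing the trapezoidal and Simpson approximations of the integral of an exponential decay over two sampling intervals; the T2* and Δt values are arbitrary.

```python
# Minimal sketch: 2-pt trapezoidal vs. 3-pt Simpson approximation of the
# integral of an exponential decay over one pair of sampling intervals.
import numpy as np

T2star, dt = 20.0, 3.0                       # ms, hypothetical values
t = np.array([0.0, dt, 2 * dt])              # three equally spaced samples
s = np.exp(-t / T2star)                      # noiseless decay samples

exact = T2star * (s[0] - s[2])               # analytic integral over [0, 2*dt]
trap = dt / 2 * (s[0] + 2 * s[1] + s[2])     # composite trapezoidal rule
simpson = dt / 3 * (s[0] + 4 * s[1] + s[2])  # Simpson's rule

print(exact, trap, simpson)                  # Simpson lands much closer to the exact value
```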

  4. Theory • Log-Linear: linearize the signal equation with a nonlinear transformation (the log) to fit a line • ARLO: integrate the signal equation to fit a linear approximation (Simpson’s rule) • Assumes the decay curve is sampled linearly, i.e. at equal intervals Δt
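
A sketch of the linearization the slide alludes to, in my own notation (s_i for the i-th echo sample, Δt for the echo spacing); the exact form in the paper may differ:

$$\int_{t_i}^{t_i + 2\Delta t} S_0\, e^{-t/T_2^*}\,dt \;=\; T_2^*\left(s_i - s_{i+2}\right) \;\approx\; \frac{\Delta t}{3}\left(s_i + 4\,s_{i+1} + s_{i+2}\right)$$

Equating the analytic integral with its Simpson approximation gives, for every triplet of consecutive echoes, a relation that is linear in T2*.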

  5. Theory • This yields an auto-regressive time series • Find the T2* that minimizes the error between the model and the data
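
Below is a minimal Python sketch of a Simpson’s-rule-based T2* estimate in the spirit of ARLO. The closed form is an ordinary least-squares slope through the origin for the per-triplet relation above and may differ in detail from the published estimator; the function name and example values are mine.

```python
# Minimal sketch of an ARLO-style T2* estimate: least-squares slope through
# the origin for the per-triplet relation  alpha_i ≈ T2* * delta_i.
import numpy as np

def arlo_t2star(s, dt):
    """Estimate T2* from a monoexponential decay sampled at spacing dt."""
    s = np.asarray(s, dtype=float)
    alpha = dt / 3.0 * (s[:-2] + 4.0 * s[1:-1] + s[2:])   # Simpson integral per triplet
    delta = s[:-2] - s[2:]                                # analytic integral divided by T2*
    return np.sum(alpha * delta) / np.sum(delta * delta)  # LS slope of alpha on delta

# Example: 16 linearly sampled echoes, TE = 1.3-23.3 ms (the liver protocol)
te = np.linspace(1.3, 23.3, 16)
signal = 100.0 * np.exp(-te / 15.0) + np.random.normal(0, 1.0, te.size)
print(arlo_t2star(signal, te[1] - te[0]))   # should land near 15 ms
```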

  6. Methods • Rician noise compensation • Data truncation: only keep points with high SNR • Values > μ + 2σ_noise, with μ and σ_noise measured in the background • Apply a bias correction based on a Bayesian-model table look-up that depends on the number of coils
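
A minimal sketch of the truncation step, assuming σ_noise is estimated from a background (noise-only) region; the function name is hypothetical.

```python
# Minimal sketch: keep only echoes whose magnitude exceeds the background
# mean plus two standard deviations (the high-SNR points).
import numpy as np

def truncate_low_snr(signal, background):
    """signal: echo magnitudes; background: noise-only (air) voxel samples."""
    signal = np.asarray(signal, dtype=float)
    background = np.asarray(background, dtype=float)
    thresh = background.mean() + 2.0 * background.std()
    keep = signal > thresh
    return signal[keep], keep
```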

  7. Methods • Simulation to assess bias and variance • Fitting method vs. T2* range, # channels, SNR • 10,000 trials with Rician noise • In vivo • 1.5T, 8ch, 15 patients, 2D GRE, TR=27.4ms, α=20°, TE=1.3–23.3ms (16 linearly sampled echoes), liver • 3T, 8ch?, 2 volunteers, 3D GRE, α=20°, 7/12 echoes with 6.5/4.1ms spacing, brain • 1.5T, 2D GRE, TR=19ms, α=35°, TE=2.8–16.8ms (8 echoes), heart with iron overload • Manual segmentation of liver and brain structures • Statistical analysis • Linear regression, Bland-Altman, and t-tests
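
For reference, a minimal sketch of the Bland-Altman agreement summary named above (bias and 95% limits of agreement between two sets of per-ROI T2* estimates); this is my illustration, not the authors’ code.

```python
# Minimal sketch of a Bland-Altman summary: mean difference (bias) and
# 95% limits of agreement between two sets of per-ROI T2* estimates.
import numpy as np

def bland_altman(t2star_a, t2star_b):
    a = np.asarray(t2star_a, dtype=float)
    b = np.asarray(t2star_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
    return bias, bias - half_width, bias + half_width
```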

  8. Results: Simulation • LM and ARLO are effectively equivalent • ARLO matches LM everywhere except at T2*=1.5ms • Log-linear is sensitive to T2*, SNR, and the number of channels

  9. Results: In Vivo, Liver ROI • Computation time per voxel • 8.81 ± 1.00ms for LM • 0.57 ± 0.04ms for LL • 0.07 ± 0.02ms for ARLO

  10. Results: In Vivo, Whole Liver

  11. Results: In Vivo, Whole Liver

  12. Results: In Vivo, Brain

  13. Results: In Vivo, Brain

  14. Results: In Vivo, Heart

  15. Discussion • ARLO is more robust to noise than LL, with accuracy as good as LM, at 10x the speed of LL • Noise is amplified by the log-transform • ARLO is a single-variable linear regression, O(N) • LL is a two-variable linear regression, O(6N) • LM is nonlinear LS, O(N³) • ARLO provides an effective linearization of the nonlinear estimation problem • It does not require an initial guess and is immune to the convergence issues of LM
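
For contrast with the single-variable ARLO regression sketched earlier, here is a minimal log-linear (LL) fit: a two-variable regression of log(signal) on TE, where the log-transform inflates the noise on the low-signal late echoes.

```python
# Minimal sketch of the two-variable log-linear (LL) fit: regress log(signal)
# on TE and take T2* = -1/slope. The log-transform amplifies noise where the
# signal is small.
import numpy as np

def loglinear_t2star(signal, te):
    slope, intercept = np.polyfit(te, np.log(signal), 1)  # log(s) ≈ intercept + slope*te
    return -1.0 / slope
```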

  16. Discussion • Simpson’s rule is a much better approximation than the trapezoidal rule • Higher-order rules gave little improvement • Differentiation could also be used, but it is not as good as integration at low SNR and needs finer sampling • Other applications: • Other exponential decay models like diffusion, T2, off-resonance, and T2* • T1 recovery “from data measured at various timing parameters such as TR or TI” • Can also be adapted to multi-exponential fitting
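
A minimal sketch (my illustration) of the differentiation alternative mentioned above: since dS/dt = −S/T2*, a finite-difference derivative gives a pointwise T2* estimate, and the division by a noisy difference is what hurts at low SNR.

```python
# Minimal sketch of a differentiation-based estimate: dS/dt = -S/T2*, so a
# central-difference derivative yields a pointwise T2* estimate at each
# interior echo.
import numpy as np

def diff_t2star(signal, dt):
    s = np.asarray(signal, dtype=float)
    ds = (s[2:] - s[:-2]) / (2.0 * dt)   # central-difference derivative at interior echoes
    return np.median(-s[1:-1] / ds)      # combine the pointwise estimates robustly
```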

  17. Discussion • Limitations • Requires at least 3 data points vs. 2 for LM and LL • Requires linear sampling of echo times • This results in a minimum T2* of 1.5ms for ARLO • Probably due to a poor protocol

  18. Thoughts • Nonlinear sampling • Linear sampling is generally not ideal for experimental design; are there approximations that don’t require it? • “Gaussian quadrature and Clenshaw–Curtis quadrature with unequally spaced points (clustered at the endpoints of the integration interval) are stable and much more accurate” • For protocols varying multiple parameters, would we integrate over multiple dimensions? • Higher-dimensional integral approximations? • Simpson’s rule in each dimension would require a lot of sample points
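
Purely as an illustration of the quoted idea (not from the paper), this sketch maps Gauss-Legendre and Clenshaw-Curtis (Chebyshev) nodes onto a hypothetical TE window to show how the sample points cluster toward the endpoints.

```python
# Illustration only: Gauss-Legendre and Clenshaw-Curtis (Chebyshev) nodes
# mapped onto a hypothetical TE window, showing the endpoint clustering
# described in the quote above.
import numpy as np

n, te_min, te_max = 8, 1.3, 23.3                 # hypothetical 8-echo window (ms)

x_gl, w_gl = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre nodes/weights on [-1, 1]
x_cc = np.cos(np.pi * np.arange(n) / (n - 1))    # Chebyshev extrema = Clenshaw-Curtis nodes

def to_te(x):
    """Map nodes from [-1, 1] onto the [te_min, te_max] echo-time window."""
    return te_min + (x + 1.0) * (te_max - te_min) / 2.0

print(np.sort(to_te(x_gl)))                      # samples crowd toward 1.3 and 23.3 ms
print(np.sort(to_te(x_cc)))
```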

  19. Thoughts • It seems important to have an operation that is equivalent to a linear combination of the acquired data • e.g. the integral of an exponential is a difference of exponentials • Consider SPGR:
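
For reference, the standard SPGR signal equation that the slide presumably goes on to analyze (the slide’s own derivation is not reproduced in this transcript):

$$S_{\mathrm{SPGR}} \;=\; M_0 \sin\alpha \,\frac{1 - e^{-TR/T_1}}{1 - \cos\alpha\, e^{-TR/T_1}}\; e^{-TE/T_2^*}$$

Here the relaxation term E1 = e^{-TR/T_1} appears in both the numerator and the denominator, so it is less obvious which integral or other linear operation on the acquired data would yield a relation linear in the unknown, which appears to be the question the slide raises.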
