
Applications of Automatic Differentiation and the Cramér-Rao Lower Bound to Parameter Mapping


Presentation Transcript


  1. Applications of Automatic Differentiation and the Cramér-Rao Lower Bound to Parameter Mapping. July 31, 2013. Jason Su.

  2. Outline
  • Cramér-Rao lower bound
  • Automatic differentiation
  • Parameter mapping

  3. Cramér-Rao Lower Bound How precisely can I measure something with this pulse sequence?

  4. CRLB: What is it?
  • A lower limit on the variance of an estimator of a parameter
  • The best you can do at estimating, say, T1 with a given pulse sequence and signal equation g(T1)
  • Estimators that achieve the bound are called “efficient”
  • The minimum variance unbiased estimator (MVUE) is efficient
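(A standard textbook example, not from the slides: for N measurements y_i = θ + n_i with n_i ~ N(0, σ²), the Fisher information is J = N/σ², so the CRLB is var(θ̂) ≥ σ²/N. The sample mean attains this bound exactly, which is what makes it the efficient MVUE for this model.)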

  5. CRLB: Fisher Information Matrix
  • Typically calculated for a given tissue, θ
  • Interpretation
    • J captures the sensitivity of the signal equation to changes in a parameter
    • Its “invertibility” or conditioning is how separable parameters are from each other, i.e. the specificity of the measurement
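(The slide's equations are not in the transcript. For a signal model g(θ) sampled at points i with i.i.d. Gaussian noise of variance σ², the Fisher information matrix takes the standard form J_mn = (1/σ²) Σ_i (∂g_i/∂θ_m)(∂g_i/∂θ_n), i.e. J = GᵀG/σ², where G is the Jacobian of the signal with respect to the parameters θ.)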

  6. CRLB: How does it work?
  • A common formulation
    • Unbiased estimator
    • A signal equation with normally distributed noise
    • Measurement noise is independent
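(Under these assumptions the bound is the standard matrix inequality cov(θ̂) ≥ J⁻¹ in the positive-semidefinite sense, so each parameter obeys var(θ̂_m) ≥ [J⁻¹]_mm; the slide's own equation is not reproduced in the transcript.)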

  7. CRLB: Computing the Jacobian
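The Jacobian formulas on this slide are not in the transcript. As a stand-in, here is a minimal numerical sketch in Python (the language of the optimization work later in the talk). It uses the standard SPGR/DESPOT1 signal equation that the talk turns to later; the helper names are illustrative, and a finite-difference Jacobian stands in for the AD the talk advocates.

    import numpy as np

    # Standard SPGR/DESPOT1 signal equation (see the DESPOT1 slides below):
    # S(alpha) = M0*sin(a)*(1 - E1)/(1 - E1*cos(a)),  E1 = exp(-TR/T1)
    def spgr(theta, alphas_deg, TR=5e-3):
        M0, T1 = theta
        E1 = np.exp(-TR / T1)
        a = np.deg2rad(alphas_deg)
        return M0 * np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))

    def jacobian_fd(f, theta, *args, h=1e-6):
        # One forward-difference column per parameter (AD would give these exactly)
        theta = np.asarray(theta, dtype=float)
        f0 = f(theta, *args)
        J = np.empty((f0.size, theta.size))
        for m in range(theta.size):
            d = np.zeros_like(theta)
            d[m] = h * max(1.0, abs(theta[m]))  # scale step to parameter size
            J[:, m] = (f(theta + d, *args) - f0) / d[m]
        return J

    alphas = np.array([2.0, 5.0, 10.0, 15.0])   # flip angles in degrees
    theta = [1.0, 1.0]                          # M0 (a.u.), T1 (s)
    sigma = 0.01                                # measurement noise std

    G = jacobian_fd(spgr, theta, alphas)        # d(signal)/d(M0, T1)
    J = G.T @ G / sigma**2                      # Fisher information matrix
    print(np.linalg.cond(J))                    # conditioning: parameter separability
    print(np.sqrt(np.diag(np.linalg.inv(J))))   # CRLB std devs for M0hat, T1hat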

  8. Automatic Differentiation The most criminally underused tool in your computational toolbox?

  9. AD: What is it?
  • Automatic differentiation is NOT:
    • Analytic differentiation

  10. AD: What is it?
  • Automatic differentiation is NOT:
    • Analytic differentiation
    • Symbolic differentiation

      syms x1 x2;
      f = 1/(1 + exp(-x1/x2));
      df_dx1 = diff(f, x1)
      % returns: 1/(x2*exp(x1/x2)*(1/exp(x1/x2) + 1)^2)

  11. AD: What is it?
  • Automatic differentiation is NOT:
    • Analytic differentiation
    • Symbolic differentiation
    • Numeric differentiation (finite difference)

      f = @(x1, x2) 1/(1 + exp(-x1/x2));
      h = 1e-10;  % step size; too small a step amplifies round-off error
      df_dx1 = (f(x1 + h, x2) - f(x1, x2)) / h;

  12. AD: What is it?
  • Automatic differentiation IS:
    • Fast, esp. for many input partial derivatives
      • Symbolic requires substitution of symbolic objects
      • Numeric requires multiple function calls for each partial

  13. AD: What is it?
  • Automatic differentiation IS:
    • Fast, esp. for many input partial derivatives
    • Effective for computing higher derivatives
      • Symbolic generates huge expressions
      • Numeric becomes even more inaccurate

  14. AD: What is it?
  • Automatic differentiation IS:
    • Fast, esp. for many input partial derivatives
    • Effective for computing higher derivatives
    • Adept at analyzing complex algorithms
      • Bloch simulations
      • Loops and conditional statements
      • A 1.6 million-line FEM model

  15. AD: What is it?
  • Automatic differentiation IS:
    • Fast, esp. for many input partial derivatives
    • Effective for computing higher derivatives
    • Adept at analyzing complex algorithms
    • Accurate to machine precision

  16. AD: What is it?
  • Some disadvantages:
    • Exact details of the implementation are hidden
    • Hard to accelerate

  17. AD: How does it work?
  • Numeric: implement the definition of the derivative
  • Symbolic: N-line function -> single-line expression
  • Automatic: N-line function -> M-line function
  • A technology to automatically augment programs with statements that compute derivatives

  18. AD: How does it work?
      f = @(x1, x2) 1/(1 + exp(-x1/x2));
  • Build a graph of intermediate quantities and apply the chain rule

  19.–24. AD: How does it work?
      f = @(x1, x2) 1/(1 + exp(-x1/x2));
  (These slides step through the computational graph of f node by node, applying the chain rule to each intermediate quantity; the graph figures themselves are not reproduced in the transcript.)
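Since the graph figures are missing, here is a minimal forward-mode AD sketch in Python that does what the graphs illustrate: each intermediate quantity carries its value together with its derivative, and the chain rule is applied operation by operation. The Dual class and names are illustrative, not from the talk.

    import math

    class Dual:
        """Dual number: a value and its derivative, propagated together."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.dot + other.dot)
        __radd__ = __add__
        def __neg__(self):
            return Dual(-self.val, -self.dot)
        def __truediv__(self, other):           # quotient rule
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val / other.val,
                        (self.dot * other.val - self.val * other.dot) / other.val**2)
        def __rtruediv__(self, other):          # number / Dual
            return Dual(other) / self

    def exp(x):
        # chain rule for exp: d/dt exp(u) = exp(u) * u'
        if isinstance(x, Dual):
            return Dual(math.exp(x.val), math.exp(x.val) * x.dot)
        return math.exp(x)

    f = lambda x1, x2: 1 / (1 + exp(-x1 / x2))

    # Seed x1 with derivative 1 to get df/dx1 at (x1, x2) = (3, 5)
    out = f(Dual(3.0, 1.0), Dual(5.0, 0.0))
    print(out.val, out.dot)   # ~0.6457 and ~0.0458: value and exact df/dx1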

  25. AD: How do I use it?
  • Applications
    • Gradient-based optimization methods
    • Uncertainty propagation
    • Transparent calculation of the Jacobian of a multiple-input, multiple-output function
  • Packages (a usage sketch follows below)
    • MATLAB
      • ADiMat
      • AD for MATLAB, Adiff
    • Python
      • pyautodiff
      • uncertainties, algopy, CasADi
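As a usage example with one of the listed Python packages, a minimal algopy sketch of the “transparent Jacobian” application, following algopy's documented UTPM driver pattern (the exact calls are from memory and worth checking against the algopy docs):

    import algopy
    from algopy import UTPM

    def f(x):
        # the same sigmoid as on the earlier slides, now as a function of a vector
        return 1 / (1 + algopy.exp(-x[0] / x[1]))

    x = UTPM.init_jacobian([3.0, 5.0])  # seed both inputs for forward-mode AD
    J = UTPM.extract_jacobian(f(x))     # [df/dx1, df/dx2], exact to machine precision
    print(J)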

  26. Parameter Mapping

  27. What is Parameter Mapping?
  • Start with a signal model for your data
  • Collect a series of scans, typically with only 1 or 2 sequence variables changing
  • Fit the model to the data (see the fitting sketch below)
  • Motivation
    • Reveals quantifiable physical properties of tissue, unlike conventional imaging
    • Maps are ideally scanner independent
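A minimal sketch of the “fit model to data” step in Python with scipy, using the SPGR/DESPOT1 model from the T1-mapping slides below; the data are simulated and all names are illustrative:

    import numpy as np
    from scipy.optimize import curve_fit

    TR = 5e-3  # repetition time in seconds

    def spgr(alpha_deg, M0, T1):
        # SPGR signal vs flip angle: the DESPOT1 model being fit
        E1 = np.exp(-TR / T1)
        a = np.deg2rad(alpha_deg)
        return M0 * np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))

    alphas = np.array([2.0, 5.0, 10.0, 15.0])   # the varied sequence parameter
    rng = np.random.default_rng(0)
    data = spgr(alphas, 1.0, 1.0) + 0.001 * rng.standard_normal(alphas.size)

    (M0_hat, T1_hat), pcov = curve_fit(spgr, alphas, data, p0=(0.5, 0.5))
    print(M0_hat, T1_hat, np.sqrt(np.diag(pcov)))  # estimates and standard errors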

  28. Parameter Mapping
  • Some examples
    • FA/MD mapping with DTI: the most widely known mapping sequence
    • T1 mapping: relevant in the study of contrast agent relaxivity and diseases
    • B1 mapping: important for high-field applications

  29.–30. Evaluating all the choices
  • T1 mapping methods
    • Spin-echo inversion recovery
    • Look-Locker
    • DESPOT1/VFA
    • MPnRAGE family

  31. DESPOT1 T1 Mapping
  • Protocol optimization
    • What acquisition protocol maximizes our T1 precision?
  Christensen 1974, Homer 1984, Wang 1987, Deoni 2003

  32. DESPOT1: Protocol Optimization
  • Acquiring N images: with what flip angles, and how long should we scan each?
  • Cost function, with weight λ = 0 for M0
    • Coefficient of variation (CoV = σ_T1/T1) for a single T1
    • The sum of CoVs for a range of T1s

  33. Problem Setup
  • Minimize the CoV of T1, with Jacobians implemented by AD
  • Constraints
    • TR = TRmin = 5 ms
  • Solver
    • Sequential least squares programming with multiple start points (scipy.optimize.fmin_slsqp)
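A minimal sketch of this setup in Python, for a single T1, reusing the same SPGR model and finite-difference CRLB machinery as the earlier sketches (the talk used AD for the Jacobians; the names, bounds, and start points here are illustrative):

    import numpy as np
    from scipy.optimize import fmin_slsqp

    TR, sigma = 5e-3, 0.01   # TR fixed at TRmin = 5 ms; assumed noise level
    M0, T1 = 1.0, 1.0        # target tissue: T1 = 1000 ms

    def spgr(alphas_deg, M0, T1):
        E1 = np.exp(-TR / T1)
        a = np.deg2rad(alphas_deg)
        return M0 * np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))

    def cov_T1(alphas_deg):
        # CoV of T1hat from the CRLB, with a finite-difference Jacobian
        h = 1e-6
        G = np.column_stack([
            (spgr(alphas_deg, M0 + h, T1) - spgr(alphas_deg, M0, T1)) / h,
            (spgr(alphas_deg, M0, T1 + h) - spgr(alphas_deg, M0, T1)) / h,
        ])
        J = G.T @ G / sigma**2
        return np.sqrt(np.linalg.inv(J)[1, 1]) / T1

    # SLSQP with multiple start points, as on the slide
    starts = ([2.0, 20.0], [5.0, 40.0], [10.0, 60.0])
    best = min((fmin_slsqp(cov_T1, x0, bounds=[(0.1, 90.0)] * 2, iprint=0)
                for x0 in starts), key=cov_T1)
    print(best)   # expect roughly [2.4, 13.8] degrees (next slide)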

  34. Results: T1 = 1000 ms
  • N = 2
  • α = [2.373 13.766]°
  • This agrees with Deoni 2003
    • Corresponds to the pair of flip angles producing signal at √2/2 ≈ 0.707 of the Ernst angle signal
    • Previously approximated as 0.71
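A quick numerical check of that interpretation (a sketch with the same SPGR model; M0 = 1, T1 = 1 s, TR = 5 ms):

    import numpy as np

    TR, T1 = 5e-3, 1.0
    E1 = np.exp(-TR / T1)
    ernst = np.degrees(np.arccos(E1))   # Ernst angle: the max-signal flip angle

    def s(a_deg):                       # SPGR signal with M0 = 1
        a = np.radians(a_deg)
        return np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))

    print(s(np.array([2.373, 13.766])) / s(ernst))   # both ratios come out near 0.707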

  35. Results: T1 = 500–5000 ms
  • N = 2
  • α = [1.4318 8.6643]°
  • Compare: for a single T1 = 2750 ms, the optimum is α = [1.4311 8.3266]°
  • Contradicts Deoni 2004, which suggests collecting a range of flip angles to cover more T1s

  36. Results: T1 = 500–5000 ms (figure slide; plot not reproduced in the transcript)

  37. Future Work
  • More protocol optimization
    • DESPOT2-FM: free parameters include SPGR vs. bSSFP, the flip angles, and phase-cycling
    • mcDESPOT: the precision of MWF has recently come under question (Lankford 2012)
  • Exploration of other pulse sequences
  • Comparison of competing methods

  38. Summary

  39. Questions?
  • Slides available at http://mr.jason.su/
  • Python source code available soon
