
CRLB via Automatic Differentiation: DESPOT2


Presentation Transcript


  1. CRLB via Automatic Differentiation: DESPOT2 Jul 12, 2013 Jason Su

  2. Overview • Converting Matlab DESPOT simulation functions to Python and validation • Extension of pyautodiff package to Jacobians • Application to CRLB in DESPOT1, DESPOT2, and DESS • Other projects: • 7T Thalamus ROI analysis • 7T MS segmentation

  3. Validation: SPGR & SSFP • A sanity check that the single-component functions were ported correctly • 700 SPGR values are compared between Matlab and Python • 10 T1s from 1000-2000ms and 70 flip angles from 1-70deg • 14,000 SSFP values are compared for each of the before- and after-RF-excitation equations • All results agree to within machine precision
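
A minimal sketch of the kind of check described, assuming the standard single-component SPGR signal equation; the reference values exported from the Matlab implementation are hypothetical here.

    import numpy as np

    def spgr(alpha_deg, T1, M0=1.0, TR=5.0):
        # Standard single-component SPGR signal; TR and T1 in ms, flip angle in degrees.
        a = np.deg2rad(alpha_deg)
        E1 = np.exp(-TR / T1)
        return M0 * np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))

    # 10 T1s x 70 flip angles = 700 test values, matching the grid on the slide
    T1s = np.linspace(1000, 2000, 10)        # ms
    alphas = np.arange(1, 71)                # degrees
    py_vals = spgr(alphas[None, :], T1s[:, None])

    # matlab_vals would be the same grid exported from the Matlab implementation:
    # assert np.allclose(py_vals, matlab_vals, rtol=0, atol=1e-12)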

  4. Validation: AD/FD vs Symbolic
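
The AD/FD comparison plot is not reproduced in the transcript; below is a minimal sketch of the kind of finite-difference reference it describes, using a central difference (the step size and the reuse of the spgr helper above are assumptions).

    def fd_jacobian(f, x, eps=1e-6):
        # Central-difference Jacobian of f: R^n -> R^m, as a reference for the AD result.
        x = np.asarray(x, dtype=float)
        m = np.atleast_1d(f(x)).size
        J = np.zeros((m, x.size))
        for i in range(x.size):
            h = eps * max(1.0, abs(x[i]))    # step scaled to the parameter size
            dx = np.zeros_like(x)
            dx[i] = h
            J[:, i] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * h)
        return J

    # e.g. Jacobian of the SPGR signal w.r.t. (M0, T1) at the tuned flip angles
    angles = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12], dtype=float)
    J_fd = fd_jacobian(lambda p: spgr(angles, T1=p[1], M0=p[0]), [1.0, 1000.0])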

  5. Extending pyautodiff with Jacobian

  6. Extending pyautodiff with Jacobian

  7. CRLB: Goals • With these new tools, I set out to explore all forms of single-component DESPOT with the CRLB formalism • I expected to see results as in Deoni 2004, “Determination of Optimal Angles…” • If so, that would validate the CRLB as a useful criterion for finding an optimal protocol, since it would then be an equivalent analysis
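
The CRLB machinery used throughout is standard; a minimal sketch (the function name and conventions are mine, not from the talk): for a signal model with Jacobian J (obtained here by automatic differentiation) and Gaussian noise covariance Σ, the Fisher information is F = Jᵀ Σ⁻¹ J and no unbiased estimator can beat the diagonal of F⁻¹.

    def crlb(J, Sigma):
        # Cramer-Rao lower bounds on the parameter variances.
        # J: (measurements x parameters) Jacobian of the signal model,
        # Sigma: (measurements x measurements) noise covariance matrix.
        F = J.T.dot(np.linalg.solve(Sigma, J))   # Fisher information, J' inv(Sigma) J
        return np.diag(np.linalg.inv(F))         # lower bound on var(theta_i)

    # e.g. T1NR = T1 / sqrt(bound on var(T1)) gives the best achievable T1-to-noise ratio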

  8. CRLB: Parameters • TR = 5ms, M0 = 1, T1 = 50-5000ms (100 pts) • σ_S = 2e-3 (SNR ≈ 20 over the mean of the SPGR curve) • The “ideal” protocol is scaled by sqrt(5) to match the SNR of the others • I use the protocols considered in Deoni 2004:
      spgr_angles = {
          'tuned':       np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12]),
          'ideal':       np.array([3, 12]),
          'ga':          np.array([2, 3, 4, 5, 7, 9, 12, 14, 16, 18]),
          'weighted ga': np.array([2, 3, 4, 5, 7, 9, 11, 14, 17, 22]),
      }

  9. DESPOT1-SPGR: Constant Noise

  10. Constant SNR: Converting to H1 • n = 10, the equivalent NEX • m_p/n is a scale factor that scales the noise down if fewer flip angles are collected, e.g. m_p = 2 for “ideal”, so the variance is reduced by 5x
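
A tiny worked version of that scaling (variable names and how it enters the covariance are my assumptions):

    sigma_S = 2e-3                 # noise std from the parameters slide
    n = 10                         # equivalent NEX across all protocols
    m_p = 2                        # angles actually acquired by the 'ideal' protocol
    var_scale = m_p / n            # = 0.2, i.e. the noise variance drops 5x
    Sigma = var_scale * sigma_S**2 * np.eye(m_p)   # one assumed way this enters the CRLB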

  11. Constant SNR: Converting to H1 • Suppose we normalize such that σ_S = S_E, i.e. SNR_E = 1 • Scale the noise to the Ernst-angle signal, i.e. it becomes a function of T1 • This is effectively like simulating with Σ = sqrt(m_p)·S_E·I
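
A sketch of that normalization, taking the slide's Σ = sqrt(m_p)·S_E·I at face value and reusing the spgr helper above (the helper and variable names are mine):

    def ernst_signal(T1, M0=1.0, TR=5.0):
        # SPGR signal at the Ernst angle, i.e. the peak of the SPGR curve for this T1.
        E1 = np.exp(-TR / T1)
        alpha_E = np.degrees(np.arccos(E1))
        return spgr(alpha_E, T1, M0, TR)

    # Noise scaled to the Ernst-angle signal, so SNR_E = 1 and the noise level
    # becomes a function of T1.
    T1, m_p = 1000.0, 2                        # example tissue and 'ideal' protocol size
    S_E = ernst_signal(T1)
    Sigma = np.sqrt(m_p) * S_E * np.eye(m_p)   # literally sqrt(m_p) * S_E * I as on the slide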

  12. argmaxα(SPGR): Ernst Angle
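
The plot for this slide is not in the transcript; the result it refers to is the standard one: setting dS/dα = 0 for the SPGR signal gives cos α_E = E1 = exp(-TR/T1).

    def ernst_angle(T1, TR=5.0):
        # Flip angle (degrees) that maximizes the SPGR signal: cos(alpha_E) = exp(-TR/T1)
        return np.degrees(np.arccos(np.exp(-TR / T1)))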

  13. DESPOT1-SPGR: Constant SNR

  14. Results • Curve shapes are similar except for “weighted ga” • I’m not sure how to incorporate this with CRLB • Intuitively it would be to weight Σ, but the interpretation would then equivalently be that the noise is changing with different angles • Does that make sense? • Not exactly the same, note the intersection point • There is still a scaling difference though • Something different about H1 calculation?

  15. Results • Note how fundamentally different approaches arrive at a similar answer • Deoni 2004 approximates the solution to the curve-fitting cost function with a Taylor expansion • CRLB computes relationships based on the Jacobian of SPGR • Both involve squaring derivatives of SPGR, so the agreement is maybe not surprising • I think the preliminary CRLB results with constant noise are actually more realistic/useful • Noise is independent of tissue (or at least scanner noise dominates)

  16. CRLB: Parameters • TR = 5ms, M0 = 1, T1 = 50-5000ms (100 pts), T2 = 10-200ms (50 pts) • Protocols:
      despot2_angles = {
          'tuned': {
              'spgr':        np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12]),
              'ssfp_before': np.array([8, 12, 18, 24, 29, 36, 48, 61, 72, 83]),
              'ssfp_after':  np.array([8, 12, 18, 24, 29, 36, 48, 61, 72, 83]),
          },
          'ideal': {
              'spgr':        np.array([3, 12]),
              'ssfp_before': np.array([20, 80]),
              'ssfp_after':  np.array([20, 80]),
          },
          'ga': {
              'spgr':        np.array([2, 3, 4, 5, 7, 9, 12, 14, 16, 18]),
              'ssfp_before': np.array([8, 13, 19, 25, 37, 45, 58, 71, 81, 87]),
              'ssfp_after':  np.array([8, 13, 19, 25, 37, 45, 58, 71, 81, 87]),
          },
          'weighted ga': {
              'spgr':        np.array([2, 3, 4, 5, 7, 9, 11, 14, 17, 22]),
              'ssfp_before': np.array([5, 10, 16, 24, 32, 43, 54, 66, 75, 88]),
              'ssfp_after':  np.array([5, 10, 16, 24, 32, 43, 54, 66, 75, 88]),
          },
      }

  17. DESPOT2-SSFP • T1 is assumed known from DESPOT1 and the corresponding rows/columns are eliminated from J_bSSFP
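
A sketch of that reduction, reusing the crlb helper above (the column ordering and variable names are assumptions): fixing T1 from DESPOT1 amounts to dropping its column from the bSSFP Jacobian before forming the Fisher matrix.

    # Columns of J_bssfp assumed ordered (M0, T1, T2); with T1 fixed from DESPOT1,
    # keep only the M0 and T2 columns as unknowns.
    J_reduced = J_bssfp[:, [0, 2]]
    bounds = crlb(J_reduced, Sigma_ssfp)   # lower bounds on var(M0), var(T2)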

  18. argmaxα(bSSFP): “bErnst” • For simplicity, I set the phase to π • Mx = 0 in this case • Recall the observation that the maximum signal is constant over phase • I start with the post-RF equations from Freeman-Hill, which differ from those in Deoni 2009 (pre-RF)
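
The talk derives this from the Freeman-Hill post-RF equations; as a hedged stand-in (not necessarily identical to the slide's expression), below is the widely used on-resonance, π-phase-cycled bSSFP form and the flip angle that maximizes it.

    def bssfp(alpha_deg, T1, T2, M0=1.0, TR=5.0):
        # Standard on-resonance, pi-phase-cycled bSSFP signal magnitude.
        a = np.deg2rad(alpha_deg)
        E1, E2 = np.exp(-TR / T1), np.exp(-TR / T2)
        return M0 * np.sin(a) * (1 - E1) / (1 - (E1 - E2) * np.cos(a) - E1 * E2)

    def bernst_angle(T1, T2, TR=5.0):
        # Flip angle (degrees) maximizing the expression above:
        # cos(alpha) = (E1 - E2) / (1 - E1 * E2)
        E1, E2 = np.exp(-TR / T1), np.exp(-TR / T2)
        return np.degrees(np.arccos((E1 - E2) / (1 - E1 * E2)))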

  19. DESPOT2: Constant SNR – Tuned

  20. DESPOT2: Constant SNR – Ideal

  21. DESPOT2: Constant SNR – GA

  22. DESPOT2: Constant SNR – Wtd GA

  23. Results • Take Weighted GA with a grain of salt, as it may not be handled correctly, as we saw with SPGR • Ideal has focused itself into a smaller cone of high T2NR • GA has the most uniform T2NR, as expected • As a side note, we can include T1 in J and try to evaluate the precision of bSSFP for estimating all 3 parameters • F (the Fisher information matrix) is then ill-conditioned, consistent with the known property that SSFP can't do everything by itself
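
The ill-conditioning can be seen directly from the Fisher matrix; a sketch, where J_bssfp_full is a hypothetical bSSFP Jacobian with M0, T1, and T2 all retained:

    F = J_bssfp_full.T.dot(np.linalg.solve(Sigma_ssfp, J_bssfp_full))
    print(np.linalg.cond(F))   # very large: bSSFP alone cannot pin down M0, T1, and T2 together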

  24. Joint DESPOT2 • Next I perform the analysis assuming a joint fit of T1, T2, and M0 with both the SPGR and bSSFP equations • The noise normalization is increased by sqrt(2) to account for the doubling of the total number of images collected
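
A sketch of how the joint analysis can be assembled (the helper and variable names are mine), assuming the per-sequence Jacobians share the same parameter ordering (M0, T1, T2) and the noise is independent across images; the sqrt(2) factor is the slide's normalization.

    def joint_crlb(jacobians, noise_stds, noise_scale=1.0):
        # Stack per-sequence Jacobians and per-measurement noise stds into one CRLB.
        J = np.vstack(jacobians)
        stds = noise_scale * np.concatenate([np.atleast_1d(s) for s in noise_stds])
        return crlb(J, np.diag(stds**2))

    # Joint DESPOT2: SPGR + bSSFP, with the sqrt(2) noise normalization
    bounds = joint_crlb([J_spgr, J_bssfp], [sig_spgr, sig_bssfp], noise_scale=np.sqrt(2))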

  25. J-DESPOT2: Constant SNR – Tuned

  26. J-DESPOT2: Constant SNR – Ideal

  27. J-DESPOT2: Constant SNR – GA

  28. Results • Interpretation of the scale is not entirely intuitive • The sequences are mixed, but each is normalized to have a peak SNR of 1 at the Ernst/bErnst angle • Does it make sense that the estimate of T1 or M0 can be better than the peak SNR of the underlying images? • With a joint fit, the tuned set appears to trade more uniform T1NR for a small price in T2NR • GA isn't a clear winner here, but may be if we only care about a certain range

  29. Joint DESSPOT2 • Next I perform the analysis assuming a joint fit of T1, T2, and M0 with both the SPGR and dual-echo SSFP equations • I model DESS as the magnetization before and after RF excitation, with equations from Freeman-Hill 1971 • TR is kept the same and the same flip-angle protocols are examined • Noise is now scaled up by sqrt(3)
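
Under the same assumptions as the joint-DESPOT2 sketch above, the DESS case just adds the pre- and post-RF SSFP Jacobians and uses the sqrt(3) scaling from the slide:

    bounds = joint_crlb([J_spgr, J_ssfp_before, J_ssfp_after],
                        [sig_spgr, sig_before, sig_after],
                        noise_scale=np.sqrt(3))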

  30. J-DESSPOT2: Constant SNR – Tuned

  31. J-DESSPOT2: Constant SNR – Ideal

  32. J-DESSPOT2: Constant SNR – GA

  33. Results • Perhaps unsurprisingly, we lose some T1NR • Since SPGRs have more noise • But we gain some T2NR • Interestingly, tuned protocol splits into 2 cones for T2 estimation

  34. bSSFP • Plots show signal curves at different θ • Mx is symmetric about θ = π (0.5 cycles here) • My has a crossover point where all phase curves intersect • Its (x, y) location is determined by E1 and E2 • The magnitude maximum over flip angle is constant with θ • The phase there is likewise constant with θ • Can we use this to get off-resonance?

  35. Next Steps • CRLB with real/imag or mag/phase? • DESPOT2-FM • Analysis, visualization will be trickier as want to see precision as a function of off-resonance too • Estimation of M0, T1, T2, θ as a function of T1, T2, θ • Protocol optimization, including TR • A protocol that optimizes precision in B1 inhomogeneity? Assuming B1 map is known • Maximize ∫T1NR(κ)dκ for a given (or range of) T1 • mcDESPOT • Protocol optimization • Exploration of DESS and other sequences

  36. 7T MS: PVF Pipeline • The goal is to compute atrophy by reliably generating 2 masks: • Intracranial mask • Parenchyma mask • In MSmcDESPOT we did: • Intracranial mask = BET mask on the T1w image, which includes the ventricles but generally excludes the space between brain and skull -> so not exactly what the name states • Parenchyma mask = WM+GM, essentially excluding ventricular CSF
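
If PVF here stands for parenchymal volume fraction (my reading, not stated on the slide), the two masks combine into the atrophy measure roughly as:

    def pvf(parenchyma_mask, intracranial_mask):
        # Parenchymal volume fraction: tissue (WM+GM) volume over intracranial volume,
        # assuming boolean masks on the same voxel grid.
        return parenchyma_mask.sum() / float(intracranial_mask.sum())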

  37. 7T MS: PVF Pipeline

  38. Future Segmentation

  39. 7T Thalamus • Used a Gaussian KDE to approximate the distribution • The kernel width is determined automatically by the function
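
A minimal version of that step with scipy, where roi_t1_values is a hypothetical 1D array of T1 samples from one nucleus; gaussian_kde picks the bandwidth automatically (Scott's rule by default).

    from scipy.stats import gaussian_kde

    kde = gaussian_kde(roi_t1_values)                     # bandwidth chosen automatically
    grid = np.linspace(roi_t1_values.min(), roi_t1_values.max(), 200)
    density = kde(grid)                                   # smoothed estimate of the T1 distribution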

  40. 7T Thalamus • In the future, I imagine we want to do ANOVA testing of T1 values between all the ROIs • Or some non-parametric variant (Kruskal-Wallis, I think) • We can then multiply that elementwise by an “adjacency matrix” that flags which nuclei are neighbors • This gives us which nuclei are separable from each other when we take their spatial location into account
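
A sketch of that idea, where rois (a dict mapping nucleus name to a 1D array of T1 values) and adjacency (the neighbor matrix) are hypothetical inputs; pairwise Kruskal-Wallis tests fill a p-value matrix that is then masked by adjacency.

    from scipy.stats import kruskal

    names = sorted(rois)
    n = len(names)
    pvals = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pvals[i, j] = pvals[j, i] = kruskal(rois[names[i]], rois[names[j]]).pvalue

    # adjacency[i, j] = 1 if nuclei i and j are spatial neighbors; masking the p-values
    # with it flags which neighboring nuclei are separable by their T1 values.
    separable_neighbors = (pvals < 0.05) & (adjacency > 0)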
