
Experiments on a New Inter-Subject Registration Method

This study presents a new method for inter-subject registration of brain images that offers improved precision and speed compared to current software. The method incorporates a one-to-one mapping and uses a Levenberg-Marquardt optimization strategy for rapid and accurate alignment.


Presentation Transcript


  1. Experiments on a New Inter-Subject Registration Method John Ashburner 2007

  2. Abstract • The objective of this work was to devise a more precise method of inter-subject brain image registration than those currently available in the SPM software. This involved a model with many more degrees of freedom, but which still enforces a one-to-one mapping. Speed considerations were also important. The result is an approach that models each warp with a single velocity field. These are converted to deformations by a scaling and squaring procedure, and the inverses can be generated in a similar way. Registration is via a Levenberg-Marquardt optimization strategy, which uses a full multi-grid algorithm to rapidly solve the necessary equations. • The method has been used for warping images of 471 subjects. This involved simultaneously matching grey matter with a grey matter template, and white matter with a white matter template. After every few iterations, the templates were re-generated from the means of the warped individual images. Evaluations involved applying pattern recognition procedures to the resulting deformations, in order to assess how well information such as the ages and sexes of the subjects could be predicted from the encoded deformations. A slight improvement in prediction accuracy was obtained when compared to a similar procedure using a small deformation model.

  3. Overview • Motivation • Dimensionality • Inverse-consistency • Principles • Geeky stuff • Example • Validation • Future directions

  4. Motivation • More precise inter-subject alignment • Improved fMRI data analysis • Better group analysis • More accurate localization • Improve computational anatomy • More easily interpreted VBM • Better parameterization of brain shapes • Other applications • Tissue segmentation • Structure labeling

  5. Image Registration • Figure out how to warp one image to match another • Normally, all subjects’ scans are matched with a common template

  6. Current SPM approach • Only about 1000 parameters. • Unable to model detailed deformations

  7. A simple 2D example (figure: individual brain, warped individual, reference)

  8. Residual Differences (figure: individual brain, warped individual)

  9. Expansion and contraction • Relative volumes encoded by Jacobian determinants of deformation

  10. Tissue volume comparisons (figure: warped grey matter, Jacobian determinants, absolute grey matter volumes)
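Slides 9 and 10 can be made concrete with a short sketch. The following is a minimal illustration, assuming a 2D deformation stored as two NumPy arrays of warped coordinates; it computes Jacobian determinants by finite differences and shows the "modulation" step that converts warped grey matter back into absolute tissue volumes. The array names and helper are illustrative, not part of the SPM/DARTEL code.

```python
import numpy as np

def jacobian_determinants(phi_x, phi_y, spacing=1.0):
    """Local volume change of a 2D deformation phi = (phi_x, phi_y).

    phi_x, phi_y : arrays giving the warped x and y coordinate of each voxel.
    A determinant > 1 means local expansion; < 1 means contraction."""
    # Finite-difference approximations to the partial derivatives.
    dphix_dy, dphix_dx = np.gradient(phi_x, spacing)
    dphiy_dy, dphiy_dx = np.gradient(phi_y, spacing)
    return dphix_dx * dphiy_dy - dphix_dy * dphiy_dx

# "Modulation": multiplying warped grey matter by the Jacobian determinants
# turns spatially normalised tissue maps into absolute grey matter volumes.
# absolute_gm = warped_gm * jacobian_determinants(phi_x, phi_y)
```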

  11. A one-to-one mapping • Many models simply add a smooth displacement to an identity transform • One-to-one mapping not enforced • Inverses approximately obtained by subtracting the displacement • Not a real inverse (figure: the small deformation approximation)
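A tiny numeric sketch (a hypothetical 1D displacement, not from the presentation) shows why subtracting the displacement only approximates the inverse:

```python
import numpy as np

# Toy 1D example: forward map x -> x + u(x) with a spatially varying u.
# The small-deformation "inverse" x -> x - u(x) does not return exactly
# to the starting point, because u is evaluated at the warped position.
u = lambda x: 0.3 * np.sin(x)   # hypothetical smooth displacement
x0 = 1.0
y = x0 + u(x0)                  # forward warp
x_back = y - u(y)               # approximate inverse by subtracting u
print(x0, x_back)               # 1.0 vs ~0.97: not a true inverse
```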

  12. Overview • Motivation • Principles • Geeky stuff • Example • Validation • Future directions

  13. Principles • Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL) • Deformations are parameterized by a single flow field, which is considered to be constant in time.

  14. DARTEL • Parameterizing the deformation • φ(0)(x) = x • φ(1)(x) = ∫₀¹ u(φ(t)(x)) dt • u is a flow field to be estimated

  15. Euler integration • The differential equation is dφ(t)(x)/dt = u(φ(t)(x)) • By Euler integration, φ(t+h) = φ(t) + h u(φ(t)) • Equivalent to φ(t+h) = (x + hu) ∘ φ(t)
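As a rough sketch of this integration scheme (not the SPM implementation), the loop below carries an identity grid through repeated Euler steps of a stationary 2D flow field, sampling the field at the current positions with linear interpolation. The array shapes and the use of SciPy's map_coordinates are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def euler_integrate(u, n_steps=8):
    """Euler integration of a stationary 2D flow field u with shape (2, H, W).

    Repeats phi <- phi + (1/n_steps) * u(phi), starting from the identity map,
    and returns the coordinates each voxel is carried to at t = 1."""
    h = 1.0 / n_steps
    phi = np.mgrid[0:u.shape[1], 0:u.shape[2]].astype(float)  # identity map
    for _ in range(n_steps):
        # Sample the flow field at the current positions (linear interpolation).
        ux = map_coordinates(u[0], phi, order=1, mode='nearest')
        uy = map_coordinates(u[1], phi, order=1, mode='nearest')
        phi = phi + h * np.stack([ux, uy])
    return phi
```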

  16. Flow Field

  17. Simple integration vs. scaling and squaring, for (e.g.) 8 time steps • Simple integration: φ(1/8) = x + u/8, φ(2/8) = φ(1/8) ∘ φ(1/8), φ(3/8) = φ(1/8) ∘ φ(2/8), φ(4/8) = φ(1/8) ∘ φ(3/8), φ(5/8) = φ(1/8) ∘ φ(4/8), φ(6/8) = φ(1/8) ∘ φ(5/8), φ(7/8) = φ(1/8) ∘ φ(6/8), φ(8/8) = φ(1/8) ∘ φ(7/8) (7 compositions) • Scaling and squaring: φ(1/8) = x + u/8, φ(2/8) = φ(1/8) ∘ φ(1/8), φ(4/8) = φ(2/8) ∘ φ(2/8), φ(8/8) = φ(4/8) ∘ φ(4/8) (3 compositions) • A similar procedure is used for the inverse, starting with φ(-1/8) = x - u/8
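A minimal sketch of the scaling and squaring idea (assumed helpers, not the DARTEL code): compose() resamples one coordinate map through another, and exponentiate() obtains φ(1) with only log2(time steps) compositions instead of one per step.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi_a, phi_b):
    """phi_a o phi_b for 2D deformations stored as (2, H, W) coordinate maps."""
    return np.stack([map_coordinates(phi_a[c], phi_b, order=1, mode='nearest')
                     for c in range(2)])

def exponentiate(u, n_squarings=3):
    """Scaling and squaring: obtain phi(1) from the flow u.

    Start with phi(1/2^K) ~ x + u/2^K, then square K times via
    phi(2t) = phi(t) o phi(t).  The inverse is generated the same way,
    starting from x - u/2^K."""
    grid = np.mgrid[0:u.shape[1], 0:u.shape[2]].astype(float)  # identity map
    phi = grid + u / (2 ** n_squarings)        # phi(1/8) when K = 3
    for _ in range(n_squarings):
        phi = compose(phi, phi)                # 3 compositions instead of 7
    return phi
```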

  18. Scaling and squaring example

  19. DARTEL

  20. Jacobian determinants remain positive

  21. Overview • Motivation • Principles • Geeky stuff • Feel free to sleep • Example • Validation • Future directions

  22. Registration objective function • Simultaneously minimize the sum of • Likelihood component • From the sum of squares difference • ½∑i (g(xi) - f(φ(1)(xi)))² • φ(1) parameterized by u • Prior component • A measure of deformation roughness • ½uᵀHu
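As a hedged sketch (variable names are illustrative, not the SPM interface), the objective is simply the sum of these two terms once the individual image has been resampled through φ(1):

```python
import numpy as np

def objective(g, f_warped, u, H):
    """Sum-of-squares likelihood plus roughness prior, 0.5 * u' H u.

    g        : template image
    f_warped : individual image resampled through phi(1) (same shape as g)
    u        : flattened flow-field parameters
    H        : regularisation matrix (e.g. linear elasticity), typically sparse"""
    likelihood = 0.5 * np.sum((g - f_warped) ** 2)
    prior = 0.5 * u @ (H @ u)
    return likelihood + prior
```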

  23. Regularization model • DARTEL has three different models for H • Membrane energy • Linear elasticity • Bending energy • H is very sparse (figure: an example H for 2D registration of 6×6 images, linear elasticity)

  24. Regularization models

  25. Optimisation • Uses Levenberg-Marquardt • Requires a matrix solution to a very large set of equations at each iteration: u(k+1) = u(k) - (H+A)⁻¹b • b is the vector of first derivatives of the objective function • A is a sparse matrix of second derivatives • Computed efficiently, making use of scaling and squaring
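A minimal sketch of this update, written with a dense solve for clarity; in DARTEL the system is far too large for a direct solve, which is why the relaxation and full multi-grid machinery on the following slides is used instead. The damping parameter lam is an assumption added to show the Levenberg-Marquardt flavour.

```python
import numpy as np

def lm_update(u, b, A, H, lam=0.0):
    """One update  u <- u - (H + A + lam*I)^{-1} b.

    b : first derivatives of the objective with respect to the flow parameters
    A : (approximate) second derivatives of the likelihood term
    H : regularisation matrix; lam optionally damps the step."""
    M = H + A + lam * np.eye(len(u))
    return u - np.linalg.solve(M, b)
```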

  26. Relaxation • To solve Mx = c, split M into E and F, where • E is easy to invert • F is more difficult • Sometimes: x(k+1) = E⁻¹(c - F x(k)) • Otherwise: x(k+1) = x(k) + (E+sI)⁻¹(c - M x(k)) • Gauss-Seidel when done in place • Jacobi's method if not • Fits high frequencies quickly, but low frequencies slowly
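A short sketch of both relaxation schemes with the simplest split, E = diag(M) (an assumption for illustration; DARTEL's actual split of H+A is shown on the next slide):

```python
import numpy as np

def relax(M, c, x, n_iter=10, in_place=True):
    """Relaxation for M x = c using the split M = E + F with E = diag(M).

    Updating x element by element (in place) gives Gauss-Seidel; computing
    the whole update from the old x gives Jacobi's method.  Both damp
    high-frequency error quickly but low-frequency error slowly."""
    d = np.diag(M)
    for _ in range(n_iter):
        if in_place:                                  # Gauss-Seidel
            for i in range(len(x)):
                x[i] = (c[i] - M[i] @ x + d[i] * x[i]) / d[i]
        else:                                         # Jacobi
            x = x + (c - M @ x) / d
    return x
```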

  27. H+A = E+F

  28. Full Multi-Grid (figure: grids from the lowest to the highest resolution)
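The multigrid idea can be sketched with a two-level V-cycle, the building block of full multi-grid: smooth on the fine grid, restrict the residual to a coarser grid where the remaining low-frequency error is cheap to solve, then interpolate the correction back. The restrict/prolong helpers and the coarse operator here are assumptions, not the SPM implementation.

```python
import numpy as np

def smooth(M, c, x, n_iter=2):
    """Jacobi relaxation used as the smoother (damps high-frequency error)."""
    d = np.diag(M)
    for _ in range(n_iter):
        x = x + (c - M @ x) / d
    return x

def v_cycle(M, c, x, restrict, prolong, coarse_M):
    """One two-level V-cycle for M x = c (sketch; helpers are assumed).

    restrict : maps a fine-grid vector onto the coarser grid
    prolong  : interpolates a coarse-grid vector back to the fine grid
    coarse_M : the same operator assembled on the coarser grid"""
    x = smooth(M, c, x)                               # pre-smoothing
    residual = c - M @ x                              # remaining low-frequency error
    coarse_err = np.linalg.solve(coarse_M, restrict(residual))
    x = x + prolong(coarse_err)                       # coarse-grid correction
    return smooth(M, c, x)                            # post-smoothing
```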

  29. Overview • Motivation • Principles • Geeky stuff • Example • Simultaneous registration of GM & WM • Tissue probability map creation • Validation • Future directions

  30. Simultaneous registration of GM to GM and WM to WM (figure: grey matter and white matter images for subjects 1-4, each matched to a common grey matter and white matter template)

  31. Template • Iteratively generated from 471 subjects • Began with rigidly aligned tissue probability maps • Used an inverse-consistent formulation (figure: initial average, after a few iterations, final template)
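The template-building loop can be sketched as below; register() and warp() stand in for the DARTEL registration and resampling steps and are assumed helpers, not the actual interface.

```python
import numpy as np

def build_template(images, register, warp, n_outer=6):
    """Iterative template creation (sketch with assumed register/warp helpers).

    Start from the average of the rigidly aligned tissue maps, then alternate:
    register every subject to the current template, warp them, and replace
    the template with the mean of the warped images."""
    template = np.mean(images, axis=0)            # initial average
    flows = [None] * len(images)
    for _ in range(n_outer):
        flows = [register(img, template, init=f) for img, f in zip(images, flows)]
        warped = [warp(img, f) for img, f in zip(images, flows)]
        template = np.mean(warped, axis=0)        # regenerate the template
    return template, flows
```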

  32. Grey matter average of 452 subjects – affine

  33. Grey matter average of 471 subjects

  34. White matter average of 471 subjects

  35. Initial GM images

  36. Warped GM images

  37. Overview • Motivation • Principles • Geeky stuff • Example • Validation • Sex classification • Age regression • Future directions

  38. Validation • There is no “ground truth” • Looked at predictive accuracy • Can information encoded by the method make predictions? • Registration method blind to the predicted information • Could have used an overlap of fMRI results • Chose to see whether ages and sexes of subjects could be predicted from the deformations • Comparison with small deformation model

  39. Training and Classifying (figure: control training data and patient training data, with unlabelled '?' test points)

  40. Classifying • y = f(aᵀx + b) (figure: controls, patients, and unlabelled '?' test points)

  41. Support Vector Classifier

  42. Support Vector Classifier (SVC) • a is a weighted linear combination of the support vectors (figure: support vectors highlighted)

  43. Some Equations • Linear classification is by y = f(aᵀx + b) • where a is a weighting vector, x is the test data, b is an offset, and f(.) is a thresholding operation • a is a linear combination of SVs: a = ∑i wi xi • So y = f(∑i wi xiᵀx + b)
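A minimal sketch of this decision rule with f(.) taken as the sign function; the weights, support vectors, and offset would come from a trained SVM and are assumed as inputs here.

```python
import numpy as np

def linear_svc_predict(x, support_vectors, weights, b):
    """Linear SVC decision:  y = f( sum_i w_i x_i' x + b ),  with f = sign.

    support_vectors : array of shape (n_sv, n_features), the x_i
    weights         : the w_i (signed coefficients of the support vectors)
    x               : test image flattened to a feature vector"""
    score = weights @ (support_vectors @ x) + b
    return np.sign(score)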

  44. Going Nonlinear • Nonlinear classification is by y = f(∑i wi k(xi,x)) • where k(xi,x) is some function of xi and x • e.g. RBF classification • k(xi,x) = exp(-||xi-x||²/(2σ²)) • Requires a matrix of distance measures (metrics) between each pair of images.
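The same sketch with the RBF kernel substituted for the dot product (sigma and the helper names are illustrative):

```python
import numpy as np

def rbf_kernel(xi, x, sigma=1.0):
    """k(x_i, x) = exp(-||x_i - x||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((xi - x) ** 2) / (2.0 * sigma ** 2))

def rbf_svc_predict(x, support_vectors, weights, b, sigma=1.0):
    """Nonlinear SVC:  y = f( sum_i w_i k(x_i, x) + b ),  with f = sign."""
    scores = np.array([rbf_kernel(sv, x, sigma) for sv in support_vectors])
    return np.sign(weights @ scores + b)
```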

  45. Nonlinear SVC

  46. Cross-validation • Methods must be able to generalise to new data • Various control parameters • More complexity -> better separation of training data • Less complexity -> better generalisation • Optimal control parameters determined by cross-validation • Test with data not used for training • Use control parameters that work best for these data

  47. Two-fold Cross-validation Use half the data for training, and the other half for testing.

  48. Two-fold Cross-validation Then swap around the training and test data.

  49. Leave One Out Cross-validation Use all data except one point for training. The one that was left out is used for testing.

  50. Leave One Out Cross-validation Then leave another point out. And so on...
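A short sketch of the leave-one-out loop, with fit() and predict() standing in for whatever classifier (e.g. the SVC sketched above) is being validated; both are assumed helpers.

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation (sketch; fit/predict are assumed helpers).

    For each sample, train on all the other samples and test on the one that
    was left out; the fraction correct estimates generalisation accuracy."""
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        model = fit(X[keep], y[keep])             # train without sample i
        correct += int(predict(model, X[i]) == y[i])
    return correct / len(y)
```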
