
FMRI - An Introduction to Data Analysis and Visualisation



    1. fMRI - An Introduction to Data Analysis and Visualisation Krish Singh singhkd@cardiff.ac.uk Psychology, Cardiff, 3rd March 2006

    3. fMRI - functional Magnetic Resonance Imaging

    4. The Haemodynamic Response

    5. The BOLD signal

    6. fMRI - A typical experiment

    8. http://www.ssc.uwo.ca/psychology/culhamlab/Jody_web/fmri4newbies.htm

    12. Image conditioning

    13. Your Data Usually EPI. Low resolution, some distortions inevitable. X by Y pixels in each slice, Z slices. Typically: X=64, Y=64, Z=40. 3x3x3 mm is a good isotropic voxel size. One timepoint every TR (commonly 3 seconds). A 5-minute run is typical (100 timepoints). 64x64x40x100x2 bytes = 32 Mbytes / run.
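The storage arithmetic on this slide can be checked in a few lines of Python, using the typical dimensions quoted above:

```python
# Back-of-the-envelope size of one EPI run: 64 x 64 x 40 voxels,
# 100 timepoints, 2 bytes per value (16-bit integer data).
X, Y, Z = 64, 64, 40      # in-plane matrix and number of slices
timepoints = 100          # 5-minute run at TR = 3 s
bytes_per_voxel = 2       # int16 raw EPI data

run_bytes = X * Y * Z * timepoints * bytes_per_voxel
print(run_bytes / 2**20, "MiB")   # ~31.25 MiB, i.e. the "32 Mbytes" above
```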

    14. Data Pre-processing Slice timing correction Motion correction Co-registration to an anatomical scan Noise rejection Temporal Filtering Spatial Filtering Global Intensity Scaling

    15. A single subject study Data is analysed in order to produce a single map of activation for that subject. Can be quick - important for clinical applications. Assume the data is coregistered to the subject’s own anatomical scan, and has been pre-processed using optimal smoothing etc.

    16. Parameter Estimation Decide on the model we want to fit to the data (depends on haemodynamic parameters and our experimental design). Fit the model to each voxel’s timecourse - this provides a quantitative measure of the magnitude of each parameter of interest.
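As a sketch of what "fitting the model to each voxel's timecourse" means in practice, here is a minimal least-squares fit with NumPy. The design (a boxcar plus a constant baseline) and the data are synthetic illustrations, not values from the original slides:

```python
import numpy as np

# Minimal GLM sketch: fit a design matrix to each voxel's timecourse
# with ordinary least squares. All numbers here are illustrative.
rng = np.random.default_rng(0)
n_timepoints, n_voxels = 100, 5

boxcar = np.tile([0.0] * 10 + [1.0] * 10, 5)               # on/off task regressor
design = np.column_stack([boxcar, np.ones(n_timepoints)])  # task + baseline

# Synthetic data: voxel 0 responds to the task, the others are noise.
data = rng.standard_normal((n_timepoints, n_voxels))
data[:, 0] += 3.0 * boxcar

# beta has one row per regressor, one column per voxel; row 0 is the
# estimated task amplitude for each voxel.
beta, *_ = np.linalg.lstsq(design, data, rcond=None)
print(beta[0])
```

The task amplitude is large for the responding voxel and near zero for the noise voxels, which is exactly the quantitative measure the slide describes.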

    18. Parameter Estimation (continued) Note that the amplitude (amp) estimated in the previous slide measures the magnitude of the brain’s response to a particular stimulus/task. Most useful in parametric studies. The amplitude is not a statistical parameter. However, it does contain some measure of “goodness of fit”. Formally, it is the component of the signal variance (amplitude) which is explained by this particular model component (c.f. % variance explained). The most popular (and generally useful) approach is to estimate these parameters as part of an all-encompassing General Linear Model.

    19. Parameter Estimation Simple Implementation In the previous simple example, we could implement our single subject-single condition model in the following way: Form a model haemodynamic response function which “follows” our task-design boxcar. For each voxel, calculate the correlation coefficient, r, between the voxel’s timecourse and our model. At the same time, calculate the amplitude of the response. This can be estimated by multiplying the standard-deviation of the timeseries by r.
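The recipe above can be sketched directly; the boxcar, the noise level, and the true amplitude below are illustrative assumptions:

```python
import numpy as np

# Simple correlation analysis: correlate a boxcar model with one
# voxel's timecourse, then recover the amplitude as r times the
# timecourse standard deviation (the component of the signal's
# variability explained by this model component).
rng = np.random.default_rng(1)
model = np.tile([0.0] * 10 + [1.0] * 10, 5)          # task-design boxcar
timecourse = 2.0 * model + rng.standard_normal(model.size)

r = np.corrcoef(model, timecourse)[0, 1]
amplitude = r * timecourse.std()
print(f"r = {r:.2f}, amplitude = {amplitude:.2f}")
```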

    20. Sample results

    21. Amplitude results

    22. Amplitude results - SPM version

    23. More complicated designs More complicated designs can be used, with multiple tasks, for example (A,B,C,D). A model can be specified separately for each task and a parameter image generated for each of A,B,C,D. Differences between tasks can be assessed using algebraic manipulations of the parameter maps e.g A-B reveals areas which are more activated by A than B. Such manipulations are known as Contrasts. SPM - con images FSL - COPE images (Contrast of Parameter Estimates) Contrast images provide great flexibility in the kinds of questions that can be asked in functional imaging studies, such as factorial analysis.
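As a sketch, a contrast such as A-B is simply a voxelwise difference of the two parameter maps; the tiny synthetic arrays below stand in for real parameter images:

```python
import numpy as np

# One parameter map per condition; an A-B contrast image (an SPM
# "con" image / FSL "COPE" image) is the voxelwise difference.
shape = (4, 4, 3)                      # tiny illustrative volume
rng = np.random.default_rng(2)
beta_A = rng.standard_normal(shape)
beta_B = rng.standard_normal(shape)

contrast_A_minus_B = beta_A - beta_B   # positive where A activates more than B
print(contrast_A_minus_B.shape)
```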

    24. Statistical Thresholding In PET/fMRI, the main (over-?) emphasis has been on statistical thresholding, rather than a consideration of parameter amplitude. This is especially important for cognitive studies, where the amplitude of a response may be small and we need, therefore, to decide whether it is “really there”. In general, we need to convert our statistic images to probability maps.

    25. Classical Inference Given a particular activation, A, we have to decide what the probability, P, is that A could have occurred by chance. This is the Null Hypothesis, H. We reject H, and label the region as active, if P is less than some given threshold, PCritical (usually 0.05).

    26. How is it done? Convert our statistic images (r or T) to P-values. May be via an intermediate statistical transform to a Z-score (Normal distribution with zero mean and unit standard deviation). Amp -> r -> Z -> P or Amp -> T -> Z -> P These transformations require an accurate estimate of the effective degrees of freedom (in time). This can be problematic because of temporal correlation. The above use parametric statistical results. Non-parametric methods offer some advantages over these conventional approaches (as well as helping with multiple-comparisons correction).
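One standard r -> Z -> P route is Fisher's z-transform; a minimal sketch (with illustrative r and n, and ignoring the temporal-correlation caveat the slide raises):

```python
import math

# Fisher's z-transform: Z = atanh(r) * sqrt(n - 3) is approximately
# standard Normal under the null hypothesis, so P follows from the
# Normal survival function. With temporally correlated fMRI noise the
# effective degrees of freedom are lower than n, which this ignores.
r, n = 0.5, 100                        # correlation and timepoints (illustrative)
z = math.atanh(r) * math.sqrt(n - 3)   # Z-score
p = 0.5 * math.erfc(z / math.sqrt(2))  # one-tailed P-value
print(f"Z = {z:.2f}, P = {p:.1e}")
```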

    27. Problems/Issues 1-P is not the probability of activation, and P is not the probability that A is inactivated; it is the probability that A could have occurred by chance if there were no response. If more trials/subjects are run so that signal-to-noise increases, we may well be able to reject H (in fact, we always can!). This means we can never state that a particular area is definitely not activated by a task.

    28. A Big Issue: Correction for multiple comparisons P is assessed separately at each voxel. We may fit many tens of thousands of voxels (N repeated measures). So, the probability of any one voxel in the brain being active, by chance, is actually much greater than PCritical. We need to correct our thresholds for multiple comparisons. PCorrected = PCritical / N (Bonferroni correction) In practice we need to modify N. This is because the number of independent tests, N*, is actually much lower than N because of spatial correlation (smoothness). Gaussian Field Theory (one approach) is used to calculate this smoothness and to correct for multiple comparisons.
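The Bonferroni arithmetic is trivial but worth seeing with a typical in-brain voxel count (the 50,000 below is an illustrative figure):

```python
# Bonferroni correction: the per-voxel threshold needed so that the
# chance of ANY false positive across N independent tests stays at
# PCritical. In practice N is replaced by the much smaller N* because
# of spatial smoothness.
p_critical = 0.05
n_voxels = 50_000            # illustrative in-brain voxel count

p_corrected = p_critical / n_voxels
print(p_corrected)           # per-voxel threshold of 1e-06
```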

    29. Correction for multiple comparisons: Illogical? PCorrected = PCritical / N* Note, perhaps counter-intuitively, PCorrected depends on how much of the brain we ‘look at’. If we concentrate on a sub-region (perhaps because we have a-priori knowledge that activation should be in that region) then we can use a smaller value for N* and hence less stringent corrected P thresholds. (SPM speak: Small Volume Correction) This is not illogical; we just have to remember to state the full inference statement: the probability of any voxel, within the analysed volume, being activated by chance is less than PCritical. Note that moving away from classical inference, to more sophisticated approaches such as Bayesian Inference, may have many advantages.

    30. Statistics Results from SPM

    31. Statistics Results from SPM

    33. Single-subject - Pre-surgical mapping

    34. The “group blob” study Transforming data to a common spatial coordinate system before group statistical analysis

    40. Group Parameter Estimation Let’s suppose we have a single condition study: 5 subjects. One run (Passive-Active) design. 8 repeats of the boxcar in each run.

    42. Problems with a fixed-effects analysis The group effect could be dominated by a single subject (or sub-group). This is because inter-subject variability is not taken into account. Formally, a fixed effects analysis only allows us to make statements about the subjects studied - it does not allow a generalisation to the population. Need a random (mixed) effects or Conjunction analysis.
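A random-effects analysis, sketched at a single voxel: each subject is summarised by one contrast estimate, and a one-sample t-test is run across subjects, so inter-subject variability enters through the spread of those values. The five per-subject effects below are illustrative:

```python
import statistics

# Random-effects group test at one voxel: t-test of per-subject
# contrast estimates against zero. A fixed-effects analysis would
# instead pool all subjects' timepoints and ignore this between-
# subject spread.
per_subject_effect = [1.2, 0.8, 1.5, 0.9, 1.1]   # illustrative values

n = len(per_subject_effect)
mean = statistics.fmean(per_subject_effect)
se = statistics.stdev(per_subject_effect) / n ** 0.5
t = mean / se                  # compare against t with n - 1 = 4 df
print(f"t({n - 1}) = {t:.2f}")
```

Because the denominator is the between-subject standard error, one unusually strong subject inflates the variance as well as the mean, so it cannot dominate the group result the way it can in a fixed-effects pooling.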

    44. Problems with Group Analyses Assumes all subjects perform the task in the same way - strategy effects. Assumes cortical (gyral) anatomy can be accurately registered using spatial normalisation. Some sulci/gyri are very variable. Assumes functional areas can be accurately registered using spatial normalisation. Some functional areas are not well delineated by anatomical information (the borders of the visual areas V1, V2 etc., for example, are not well registered). In the case of anatomical variability, the areas which are least variable will show the strongest group effect, so only the most spatially consistent components of a task-related network may be revealed, even though other cortical regions may be more important.

    46. Reporting group activations Although 2D and 3D visualisations are useful summaries, and can be very pleasing to the eye, we need to be able to report in a quantitative way, where activations have occurred. Single subject studies (e.g. patients) should be reported with reference to the subject’s own anatomy. Group studies can be reported using a standard coordinate system (such as Talairach) and use can be made of probabilistic databases.

    47. Talairach Coordinate System

    48. Template brains

    49. Clusters Once your functional results are in a standard coordinate space, you have at least two ways of reporting the loci of activation: Cluster reporting: Software such as SPM “breaks” your functional map into discrete clusters, and then reports the size, statistics and the Talairach coordinates of each cluster centre. Database reporting: Software reports the amount of activation in given anatomical regions, defined in published databases (Brodmann, Gyral Atlases, Talairach Daemon).

    51. Problems This is, in general, a useful approach and is one adopted by most of the published papers. However, if the connected cluster is large and spans many anatomical regions, the centre of the cluster is meaningless.

    53. Parametric Region-Of-Interest studies Group studies without spatial normalisation. Most useful if you wish to investigate the response to a parametrically varying stimulus, in anatomically varying regions. Or: If you have a-priori knowledge about a discrete anatomical area.

    56. Example: Temporal Frequency Tuning This study investigated the response in various areas of visual cortex to sinusoidal gratings drifting at different speeds. In each subject, the visual areas V1, V2, V3 etc. were defined using retinotopic mapping, i.e. a separate functional experiment.

    58. The Temporal Frequency Tuning Experiment

    60. Visualisation Making your data look pretty and Increasing your publication costs

    61. 2D functional overlays

    62. 3D functional overlays

    63. Multiple 2D overlays

    64. Grey matter inflation and flattening

    65. Grey matter inflation and flattening

    66. Grey matter inflation and flattening

    67. Grey matter inflation and flattening

    68. Retinotopic mapping

    69. 2D and 3D coordinates

    70. Gyral identification

    71. Grey matter inflation - gyral identification

    72. Grey matter flatmap - gyral identification

    73. Grey matter flatmap - visuotopic areas

    74. Grey matter flatmap - Brodmann areas

    75. Free Software Used/Mentioned SPM - http://www.fil.ion.ucl.ac.uk/spm/ Analysis, Reporting, Visualisation. FSL - http://www.fmrib.ox.ac.uk/fsl/ Analysis, Reporting, Visualisation. mri3dX/mriWorkshopX - http://www.aston.ac.uk/lhs/staff/singhkd/mri3dX/ Simple Analysis, Reporting, Visualisation. FreeSurfer - http://surfer.nmr.mgh.harvard.edu/ Analysis, Reporting, Visualisation, Inflation, Flatmap. Caret - http://stp.wustl.edu/resources/caretnew.html Analysis, Reporting, Visualisation, Inflation, Flatmap, Atlases. mrUnfold - http://white.stanford.edu/~brian/mri/segmentUnfold.htm Visualisation, Flatmap.

    76. Summary There are many complicated issues associated with analysing fMRI data. These include study design, analysis design, statistics, activation reporting and visualisation. Almost everything you need to do can be done by free software packages.
