
FMRI Data Analysis II: Connectivity Analyses



  1. FMRI Data Analysis II: Connectivity Analyses Dr. George Andrew James Research Associate The Wallace H. Coulter Department of Biomedical Engineering at the Georgia Institute of Technology & Emory University School of Medicine Tuesday, November 17, 2009

  2. Overview of Connectivity Analyses • Functional connectivity analyses • Typically correlational • Seed analyses • “Component” analyses (PCA, ICA, SVD) • Effective connectivity analyses • Infer causality • Structural equation modeling • Granger causality analysis • Dynamic causal modeling

  3. Connectivity Analyses Advantages • Require no a priori hypotheses • Can capture influences not relating to model Disadvantages • Computationally & statistically complex • Less intuitive than model-dependent methods

  4. Correlational Analyses • Measure the correlation between a voxel’s timecourse and all other voxels’ timecourses • Like a GLM that uses a voxel’s timecourse as the paradigm [Figure: correlation (r) map highlighting anterior cingulate, orbitofrontal cortex, thalamus, and subgenual cingulate]
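A minimal numpy sketch of this idea (not from the original slides): correlating one voxel’s timecourse against every other voxel’s in a single matrix-vector product. The array shapes and names are illustrative.

```python
import numpy as np

# Hypothetical data: one timecourse per voxel (n_voxels x n_timepoints)
rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 200))

# Z-score each voxel's timecourse so correlation reduces to a dot product
z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

# Pearson correlation of voxel 0 with every voxel (r[0] == 1 by construction)
r = z @ z[0] / data.shape[1]
```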

  5. Seed Analysis • Pick a region of interest (ROI) as the “seed” • Make an SPM in which each voxel is color-coded by the strength of its correlation with the seed ROI • Correlations can be measured in the absence of a task! • Frequency filtering is essential for resting correlations! (see the sketch below) Peltier 2002
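A sketch of a seed analysis on resting data, combining the two points above: band-pass filter first (the 0.01–0.08 Hz band is a common choice, not specified in the slides), then correlate every voxel with the mean timecourse of a hypothetical seed ROI. Data, TR, and ROI definition are all placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical resting-state data: n_voxels x n_timepoints, TR = 2 s
TR = 2.0
data = np.random.randn(5000, 240)

# Band-pass 0.01-0.08 Hz: resting-state correlations live at low frequencies
fs = 1.0 / TR                                    # sampling rate in Hz
b, a = butter(2, [0.01, 0.08], btype="band", fs=fs)
filtered = filtfilt(b, a, data, axis=1)

# Seed timecourse: mean over a hypothetical ROI (here, the first 10 voxels)
seed = filtered[:10].mean(axis=0)

# Correlation map: each voxel's correlation with the seed
z = (filtered - filtered.mean(1, keepdims=True)) / filtered.std(1, keepdims=True)
zs = (seed - seed.mean()) / seed.std()
rmap = z @ zs / filtered.shape[1]
```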

  6. Changing Correlations in Language Network with Task He et al., HBM 2003

  7. Changing Correlations in Language Network with Task [Figure: correlation maps for three conditions - reading meaningless pinyins aloud, reading meaningful pinyins aloud, tongue movement] Pinyin: Chinese words written in English letters (e.g., “xiexie” for “thank you”) He et al., HBM 2003

  8. Changes in correlation with time He et al., HBM 2003

  9. The Good, The Bad, and the Ugly Brain activation during naturalistic viewing of movie stimuli Fig. 1. Intersubject correlation during free viewing of an uninterrupted movie segment. (A) Average percentage of functionally correlated cortical surface across all pairwise comparisons between subjects for the entire movie time course (All), for the regionally specific movie time course (after the removal of the nonselective component, Regional), and for the darkness control experiment (In darkness). (B) Voxel-by-voxel intersubject correlation between the source subject (ZO) and the target subject (SN). Correlation maps are shown on unfolded left and right hemispheres (LH and RH, respectively). Color indicates the significance level of the intersubject correlation in each voxel. Black dotted lines denote borders of retinotopic visual areas V1, V2, V3, VP, V3A, V4/V8, and the estimated border of auditory cortex (A1). The face-, object-, and building-related borders (red, blue, and green rings, respectively) are also superimposed on the map. Note the substantial extent of intersubject correlations and the extension of the correlations beyond visual and auditory cortices.

  10. The Good, The Bad, and the Ugly Fig. 2. Nonselective activation across regions. (A) Correlation between the averaged time course of the VOT cortex in one cortical hemisphere (correlation seed marked by the red contour) and the rest of the cortex, shown on unfolded left and right hemispheres. (B) The average nonselective time course across all activated regions obtained during the first 10 min of the movie for all five subjects. Red line represents the across-subject average time course. There is a striking degree of synchronization among different individuals watching the same movie. Hasson, Science 2004

  11. The Good, The Bad, and the Ugly Reverse correlation: what is subject viewing during timecourse below? Hasson, Science 2004

  13. The Good, The Bad, and the Ugly

  14. The Good, The Bad, and The Ugly Intersubject correlation shows us what brain regions are co-activated across subjects when watching a movie… …AND (perhaps more importantly) brain regions that are not correlated across subjects!
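As an illustration of the intersubject approach (a sketch under assumptions, not Hasson et al.’s actual pipeline): voxelwise correlation of two subjects’ timecourses acquired during the same movie, assuming the data are already aligned to a common space.

```python
import numpy as np

def voxelwise_isc(subj_a, subj_b):
    """Voxelwise intersubject correlation between two subjects watching
    the same movie; inputs are (n_voxels x n_timepoints) arrays."""
    za = (subj_a - subj_a.mean(1, keepdims=True)) / subj_a.std(1, keepdims=True)
    zb = (subj_b - subj_b.mean(1, keepdims=True)) / subj_b.std(1, keepdims=True)
    return (za * zb).mean(axis=1)   # one r value per voxel

# Placeholder data for two subjects viewing the same stimulus
a = np.random.randn(5000, 300)
b = np.random.randn(5000, 300)
r = voxelwise_isc(a, b)
```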

  15. “Component” Analyses • We have a correlation matrix for many ROIs • How can we simplify or distill this correlation matrix into a network of regions? • ex: visuomotor learning most likely involves several independent networks that are simultaneously co-activated • We’ll use PCA (principal components analysis) to extract these networks

  16. Conceptualizing PCA

  17. Conceptualizing PCA brain’s total spatial and temporal variance = variance of voxels’ individual timecourses

  18. Conceptualizing PCA brain’s total spatial and temporal variance = variance of voxels’ individual timecourses BUT, some voxels better explain the brain’s overall variance than others! PCA asks: How can we cluster voxels into components to best explain the brain’s variance?

  19. PCA: visuomotor example [Figure: task timeline of visual stimuli and subject responses over time] Some brain regions (V1, M1, cerebellum, thalamus, SMA) should have greater temporal variability (more variance) than others (Broca’s area, sylvian fissure, amygdala, etc.)

  20. [Figure: voxels’ variance vs. the brain’s total variance, progressively explained by Component 1: visual cortex, Component 2: motor system, and Component 3: frontal-visual]

  21. How do we do Component Analyses? • Linear Matrix Algebra • Eigenvector: given a linear transformation, an eigenvector of that transformation is a nonzero vector that, when the transformation is applied to it, may change in length but not direction • Eigenvalue: the factor by which the transformation scales its eigenvector • +2 : same direction, 2x length • +1 : same direction, same length • -1 : opposite direction, same length
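In symbols: A*v = λ*v, where v is an eigenvector of the transformation A and the scalar λ is its eigenvalue; the three bullets above correspond to λ = +2, +1, and -1.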

  22. How do we do Component Analyses? Wikipedia, 2008

  23. Principal Components Analysis and Singular Value Decomposition • Given a square matrix A of order r x r • A principal components analysis of A yields: U*S*U' = A, where U (r x r) contains the eigenvectors and S (r x r) is a diagonal matrix containing the eigenvalues • A = U(1)*S(1)*U(1)' + U(2)*S(2)*U(2)' + … + U(r)*S(r)*U(r)' • The computed principal components, or latent variables (LV), are mutually uncorrelated • The first LV accounts for the largest part of A (largest variance), and the next LV accounts for the second largest variance not related to the first LV PLS Workshop 2008; University of Toronto

  24. Principal Components Analysis and Singular Value Decomposition • Eigenvalues indicate the proportion of total variance in the matrix that is captured by each LV • If λi is the ith eigenvalue from a PCA on a matrix, the proportion of variance captured by LVi is λi / (λ1 + λ2 + … + λr) PLS Workshop 2008; University of Toronto

  25. Principal Components Analysis and Singular Value Decomposition • Conceptual - Regression Analogy • Step 1: Derive a latent variable (LV) that accounts for as much of matrix A as possible: LV1 = S1*(u1*X1 + u2*X2 + u3*X3), where S1 is a constant scaling factor (eigenvalue) and uj is the weight for Xj in LV1 • Step 2: Regress LV1 out of matrix A and repeat step 1. Note that because we have removed LV1 from the data, LV2 is necessarily orthogonal to LV1 • Note: this will not work in practice, it's only an analogy PLS Workshop 2008; University of Toronto

  26. Principal Components Analysis and Singular Value Decomposition
Correlation Matrix (A):
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000
PLS Workshop 2008; University of Toronto

  27. In other Words… A = U * S * U' (helpful hint: in Matlab, just use the command [U,S]=eig(A))
A (correlation matrix):
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000
U (eigenvectors as columns):
0.6402 -0.2898 -0.7115
0.6357 -0.3202 0.7024
0.4313 0.9019 0.0207
S (eigenvalues on the diagonal):
1.7369 0 0
0 0.8318 0
0 0 0.4313
U' (transpose of U):
0.6402 0.6357 0.4313
-0.2898 -0.3202 0.9019
-0.7115 0.7024 0.0207
LV1 = U1 * S1 * U1':
0.7118 0.7068 0.4796
0.7068 0.7019 0.4763
0.4796 0.4763 0.3232
= [0.6402; 0.6357; 0.4313] * 1.7369 * [0.6402 0.6357 0.4313]
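A numpy equivalent of the Matlab hint above (a sketch; numpy is a substitute, not what the slides used), reproducing the decomposition of this exact correlation matrix:

```python
import numpy as np

# The correlation matrix A from the slide
A = np.array([[1.0000, 0.5685, 0.2558],
              [0.5685, 1.0000, 0.2424],
              [0.2558, 0.2424, 1.0000]])

# Eigendecomposition (eigh suits symmetric matrices; eigenvalues ascend)
evals, U = np.linalg.eigh(A)

# Reorder so the largest eigenvalue (LV1) comes first, as on the slide
order = np.argsort(evals)[::-1]
evals, U = evals[order], U[:, order]

# A is the sum of rank-1 latent variables Ui * Si * Ui'
LVs = [evals[i] * np.outer(U[:, i], U[:, i]) for i in range(len(evals))]
assert np.allclose(sum(LVs), A)

print(np.round(LVs[0], 4))   # matches the slide's LV1 matrix
```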

  28. Principal Components Analysis and Singular Value Decomposition LVi = Ui*Si*Ui'
LV1:
0.7118 0.7068 0.4796
0.7068 0.7019 0.4763
0.4796 0.4763 0.3232
LV2:
0.0699 0.0772 -0.2174
0.0772 0.0853 -0.2402
-0.2174 -0.2402 0.6766
LV3:
0.2183 -0.2156 -0.0064
-0.2156 0.2128 0.0063
-0.0064 0.0063 0.0002
LV1 + LV2 + LV3 = A:
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000
PLS Workshop 2008; University of Toronto

  29. Principal Components Analysis and Singular Value Decomposition • For non-square matrices, we use singular value decomposition (SVD) rather than principal components analysis • Given matrix B that is r x c, an SVD of B yields: U*S*V' = B, where U is r x r, S is a diagonal matrix r x r, and V is c x r
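A minimal numpy illustration of the non-square case (the matrix sizes are made up for the example):

```python
import numpy as np

# Hypothetical non-square data matrix B: r ROIs x c timepoints
r, c = 3, 10
B = np.random.randn(r, c)

# Economy-size SVD: U is r x r, s holds the r singular values, Vt is r x c
U, s, Vt = np.linalg.svd(B, full_matrices=False)

# Reconstruct B = U * S * V'
assert np.allclose(U @ np.diag(s) @ Vt, B)
```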

  30. PCA limitations • “If task-related fMRI changes are only a small part of total signal variance… capturing the greatest variance in the data may reveal little information about task-related activations.” (McKeown, 1998) • ex: V1 and images of disgust vs. horror • Components must be “orthogonal,” making them difficult to conceptualize and less meaningful as their order increases

  31. Independent Component Analysis • Related to PCA, ICA unmixes a mixture of signals into independent sources • Generally accepted as more powerful and sensitive than PCA • Toolboxes: GIFT and FastICA for Matlab McKeown (1998)
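A toy unmixing example with scikit-learn’s FastICA (a Python stand-in for the Matlab toolboxes the slide names), illustrating the idea of recovering sources from observed mixtures:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical source signals mixed linearly (cocktail-party setup)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])             # mixing matrix
X = S @ A.T                                        # observed mixtures

# Recover the sources (up to sign/scale/order ambiguity)
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
```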

  32. Another ICA illustration (McKeown, 1998)

  33. Conceptualizing ICA [Figure: Axis 1, Axis 2 (PCA), and Axis 2 (ICA) overlaid on the same data]

  34. ICA Comparisons • 3 participants performed the Stroop test • ICA yielded multiple components, including one whose timecourse closely matched the paradigm (shown right) (McKeown, 1998)

  35. Additional comments (McKeown 1998) • A voxel can contribute to multiple components • ICA reveals non-task-specific components • ICA could be valuable for masking unwanted voxels (e.g., slowly varying activity)

  36. Regional Homogeneity Regional homogeneity estimates how correlated a voxel is with its immediate neighbors – i.e. a region’s homogeneity
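Regional homogeneity is typically quantified with Kendall’s coefficient of concordance (W) over a voxel and its neighbors; a minimal sketch, assuming a 27-voxel neighborhood (the voxel plus its 26 nearest neighbors) and placeholder data:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(timecourses):
    """Kendall's coefficient of concordance (W) across voxel timecourses.

    timecourses: array of shape (k_voxels, n_timepoints)."""
    k, n = timecourses.shape
    # Rank each voxel's timecourse over time
    ranks = np.apply_along_axis(rankdata, 1, timecourses)
    # Sum of ranks at each timepoint, and its deviation from the mean
    R = ranks.sum(axis=0)
    S = ((R - R.mean()) ** 2).sum()
    return 12 * S / (k ** 2 * (n ** 3 - n))

# ReHo for one voxel: W over the voxel and its hypothetical 26 neighbors
neighborhood = np.random.randn(27, 200)   # placeholder data
print(kendalls_w(neighborhood))
```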

  37. Regional Homogeneity and Anesthesia Peltier*, Kerssens, Hamann, Sebel, Byas-Smith & Hu. (2005). NeuroReport, 16, 285-288. Regional homogeneity describes how strongly a brain region communicates with its immediate neighbors. This analysis provides insight into aberrant connectivity patterns within a neural region. We have demonstrated progressive reduction in the local coherence of frontal and sensorimotor cortices with increasing anesthesia.

  38. Regional Homogeneity and Epilepsy James* & Drane. (unpublished) Preliminary findings: The hippocampus in the epileptogenic hemisphere shows less regional homogeneity than its counterpart.

  39. Effective Connectivity • Unlike correlational methods (aka functional connectivity), effective connectivity attempts to find causal relationships • Simultaneous influences among variables (structural equation modeling, dynamic causal modeling) • Temporal influences among variables (Granger causality analysis)

  40. Structural Equation Modeling • SEM is a statistical technique to assess both the strength and directionality of interactions between variables • SEM is also known as ‘path analysis’ or ‘causal modeling’ • SEM is traditionally confirmatory • SEM assesses how well a model fits a given dataset… i.e., SEM tests the model, not the data!

  41. [Figure: path diagram linking Mother, Father, Son, and Daughter] But correlation doesn’t imply causality! (So how does SEM work?)

  42. Interpreting Factor Analysis / SEM [Figure: observed covariance matrix]

  43. Interpreting Factor Analysis • Find values of a, b, etc. so that the predicted covariance matrix best matches the observed covariance matrix [Figure: one-factor path diagram - Factor 1 loads on Tests 1-4 via paths a, b, c, d, with unique variances u, v, w, x; predicted vs. observed covariance matrices]
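A small numpy illustration of the “predicted covariance” being matched here, for a one-factor model with standardized variables; the loading and unique-variance values are hypothetical, not from the slide:

```python
import numpy as np

# Hypothetical loadings a, b, c, d of Factor 1 on Tests 1-4
loadings = np.array([0.8, 0.7, 0.6, 0.5])
# Unique variances u, v, w, x (chosen so each test has total variance 1)
uniques = np.array([0.36, 0.51, 0.64, 0.75])

# Predicted covariance of the one-factor model: loadings*loadings' + diag(uniques)
predicted = np.outer(loadings, loadings) + np.diag(uniques)
print(predicted)

# Fitting the model means adjusting a, b, c, d (and u, v, w, x) until
# `predicted` best matches the observed covariance matrix.
```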

  44. Interpreting Factor Analysis [Figure: two-factor path diagram - Factors 1 and 2 correlate (r) and load on Tests 1-4 via paths a, b, c, d, with unique variances u, v, w, x; predicted vs. observed covariance matrices]

  45. Interpreting Factor Analysis [Figure: the same two-factor diagram with the factor correlation fixed at r = 0]

  46. Examples of SEM from neuroimaging • Path loading of the PP→ITp connection (posterior parietal → posterior inferior temporal) significantly increases as subjects learn object/spatial associations. (Büchel 1999)

  47. Structural Equation Modeling • Path weighting expresses the strength of a connection (analogous to correlation… but directional!) • SMA: supplementary motor area • PM: premotor (R/L) • M1: primary motor (R/L) Zhang et al., 2005

  48. Granger Causality Analysis • Directly measures temporal associations • Given two ROI timecourses X(t)=[x1 x2 x3 … xN] and Y(t)=[y1 y2 y3 … yN], build an autoregressive model in which past values of X predict the current value of X • If including past values of Y improves the prediction of the current value of X beyond what past values of X alone achieve, we say “Y Granger-causes X” (see the sketch below)
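A quick sketch with statsmodels (an assumed toolchain, not the one from the lecture): synthetic timecourses in which Y drives X at a one-sample lag, then a test of whether Y Granger-causes X.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic ROI timecourses: y drives x with a one-sample lag
rng = np.random.default_rng(0)
n = 200
y = rng.standard_normal(n)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(n)   # x(t) depends on y(t-1)

# Columns ordered (x, y): the test asks whether the 2nd column (y)
# Granger-causes the 1st column (x)
results = grangercausalitytests(np.column_stack([x, y]), maxlag=2)
```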

  49. Granger Causality Analysis Experiment: Subjects engage in motor fatigue task – sinusoidal contraction of a hand weight Activation of many regions: Motor, premotor, SMA, cerebellum, S1, parietal

  50. Granger Causality Analysis [Figure: directed network among M1, SMA, PM, S1, cerebellum (C), and parietal cortex (P)]
