
General Linear Model & Classical Inference


Presentation Transcript


  1. General Linear Model & Classical Inference London, SPM-M/EEG course May 2017 C. Phillips, Cyclotron Research Centre, ULg, Belgium http://www.cyclotron.ulg.ac.be

  2. Overview • Introduction • ERP example • General Linear Model • Definition & design matrix • Parameter estimation & interpretation • Contrast & inference • Correlated regressors • Conclusion

  3. Overview • Introduction • ERP example • General Linear Model • Definition & design matrix • Parameter estimation & interpretation • Contrast & inference • Correlated regressors • Conclusion

  4. Overview of SPM. Raw EEG/MEG data → Pre-processing (converting, filtering, resampling, re-referencing, epoching, artefact rejection, time-frequency transformation, …) → Image conversion → General Linear Model (design matrix, parameter estimates) → Contrast: c' = [-1 1] → Statistical Parametric Map (SPM) → Inference & correction for multiple comparisons.

  5. Statistical Parametric Maps over different data domains: 1D (time), 2D (time-frequency), 2D+t (scalp-time: mm × mm × time), 3D (mm × mm × mm: M/EEG source reconstruction, fMRI, VBM).

  6. ERP example • Random presentation of ‘faces’ and ‘scrambled faces’ • 70 trials of each type • 128 EEG channels Question: is there a difference between the ERP of ‘faces’ and ‘scrambled faces’?

  7. ERP example: channel B9, focus on the N170 component. The test compares the size of the effect to its error standard deviation.

  8. Overview • Introduction • ERP example • General Linear Model • Definition & design matrix • Parameter estimation & interpretation • Contrast & inference • Correlated regressors • Conclusion

  9. Data modelling: Data = b1 × (faces) + b2 × (scrambled) + Error, i.e. Y = b1·X1 + b2·X2 + ε, where the regressors X1 and X2 indicate the 'faces' and 'scrambled' trials respectively.

  10. In matrix form: Y = X·b + ε, with Y the data vector, X = [X1 X2] the design matrix, b = [b1 b2]' the parameter vector and ε the error vector.

  11. General Linear Model: Y = X·b + ε, where Y is the N×1 data vector, X the N×p design matrix, b the p×1 parameter vector and ε the N×1 error vector (N: # trials, p: # regressors). The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds.
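To make the definition concrete, here is a minimal NumPy sketch of this GLM for the ERP example (70 'faces' and 70 'scrambled' trials, as on slide 6); the effect sizes and noise level are made-up illustration values, not data from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 140                        # 70 'faces' + 70 'scrambled' trials
X = np.zeros((n_trials, 2))           # N x p design matrix (p = 2 regressors)
X[:70, 0] = 1                         # regressor 1 codes the 'faces' trials
X[70:, 1] = 1                         # regressor 2 codes the 'scrambled' trials

b_true = np.array([-4.0, -2.0])       # hypothetical N170 amplitudes (made up)
eps = rng.normal(0.0, 1.5, n_trials)  # i.i.d. Gaussian error
Y = X @ b_true + eps                  # the GLM: Y = X.b + eps
```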

  12. General Linear Model • Applied to all channels & time points • Mass-univariate parametric analysis • one sample t-test • two sample t-test • paired t-test • Analysis of Variance (ANOVA) • factorial designs • correlation • linear regression • multiple regression

  13. Parameter estimation: fit Y = X·b + ε and estimate the parameters b̂. Residuals: e = Y − X·b̂. If i.i.d. error is assumed, the Ordinary Least Squares parameter estimate is b̂ = (X'X)⁺X'Y, such that the residual sum of squares Σe² is minimal.
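Continuing the sketch above, the OLS estimate can be computed with the pseudoinverse; the degrees of freedom and variance estimate follow the slide's i.i.d. assumption.

```python
# OLS estimation for the model built above: b_hat minimises the residual
# sum of squares; the pseudoinverse also handles rank-deficient X.
b_hat = np.linalg.pinv(X) @ Y             # b_hat = (X'X)+ X'Y
e = Y - X @ b_hat                         # residuals
df = n_trials - np.linalg.matrix_rank(X)  # degrees of freedom: N - rank(X)
s2 = e @ e / df                           # error variance estimate s^2
```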

  14. Mass-univariate analysis. Transform the data for all subjects and conditions from sensor space to voxel space (sensor-to-voxel transform, yielding an evoked response image over space × time); then fit a voxel-wise GLM, i.e. analyse the data at each voxel, to form the SPM.

  15. Hypothesis Testing. The Null Hypothesis H0: typically what we want to disprove (i.e. no effect). ⇒ The Alternative Hypothesis HA = the outcome of interest.

  16. Contrast & t-test: SPM-t over time & space. A contrast specifies a linear combination of the parameter vector: c'b. ERP: faces < scrambled, i.e. b1 < b2? (b̂i: estimate of bi). With c' = [-1 +1], test H0: c'b = −1×b1 + 1×b2 > 0? The t-statistic divides the contrast of estimated parameters by its variance estimate: T = c'b̂ / sqrt(s²·c'(X'X)⁺c).
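A short sketch of this t-statistic for the c' = [-1 +1] contrast, directly implementing T = c'b̂ / sqrt(s²·c'(X'X)⁺c) on the variables from the earlier snippets.

```python
# t-statistic for the contrast c' = [-1 +1] ('scrambled' minus 'faces').
c = np.array([-1.0, 1.0])
t_stat = (c @ b_hat) / np.sqrt(s2 * (c @ np.linalg.pinv(X.T @ X) @ c))
```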

  17. Hypothesis Testing (continued). The Test Statistic T summarises the evidence about H0; it is (typically) small in magnitude when H0 is true and large when H0 is false. ⇒ We therefore need to know the distribution of T under the null hypothesis: the null distribution of T.

  18. Significance level α: the acceptable false positive rate. ⇒ A threshold u_α controls the false positive rate: P(T > u_α | H0) = α. Observing a test statistic t, a realisation of T, leads to a conclusion about the hypothesis: reject H0 in favour of HA if t > u_α. The p-value summarises the evidence against H0: the chance of observing a value more extreme than t under H0, p = P(T > t | H0).
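Under the i.i.d. assumption, T follows a Student-t null distribution with df = N − rank(X) degrees of freedom, so the threshold and p-value of this slide can be computed with SciPy, continuing the toy example:

```python
from scipy import stats

alpha = 0.05
u_alpha = stats.t.ppf(1 - alpha, df)  # threshold: P(T > u_alpha | H0) = alpha
p_value = stats.t.sf(t_stat, df)      # P(T > t | H0): evidence against H0
reject_H0 = t_stat > u_alpha          # one-sided decision at level alpha
```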

  19. Contrast & t-test, a few remarks: • Contrasts = simple linear combinations of the betas. • The t-test is a signal-to-noise measure (ratio of the estimate to the standard deviation of the estimate). • The t-statistic has no dependency on the scaling of the regressors or of the contrast. • It is a unilateral test: H0: c'b = 0 vs. HA: c'b > 0.

  20. Extra-sum-of-squares & F-test. Model comparison: full model X = [X0 X1] or reduced model X0? Null hypothesis H0: the true model is X0 (the reduced model). The test statistic is the ratio of explained and unexplained variability (error): F = ((RSS0 − RSS)/ν1) / (RSS/ν2), with ν1 = rank(X) − rank(X0) and ν2 = N − rank(X), where RSS0 and RSS are the residual sums of squares of the reduced and full models.
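The extra-sum-of-squares F statistic can be sketched as follows on the running example; the constant-only reduced model X0 is an illustrative choice, not one from the slides.

```python
# Compare the full model X with a reduced model X0 (here a constant-only
# model) via the extra sum of squares.
X0 = np.ones((n_trials, 1))
b0 = np.linalg.pinv(X0) @ Y
RSS0 = np.sum((Y - X0 @ b0) ** 2)  # reduced-model residual sum of squares
RSS = e @ e                        # full-model residual sum of squares

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = n_trials - np.linalg.matrix_rank(X)
F = ((RSS0 - RSS) / nu1) / (RSS / nu2)
p_F = stats.f.sf(F, nu1, nu2)      # P(F > observed | H0)
```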

  21. F-test & multidimensional contrasts. Tests multiple linear hypotheses: H0: cᵀ·b = 0. Example: H0: b3 = b4 = 0, with the contrast matrix cᵀ = [0 0 1 0; 0 0 0 1]. This is equivalent to comparing the full model X = [X0 X1] (with X1 holding regressors 3–4) against the reduced model X0: H0: the true model is X0.

  22. F-test summary • F-tests can be viewed as testing for the additional variance explained by a larger model w.r.t. a simpler (nested) model ⇒ model comparison. • F tests a weighted sum of squares of one or several combinations of the regression coefficients b. • In practice, we don't have to explicitly separate X into [X1 X2] thanks to multidimensional contrasts. • Hypotheses: H0: the reduced model X0 is sufficient vs. HA: the full model X is required. • When testing a uni-dimensional contrast with an F-test, for example b1 − b2, the result is the same as testing b2 − b1: it is exactly the square of the corresponding t-test, testing for both positive and negative effects.

  23. Orthogonal regressors. [Venn diagram: the variability in Y explained by two orthogonal regressors, with no overlap between their contributions.]

  24.–30. Correlated regressors. [Build-up of Venn diagrams: the variability in Y explained by two correlated regressors, with a region of shared variance that cannot be uniquely attributed to either regressor.]

  31. Inference & correlated regressors • Each regressor implicitly tests for an additional effect only, so be careful when regressors are correlated. • Orthogonalisation = decorrelation (not generally needed); the parameters and the test on the non-modified regressor change. • It is always simpler to have orthogonal regressors, and therefore orthogonal designs. • In case of correlation, use F-tests to see the overall significance: there is generally no way to decide to which regressor the "common" part of the variance should be attributed. • The original regressors may not matter: it is the contrast you are testing which should be as decorrelated as possible from the rest of the design matrix.
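A small illustration of the point above, with hypothetical numbers: when two regressors are strongly correlated, their individual parameter estimates become unstable even though the model as a whole fits well, which is what an F-test on both regressors together would pick up.

```python
# Two strongly correlated regressors: each beta is poorly determined,
# because (X'X) is nearly singular, although the combined fit is good.
x1 = rng.normal(size=100)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=100)    # x2 nearly collinear with x1
Xc = np.column_stack([x1, x2])
yc = x1 + x2 + rng.normal(size=100)             # made-up data with a real effect

bc = np.linalg.pinv(Xc) @ yc
var_scale = np.diag(np.linalg.pinv(Xc.T @ Xc))  # large values => unstable betas
print(bc, var_scale)
```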

  32. Multiple covariance components: enhanced noise model at voxel i, with the error covariance built from covariance components Q and hyperparameters λ: V = λ1·Q1 + λ2·Q2. The hyperparameters λ are estimated with ReML (Restricted Maximum Likelihood).

  33. Weighted Least Squares. Let W be a whitening matrix with W'W = V⁻¹. Then WY = WX·b + Wε with Wε ~ N(0, σ²I), and OLS on the whitened data and design gives the WLS estimate b̂ = (X'V⁻¹X)⁺X'V⁻¹Y, i.e. WLS is equivalent to OLS on whitened data and design.
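A sketch of the whitening idea, assuming a made-up AR(1)-like error covariance V purely for illustration: OLS on the whitened data and design reproduces the WLS estimate.

```python
# Whitening: choose W with W'W = V^-1, so Cov(W.eps) = s^2 * I.
rho = 0.5                                  # hypothetical serial correlation
V = rho ** np.abs(np.subtract.outer(np.arange(n_trials),
                                    np.arange(n_trials)))
L = np.linalg.cholesky(np.linalg.inv(V))   # V^-1 = L @ L'
W = L.T                                    # whitening matrix: W'W = V^-1

Y_s, X_s = W @ Y, W @ X                    # whitened data and design
b_wls = np.linalg.pinv(X_s) @ Y_s          # = (X'V^-1 X)+ X'V^-1 Y
```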

  34. Overview • Introduction • ERP example • General Linear Model • Definition & design matrix • Parameter estimation & interpretation • Contrast & inference • Correlated regressors • Conclusion

  35. Summary • Mass-univariate GLM: fit GLMs with a design matrix, X, to data at different points in space to estimate local effect sizes. • The GLM is a very general approach (one-sample, two-sample and paired t-tests, ANCOVAs, …). • Hypothesis testing framework: • Contrasts • t-tests • F-tests

  36. Modelling? Why? To make inferences about effects of interest: • Decompose the data into experimental effects and error (data = model × effects estimate + error estimate). • Form a statistic using the estimates of the effects (of interest) and of the error, e.g. with a contrast such as [1 −1]. How? Model? Use any available knowledge.

  37. Thank you for your attention! Any questions? Thanks to Klaas, Guillaume, Rik, Will, Stefan, Andrew & Karl for the borrowed slides!
