
The principles of DCM



Presentation Transcript


  1. The principles of DCM Vladimir Litvak Wellcome Trust Centre for Neuroimaging

  2. Epigraph What I cannot create I do not understand. Richard P. Feynman

  3. What do we need for DCM? • 1. Data (possibly data features). [Figure: example data features: input and depolarization time courses (1st and 2nd order moments) over 0-300 ms, used by DCM for ERP (+ second-order mean-field DCM); auto-spectral densities of LA and CA1 and the CA1-LA cross-spectral density over frequency (Hz), used by DCM for steady-state responses; corresponding features for DCM for induced responses and DCM for phase coupling.]

  4. What do we need for DCM (in this talk)?

  5. What do we need for DCM? • 2. Generative model for the data (features). • Usually represented by a system of differential equations (generic form sketched below). • Can usually be divided into two parts: a model of the underlying neural process and measurement-specific observation model(s). • Also includes specification of the input(s). • Prior values for model parameters (priors): the specification includes the most likely value and its precision. • Priors are part of the model: changing the priors changes the model.
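In generic form (standard DCM notation, shown here for orientation rather than copied from the slide), the two parts pair a neural equation with an observation equation:

```latex
\begin{aligned}
\dot{x} &= f(x, u, \theta) && \text{neural model: hidden states } x,\ \text{inputs } u,\ \text{parameters } \theta \\
y &= g(x, \theta) + \varepsilon && \text{observation model: mapping from states to measured data, plus noise}
\end{aligned}
```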

  6. A really simple example. [Figure: analytic solution versus numerical solution of the example system, x plotted against t.]

  7. A really simple example: numerical integration. [Equations and schematic: a 'neural' equation (exponential decay) for state x driven by input U, and an observation equation, with quantities U, x, a and t0.]

  8. Even simpler: numerical integration. [Equations and schematic: the 'neural' equation (exponential decay) and the observation equation, with quantities U, x and t0; t0 is the only free parameter. A numerical sketch follows below.]
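To make "numerical integration" concrete, here is a minimal Python sketch (not from the talk; the decay equation dx/dt = -x/t0 and the constants are illustrative) comparing an Euler solution with the analytic one:

```python
import numpy as np

def simulate_decay(t0, x0=1.0, dt=0.001, T=1.0):
    """Euler integration of the 'neural' equation dx/dt = -x / t0."""
    n = int(T / dt)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + dt * (-x[i - 1] / t0)  # one Euler step
    return x

t = np.arange(0, 1.0, 0.001)
numerical = simulate_decay(t0=0.2)
analytic = 1.0 * np.exp(-t / 0.2)            # x(t) = x0 * exp(-t / t0)
print(np.max(np.abs(numerical - analytic)))  # small discretization error
```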

  9. What do we need for DCM? • 3. Optimization scheme for fitting the parameters to the data. • The objective function for optimization is the free energy, which approximates the (log) model evidence (reconstructed below). • There are many possible schemes based on different assumptions; present DCM implementations in SPM use a variational Bayesian scheme. • Once the scheme converges it yields: the highest value of free energy the scheme could attain; the posterior distribution of the free parameters; and simulated data as similar to the original data as the model could generate.
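The free-energy expression itself was an image on the slide and did not survive the transcript; its standard variational form (a reconstruction from the surrounding text, not a verbatim copy) is:

```latex
F = \mathbb{E}_{q(\theta)}\!\left[\ln p(y \mid \theta, m)\right]
  - D_{\mathrm{KL}}\!\left[q(\theta) \,\|\, p(\theta \mid m)\right]
  = \ln p(y \mid m) - D_{\mathrm{KL}}\!\left[q(\theta) \,\|\, p(\theta \mid y, m)\right]
  \le \ln p(y \mid m)
```

Maximizing F with respect to the approximate posterior q(θ) therefore tightens the bound on the log evidence while also yielding the posterior over the free parameters, matching the outputs listed above.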

  10. Now let's take a step back. What does this mean? • Given a prediction of the model, the probability of the data given the model can be computed (see the sketch below). • This requires some assumptions about the distribution of the error (e.g. IID, normally distributed). • Parameters of the error distribution (e.g. variance) also need to be estimated.
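For instance, under the IID normal-error assumption just mentioned, the log-likelihood of the data given a model prediction can be computed as follows (a sketch, not SPM's actual code; the function name is made up):

```python
import numpy as np

def gaussian_loglik(y, y_pred, sigma2):
    """log p(y | theta, m) assuming IID normal errors N(0, sigma2).
    Note sigma2 is an error-distribution parameter that itself
    needs to be estimated, as the slide points out."""
    resid = y - y_pred
    n = y.size
    return -0.5 * (n * np.log(2 * np.pi * sigma2)
                   + np.sum(resid ** 2) / sigma2)
```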

  11. What does this mean? [Figure: prior distribution of t0, with the goodness of fit (likelihood) and the prior probability plotted as functions of t0.]

  12. What does this mean? • Model evidence = goodness of fit expected under the priors: $p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, d\theta$. [Figure: goodness of fit and prior probability as functions of t0.]

  13. Now to some consequences – example 1 Model 1 Model 2 F = -534.51 F = -536.95

  14. Now to some consequences – example 1 Model 1 Model 2 F = -534.51 F = -536.95

  15. Now to some consequences – example 2 Model 1 Model 2 F = -534.51 F = -532.59

  16. Now to some consequences – example 2 Model 1 Model 2 F = -534.51 F = -532.59

  17. Now to some consequences – example 3 Model 1 Model 2 F = -554.42 F = -536.95

  18. Now to some consequences – example 3 Model 1 Model 2 F = -554.42 F = -536.95
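To connect these F values to model selection: a difference in free energy approximates a log Bayes factor (a standard reading in Bayesian model comparison, not spelled out on the slides). For example 1:

```latex
\ln \mathrm{BF}_{12} \approx F_1 - F_2 = -534.51 - (-536.95) = 2.44,
\qquad \mathrm{BF}_{12} \approx e^{2.44} \approx 11.5
```

That is roughly 11-to-1 in favour of model 1. By the same arithmetic, model 2 wins example 2 by 1.92 nats and example 3 by a decisive 17.47 nats.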

  19. So that’s where this is coming from • The best model is the one with precise priors that yield a good fit to the data.

  20. One more thing… F = -597.6

  21. FitzHugh-Nagumo model (from Gerstner and Kistler, 2002). [Photo: Richard FitzHugh with analogue computer at NIH, ca. 1960 (from Scholarpedia).] • Parameters: • Input: input onset; input amplitude. • Neural model: a; b; c; d; tau (time constant). • Observation model: output scaling. A simulation sketch follows below.
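A minimal FitzHugh-Nagumo simulation in Python, assuming a common textbook parameterization (the mapping to the slide's a, b, c, d is not given in the transcript, so the equations and constants below are illustrative):

```python
import numpy as np

def simulate_fhn(onset, amplitude, a=0.7, b=0.8, tau=12.5,
                 dt=0.01, T=100.0):
    """Euler integration of dV/dt = V - V**3/3 - w + I(t) and
    dw/dt = (V + a - b*w) / tau, with a step input I(t)."""
    n = int(T / dt)
    t = np.arange(n) * dt
    I = np.where(t >= onset, amplitude, 0.0)  # input onset and amplitude
    V = np.zeros(n)                           # observable membrane variable
    w = np.zeros(n)                           # hidden recovery variable
    for i in range(1, n):
        V[i] = V[i-1] + dt * (V[i-1] - V[i-1]**3 / 3 - w[i-1] + I[i-1])
        w[i] = w[i-1] + dt * (V[i-1] + a - b * w[i-1]) / tau
    return t, V, w

t, V, w = simulate_fhn(onset=20.0, amplitude=0.5)  # illustrative values
```

The hidden variable w here is exactly the kind of state the next slide discusses: it never appears in the output directly, but it shapes the trajectory of the observable V.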

  22. Differences from the 1-parameter case • There is a hidden state variable w that does not affect the output directly but is estimated based on its interaction with the observable variable V. • In addition to prior variances, prior covariances between parameters must also be specified, and model inversion yields posterior covariances. [Figure: prior covariance and posterior correlation matrices over the parameters: input onset, input amplitude, a, b, c, d, tau (time constant).]

  23. FHN model – posterior variance. [Figure: posterior variances of the parameters: input onset, input amplitude, a, b, c, d, tau (time constant).]

  24. Effect of adding a junk parameter. [Figure: prior and posterior distributions of t1 and t2; a parameter that does not affect the fit stays at its prior.]

  25. Effect of correlated parameters. [Figure: posterior distributions of t1 and t2, and of the contrasts of parameters t1+t2 and t1-t2. See the illustration below.]
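How the contrasts behave can be read off the posterior covariance: for a contrast vector c, Var(c'θ) = c'Σc. A small numerical illustration (the covariance values are invented for illustration, not taken from the slide):

```python
import numpy as np

# Hypothetical posterior covariance for (t1, t2) with a strong
# negative correlation between the two parameters.
Sigma = np.array([[1.0, -0.9],
                  [-0.9, 1.0]])

for name, c in [("t1+t2", np.array([1.0, 1.0])),
                ("t1-t2", np.array([1.0, -1.0]))]:
    var = c @ Sigma @ c  # Var(c' theta) = c' Sigma c
    print(name, var)     # t1+t2 -> 0.2, t1-t2 -> 3.8
```

With a strong negative correlation the sum t1+t2 is much better determined than either parameter alone, while the difference t1-t2 is poorly determined; a positive correlation would reverse the pattern.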

  26. DCM as integrative framework. [Diagram: data and prior enter model inversion, yielding a posterior; the posterior can then serve as the prior for the next study.]

  27. And finally – the real thing. [Photos: A. L. Hodgkin and A. Huxley.]

  28. HH model – the results

  29. And the winner is… • One-parameter exponential • FitzHugh-Nagumo model • Hodgkin-Huxley model • Very different models can be compared as long as the data is the same.

  30. However… The result of model comparison can be different for different data

  31. Summary • The main principle of DCM is the use of data and generative models in a Bayesian framework to infer parameters and compare models. • Implementation details may vary. • Model inversion is an optimization procedure whose objective function is the free energy, which approximates the model evidence. • Model evidence is the goodness of fit expected under the prior parameter values. • The best model is the one with precise priors that yield a good fit to the data. • Parameters not affecting the model fit stay at their priors and do not affect the model evidence. • Very different models can be compared as long as they were fitted to the same data. • Models and priors can be gradually refined from one study to the next, making it possible to use DCM as an integrative framework in neuroscience.

  32. Epilogue The first principle is that you must not fool yourself - and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. Richard P. Feynman from “Cargo cult science”, 1974

  33. Thank you for your attention
