
IEEE CDC Workshop, Dec. 8, 2008



Presentation Transcript


  1. IEEE CDC Workshop, Dec. 8, 2008

  2. Modeling Neuronal Multi-Unit Activity and Neural Dynamics • Zhe (Sage) Chen • zhechen@mit.edu • Neuroscience Statistics Research Laboratory • MGH/HMS/MIT

  3. Acknowledgment • Emery N. Brown (MIT/Harvard/MGH) • Matt A. Wilson (MIT) • Sujith Vijayan (Harvard/BU) • Riccardo Barbieri (MGH/HMS)

  4. Outline • Brief overview of extracellular recordings: SUA, MUA, LFP • MUA: properties and examples • Neural dynamics and the “state” of network • Case study: probabilistic models for estimating neuronal “UP/DOWN” states with MUA

  5. Extracellular signal bands: LFP (<100~150 Hz) • MUA (300~6000 Hz) • MSP (via spike detection) • SUA (via spike sorting)
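As a concrete illustration of the band separation on this slide, the sketch below filters a raw wide-band trace into LFP and MUA bands; the 30 kHz sampling rate, filter order, and synthetic data are assumptions, not details from the talk:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30000  # sampling rate in Hz (an assumption; typical for tetrode recordings)

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, cutoff, fs=FS, order=4):
    """Zero-phase Butterworth low-pass filter."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)

rng = np.random.default_rng(0)
raw = rng.standard_normal(FS)       # 1 s of simulated wide-band noise
lfp = lowpass(raw, 150.0)           # LFP band: < ~150 Hz
mua = bandpass(raw, 300.0, 6000.0)  # MUA band: 300~6000 Hz
```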

  6. MUA Properties • What is MUA?: a weighted sum of N single-unit activities (N can be either known or unknown) • a weighted average of neural activity around the electrode tip, over a region (140~300 μm) smaller than that of LFP but larger than that of SUA • easy to process, stable recordings, information-rich • with spike detection, may contain noisy spikes (false positives) • the mutual information of MUA is greatest when the SUAs are independent

  7. How can MUA be used? Examples • Simultaneously measuring brain activity during a behavioral task (Buchwald et al., Nature, 1965) • Analyzing neuronal dynamics during sleep (Ji and Wilson, Nature Neurosci., 2007) • Analyzing coherence between LFP, SUA, and MUA (Zeitler, Fries, and Gielen, Neural Comput, 2006) • Neuronal decoding (Stark and Abeles, J. Neurosci, 2007; Won and Wolf, Network, 2004)

  8. Example: MUA across different whisker intensity levels [figure: MUA traces vs. time (s)]; somatosensory cortex (rat), from Dr. Anna Devor (MGH/UCSD)

  9. Example: Motor cortex [figure: raw signal, LFP, SUA, MUA; neuronal decoding (movement prediction)]; Stark and Abeles, "Predicting movement from multiunit activity," J. Neurosci., 2007

  10. Neural (State) Dynamics • Neural systems (neurons) can shift between different computational modes or between ranges of operation (e.g., a change in response gain) at different timescales. • externally-driven dynamics: result from changes in external stimulus factors (e.g., sensory adaptation, encoding of sensory percepts) • internally-driven dynamics: result from changes in internal factors (e.g., attention shifts, changes of conductance or membrane potential)

  11. State of Neurons/Network • Modified by many factors, such as membrane potentials, synaptic activity, conductances, top-down attention, etc. • changes at different levels of anesthesia or sleep stages • spontaneous self-organization by a balance of excitatory and inhibitory neurons in a recurrent network • measurements of neuronal state (MUA, LFP) Buzsáki/Kopell/Wilson/McCormick/Donald Katz; Haider et al., J. Neurosci, 2006; Fontanini & Katz, J. Neurophy., 2008

  12. A case study: Probabilistic models for estimating neuronal UP/DOWN states

  13. UP/DOWN states • periodic fluctuations between increased and decreased spiking activity of a neuronal population (Sirota et al., PNAS, 2003; Hahn et al., PNAS, 2007) • intracellular: membrane potential fluctuations define two states of the neocortex (membrane potential greater/smaller than the resting level, -70~-80 mV) • extracellular: LFP, MUA • observed in anesthetized/behaving animals (during sleep) across a variety of brain regions (visual, somatosensory, hippocampus, basal ganglia ...)

  14. [figure: visual cortex] Ji & Wilson, Nature Neurosci., 2007

  15. Threshold-based method Ji & Wilson, 2007
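The threshold-based method referenced here can be sketched as follows; the binning, smoothing, and threshold values are illustrative assumptions, not the choices of Ji & Wilson:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_up_down(spike_times, t_max, bin_size=0.01, thresh=1.0):
    """Naive threshold-based UP/DOWN labeling of an MUA spike train.

    Bin the spikes, smooth the counts with a short boxcar, and label a bin
    UP (1) when the smoothed count exceeds `thresh`, otherwise DOWN (0).
    Bin size, smoothing width, and threshold are illustrative choices.
    """
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    smoothed = np.convolve(counts, np.ones(5) / 5.0, mode="same")
    return (smoothed > thresh).astype(int)

# toy data: silent for the first second, ~200 spikes during the second one
spikes = 1.0 + np.sort(rng.random(200))
states = detect_up_down(spikes, t_max=2.0)
```

As the later slides argue, hard thresholds like this are ad hoc: the result is sensitive to the threshold and smoothing choices, which motivates the probabilistic models below.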

  16. Objectives • Build statistical probabilistic (generative) models to represent the MUA spike train data • Represent uncertainties with probabilities • Model the transition prob. (survival prob.) of the two states

  17. Data Recording (by Sujith Vijayan) • Pre-RUN sleep --- RUN --- Post-RUN sleep • Simultaneous recordings from primary somatosensory cortex (SI) & hippocampus in several behaving rats • EEG/EMG/MUA (6-9 tetrodes) • Sleep classification (REM, SWS, transitory) • Rat location monitored using LEDs

  18. Modeling & Estimation • UP and DOWN states, S(t), are treated as discrete r.v.s (1/0) drawn from a hidden Markov process • Observations: MUA spike trains (modulated by the hidden state) --- a doubly stochastic process • Estimation task: infer the hidden states (when and how many transitions occur) and the transition prob. [graphical model: hidden chain S(t-1) → S(t) → S(t+T), with observations y(t-1), y(t), y(t+T)]

  19. Comparison of discrete- & continuous-time models • discrete-time: observations are spike counts; bin size 10-20 ms; transitions constrained by bin size; first-order Markovian • continuous-time: observations are spikes (0/1); bins arbitrarily small (1 ms); transitions at any time; semi-Markovian

  20. Hidden Markov model (HMM) • stationary probabilistic model with three elements: initial prob., transition prob., emission prob. • emission: inhomogeneous Poisson spiking whose rate depends on the hidden state (+1 for UP, -1 for DOWN), a baseline rate, and the spiking history
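A minimal generative sketch of this doubly stochastic model, using a 2-state Markov chain with state-dependent Poisson spike counts per bin; the history-dependence mentioned on the slide is omitted, and all parameter values are made-up illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the values from the talk):
P = np.array([[0.98, 0.02],    # DOWN -> (DOWN, UP) transition probs
              [0.03, 0.97]])   # UP   -> (DOWN, UP)
rates = np.array([0.5, 8.0])   # mean Poisson spike count per bin (DOWN, UP)
pi0 = np.array([0.5, 0.5])     # initial state distribution

def sample_hmm(n_bins):
    """Draw a hidden state path and Poisson spike counts from the 2-state HMM."""
    s = np.empty(n_bins, dtype=int)
    y = np.empty(n_bins, dtype=int)
    for t in range(n_bins):
        if t == 0:
            s[t] = rng.choice(2, p=pi0)
        else:
            s[t] = rng.choice(2, p=P[s[t - 1]])
        y[t] = rng.poisson(rates[s[t]])
    return s, y

states, counts = sample_hmm(1000)
```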

  21. EM algorithm • Missing data problems (statistics, speech recognition, signal processing, & communication) • principle: maximum likelihood estimation • expectation maximization (EM) algorithm E-step: Estimate the hidden state sufficient stat. M-step: Estimate the unknown parameters Iterative procedure until convergence • HMM --- Baum-Welch (forward-backward) algorithm & Viterbi algorithm

  22. EM (continued) • Q-function • sufficient stat.
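The slide's Q-function and sufficient statistics were shown as figures; for an HMM with parameters θ = (π, A, λ), the standard EM quantities take the form (notation assumed here):

```latex
Q(\theta \mid \theta^{\mathrm{old}})
  = \sum_{i} \gamma_1(i)\,\log \pi_i
  + \sum_{t=1}^{T-1} \sum_{i,j} \xi_t(i,j)\,\log a_{ij}
  + \sum_{t=1}^{T} \sum_{i} \gamma_t(i)\,\log p(y_t \mid S_t = i,\, \theta),
```

where the sufficient statistics are the posterior state probabilities γ_t(i) = P(S_t = i | y_{1:T}) and the pairwise probabilities ξ_t(i, j) = P(S_t = i, S_{t+1} = j | y_{1:T}), both computed in the E-step.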

  23. EM (continued) • forward-backward • M-step (reestimation) • Newton (iterative fixed-point optimization)
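The forward-backward recursions and M-step re-estimation on these slides can be sketched for a simplified 2-state Poisson-observation HMM; the talk's history-dependent intensity is omitted, and the toy parameter values are assumptions:

```python
import numpy as np
from scipy.stats import poisson

def em_step(y, P, rates, pi0):
    """One Baum-Welch (EM) iteration for an HMM with Poisson emissions.

    E-step: scaled forward-backward recursions give posterior state
    probabilities gamma[t, i] and pairwise probabilities xi[t, i, j].
    M-step: closed-form re-estimation of pi0, P, and the Poisson rates.
    """
    T, K = len(y), len(rates)
    B = poisson.pmf(y[:, None], rates[None, :])   # emission likelihoods
    alpha = np.zeros((T, K))
    c = np.zeros(T)                               # scaling factors
    alpha[0] = pi0 * B[0]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                         # forward pass
        alpha[t] = (alpha[t - 1] @ P) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):                # backward pass
        beta[t] = (P @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = alpha[:-1, :, None] * P[None] * (B[1:] * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    # M-step (re-estimation)
    pi0_new = gamma[0]
    P_new = xi.sum(0) / xi.sum(0).sum(axis=1, keepdims=True)
    rates_new = (gamma * y[:, None]).sum(0) / gamma.sum(0)
    loglik = np.log(c).sum()
    return P_new, rates_new, pi0_new, gamma, loglik

# toy run: counts alternating between quiet and active stretches
y = np.array([0, 1, 0, 0, 7, 9, 8, 0, 0, 6, 10, 7, 1, 0])
P0 = np.array([[0.9, 0.1], [0.1, 0.9]])
r0 = np.array([1.0, 5.0]); p0 = np.array([0.5, 0.5])
for _ in range(10):
    P0, r0, p0, gamma, ll = em_step(y, P0, r0, p0)
```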

  24. Continuous-time model • “single-step transition’’ is no longer valid • state transition prob. time-varying (depending on time), modeled either parametrically or nonparametrically • survival function • Choices of F(z): Markov or semi-Markov
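The survival function mentioned on this slide has the standard form for a state-duration density p(z) (notation assumed here):

```latex
\bar{F}(z) = \Pr(Z > z) = 1 - \int_0^{z} p(u)\,du,
\qquad
h(z) = \frac{p(z)}{\bar{F}(z)},
```

where h(z) is the hazard (instantaneous transition) rate after a sojourn of length z. With an exponential p(z) the hazard is constant and the process is Markov; any other duration density yields a time-varying hazard, i.e., a semi-Markov process.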

  25. UP and DOWN States Duration Histograms Ji & Wilson, 2007 Non-exponential!!!

  26. Examples • In our case, we fit the histogram with a 2-parameter parametric model for the UP/DOWN state durations • To be more precise, the model pdf is a censored version of the original pdf
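The slide does not name the 2-parameter family; as a hypothetical example, a gamma density can be fit to state durations by maximum likelihood, which accommodates the non-exponential histograms shown earlier (the data here are synthetic):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

# synthetic UP-state durations in seconds (gamma-distributed by construction)
durations = rng.gamma(shape=2.0, scale=0.25, size=500)

# maximum-likelihood fit of a 2-parameter gamma density (location fixed at 0)
shape_hat, loc, scale_hat = gamma.fit(durations, floc=0.0)
```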

  27. Details • Conditional intensity function (generalized “rate”) • Observed likelihood • Joint log-likelihood
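The observed-data likelihood for a point process with conditional intensity λ(t | H_t) takes the standard form from this literature (e.g., Brown et al.; notation assumed here):

```latex
\log L = \int_0^T \log \lambda(t \mid H_t)\, dN(t) - \int_0^T \lambda(t \mid H_t)\, dt,
```

where N(t) is the counting process of the spike train and H_t the spiking history. The joint log-likelihood adds to this the log-probability of the hidden state path.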

  28. Monte Carlo EM • Continuous-time model: E-step is difficult to compute analytically (since the total number & the locations of state transitions are both unknown) • Idea: replace E-step with Monte Carlo E-step, M-step remains unchanged (if the M-step uses Gibbs sampling, then it is a fully Bayesian estimation) • Use HMM-EM estimate as the initial condition • Derive a reversible jump MCMC algorithm to allow dimensionality change (the number of transitions)

  29. Reversible-jump MCMC • A Metropolis-Hastings type MCMC with trans-dimensional proposals (Green, 1995) • Allows the Markov chain to jump between two parameter spaces with different dimensionality • examples: change-point detection, mixture modeling, object detection, etc. • Design of an efficient proposal distribution (non-unique; prior knowledge helps; likelihood-based, data-driven) --- improves the acceptance rate
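For a trans-dimensional move from (θ, u) to (θ', u'), the Green (1995) acceptance probability has the generic form (symbols assumed here; u, u' are the auxiliary dimension-matching variables):

```latex
A = \min\!\left\{ 1,\;
  \frac{p(y \mid \theta')\, p(\theta')\, q(u' \mid \theta')}
       {p(y \mid \theta)\, p(\theta)\, q(u \mid \theta)}
  \left| \frac{\partial(\theta', u')}{\partial(\theta, u)} \right| \right\},
```

i.e., a likelihood ratio times a prior ratio, a proposal ratio, and the Jacobian of the deterministic dimension-matching map.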

  30. A bit of detail • Consider seven proposals, 6 of which change the dimension • 1) move the boundary; 2) merge two neighboring states; 3) delete one intermediate sojourn; 4-5) insert/delete the first sojourn; 6-7) insert/delete the last sojourn • Acceptance prob.

  31. Algorithmic procedure • Convergence diagnosis: single chain, log-likelihood • Posterior dist. of S(t): burn-in period & thinning

  32. Experimental results • Validate the models and algorithms via synthetic spike train data • Apply the models to real-world 15-min SWS cortical spike train data • Plot EEG-triggered average • Goodness-of-fit tests for the model

  33. Synthetic data

  34. Real data

  35. K-complex

  36. Method comparison: number of detected jumps by method --- 2986, 3223, 2576, 2576 (occurrence rate: 82/min)

  37. KS plot • Goodness-of-fit: Kolmogorov-Smirnov (KS) test derived from the time-rescaling theorem (Brown et al., 2002)
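A minimal sketch of the time-rescaling goodness-of-fit check, using a constant-rate Poisson model as a simplified stand-in for the talk's state-dependent intensity (the rate and data below are synthetic assumptions):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)

def ks_time_rescaling(spike_times, rate):
    """KS goodness-of-fit via the time-rescaling theorem, constant-rate case.

    Under the model, the rescaled intervals tau_k = rate * isi_k are Exp(1),
    so z_k = 1 - exp(-tau_k) should be Uniform(0, 1); compare with a KS test.
    """
    isi = np.diff(spike_times)
    z = 1.0 - np.exp(-rate * isi)
    return kstest(z, "uniform")

# spikes drawn from a true 5 Hz Poisson process, tested against the true rate
spikes = np.cumsum(rng.exponential(1 / 5.0, size=1000))
stat, pval = ks_time_rescaling(spikes, rate=5.0)
```

For the full model, the rescaling integrates the estimated conditional intensity between spikes instead of multiplying by a constant rate.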

  38. Possible model extension • Incorporation of continuous-time LFP observations into the likelihood model • Fully Bayesian method (imposing priors to the model parameters) • discrete prob. model for the state duration distribution

  39. Take-home message • Probabilistic generative models are useful for representing the uncertainties in the data (in contrast, the threshold-based method is ad hoc and highly data-dependent) • "All models are wrong, but some are useful." --- George Box • The continuous-time model is more accurate in characterizing & segmenting the UP/DOWN states, but it is more computationally demanding • Model limitations: the model is currently assumed to be stationary and identical across all spike trains (which are treated as independent)
