Brain Machine Interfaces: Modeling Strategies for Neural Signal Processing
Jose C. Principe, Ph.D.
Distinguished Professor, ECE, BME
Computational NeuroEngineering Laboratory
Electrical and Computer Engineering Department
University of Florida
www.cnel.ufl.edu
Developments in computational neuroscience and technology are playing an increasingly large role in each of these areas.
General Architecture
A BCI (BMI) bypasses the brain’s normal pathways of peripheral nerves (and muscles).
J.R. Wolpaw et al. 2002
Neural Interface Physical Interface
Need to understand how the brain processes information.
The level of BMI performance may be attributed to selection of electrode technology, choice of model, and methods for extracting rate, frequency, or timing codes.
Coarse (mm)
Develop an experimental paradigm with a nested hierarchy for studying neural population dynamics.
NRG IRB Approval for Human Studies
NRG IACUC Approval for Animal Studies
Posterior Parietal (PP) – Visual to motor transformation
Premotor (PM) and Dorsal Premotor (PMD) – Planning and guidance (visual inputs)
Primary Motor (M1) – Initiates muscle contraction
Accessing 2 types of signals (cortical activity and behavior) leads us to a general class of I/O models.
Data for these models are rate codes obtained by binning spikes in 100 ms windows.
Consider a set of spike counts from M neurons and a hand position vector $\mathbf{d} \in \mathbb{R}^{C}$ (C is the output dimension, C = 2 or 3). The spike count of each neuron is embedded by an L-tap discrete time-delay line. The input vector for a linear model at a given time instance n is then composed as $\mathbf{x}(n) = [x_1(n), x_1(n-1), \ldots, x_1(n-L+1), x_2(n), \ldots, x_M(n-L+1)]^{T} \in \mathbb{R}^{LM}$, where $x_i(n-j)$ denotes the spike count of neuron i at time instance n−j.
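As a concrete illustration, a minimal Python/NumPy sketch of this tap-delay embedding is given below; the function name embed_spike_counts and the synthetic Poisson counts are illustrative assumptions, not part of the original experiment.

```python
import numpy as np

def embed_spike_counts(counts, L):
    """Build the tap-delay input x(n) from binned spike counts.

    counts : array of shape (T, M), spike counts of M neurons over T bins
             (assumed to come from the 100 ms binning step above).
    L      : number of taps in the delay line.
    Returns an array of shape (T - L + 1, L * M); row n is
    [x_1(n), x_1(n-1), ..., x_1(n-L+1), x_2(n), ..., x_M(n-L+1)].
    """
    T, M = counts.shape
    rows = []
    for n in range(L - 1, T):
        # lags 0 .. L-1 for each neuron, neuron-major ordering as in the text
        row = [counts[n - j, i] for i in range(M) for j in range(L)]
        rows.append(row)
    return np.asarray(rows, dtype=float)

# Example: 1000 bins, 10 neurons, 10 taps (1 s of spike history per sample)
rng = np.random.default_rng(0)
X = embed_spike_counts(rng.poisson(2.0, size=(1000, 10)), L=10)
print(X.shape)  # (991, 100)
```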
A linear model estimating hand position at time instance n from the embedded spike counts can be described as
$y_c(n) = \sum_{i=1}^{M} \sum_{j=0}^{L-1} w^{c}_{i,j}\, x_i(n-j) + b_c$
where $y_c$ is the c-coordinate of the hand position estimated by the model, $w^{c}_{i,j}$ is the weight on the connection from $x_i(n-j)$ to $y_c$, and $b_c$ is a bias for the c-coordinate.
Linear Model (Wiener–Hopf solution)
In matrix form, we can rewrite the previous equation as
$\mathbf{y}(n) = \mathbf{W}^{T}\mathbf{x}(n)$
where y is a C-dimensional output vector, x(n) is the embedded input vector augmented with a constant 1 for the bias, and W is a weight matrix of dimension (LM+1)×C. Each column of W consists of $[w^{c}_{1,0}, w^{c}_{1,1}, w^{c}_{1,2}, \ldots, w^{c}_{1,L-1}, w^{c}_{2,0}, w^{c}_{2,1}, \ldots, w^{c}_{M,0}, \ldots, w^{c}_{M,L-1}, b_c]^{T}$.
For the MIMO case, the weight matrix in the Wiener filter system is estimated by
$\mathbf{W}_{\mathrm{Wiener}} = \mathbf{R}^{-1}\mathbf{P}$
R is the correlation matrix of the neural spike inputs, with dimension (LM)×(LM):
$\mathbf{R} = \begin{bmatrix} \mathbf{r}_{11} & \mathbf{r}_{12} & \cdots & \mathbf{r}_{1M} \\ \mathbf{r}_{21} & \mathbf{r}_{22} & \cdots & \mathbf{r}_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{r}_{M1} & \mathbf{r}_{M2} & \cdots & \mathbf{r}_{MM} \end{bmatrix}$
where $\mathbf{r}_{ij}$ is the L×L cross-correlation matrix between neurons i and j (i ≠ j), and $\mathbf{r}_{ii}$ is the L×L autocorrelation matrix of neuron i.
P is the (LM)×C cross-correlation matrix between the neuronal bin counts and hand position, where $\mathbf{p}_{ic}$ is the cross-correlation vector between neuron i and the c-coordinate of hand position. The estimated weights $\mathbf{W}_{\mathrm{Wiener}}$ are optimal under the assumptions that the error is drawn from a white Gaussian distribution and that the data are stationary.
The predictor $\mathbf{W}_{\mathrm{Wiener}}$ minimizes the mean square error (MSE) cost function
$J(\mathbf{W}) = E\left[\|\mathbf{d}(n) - \mathbf{y}(n)\|^{2}\right]$
Each sub-block matrix $\mathbf{r}_{ij}$ can be further decomposed as
$\mathbf{r}_{ij} = \begin{bmatrix} r_{ij}(0) & r_{ij}(-1) & \cdots & r_{ij}(1-L) \\ r_{ij}(1) & r_{ij}(0) & \cdots & r_{ij}(2-L) \\ \vdots & \vdots & \ddots & \vdots \\ r_{ij}(L-1) & r_{ij}(L-2) & \cdots & r_{ij}(0) \end{bmatrix}$
where $r_{ij}(\tau)$ represents the correlation between neurons i and j at time lag τ. Assuming that the random process $x_i(k)$ is ergodic for all i, we can use the time-average operator to estimate the correlation function. In this case, the estimate of the correlation between two neurons, $r_{ij}(m-k)$, can be obtained by
$\hat{r}_{ij}(m-k) = \frac{1}{N}\sum_{n} x_i(n-m)\, x_j(n-k)$
The cross-correlation vector $\mathbf{p}_{ic}$ can be decomposed and estimated in the same way, substituting $x_j$ by the desired signal (the c-coordinate of hand position).
From these equations, it can be seen that $r_{ij}(m-k)$ is equal to $r_{ji}(k-m)$. Since these two correlation estimates are positioned on opposite sides of the diagonal entries of R, this equality makes R symmetric.
The symmetric matrix R can then be inverted efficiently using the Cholesky factorization. Exploiting the symmetry roughly halves the cost of solving for the weights compared with general Gaussian elimination (both are O(N³) in the number of parameters N, but Cholesky has a smaller constant) and improves numerical stability.
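Below is a minimal sketch of the MIMO Wiener solution, assuming the embedded input matrix X (one row per time instance, with a trailing column of ones that folds the bias into W) and the desired hand positions D are already available; the optional ridge term is an assumption added for numerical conditioning, not part of the original formulation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def wiener_weights(X, D, ridge=0.0):
    """Solve W = R^{-1} P for the MIMO Wiener filter.

    X : (N, LM+1) embedded spike counts, last column of ones for the bias.
    D : (N, C)    desired hand positions.
    ridge : optional small diagonal load in case R is ill conditioned.
    """
    N = X.shape[0]
    R = X.T @ X / N                 # input correlation matrix (symmetric)
    P = X.T @ D / N                 # input/desired cross-correlation
    R[np.diag_indices_from(R)] += ridge
    c, low = cho_factor(R)          # Cholesky factorization exploits symmetry
    return cho_solve((c, low), P)   # W has shape (LM+1, C)

# The decoded hand position is then y(n) = W^T x(n).
```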
Signal-to-error ratio (dB)
Identify the neurons that affect the output the most.
Feedforward Linear Eq.
Feedforward RMLP Eqs.
General form of Linear Sensitivity
General form of RMLP Sensitivity
Decay trend appears in all animals and behavioral paradigms
Significance: Sensitivity analysis through trained models automatically delivers deeply tuned cells that span the space.
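For the linear decoder, the sensitivity of $y_c$ to $x_i(n-j)$ is simply the weight $w^{c}_{i,j}$, so a per-neuron importance can be read directly off the trained weight matrix. The sketch below summarizes each neuron by the norm of its weights; this particular aggregation is an assumption, and the RMLP case would instead require back-propagating the Jacobian through the trained network.

```python
import numpy as np

def neuron_sensitivity(W, M, L):
    """Per-neuron sensitivity of a trained linear decoder.

    W : (L*M + 1, C) trained weight matrix, bias in the last row,
        rows ordered neuron-major as in the embedding above.
    Returns an (M,) array; sorting it in descending order exposes the
    decay trend across neurons.
    """
    Wn = W[:-1].reshape(M, L, -1)            # (neuron, lag, coordinate)
    return np.sqrt((Wn ** 2).sum(axis=(1, 2)))

# ranking = np.argsort(neuron_sensitivity(W, M, L))[::-1]  # most important first
```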
How does each cortical area contribute to the reconstruction of this movement?
Train 15 separate RMLPs with every combination of cortical input.
Adapted by on-line LAR
(Kim et al., MLSP, 2004)
Adapted by LMS
LAR-style channel selection steps (coefficient vector β initialized to zero):
r = y − Xβ = y
Find argmax_i |x_iᵀ r|
r = y − Xβ = y − x_j β_j
r = y − (x_j β_j + x_k β_k)
Adjust β_j and β_k so that the residual remains equally correlated with x_j and x_k.
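The sketch below is a simplified greedy (matching-pursuit style) stand-in for these steps; full LAR would instead move the active coefficients jointly along the equiangular direction. Function name and the least-squares refitting strategy are illustrative assumptions.

```python
import numpy as np

def greedy_channel_selection(X, y, n_select):
    """Greedy selection of the most informative input channels.

    X : (N, P) candidate inputs (columns assumed zero-mean, unit-norm).
    y : (N,)   desired signal.
    """
    r = y.copy()                        # r = y - X beta, with beta = 0 initially
    selected, beta_active = [], np.zeros(0)
    for _ in range(n_select):
        corr = X.T @ r
        corr[selected] = 0.0            # do not reselect active channels
        j = int(np.argmax(np.abs(corr)))
        selected.append(j)
        # refit least squares on the active set and update the residual
        A = X[:, selected]
        beta_active, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta_active
    return selected, beta_active
```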
Neuronal Channel Index
Difficulty of spike train analysis:
Spike trains are point processes, i.e., all the information is contained in the timing of events, not in the amplitude of the signals!
Build an adaptive signal processing framework for BMI decoding in the spike domain.
Time-series model movement?
Recursive Bayesian Approach: P(state | observation)
The state-space formulation consists of a system model and an observation model,
$x_t = f(x_{t-1}, v_{t-1})$ (system model)
$z_t = h(x_t, n_t)$ (observation model)
The first equation (system model) defines a first-order Markov process $p(x_t \mid x_{t-1})$.
The second equation (observation model) defines the likelihood of the observations $p(z_t \mid x_t)$. The problem is completely defined by the prior distribution $p(x_0)$.
Although the posterior distribution $p(x_{0:t} \mid u_{1:t}, z_{1:t})$ constitutes the complete solution, the filtering density $p(x_t \mid u_{1:t}, z_{1:t})$ is normally used for on-line problems.
The general solution methodology is to integrate over the unknown variables (marginalization).
The system model $p(x_t \mid x_{t-1})$ propagates the posterior density into the future (prediction):
$p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1}$
Bayes’ rule is then used to update the filtering density with the new observation:
$p(x_t \mid z_{1:t}) \propto p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})$
For Gaussian noises and linear prediction and observation models, there is an analytic solution: the Kalman filter.
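A minimal Kalman-filter decoding loop under this linear-Gaussian assumption is sketched below; the matrices F, H, Q, Rn are placeholders that would be fit from training data (a linear kinematic state model and a linear tuning model), not the authors’ actual values.

```python
import numpy as np

def kalman_decode(z, F, H, Q, Rn, x0, P0):
    """Decode a kinematic state sequence from binned firing rates.

    State model:        x_t = F x_{t-1} + v,  v ~ N(0, Q)
    Observation model:  z_t = H x_t     + n,  n ~ N(0, Rn)
    z  : (T, M) observations;  x0, P0 : initial state mean and covariance.
    """
    x, P = x0.copy(), P0.copy()
    estimates = []
    for zt in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new observation
        S = H @ P @ H.T + Rn
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (zt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```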
Neuron tuning function
[Wu et al. 2006]
In general the integrals need to be approximated by sums using Monte Carlo integration with a set of samples drawn from the posterior distribution of the model parameters.
Neuron tuning function
[Brockwell et al. 2004]
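A single step of such a sequential Monte Carlo (particle filter) update might look like the sketch below; propagate and likelihood are user-supplied stand-ins for the kinematic model p(x_t | x_{t-1}) and the tuning-based likelihood p(z_t | x_t), and the resampling threshold is an arbitrary assumption.

```python
import numpy as np

def particle_filter_step(particles, weights, z, propagate, likelihood, rng):
    """One sequential Monte Carlo update of the kinematic state posterior.

    particles : (Np, D) samples of the kinematic state.
    weights   : (Np,)   normalized importance weights.
    """
    particles = propagate(particles, rng)          # sample from p(x_t | x_{t-1})
    weights = weights * likelihood(z, particles)   # reweight by p(z_t | x_t)
    weights = weights / weights.sum()
    # resample when the effective sample size gets small
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    estimate = weights @ particles                 # posterior-mean decode
    return particles, weights, estimate
```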
State estimation framework for BMI decoding in the spike domain
Neural Tuning function
Multi-spike trains observation
Kinematic dynamic model
Key idea: work with the probability of spike firing, which is a continuous random variable.
Neuron tuning function
[Brown et al. 2001]
Neuron tuning function
Sequential Estimate PDF
[Wang et al. 2006]
1. Generate spike trains from the stored spike times with a 10 ms bin interval (99.62% of the bins contain at most one spike, i.e., an essentially binary train).
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector to reconstruct: x = [position, velocity, acceleration]ᵀ (more information; the instantaneous state avoids error accumulation; kinematics in the spike domain).
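A minimal sketch of these preprocessing steps is given below; the function names, the finite-difference derivatives, and the 10 ms default are assumptions for illustration, not the original pipeline.

```python
import numpy as np

def bin_spikes(spike_times, t_start, t_stop, bin_ms=10.0):
    """Step 1: bin stored spike times (in seconds) into mostly binary 10 ms counts."""
    edges = np.arange(t_start, t_stop + bin_ms / 1000.0, bin_ms / 1000.0)
    return np.histogram(spike_times, bins=edges)[0]

def kinematic_state(position, dt=0.01):
    """Step 3: build x = [position, velocity, acceleration] for each bin.

    position : (T, 2) hand position per 10 ms bin; velocity and acceleration
    are obtained here by finite differences (an assumption; the original data
    may provide them directly).
    """
    velocity = np.gradient(position, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return np.hstack([position, velocity, acceleration])   # (T, 6)
```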
STEP 2 – Neural tuning analysis
Neuron 72: Tuning Depth 1
kinematics direction angle
velocity in spike domain
Step 2 – Tuning Function Estimation (Poisson model)
The generation of the spikes is assumed to depend only on the kinematic vector we choose.
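Under this assumption, the tuning function can be estimated, for example, as an exponential Poisson model λ = exp(β₀ + βᵀx); the sketch below fits it by plain maximum-likelihood gradient ascent, which is a stand-in for whatever estimator the original work uses.

```python
import numpy as np

def fit_poisson_tuning(X, counts, iters=500, lr=1e-3):
    """Fit an exponential Poisson tuning model for one neuron.

    X      : (T, D) kinematic vectors per 10 ms bin.
    counts : (T,)   spike counts of the neuron in the same bins.
    """
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept term
    beta = np.zeros(Xa.shape[1])
    for _ in range(iters):
        lam = np.exp(Xa @ beta)                     # conditional firing rate
        grad = Xa.T @ (counts - lam)                # dlogL/dbeta for a Poisson GLM
        beta += lr * grad / len(counts)
    return beta

# The fitted tuning function supplies the likelihood p(spike | x_t)
# used in the sequential (Monte Carlo) estimation step.
```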
(Figure axes: 1st and 2nd principal components)
Ref: Paradoxical cold
[Hensel et al. 1959]
Posterior density at a time index
Step 3: Information-Estimated Delays
Figure 3-14: Mutual information as a function of time delay for 5 neurons.
For 185 neurons, the average delay is 220.108 ms.
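A simple histogram-based way to reproduce this kind of delay estimate is sketched below; the binning, the MI estimator, and the assumption that spikes lead the kinematics are illustrative choices, not necessarily those of the original analysis.

```python
import numpy as np

def mutual_info(x, y, bins=10):
    """Histogram-based mutual information between two 1-D signals (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def best_delay(spike_counts, kinematic, max_lag=50):
    """Lag (in 10 ms bins) where the spike counts are most informative
    about the kinematic variable, assuming neural activity leads movement."""
    mis = [mutual_info(spike_counts[:-lag or None], kinematic[lag:])
           for lag in range(max_lag + 1)]
    return int(np.argmax(mis)), mis   # delay in ms = best_lag * 10
```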
Neural Tuning Function
Step 4: Monte Carlo sequential kinematics estimation – P(state | observation)
Results comparison
Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions
Table 3-3 Correlation Coefficient Evaluated by the Sliding Window