
Statistical learning and optimal control: A framework for biological learning and motor control


Presentation Transcript


  1. Statistical learning and optimal control: A framework for biological learning and motor control Lecture 1: Iterative learning and the Kalman filter Reza Shadmehr Johns Hopkins School of Medicine

  2. [Block diagram of the framework: stochastic optimal control; parameter estimation (Kalman filter, state change); goal selector; motor command generator; body + environment; forward model producing predicted sensory consequences; sensory system (proprioception, vision, audition) producing measured sensory consequences; integration of predicted and measured consequences into a belief about the state of body and world.]

  3. Results from classical conditioning

  4. Effect of time on memory: spontaneous recovery

  5. Effect of time on memory: inter-trial interval and retention. [Figure: performance during training for ITI = 2, 14, and 98; testing at 1 day or 1 week (averaged together); test at 1 week shown for each ITI.]

  6. Integration of predicted state with sensory feedback

  7. Choice of motor commands: optimality in saccades and reaching movements. [Figure: eye velocity (deg/sec, 0 to 500) as a function of time (0 to 0.25 sec) for saccade sizes of 5, 10, 15, 30, 40, and 50 deg.]

  8. Helpful reading: • Mathematical background: Raul Rojas, The Kalman Filter. Freie Universität Berlin. N. A. Thacker and A. J. Lacey, Tutorial: The Kalman Filter. University of Manchester. • Application to animal learning: Peter Dayan and Angela J. Yu (2003) Uncertainty and learning. IETE Journal of Research 49:171-182. • Application to sensorimotor control: D. Wolpert, Z. Ghahramani, M. I. Jordan (1995) An internal model for sensorimotor integration. Science 269:1880-1882.

  9. Linear regression, maximum likelihood, and parameter uncertainty. A noisy process produces n data points and we form an ML estimate of w: $\hat{w} = (X^T X)^{-1} X^T y$. We then run the noisy process again with the same sequence of x's and re-estimate w. The distribution of the resulting estimates $\hat{w}$ has a variance-covariance matrix that depends only on the sequence of inputs, the bases that encode those inputs, and the noise variance $\sigma^2$.
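
A minimal numerical sketch of this idea (mine, not from the slides): simulate the same design matrix X many times with fresh noise, form the ML estimate each time, and compare the empirical covariance of the estimates with the analytic value $\sigma^2 (X^T X)^{-1}$. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed design: n trials, m basis functions (illustrative values)
n, m = 50, 2
X = rng.normal(size=(n, m))          # same input sequence on every run
w_true = np.array([1.0, -0.5])       # "true" weights w*
sigma = 0.3                          # measurement noise std

# Repeat the noisy process and re-estimate w each time
estimates = []
for _ in range(5000):
    y = X @ w_true + sigma * rng.normal(size=n)
    w_hat = np.linalg.solve(X.T @ X, X.T @ y)   # ML / least-squares estimate
    estimates.append(w_hat)
estimates = np.array(estimates)

empirical_cov = np.cov(estimates.T)
analytic_cov = sigma**2 * np.linalg.inv(X.T @ X)
print(empirical_cov)   # should closely match analytic_cov
print(analytic_cov)
```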

  10. Bias of the parameter estimates for a given X. How does the ML estimate behave in the presence of noise in y? The "true" underlying process: $y = Xw^* + \varepsilon$, where $\varepsilon$ is an n×1 vector of noise terms. What we measured: $y$. Our model of the process: $\hat{y} = X\hat{w}$. ML estimate: $\hat{w} = (X^T X)^{-1} X^T y = w^* + (X^T X)^{-1} X^T \varepsilon$. Because $\varepsilon$ is normally distributed with zero mean: $E[\hat{w}] = w^*$. In other words, for a given X the ML estimate is unbiased.

  11. Variance of the parameter estimates for a given X. Assume $\varepsilon$ is a vector of random variables and X is a matrix of constants. For a given X, the ML (or least-squares) estimate of our parameters then has a normal distribution with an m×m variance-covariance matrix: $\hat{w} \sim N\left(w^*, \sigma^2 (X^T X)^{-1}\right)$.
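
The derivation survives only as a gap in the transcript; a reconstruction of the standard one-line argument, which should be what the slide showed:

```latex
\hat{w} = w^* + (X^T X)^{-1} X^T \varepsilon
\quad\Rightarrow\quad
\operatorname{var}(\hat{w})
  = E\!\left[(X^T X)^{-1} X^T \varepsilon\, \varepsilon^T X (X^T X)^{-1}\right]
  = (X^T X)^{-1} X^T (\sigma^2 I)\, X (X^T X)^{-1}
  = \sigma^2 (X^T X)^{-1}
```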

  12. The Gaussian distribution and its var-cov matrix. A 1-D Gaussian distribution is defined as $p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. In n dimensions, it generalizes to $p(x) = \frac{1}{(2\pi)^{n/2}|C|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T C^{-1} (x-\mu)\right)$. When x is a vector, the variance is expressed in terms of a covariance matrix C with $C_{ij} = \rho_{ij}\sigma_i\sigma_j$, where $\rho_{ij}$ corresponds to the degree of correlation between variables $x_i$ and $x_j$.

  13. [Figure: three 2-D Gaussian scatter plots: x1 and x2 positively correlated; x1 and x2 not correlated; x1 and x2 negatively correlated.]
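
A small sketch (mine) generating the three cases of the figure from the covariance formula $C_{ij} = \rho_{ij}\sigma_i\sigma_j$ on the previous slide; the correlation values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
sd1, sd2 = 1.0, 1.0

for rho, label in [(0.8, "positively correlated"),
                   (0.0, "not correlated"),
                   (-0.8, "negatively correlated")]:
    # Covariance matrix C with C_ij = rho_ij * sd_i * sd_j
    C = np.array([[sd1**2, rho * sd1 * sd2],
                  [rho * sd1 * sd2, sd2**2]])
    samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=2000)
    print(label)
    print(np.cov(samples.T))   # empirical var-cov, close to C
```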

  14. Parameter uncertainty: Example 1. [Figure: 2-D plot of parameter uncertainty over (w1, w2).] Input history: x1 was "on" most of the time, so I'm pretty certain about w1. However, x2 was "on" only once, so I'm uncertain about w2.

  15. Parameter uncertainty: Example 2. [Figure: 2-D plot of parameter uncertainty over (w1, w2).] Input history: x1 and x2 were "on" mostly together. The weight var-cov matrix shows what I learned: I do not know the individual values of w1 and w2 with much certainty. x1 appeared slightly more often than x2, so I'm a little more certain about the value of w1.

  16. Parameter uncertainty: Example 3. [Figure: 2-D plot of parameter uncertainty over (w1, w2).] Input history: x2 was mostly "on", so I'm pretty certain about w2, but I am very uncertain about w1. Occasionally x1 and x2 were on together, so I have some reason to believe that the errors in my estimates of w1 and w2 are correlated.

  17. Effect of uncertainty on learning rate. When you observe an error in trial n, the amount that you should change w should depend on how certain you are about w. The more certain you are, the less you should be influenced by the error. The less certain you are, the more you should "pay attention" to the error. Update rule: $\hat{w}^{(n)} = \hat{w}^{(n-1)} + k^{(n)}\left(y^{(n)} - x^{(n)T}\hat{w}^{(n-1)}\right)$, where $\hat{w}$ and the Kalman gain $k$ are m×1 vectors and the term in parentheses is the scalar error. Rudolph E. Kalman (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering 82 (Series D): 35-45. Research Institute for Advanced Study, 7212 Bellona Ave, Baltimore, MD.

  18. Example of the Kalman gain: running estimate of an average. $\hat{w}^{(n)}$ is the online estimate of the mean of y: $\hat{w}^{(n)} = \frac{n-1}{n}\hat{w}^{(n-1)} + \frac{1}{n}y^{(n)} = \hat{w}^{(n-1)} + \frac{1}{n}\left(y^{(n)} - \hat{w}^{(n-1)}\right)$, a weighted combination of the past estimate and the new measure. As n increases, we trust our past estimate $\hat{w}^{(n-1)}$ a lot more than the new observation $y^{(n)}$. The Kalman gain here is $k^{(n)} = 1/n$: the learning rate decreases as the number of samples increases.
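
A few lines of illustrative Python (mine) showing that the recursive estimate with gain 1/n reproduces the batch mean:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=3.0, scale=1.0, size=100)

w = 0.0
for n, y_n in enumerate(y, start=1):
    k = 1.0 / n                # Kalman gain for a running average
    w = w + k * (y_n - w)      # update toward the new observation

print(w, y.mean())             # identical up to floating-point error
```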

  19. Example of the Kalman gain: running estimate of variance. $\hat{\sigma}^{2(n)}$ is the online estimate of the variance of y, and it can be updated recursively with a decreasing gain in the same spirit as the running mean.
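
The slide's equation did not survive in the transcript; one standard recursive form, analogous to the running mean (an assumption on my part, not necessarily the exact form used in the lecture):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=3.0, scale=2.0, size=10000)

w, var = 0.0, 0.0
for n, y_n in enumerate(y, start=1):
    err = y_n - w
    w = w + err / n                            # running mean
    var = var + (err * (y_n - w) - var) / n    # running variance (Welford-style)

print(var, y.var())   # close for large n
```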

  20. Objective: adjust the learning gain in order to minimize model uncertainty. Hypothesis about the data, i.e., the observation in trial n: $y^{(n)} = x^{(n)T}w^* + \varepsilon^{(n)}$. $\hat{w}^{(n|n-1)}$: my estimate of $w^*$ before I see y in trial n, given that I have seen y up to n-1. Error in trial n: $y^{(n)} - x^{(n)T}\hat{w}^{(n|n-1)}$. $\hat{w}^{(n|n)}$: my estimate after I see y in trial n. $\tilde{w}^{(n|n-1)} = w^* - \hat{w}^{(n|n-1)}$: parameter error before I saw the data (a priori error). $\tilde{w}^{(n|n)} = w^* - \hat{w}^{(n|n)}$: parameter error after I saw the data point (a posteriori error). $P^{(n|n-1)} = E\left[\tilde{w}^{(n|n-1)}\tilde{w}^{(n|n-1)T}\right]$: a priori var-cov of the parameter error. $P^{(n|n)}$: a posteriori var-cov of the parameter error.

  21. Some observations about model uncertainty. We note that $P^{(n)}$ is simply the var-cov matrix of our model weights. It represents the uncertainty in our model. We want to update the weights so as to minimize a measure of this uncertainty.

  22. The trace of the parameter var-cov matrix is the sum of squared parameter errors: $\operatorname{tr}\left(P^{(n|n)}\right) = \sum_i E\left[\tilde{w}_i^{(n|n)2}\right] = E\left[\tilde{w}^{(n|n)T}\tilde{w}^{(n|n)}\right]$. Our objective is to find the learning rate k (the Kalman gain) that minimizes this sum of squared errors in our parameter estimates. Therefore, given observation $y^{(n)}$, we want to find k to minimize the variance of our estimate $\hat{w}$.

  23. Find K to minimize trace of uncertainty

  24. Find k to minimize the trace of uncertainty (continued). Note that the innovation variance $x^{(n)T}P^{(n|n-1)}x^{(n)} + \sigma^2$ is a scalar.
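
The derivation on slides 23-24 survives only as titles; the standard argument, reconstructed (it should match what the slides showed, but treat the details as my reconstruction). Substituting the update rule into the a posteriori error, computing its var-cov, and setting the derivative of the trace to zero gives the gain:

```latex
\tilde{w}^{(n|n)}
  = \tilde{w}^{(n|n-1)} - k\left(x^{T}\tilde{w}^{(n|n-1)} + \varepsilon^{(n)}\right)
\\
P^{(n|n)} = E\!\left[\tilde{w}^{(n|n)}\tilde{w}^{(n|n)T}\right]
  = (I - kx^{T})\,P^{(n|n-1)}\,(I - kx^{T})^{T} + \sigma^{2}kk^{T}
\\
\frac{\partial\,\operatorname{tr} P^{(n|n)}}{\partial k}
  = -2\,P^{(n|n-1)}x + 2\,k\left(x^{T}P^{(n|n-1)}x + \sigma^{2}\right) = 0
\\
k^{(n)} = \frac{P^{(n|n-1)}\,x^{(n)}}{x^{(n)T}\,P^{(n|n-1)}\,x^{(n)} + \sigma^{2}}
```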

  25. The Kalman gain: $k^{(n)} = \frac{P^{(n|n-1)}x^{(n)}}{x^{(n)T}P^{(n|n-1)}x^{(n)} + \sigma^2}$. If I have a lot of uncertainty about my model, P is large compared to $\sigma^2$, and I will learn a lot from the current error. If I am pretty certain about my model, P is small compared to $\sigma^2$, and I will tend to ignore the current error.

  26. Update of model uncertainty: $P^{(n|n)} = \left(I - k^{(n)}x^{(n)T}\right)P^{(n|n-1)}$. Model uncertainty decreases with every data point that you observe.
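
Putting slides 17-26 together in a runnable sketch (my code; variable names and the simulated task are illustrative). Each trial observes a scalar $y = x^T w^* + \varepsilon$ and updates the estimate and its uncertainty:

```python
import numpy as np

rng = np.random.default_rng(4)

m = 2                              # number of weights
w_true = np.array([1.0, -0.5])     # hidden "true" weights (stationary here)
sigma2 = 0.25                      # measurement noise variance

w_hat = np.zeros(m)                # a priori estimate of the mean
P = np.eye(m) * 10.0               # a priori var-cov (large = very uncertain)

for n in range(200):
    x = rng.integers(0, 2, size=m).astype(float)   # binary inputs, as in the examples
    y = x @ w_true + rng.normal(scale=np.sqrt(sigma2))

    # Kalman gain: k = P x / (x' P x + sigma^2)
    k = P @ x / (x @ P @ x + sigma2)
    # Update the estimate with the prediction error
    w_hat = w_hat + k * (y - x @ w_hat)
    # Update the uncertainty: P <- (I - k x') P
    P = (np.eye(m) - np.outer(k, x)) @ P

print(w_hat)   # close to w_true; the diagonal of P shrinks over trials
```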

  27. Hidden variable model without state change. In this model, we hypothesize that the hidden variables, i.e., the "true" weights, do not change from trial to trial: $w^{(n+1)} = w^{(n)}$, with observed variables $y^{(n)} = x^{(n)T}w^{(n)} + \varepsilon^{(n)}$. A priori estimate of the mean and variance of the hidden variable before I observe the first data point: $\hat{w}^{(1|0)}$ and $P^{(1|0)}$. Update of the estimate of the hidden variable after I observe the data point: $\hat{w}^{(n|n)}$ and $P^{(n|n)}$, as on the previous slides. Forward projection of the estimate to the next trial: $\hat{w}^{(n+1|n)} = \hat{w}^{(n|n)}$ and $P^{(n+1|n)} = P^{(n|n)}$.

  28. Hidden variable model with state change. In this model, we hypothesize that the hidden variables change from trial to trial: $w^{(n+1)} = a\,w^{(n)} + \eta^{(n)}$, with $\eta^{(n)} \sim N(0, Q)$. The a priori estimate of the mean and variance of the hidden variable before the first data point, and the update after each observed data point, are as before. The forward projection of the estimate to the next trial becomes $\hat{w}^{(n+1|n)} = a\,\hat{w}^{(n|n)}$ and $P^{(n+1|n)} = a^2 P^{(n|n)} + Q$.
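
Relative to the sketch after slide 26, only the forward-projection step changes; a self-contained version (my code, illustrative values for a and Q):

```python
import numpy as np

rng = np.random.default_rng(5)
m, sigma2 = 2, 0.25
a, Q = 0.99, 0.01 * np.eye(m)    # state auto-correlation and update noise

w = np.array([1.0, -0.5])        # hidden state, now drifting over trials
w_hat, P = np.zeros(m), np.eye(m) * 10.0

for n in range(200):
    x = rng.integers(0, 2, size=m).astype(float)
    y = x @ w + rng.normal(scale=np.sqrt(sigma2))

    k = P @ x / (x @ P @ x + sigma2)             # Kalman gain
    w_hat = w_hat + k * (y - x @ w_hat)          # measurement update
    P = (np.eye(m) - np.outer(k, x)) @ P

    # Forward projection to the next trial: uncertainty never collapses
    # to zero, because Q is added back on every trial.
    w_hat = a * w_hat
    P = a**2 * P + Q

    w = a * w + rng.multivariate_normal(np.zeros(m), Q)   # the world also drifts

print(w_hat, w)
```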

  29. Learning rate is proportional to the ratio between two uncertainties: uncertainty about my model parameters vs. uncertainty about my measurement. After we observe an input x, the uncertainty associated with the weight of that input decreases. Because of the state update noise Q, uncertainty increases as we form the prior for the next trial.

  30. Comparison of the Kalman gain to LMS (see the derivation of this in the homework). In the Kalman gain approach, the P matrix depends on the history of all previous and current inputs. In LMS, the learning rate is simply a constant that does not depend on past history. With the Kalman gain, our estimate converges in a single pass over the data set. With LMS, we don't estimate the var-cov matrix P on each trial, but we will need multiple passes before our estimate converges. A sketch contrasting the two updates follows below.
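
An illustrative side-by-side (my code): the LMS rule replaces the trial-dependent Kalman gain with a fixed scalar learning rate eta, whose value here is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
m, sigma2, eta = 2, 0.25, 0.1
w_true = np.array([1.0, -0.5])

w_kal, P = np.zeros(m), np.eye(m) * 10.0
w_lms = np.zeros(m)

for n in range(200):
    x = rng.integers(0, 2, size=m).astype(float)
    y = x @ w_true + rng.normal(scale=np.sqrt(sigma2))

    # Kalman: the gain adapts to input history through P
    k = P @ x / (x @ P @ x + sigma2)
    w_kal = w_kal + k * (y - x @ w_kal)
    P = (np.eye(m) - np.outer(k, x)) @ P

    # LMS: constant learning rate, no uncertainty bookkeeping
    w_lms = w_lms + eta * x * (y - x @ w_lms)

print(w_kal, w_lms)   # the Kalman estimate is typically closer after one pass
```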

  31. Effect of state and measurement noise on the Kalman gain. [Figure: simulated uncertainty and Kalman gain across trials for different noise levels.] High noise in the state update model produces increased uncertainty in the model parameters, which produces high learning rates. High noise in the measurement also increases parameter uncertainty, but this increase is small relative to the measurement uncertainty, so higher measurement noise leads to lower learning rates.

  32. Effect of state transition auto-correlation on the Kalman gain. [Figure: uncertainty and Kalman gain across trials for different values of a.] The learning rate is higher in a state model that has a high auto-correlation (larger a). That is, if the learner assumes that the world is changing slowly (a close to 1), then the learner will have a large learning rate.
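
A simulation sketch (mine) reproducing the qualitative pattern of slides 31 and 32 for a one-dimensional state; the specific parameter values are arbitrary:

```python
import numpy as np

def gain_trajectory(a, q, sigma2, trials=10, p0=1.0):
    """Kalman gain over trials for a 1-D state w(n+1) = a w(n) + noise(q),
    observed as y(n) = w(n) + noise(sigma2)."""
    p, gains = p0, []
    for _ in range(trials):
        k = p / (p + sigma2)          # gain (input x = 1 on every trial)
        gains.append(k)
        p = (1 - k) * p               # measurement update
        p = a**2 * p + q              # forward projection
    return gains

# Higher state noise q -> higher asymptotic gain (faster learning)
print(gain_trajectory(a=1.0, q=0.5, sigma2=1.0)[-1])
print(gain_trajectory(a=1.0, q=0.1, sigma2=1.0)[-1])
# Higher measurement noise sigma2 -> lower gain
print(gain_trajectory(a=1.0, q=0.1, sigma2=4.0)[-1])
# Larger a (world assumed to change slowly) -> larger gain
print(gain_trajectory(a=1.0, q=0.1, sigma2=1.0)[-1])
print(gain_trajectory(a=0.5, q=0.1, sigma2=1.0)[-1])
```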
