
Statistical learning and optimal control: A framework for biological learning and motor control




  1. Statistical learning and optimal control: A framework for biological learning and motor control Lecture 3: Introduction to optimal control Reza Shadmehr Johns Hopkins School of Medicine

  2. Estimation: given observations x and y, estimate the hidden state w. Your estimates have no bearing on your observations. Example: classical conditioning, where the actions of the learner have no effect on the stimuli. Control: find the command u that you need to give so that your observations y behave as you want them to. Example: operant conditioning, where the learner’s actions affect whether it gets rewarded or not.

  3. The linear quadratic tracking problem. We are trying to track a reference trajectory r(k) (q×1). We observe y(k) (q×1), which is related to the state x(k) (n×1). We generate the command u(k) (m×1), which causes a change in x(k). We wish to find the control sequence u(0), u(1), …, u(p-1) that minimizes a cost function made up of a tracking cost and a control cost, given the constraint of the system dynamics.
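The cost function and constraint on this slide did not survive transcription. A plausible reconstruction, using T(k) for the tracking-cost weight and L(k) for the control-cost weight (names taken from the duality slide later in the lecture), is:

```latex
J = \sum_{k=1}^{p} \big(r(k)-y(k)\big)^{\mathsf T} T(k)\, \big(r(k)-y(k)\big)
  \;+\; \sum_{k=0}^{p-1} u(k)^{\mathsf T} L(k)\, u(k),
\qquad \text{subject to } x(k+1) = A\,x(k) + B\,u(k), \quad y(k) = C\,x(k).
```

The first sum is the tracking cost; the second is the control cost.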

  4. Suppose we have a linear dynamical system with A (m×m), B (m×n), u(k) (n×1), and x(k) (m×1). We have the history of inputs u(k), k = 0…p-1, and want to write the history of states x(k) in stacked form, with blocks of sizes (p·m)×1, (p·n)×1, (p·m)×(p·n), (p·m)×m, and (p·m)×(p·m).
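The stacked ("batch") form whose dimensions are quoted above is a standard construction; reconstructing it from those dimensions:

```latex
\underbrace{\begin{bmatrix} x(1) \\ x(2) \\ \vdots \\ x(p) \end{bmatrix}}_{(p\cdot m)\times 1}
= \underbrace{\begin{bmatrix} A \\ A^2 \\ \vdots \\ A^p \end{bmatrix}}_{(p\cdot m)\times m} x(0)
+ \underbrace{\begin{bmatrix} B & 0 & \cdots & 0 \\ AB & B & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ A^{p-1}B & A^{p-2}B & \cdots & B \end{bmatrix}}_{(p\cdot m)\times(p\cdot n)}
\underbrace{\begin{bmatrix} u(0) \\ u(1) \\ \vdots \\ u(p-1) \end{bmatrix}}_{(p\cdot n)\times 1}
```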

  5. Total cost: tracking cost plus control cost, subject to the constraint of the system dynamics.

  6. Constraint minimization with Lagrange multipliers: Example 1. Suppose we want to find the point (xs, ys) along the line y = mx + b that is closest to the point (xo, yo). The squared distance is the cost, and membership on the line is the constraint. We want to find the point (xs, ys) that belongs to the line and, among all points on the line, gives the smallest cost. The cost contours are circles centered on (xo, yo); the points along each contour are of equal cost.

  7. The solution is the point where the line just touches a cost contour, which is where the vector normal to the constraint and the vector normal to the cost point in the same direction. The point we are looking for therefore satisfies the condition that the gradient of the cost equals a scalar (the Lagrange multiplier) times the gradient of the constraint.

  8. This condition gives 2 equations with 3 unknowns (xs, ys, and the multiplier). The constraint itself is our 3rd equation.
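Written out, the parallel-normals condition plus the constraint reconstructs the slide's algebra (the equations themselves did not survive transcription):

```latex
f(x,y) = (x-x_o)^2 + (y-y_o)^2, \qquad g(x,y) = y - mx - b = 0,
\qquad \nabla f = \lambda \nabla g:
```
```latex
2(x_s - x_o) = -\lambda m, \qquad 2(y_s - y_o) = \lambda, \qquad y_s = m x_s + b
\;\;\Rightarrow\;\;
x_s = \frac{x_o + m\,y_o - m b}{1+m^2}, \quad y_s = m x_s + b .
```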

  9. Constraint minimization with Lagrange multipliers: Example 2 (example from Steuard Jensen). Suppose the milkmaid wants to get to the cow by the shortest route possible, given the constraint that she must first wash her milk pan in the river. So we want the shortest route made of a line from the milkmaid M to a point P on the river edge, and a line from P to the cow C. We therefore seek the point P that minimizes the total distance (the cost) subject to P lying on the river edge (the constraint). An ellipse can be defined as the set of points P for which the total distance from one focus to P and then to the other focus is constant. If we keep M and C as the foci of this ellipse, then as soon as we have an ellipse that touches the river edge, we have found the point P that is our solution. Note that at point P, the normal vector to g(x) and the normal vector to f(x) point in the same direction.

  10. Constraint minimization with Lagrange multipliers: a scalar constraint. In order to minimize the scalar function f(x) subject to the scalar constraint g(x) = 0, we form an augmented cost. Note that when we find the x that satisfies the constraint, g(x) is zero, so we have not changed our cost function. To minimize the augmented cost, we set its derivatives to zero. So, to find the x that minimizes the cost subject to the constraint, we find the (x, lambda) that satisfies both conditions. This should look familiar from the last two examples.
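In symbols, reconstructing the equations the slide displayed:

```latex
J_a(x,\lambda) = f(x) + \lambda\, g(x), \qquad
\frac{\partial J_a}{\partial x} = \nabla f(x) + \lambda \nabla g(x) = 0, \qquad
\frac{\partial J_a}{\partial \lambda} = g(x) = 0 .
```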

  11. Constraint minimization with Lagrange multipliers: a vector constraint. In order to minimize the scalar function f(x) subject to two scalar constraints, we form an augmented cost with two Lagrange multipliers. To minimize the augmented cost, we set its derivatives to zero. So, to find the x that minimizes the cost subject to the constraints, we find the (x, lambda1, lambda2) that satisfies all the conditions. We have as many multipliers as we have constraints.

  12. Here the cost is a scalar and the constraint is a vector. Setting the derivatives of the augmented cost to zero yields Eq. 1 and Eq. 2; solve for lambda in Eq. 1, and then plug it into Eq. 2.

  13. Let us construct a simple model of the eye’s dynamics and produce a saccade using optimal control. The forces acting on the eye are the force in the bottom spring, the force in the top spring, the force in the viscous element, and the force from the motor command. If we re-define x so that we measure it from xo/2, then the equivalent system is shown on the right, where the equilibrium point of the spring is at x = 0.
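One plausible reading of this model in equation form (with m the mass of the eye, k lumping the two spring constants, b the viscosity, and u the motor command; the slide's exact symbols did not survive transcription):

```latex
m\ddot{x} = -kx - b\dot{x} + u
\quad\Longleftrightarrow\quad
\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u .
```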

  14. System dynamics in continuous form, and our observation. Goal: find the motor commands that move the mass (the eye) to a certain location by a certain time while minimizing a cost that depends on endpoint accuracy and motor commands. First step: re-formulate the system dynamics from continuous to discrete time. Second step: solve the optimal control problem.

  15. Relating the discrete and continuous representations of a linear system (approximate solution). A simple (but approximate) method is to use Euler’s approximation:
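For continuous dynamics with matrices A_c and B_c sampled every Δ seconds, Euler's approximation gives (a standard reconstruction of the slide's equation):

```latex
x\big((k+1)\Delta\big) \approx x(k\Delta) + \Delta\big(A_c\, x(k\Delta) + B_c\, u(k\Delta)\big)
\;\;\Rightarrow\;\;
A_d \approx I + \Delta A_c, \qquad B_d \approx \Delta B_c .
```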

  16. Solution of continuous LTI state equations (scalar case). Suppose that our state is a scalar variable and the state update equation is homogeneous; the solution will have an exponential form. Now suppose that the state update also depends on an external input u(t):
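In standard form, the scalar solutions the slide showed are:

```latex
\dot{x} = a x \;\Rightarrow\; x(t) = e^{at} x(0); \qquad
\dot{x} = a x + b\, u(t) \;\Rightarrow\;
x(t) = e^{at} x(0) + \int_0^t e^{a(t-\tau)}\, b\, u(\tau)\, d\tau .
```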

  17. Matrix exponential. Suppose that our state is a vector variable. We can imagine that the solution will have a “matrix exponential” form. For any square matrix A, the matrix exponential exp(A) is a square matrix, which we can compute using a Taylor series expansion. In MATLAB, it is computed with expm(A); in Mathematica, use MatrixExp[A].
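A quick numerical check of the Taylor-series definition against SciPy's built-in routine; the 2×2 matrix here is an arbitrary example, not from the lecture:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrix (not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

E_builtin = expm(A)  # SciPy's matrix exponential

# Truncated Taylor series: exp(A) = I + A + A^2/2! + A^3/3! + ...
E_taylor = np.eye(2)
term = np.eye(2)
for k in range(1, 25):
    term = term @ A / k      # term is now A^k / k!
    E_taylor = E_taylor + term

print(np.allclose(E_builtin, E_taylor))  # True
```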

  18. Some properties of the matrix exponential. Using the Taylor series expansion, one can show the following properties of the matrix exponential:
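The standard properties the slide most likely listed are:

```latex
e^{A\cdot 0} = I, \qquad
\frac{d}{dt}\, e^{At} = A\, e^{At} = e^{At} A, \qquad
e^{A(t_1+t_2)} = e^{At_1}\, e^{At_2}, \qquad
\big(e^{At}\big)^{-1} = e^{-At},
```

and, only when A and B commute (AB = BA), e^{A+B} = e^{A} e^{B}.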

  19. Solution of continuous LTI state equations (vector case).
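The vector case mirrors the scalar one, with the matrix exponential in place of e^{at}:

```latex
\dot{x} = A x + B u(t) \;\Rightarrow\;
x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)}\, B\, u(\tau)\, d\tau .
```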

  20. Solution of discrete LTI state equations
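For the discrete system x(k+1) = A x(k) + B u(k), iterating the update gives the solution the slide showed:

```latex
x(k) = A^{k} x(0) + \sum_{j=0}^{k-1} A^{\,k-1-j}\, B\, u(j) .
```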

  21. Relating the discrete and continuous representations of a linear system. Assume that u(t) is held constant between consecutive sampling times (a zero-order hold).
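Under that zero-order-hold assumption, the exact discrete equivalents are (with A_c, B_c the continuous matrices and Δ the sampling interval):

```latex
A_d = e^{A_c \Delta}, \qquad
B_d = \left(\int_0^{\Delta} e^{A_c \tau}\, d\tau\right) B_c
    = A_c^{-1}\big(e^{A_c \Delta} - I\big) B_c
\quad \text{(the last form requires } A_c \text{ invertible).}
```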

  22. Discrete and continuous representation of a linear system (noise free scenario) Continuous system Discrete system

  23. State noise in the continuous system: state noise in the continuous domain, and the equivalent state noise in the discrete domain. We note that for a small sampling interval Δ, the term inside the exponential is near zero over the range kΔ to (k+1)Δ. Therefore, we can approximate the matrix exponential with an identity matrix.
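With that identity-matrix approximation, the discrete state-noise covariance scales with the sampling interval. For continuous noise w(t) with covariance density Q_c (a standard result consistent with the slide's argument):

```latex
w_d(k) = \int_{k\Delta}^{(k+1)\Delta} e^{A\left((k+1)\Delta-\tau\right)} w(\tau)\, d\tau
\;\approx\; \int_{k\Delta}^{(k+1)\Delta} w(\tau)\, d\tau,
\qquad Q_d \approx Q_c\, \Delta .
```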

  24. Measurement noise in the continuous system: measurement noise in the continuous domain. Suppose that we average the sample y(t) over the discrete interval Δ to get our discrete sample; this averaging defines the noise in the discrete domain and gives the equivalent measurement noise in the discrete domain.
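Averaging white measurement noise of covariance density R_c over a window of length Δ reduces its variance by a factor of Δ, so (again a standard result consistent with the slide):

```latex
v_d(k) = \frac{1}{\Delta}\int_{k\Delta}^{(k+1)\Delta} v(\tau)\, d\tau,
\qquad R_d \approx \frac{R_c}{\Delta} .
```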

  25. Discrete and continuous representation of a linear system with noise Continuous system Equivalent discrete system

  26. Continuous time model of the eye Discrete time model of the eye Optimal control problem

  27. [Figure: simulation results; time axis in seconds.] Make a 30 deg (~0.5 rad) saccade in 0.5 seconds.

  28. [Figure: position (rad), velocity (rad/s), and motor command (N·m) traces for saccades of 5, 10, 15, and 20 deg; eye muscle activity for a 10 deg saccade. Time axis in seconds.]

  29. Resolving redundancies. Suppose that we have a cursor whose position depends on the sum of the positions of the left and right joysticks, and that the left joystick is heavier than the right joystick. We want to move the cursor to some location. How much should we move each joystick? [Figure: simulated joystick and cursor trajectories.]

  30. Summary: Optimal control of a linear system with quadratic cost

  31. Issues with the control policy: • What if the system gets perturbed while the control policy is being executed? With the current approach, there is no compensation for the perturbation. • In reality, both the state update equation and the measurement equation are subject to noise. How do we take that into account? • To resolve this, we need a way to figure out what command to produce given that we find ourselves at some state x at some time k. Once we figure this out, we will consider the situation where we cannot measure x directly but have noise to deal with; our best estimate will then come through the Kalman filter. This will link estimation with control. (Setup: a starting state, a sequence of actions, observations, and a cost to minimize.)

  32. Note that at the last time step, the cost is a quadratic function of the state. (Equations: the cost at the last time point, and the cost-to-go at the next-to-last time point.)

  33. We will now show that if we choose the optimal u at step p-1, then the cost to go is once again a quadratic function of the state x. (Each intermediate expression can be simplified.)

  34. We just showed that for the last time step, the cost to go is a quadratic function of x. The optimal u at time point p-1 minimizes the cost to go J(p-1). If at time point p-1 we indeed carry out this optimal policy u, then the cost to go at time p-1 also becomes a quadratic function of x. If we now repeat the process, we can find the optimal u for time point p-2. And if we apply the optimal u at time points p-2 and p-1, then the cost to go at time point p-2 will again be a quadratic function of x. So in general, if for time points t+1, …, p we have calculated the optimal policy for u, then the above gives us a recipe to compute the optimal policy for time point t.
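In equations, this backward recursion takes the following standard form, using the W and G names that appear in the summary on the next slide. It is written here for the regulator form of the cost (xᵀT(k)x + uᵀLu per step); the full tracking problem adds an affine term driven by r(k), omitted for brevity:

```latex
W(p) = T(p); \qquad \text{for } k = p-1, \dots, 0:
```
```latex
G(k) = \big(L + B^{\mathsf T} W(k{+}1)\, B\big)^{-1} B^{\mathsf T} W(k{+}1)\, A, \qquad
u(k) = -G(k)\, x(k), \qquad
W(k) = T(k) + A^{\mathsf T} W(k{+}1)\big(A - B\, G(k)\big).
```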

  35. Summary of the linear quadratic tracking problem. The procedure is to compute the cost-to-go matrices W and the gains G from the last time point back to the first.
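A minimal runnable sketch of this procedure in Python, written for the regulator form of the cost (no reference trajectory). The point-mass system, horizon, and weights at the bottom are illustrative assumptions, not the lecture's eye model:

```python
import numpy as np

def lqr_gains(A, B, T_list, L):
    """Backward recursion for the finite-horizon LQ regulator.

    Cost: sum_k x(k)' T_list[k] x(k) + u(k)' L u(k), k = 0..p,
    with dynamics x(k+1) = A x(k) + B u(k).
    T_list has p+1 entries; returns feedback gains G(0..p-1).
    """
    p = len(T_list) - 1
    W = T_list[p].copy()            # cost-to-go weight at the final step
    gains = [None] * p
    for k in range(p - 1, -1, -1):  # work backward from the last step
        G = np.linalg.solve(L + B.T @ W @ B, B.T @ W @ A)
        W = T_list[k] + A.T @ W @ (A - B @ G)
        gains[k] = G
    return gains

# Illustrative example: bring a discretized point mass to rest at the origin.
dt, p = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
T_list = [np.zeros((2, 2))] * p + [1e6 * np.eye(2)]  # penalize only the endpoint
L = np.array([[1e-4]])

gains = lqr_gains(A, B, T_list, L)
x = np.array([1.0, 0.0])
for k in range(p):
    u = -gains[k] @ x               # optimal feedback: u(k) = -G(k) x(k)
    x = A @ x + B @ u
print(x)                            # x is driven close to [0, 0]
```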

  36. Modeling of an elbow movement Continuous time model of the elbow Discrete time model of the elbow

  37. [Figure: cost, position, and motor command traces versus time in seconds, for three conditions.] Goal: reach a target at 30 deg in 300 ms and hold it there for 100 ms. Conditions: unperturbed movement; arm held at the start for 200 ms; force pulse applied to the arm for 50 ms.

  38. [Figure: position and motor command traces versus time in seconds.] Movement with a via point: we set the cost to be high at the time when we are supposed to be at the via points.

  39. Stochastic optimal control Biological processes have noise. For example, neurons fire stochastically in response to a constant input, and muscles produce a stochastic force in response to constant stimulation. Here we will see how to solve the optimal control problem with additive Gaussian noise. Cost to minimize Because there is noise, we are no longer able to observe x directly. Rather, the best we can do is to estimate it. As we saw before, for a linear system with additive noise the best estimate of state is through the Kalman filter. So our goal is to determine the best command u for the current estimate of x so that we can minimize the global cost function. Approach: as before, at the last time point p the cost is a quadratic function of x. We will find the optimal motor command for time point p-1 so that it minimizes the expected cost to go. If we perform the optimal motor command at p-1, then we will see that the cost to go at p-1 is again a quadratic function of x.

  40. Preliminaries: the expected value of a squared random variable, for scalar x and for vector x. In the following, we assume that x is the random variable.
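The key identity, reconstructing the equations on the slide: for x with mean x̄ and variance σ² (scalar) or covariance Σ (vector),

```latex
\text{scalar: } E[x^2] = \bar{x}^2 + \sigma^2; \qquad
\text{vector: } E\big[x^{\mathsf T} W x\big] = \bar{x}^{\mathsf T} W \bar{x} + \mathrm{tr}(W\Sigma).
```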

  41. Cost at the last time point

  42. Cost-to-go at the next-to-last time point. So we see that if our system has additive state or measurement noise, the optimal motor command remains the same as if the system had no noise at all. When we use the optimal policy at time point p-1, we see that, as before, the cost-to-go at p-1 is a quadratic function of x, and the matrix W at p-1 remains the same as when the system had no noise. The problem is that we do not have x; the best that we can do is to estimate x via the Kalman filter. We do this on the next slide.

  43. On trial p-1, our best estimate of x is the prior, which we compute from the posterior of the previous trial using the Kalman gain. (We use a short-hand to denote the prior estimate of x on trial p-1.) Although the noises in the system do not affect the gain G, the estimate of x is of course affected by them, because the Kalman gain is influenced by them.
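For reference, the prior/posterior updates described here are the standard Kalman-filter equations (written in generic notation; the lecture's exact symbols did not survive transcription):

```latex
\hat{x}^-(k) = A\,\hat{x}(k-1) + B\,u(k-1), \qquad
K(k) = P^-(k)\, C^{\mathsf T}\big(C\, P^-(k)\, C^{\mathsf T} + R\big)^{-1},
```
```latex
\hat{x}(k) = \hat{x}^-(k) + K(k)\big(y(k) - C\,\hat{x}^-(k)\big).
```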

  44. Summary of stochastic optimal control for a linear system with additive Gaussian noise and quadratic cost Cost to go at the start Cost to go at the end

  45. The duality of the Kalman filter and optimal control In the estimation problem, we have a model of how we think the hidden states x are related to observations y. Given an observation y, we have a rule with which we can change our estimates. Our objective is to minimize the trace of the variance of our estimate xhat. This variance is P. This trace is our scalar cost function, which is quadratic in terms of xhat. We minimize it by finding the optimal gain k. If we use this optimal k, then we can compute the variance in the next time step. Our cost (i.e., variance) of course still remains quadratic in terms of xhat.

  46. The duality of the Kalman filter and optimal control, continued. In the control problem, we have a model of how we think the hidden states x are related to commands u and observations y. Our objective is to find the u that minimizes a scalar cost. To find this u, we run time backwards! We start at the end time point and find the optimal u that minimizes the cost to go. When we find this u, we then move to the next time point and so on. The cost to go is a quadratic function of hidden states. This is very similar to the Kalman filter, where the cost was a quadratic function of the hidden states as well.

  47. Duality of optimal control and the Kalman filter, continued. Optimal control involves a weighting of state, a motor cost, and a tracking cost; the Kalman filter involves state uncertainty, measurement noise, and state noise. So W is like an estimate of the state uncertainty matrix, BᵀB is like the state update noise Q, and L is like the measurement noise R. In optimal control, the motor commands are generated by applying a gain to the state; this gain is like the Kalman gain.

  48. Noise characteristics of biological systems are not additive Gaussian: noise in the motor output grows with the size of the motor command. [Figure, panels A and B; columns compare voluntary contraction of the muscle with electrical stimulation of the muscle.] The standard deviation of noise grows with mean force in an isometric task. Participants produced a given force with their thumb flexors. In one condition (labeled “voluntary”), the participants generated the force, whereas in another condition (labeled “NMES”) the experimenters stimulated their muscles artificially to produce force. To guide force production, the participants viewed a cursor that displayed thumb force, but the experimenters analyzed the data during a 4-s period in which this feedback had disappeared. A: force produced by a typical participant; the period without visual feedback is marked by the horizontal bar in the 1st and 3rd columns (top right) and is expanded in the 2nd and 4th columns. B: when participants generated force, noise (measured as the standard deviation) increased linearly with force magnitude. Abbreviations: NMES, neuromuscular electrical stimulation; MVC, maximum voluntary contraction. From Jones et al. (2002) J Neurophysiol 88:1533.

  49. Representing signal-dependent noise: zero-mean Gaussian noise plus signal-dependent motor noise in the state equation, and zero-mean Gaussian noise plus signal-dependent sensory noise in the measurement equation, built from a vector of zero-mean, variance-1 random variables.
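One common way to write this, following Todorov (2005), is to scale independent standard-normal variables by the signal (H denotes the observation matrix here to avoid a clash with the scaling matrices C_i):

```latex
x(k{+}1) = A\,x(k) + B\,u(k) + w(k) + \textstyle\sum_i \varepsilon_i(k)\, C_i\, u(k), \qquad
y(k) = H\,x(k) + v(k) + \textstyle\sum_i \eta_i(k)\, D_i\, x(k),
```

where w and v are zero-mean Gaussian, and the ε_i and η_i are zero-mean, variance-1 random variables, so the extra noise terms grow with the size of u and x.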

  50. Control problem with signal-dependent noise (Todorov 2005). Cost per step as before. To find the motor commands that minimize the total cost, we start at the last time step p and work backwards. At time step p, the cost is a quadratic function of x. At time step p-1, we can find the optimal u that minimizes the cost to go; when we do, the cost to go at p-1 will be a quadratic function of x plus a quadratic function of x - xhat. In general, by induction we can prove that as long as we apply the optimal u, the cost to go will keep this quadratic form. This proof is due to E. Todorov, Neural Computation, 2005.
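A sketch of the quadratic form referred to here, written with the S names used in Todorov's paper (up to the constant term):

```latex
J_k(x,\hat{x}) = x^{\mathsf T} S^{x}_{k}\, x \;+\; (x-\hat{x})^{\mathsf T} S^{e}_{k}\,(x-\hat{x}) \;+\; s_{k},
```

that is, one quadratic in the true state x and one quadratic in the estimation error x - xhat.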
