
Lecture 25. Introduction to Control

Lecture 25. Introduction to Control, in which we enlarge upon the simple intuitive control we’ve seen. We generally want a system to be in some “equilibrium” state. If the equilibrium is not stable, then we need a control to stabilize the state.




Presentation Transcript


  1. Lecture 25. Introduction to Control, in which we enlarge upon the simple intuitive control we’ve seen. We generally want a system to be in some “equilibrium” state. If the equilibrium is not stable, then we need a control to stabilize the state. I will talk a little bit about this in the abstract, but first, let me repeat some of what we did Tuesday evening.

  2. I want to start on control at this point. We have open loop control and closed loop control. Open loop control is simply: guess what input u we need to control the system and apply that. Closed loop control sits on top of open loop control, in a sense we will shortly see. In closed loop control we measure the error between what we want and what we have and adjust the input to reduce the error: feedback control.

  3. Cruise control. About the simplest feedback control system we see in everyday life is cruise control. We want to go at a constant speed. If the wind doesn’t change and the road is smooth and level, we can do this with an open loop system. Otherwise we need a closed loop system. Recall the diagram from Lecture 1, and modify it to describe a cruise control.

  4. CRUISE CONTROL. [Block diagram: the open loop part feeds the desired speed (GOAL: SPEED) through an INVERSE PLANT to produce a nominal fuel flow; the closed loop part forms the error between desired and actual speed and produces a feedback fuel flow through the CONTROL block; both fuel flows, together with a disturbance, enter the PLANT (DRIVE TRAIN), whose output is the actual speed.]

  5. We have some open loop control — a guess as to the fuel flow. We have some closed loop control — correct the fuel flow if the speed is wrong. It happens that this is not good enough, but let’s just start naively.

  6. Simple first order model of a car driving along: drive force, air drag, and disturbance. Nonlinear, with an open loop control. If s = 0 and v = vd, we’ll have an “equilibrium” that determines the open loop f. Split the force and the velocity into two parts (on the way to linearizing). Substitute into the original ode.
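The slide’s equations are not reproduced in this transcript; a plausible reconstruction of the standard first order car model (the symbols here are assumptions: m the mass, f the drive force, c a drag coefficient, s the disturbance force) is

```latex
m\dot{v} = f - c v^2 + s
```

With s = 0 and v = v_d, equilibrium fixes the open loop force at f_d = c v_d^2, and the split for linearizing is f = f_d + f', v = v_d + v'.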

  7. Expand v², and cancel the common parts. Linearize by crossing out the square of the departure speed v', and the goal is to make v' go to zero.
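Under the same assumed model m dv/dt = f − cv² + s, with f = c v_d² + f' and v = v_d + v' (a reconstruction, since the transcript omits the slide’s equations), the expansion and linearization would read

```latex
m\dot{v}' = f' - 2 c v_d v' - c v'^2 + s \;\approx\; f' - 2 c v_d v' + s
```

where the quadratic term c v'² is the one crossed out.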

  8. Let’s say a little about possible disturbances. Hills are probably the easiest to deal with analytically: the component of gravity along the road is mg sin φ for grade angle φ. I’ll say more as we go on.

  9. The open loop picture. [Block diagram: f' and the disturbance enter a summing junction along with the (negative) linearized drag term, pass through a 1/m block, and integrate to give v'.]

  10. I’m not in a position to simply ask f' to cancel the disturbance (because I don’t know what it is!). I need some feedback mechanism to give me more fuel when I am going too slow and less fuel when I am going too fast. We take the linearized equation (still open loop) and add negative feedback from the velocity error.

  11. The closed loop picture. [Block diagram: the open loop picture of slide 9, with a feedback loop from v' through the control gain back to the input summing junction with a negative sign.]

  12. We have a one dimensional state, so we have a single first order equation, and we can write down its solution.
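Reconstructing the missing equations under the same assumed model: with feedback f' = −Kv', the one dimensional closed loop state equation and its solution for a constant disturbance s₀ would be

```latex
m\dot{v}' = -(2 c v_d + K)\, v' + s,
\qquad
v'(t) = \left(v'_0 - \frac{s_0}{2 c v_d + K}\right) e^{-(2 c v_d + K) t / m} + \frac{s_0}{2 c v_d + K}
```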

  13. We do not need this whole apparatus to get a sense of how this works. Consider a hill, for which s(t) is constant; call it s0. We can find the particular solution by inspection. The homogeneous solution decays, leaving the particular solution, and we see that we have a permanent error in the speed. The bigger K, the smaller the error, but we can’t make it go away (and K will be limited by physical considerations in any case).
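A minimal numerical sketch of that permanent error, assuming the first order model above; the values of m, c, v_d, s0, and the gains are made up for illustration and are not the lecture’s:

```python
# Steady-state speed error under proportional control on a constant hill.
# Assumed model: m v'dot = -(2 c v_d + K) v' + s0, so at steady state
# v'_p = s0 / (2 c v_d + K).  All numbers are illustrative.
m, c, v_d = 1000.0, 0.5, 30.0   # mass (kg), drag coefficient, desired speed (m/s)
s0 = -500.0                     # constant hill disturbance force (N)

def steady_state_error(K):
    # particular solution with v'dot = 0
    return s0 / (2.0 * c * v_d + K)

for K in (100.0, 1000.0, 10000.0):
    # bigger K shrinks the error but never eliminates it
    print(K, steady_state_error(K))
```

Doubling the gain roughly halves the error, but it never reaches zero, which is the point of the slide.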

  14. What we’ve done so far is called proportional (P) control. We can fix this problem by adding integral (I) control. There is also derivative (D) control (we’ll see that in another example). PID control incorporates all three types, and you’ll hear the term often.

  15. Add a variable and its ode. Let the force depend on both variables. Then rearrange and define k.

  16. Convert to state space. We remember that x denotes the error, so the initial condition for this problem is y' = 0 = v': x(0) = {0, 0}^T.
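A sketch of what the state-space conversion might look like numerically, assuming the PI law f' = −K1 y' − K2 v' with an auxiliary variable y' satisfying dy'/dt = v'; the gains and parameters are invented for illustration:

```python
import numpy as np

# State x = [y', v'] for the PI-controlled linearized car model.
# Assumed dynamics: y'dot = v',  m v'dot = -K1 y' - (2 c v_d + K2) v'.
m, c, v_d = 1000.0, 0.5, 30.0   # illustrative values
K1, K2 = 200.0, 800.0           # illustrative integral and proportional gains

A = np.array([[0.0,      1.0],
              [-K1 / m, -(2.0 * c * v_d + K2) / m]])
x0 = np.array([0.0, 0.0])       # y' = 0 = v' at t = 0
eigs = np.linalg.eigvals(A)
print(eigs)                     # both eigenvalues should sit in the left half plane
```

For positive gains both eigenvalues have negative real parts, so the homogeneous solution decays, matching the claim on the next slide.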

  17. The homogeneous solution (closed loop without the disturbance) will decay as long as K2 > 0.

  18. What happens now when we go up a hill? This means a nonzero disturbance, and it requires a particular solution. We can now let the displacement variable take care of the particular solution.
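Reconstructing the claim (the slide’s algebra is not in the transcript): with a constant disturbance s₀ and the assumed law f' = −K₁y' − K₂v', a steady state has v' = 0, so

```latex
0 = -K_1 y'_p + s_0
\quad\Rightarrow\quad
y'_p = \frac{s_0}{K_1}
```

and the displacement-like variable y' absorbs the hill while the velocity error vanishes.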

  19. Wait a minute here! What’s going on!? Have I pulled a fast one? Not at all. Let’s think a little bit here.

  20. What did we just do? What can we say in general? We did some linearization, and we changed a one dimensional state into a two dimensional state. We also dealt with disturbances, which is actually an advanced topic. We can look at all of this in Mathematica if we have the time and inclination at the end of this lecture. For now, let’s look at some results for the second order control. I will scale to make the system nondimensional and of general interest.

  21. [Figure: the scaled response to a constant hill.]

  22. Suppose we have a more varied terrain? [Figure: the terrain profile and the scaled response.]

  23. The control still works very well, and tracks nicely once it is in place. Let’s look at a much more complicated roadway

  24. The scaled response looks like the scaled forcing and the velocity error is minuscule — this really works
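A rough forward-Euler simulation in the same spirit; the parameters and the sinusoidal “terrain” are invented, and this is not the lecture’s scaled system:

```python
import numpy as np

# Simulate the assumed PI-controlled car model over a slowly varying
# disturbance and confirm the velocity error stays tiny.
# Model: y'dot = v',  m v'dot = -K1 y' - (2 c v_d + K2) v' + s(t).
m, c, v_d = 1000.0, 0.5, 30.0   # illustrative values
K1, K2 = 500.0, 2000.0          # illustrative gains

dt, T = 0.01, 200.0
n = int(T / dt)
y = v = 0.0                     # integral state y' and velocity error v'
errs = []
for i in range(n):
    t = i * dt
    s = 300.0 * np.sin(0.05 * t)         # slowly varying "hill" force (N)
    f = -K1 * y - K2 * v                 # PI feedback force
    vdot = (f - 2.0 * c * v_d * v + s) / m
    y += v * dt
    v += vdot * dt
    errs.append(v)
print(max(abs(e) for e in errs[n // 2:]))  # late-time velocity error magnitude
```

With these numbers the late-time velocity error is a small fraction of a meter per second against a few-hundred-newton disturbance, the same qualitative behavior the slides report.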

  25. Overlay the two: forcing in black, response in yellow

  26. What did we do here? We started with a one dimensional system and tried to find a force to cancel an exterior force. That didn’t work. We added a variable to the mix, found a new feedback, and got the velocity to be controlled at the expense of its integral, about which we don’t care very much.

  27. Now let’s review this in a more abstract and general sense (without worrying about disturbances for now)

  28. Consider a basic single input linear system. We need a steady goal, for which the derivative vanishes. Typically ud = 0 and xd = 0, and I’ll assume that to be the case here (xd might be the state corresponding to an inverted pendulum pointing up).

  29. We can look at departures from the desired state (errors), where we remember that we want x' -> 0. If u' = 0, then the system is governed by the homogeneous equation, and its behavior depends on the eigenvalues of A: if they all have negative real parts, then x' -> 0. We call that a stable system.
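The stability test is easy to state in code; a small sketch with two invented matrices (not from the lecture):

```python
import numpy as np

# A linear system x'dot = A x' is stable when every eigenvalue of A
# has a negative real part.  Both matrices here are illustrative.
A_stable   = np.array([[-1.0, 2.0],
                       [ 0.0, -3.0]])   # eigenvalues -1, -3
A_unstable = np.array([[ 0.5, 1.0],
                       [ 0.0, -2.0]])   # eigenvalues 0.5, -2

def is_stable(A):
    return all(np.real(np.linalg.eigvals(A)) < 0)

print(is_stable(A_stable), is_stable(A_unstable))
```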

  30. If we have a stable system, we don’t need to control it, although we might want to add a control to make it more stable. If it is not stable, then we must add feedback control to make it stable. The behavior of this closed loop system depends on the eigenvalues of the closed loop matrix.

  31. We can get the eigenvalues by forming the determinant of sI minus the closed loop matrix. The terms in this equation will depend on the values of the gains, which are the components of the gain vector g. You might imagine that, since there will be as many of these as there are roots, we can make the eigenvalues be anything we please. This is often, but not always, true. We’ll learn more about this in Lecture 31.
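A toy pole-placement sketch of that idea, using an invented controllable pair in companion form (not the lecture’s system):

```python
import numpy as np

# Closed loop: x'dot = (A - b g^T) x'.  In companion form the feedback
# b g^T only shifts the bottom row of A, so the gains follow directly
# from matching characteristic-polynomial coefficients.
A = np.array([[0.0, 1.0],
              [2.0, 3.0]])          # open loop poly s^2 - 3s - 2 (unstable)
b = np.array([[0.0],
              [1.0]])
desired = np.poly([-1.0, -2.0])     # target eigenvalues -1, -2: s^2 + 3s + 2

# Bottom row must become [-desired[2], -desired[1]] = [-2, -3]:
g = np.array([2.0 + desired[2], 3.0 + desired[1]])   # g = [4, 6]
A_cl = A - b @ g.reshape(1, 2)
print(np.sort(np.linalg.eigvals(A_cl).real))         # approximately [-2, -1]
```

This is the clean case; as the slide warns, placing eigenvalues anywhere we please is not always possible (the system must be controllable).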

  32. Let’s look at a simple example to see how this works

  33. Last time we looked at an electric motor and set it up as a second order system without being too specific. Make a vector equation out of this, or equivalently write it in matrix form.

  34. Suppose we want the angle to be fixed at π/3. The desired state satisfies the differential equations with no input voltage.

  35. We can write the differential equations for the primed quantities. We want the perturbations to go to zero. Is the homogeneous solution stable? What are the eigenvalues of A?
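Assuming the usual voltage-driven DC motor model (a reconstruction, since the slide’s equations are missing from the transcript), with angle q', rate w', torque constant K, resistance R, and inertia I, the primed equations would be

```latex
\dot{q}' = w', \qquad I\dot{w}' = \frac{K}{R}\left(e - K w'\right)
\quad\Rightarrow\quad
A = \begin{pmatrix} 0 & 1 \\ 0 & -K^2/(RI) \end{pmatrix}
```

so the eigenvalues are s = 0 and s = −K²/(RI): one marginally stable and one stable root.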

  36. There is one stable root and one marginally stable root (s = 0). The homogeneous solution is written in terms of the eigenvectors.

  37. If the initial value of q' is not equal to its desired value (here 0), then it will never get there. The problem as posed is satisfied with everything equal to zero, but that’s not good enough. We need control. If q' is too big we want to make it smaller, and vice versa. Let’s look at this in block diagram mode.

  38. Here’s a block diagram representation of the differential equations. [Block diagram, open loop: the voltage error e drives the motor dynamics, producing w' and then q'.]

  39. We can close the loop by feeding the q' signal back to the input. [Block diagram, closed loop: the q' output returns through a feedback loop, with a negative sign, to the input summing junction.]

  40. So we have new equations governing the closed loop system. There’s no disturbance, so the equations are homogeneous.

  41. We’ve gone from an inhomogeneous set of equations to a homogeneous set. This is what closing the loop does; there’s no more undetermined external input. We want q and w to go to zero, and that will depend on the eigenvalues of the new system.

  42. This will converge to zero for any positive g. Let’s put in some numbers: K = 0.429, R = 2.71, Ix = 0.061261 (10 cm steel disk). We are overdamped for small g and underdamped for large g. We can get at the behavior by applying what we know about homogeneous problems.

  43. The eigenvectors follow. I will select the gain g = 0.2379 (to make some things come out nicely). This leads to the eigenvalues s = -0.544 ± 0.544j.
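Using the slide’s numbers in the assumed motor model with feedback e = −g q', the closed loop eigenvalues can be checked numerically. The model form is my assumption, which is why the values land near, rather than exactly on, the slide’s −0.544 ± 0.544j:

```python
import numpy as np

# Assumed closed-loop motor model: q'dot = w', I w'dot = (K/R)(-g q' - K w').
K, R, I_x = 0.429, 2.71, 0.061261   # slide's numbers (10 cm steel disk)
g = 0.2379                          # slide's chosen gain

A_cl = np.array([[0.0,                1.0],
                 [-g * K / (R * I_x), -K**2 / (R * I_x)]])
print(np.linalg.eigvals(A_cl))      # complex pair in the left half plane
```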

  44. The homogeneous solution for the closed loop system follows, with the numbers we have.

  45. At t = 0, we have the initial conditions. A little algebra gives the constants, and now we have the complete solution in terms of the initial conditions.

  46. Let’s plot this and see what happens for q’0 = π/3 and w’0 = 0

  47. I did this in something of an ad hoc fashion that did not really illustrate the general principle. Let me go back and repeat it more formally. I can clean up the algebra a bit with a definition.

  48. Let me define a gain vector. If the output is the angle, then g1 is a proportional gain and g2 a derivative gain. The closed loop characteristic polynomial is the determinant of sI minus the closed loop matrix,

  49. which I can expand. Denote the roots of this by s1 and s2, from which the gains follow.
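Sketching the coefficient matching (symbols a and b here are assumed placeholders, since the slide’s polynomial is not in the transcript): if the closed loop characteristic polynomial has the form

```latex
s^2 + (a + b\, g_2)\, s + b\, g_1 = (s - s_1)(s - s_2) = s^2 - (s_1 + s_2)\, s + s_1 s_2
```

then matching coefficients gives g₂ = (−(s₁ + s₂) − a)/b and g₁ = s₁s₂/b: the two gains are determined by the two chosen roots.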
