
Lecture 25. Introduction to Control

in which we enlarge upon the simple intuitive control we’ve seen

We generally want a system to be in some “equilibrium” state

If the equilibrium is not stable, then we need a control to stabilize the state

I will talk a little bit about this in the abstract, but first,

let me repeat some of what we did Tuesday evening


I want to start on control at this point

We have open loop control and closed loop control

Open loop control is simply:

guess what input u we need to control the system and apply that

Closed loop control sits on top of open loop control in a sense we will shortly see.

In closed loop control we measure the error between what we want and what we have

and adjust the input to reduce the error: feedback control


Cruise control

About the simplest feedback control system we see in everyday life is cruise control

We want to go at a constant speed

If the wind doesn’t change and the road is smooth and level

we can do this with an open loop system

Otherwise we need a closed loop system

Recall the diagram from Lecture 1, and modify it to describe a cruise control


CRUISE CONTROL

[Block diagram. GOAL: SPEED. The open loop part: the desired speed goes through an INVERSE PLANT to produce a nominal fuel flow, which feeds the PLANT (the drive train; input: fuel flow), together with a disturbance, to produce the actual speed. The closed loop part: the actual speed is subtracted from the desired speed to form an error, the error drives the CONTROL, and the resulting feedback fuel flow is added to the nominal fuel flow.]


We have some open loop control — a guess as to the fuel flow

We have some closed loop control — correct the fuel flow if the speed is wrong

It happens that this is not good enough, but let’s just start naively


Simple first order model of a car driving along: drive force, air drag and disturbance

Nonlinear, with an open loop control

If s = 0 and v = v_d, we’ll have an “equilibrium” that determines the open loop f

Split the force and the velocity into two parts (on the way to linearizing)

Substitute into the original ode


Expand v², and cancel the common parts

Linearize by crossing out the square of the departure speed, v’

and the goal is to make v’ go to zero
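The equations themselves are not reproduced in the transcript; a sketch of what they would look like, assuming a quadratic-drag model with drag coefficient c and nominal force f_0 (my labels, not necessarily the lecture's):

m\dot{v} = f - c v^2 + s, \qquad v = v_d + v', \quad f = f_0 + f', \quad f_0 = c v_d^2

m\dot{v}' = f' - 2 c v_d v' - c v'^2 + s \;\approx\; f' - 2 c v_d v' + s \qquad (\text{dropping } c v'^2)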


Let’s say a little about possible disturbances

hills are probably the easiest to deal with analytically

[Figure: car on a hill inclined at angle φ; the component of gravity along the road is mg sin φ.]

I’ll say more as we go on


The open loop picture

[Block diagram: f' and the disturbance enter a summing junction, the linearized drag term is subtracted, and the result passes through 1/m and an integrator to give v'.]


I’m not in a position to simply ask f' to cancel the disturbance

(because I don’t know what it is!)

I need some feedback mechanism to give me

more fuel when I am going too slow

and less fuel when I am going too fast

The linearized equation (still open loop)

Negative feedback from the velocity error
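In the same assumed notation, the linearized open loop equation and the proportional feedback would be

m\dot{v}' + 2 c v_d v' = f' + s, \qquad f' = -K v'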


The closed loop picture

[Block diagram: the open loop picture with a control feedback path added, carrying v' back, with a minus sign, to the f' summing junction.]


We have a one dimensional state

So we have

Its solution
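A sketch of that one dimensional closed loop equation and its solution, in the assumed notation (a = (2 c v_d + K)/m is my abbreviation):

m\dot{v}' = -(2 c v_d + K)\, v' + s(t)

v'(t) = v'(0)\, e^{-a t} + \frac{1}{m}\int_0^t e^{-a(t-\tau)}\, s(\tau)\, d\tau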


We do not need this whole apparatus to get a sense of how this works

Consider a hill, for which s(t) is constant, call it s_0

We can find the particular solution by inspection

The homogeneous solution decays, leaving the particular solution,

and we see that we have a permanent error in the speed

The bigger K, the smaller the error, but we can’t make it go away

(and K will be limited by physical considerations in any case)
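In the assumed notation, the particular solution for s(t) = s_0 would be

v'_p = \frac{s_0}{2 c v_d + K}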


What we’ve done so far is called proportional (P) control

We can fix this problem by adding integral (I) control.

There is also derivative (D) control (and we’ll see that in another example)

PID control incorporates all three types, and you’ll hear the term


Add a variable and its ode

Let the force depend on both variables

Then

rearrange

define k
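A sketch of those steps, assuming the added variable y' is the integral of the velocity error and K_1, K_2 are the proportional and integral gains (my labels; the slide's abbreviation k is presumably one of these coefficient ratios, which I leave explicit):

\dot{y}' = v', \qquad f' = -K_1 v' - K_2 y'

m\dot{v}' + (2 c v_d + K_1)\, v' + K_2\, y' = s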


Convert to state space

We remember that x denotes the error

so the initial condition for this problem is y' = 0 = v': x(0) = {0, 0}^T
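A sketch of the state space form under the same assumptions:

x = \begin{Bmatrix} y' \\ v' \end{Bmatrix}, \qquad
\dot{x} = \begin{bmatrix} 0 & 1 \\ -K_2/m & -(2 c v_d + K_1)/m \end{bmatrix} x
+ \begin{Bmatrix} 0 \\ 1/m \end{Bmatrix} s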


The homogeneous solution (closed loop without the disturbance)

and we see that it will decay as long as K_2 > 0


What happens now when we go up a hill?

This means a nonzero disturbance, and it requires a particular solution

We can now let the displacement take care of the particular solution
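In the assumed notation: setting the derivatives of v' and y' to zero for the constant hill s_0 gives v' = 0 and y' = s_0/K_2, so the integral variable absorbs the hill and the speed error disappears.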


Wait a minute here!

What’s going on!?

Have I pulled a fast one?

Not at all. Let’s think a little bit here.


What did we just do?

What can we say in general?

We did some linearization,

and we changed a one dimensional state into a two dimensional state

We also dealt with disturbances, which is actually an advanced topic

We can look at all of this in Mathematica if we have the time and inclination at the end of this lecture. For now, let’s look at some results for the second order control. I will scale to make the system nondimensional and of general interest.
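The Mathematica session itself is not reproduced here; the following is a minimal Python stand-in, with made-up numbers for the mass, drag, desired speed, hill force, and gains, that shows the proportional-only steady error and its removal by the integral term.

# Minimal sketch of the linearized cruise control, m v'' = f' - 2 c v_d v' + s.
# All numbers are assumptions chosen only for illustration.
m, c, vd = 1000.0, 0.4, 30.0   # mass, drag coefficient, desired speed (assumed)
s0 = -1000.0                   # constant hill disturbance force (assumed)
K1, K2 = 2000.0, 500.0         # proportional and integral gains (assumed)
dt, T = 0.01, 60.0             # time step and simulation length

def simulate(use_integral):
    v_err, y_err = 0.0, 0.0    # velocity error v' and its integral y'
    for _ in range(int(T / dt)):
        f = -K1 * v_err - (K2 * y_err if use_integral else 0.0)  # feedback force f'
        v_dot = (f - 2.0 * c * vd * v_err + s0) / m              # linearized model
        y_err += v_err * dt                                      # y' integrates v'
        v_err += v_dot * dt                                      # explicit Euler step
    return v_err

print("steady speed error, P control :", simulate(False))  # roughly s0/(2 c vd + K1)
print("steady speed error, PI control:", simulate(True))   # essentially zero

Running it prints a permanent speed error of about s_0/(2 c v_d + K_1) for P control alone and essentially zero once the integral term is included.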


The scaled response to a constant hill is shown in the figure.


Suppose we have a more varied terrain?

Let

scaled response


The control still works very well, and tracks nicely once it is in place.

Let’s look at a much more complicated roadway


The scaled response looks like the scaled forcing

and the velocity error is minuscule — this really works



What did we do here?

We started with a one dimensional system

and tried to find a force to cancel an exterior force

That didn’t work

We added a variable to the mix

found a new feedback

got the velocity to be controlled at the expense of its integral

about which we don’t care very much


Now let’s review this in a more abstract and general sense

(without worrying about disturbances for now)


Consider a basic single input linear system

We need a steady goal, for which

Typically u_d = 0 and x_d = 0, and I’ll assume that to be the case here

(x_d might be the state corresponding to an inverted pendulum pointing up)
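A reconstruction of the system and the steady-goal condition in the notation the surrounding slides use:

\dot{x} = A x + b u, \qquad A x_d + b u_d = 0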


We can look at departures from the desired state (errors)

where, remember, we want x' → 0

If u’ = 0, then the system is governed by

and its behavior depends on the eigenvalues of A

if they all have negative real parts, then x' → 0

We call that a stable system
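A sketch of the departure (error) equations referred to above:

x' = x - x_d, \quad u' = u - u_d, \qquad \dot{x}' = A x' + b u'

u' = 0: \quad \dot{x}' = A x' \;\Rightarrow\; x'(t) = e^{A t}\, x'(0)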


If we have a stable system, we don’t need to control it

although we might want to add a control to make it more stable

If it is not stable, then we must add feedback control to make it stable

The behavior of this closed loop system depends on the eigenvalues of
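Assuming the usual linear feedback law (the gain vector g is introduced just below):

u' = -g^{T} x' \quad\Rightarrow\quad \dot{x}' = \left(A - b\, g^{T}\right) x'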


We can get the eigenvalues of the closed loop matrix

by forming the determinant of
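That determinant, as a reconstruction, gives the closed loop characteristic equation

\det\!\left(s I - A + b\, g^{T}\right) = 0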

The terms in this equation will depend on the values of the gains

which are the components of the gain vector g

You might imagine that, since there will be as many of these as there are roots,

we can make the eigenvalues be anything we please.

This is often, but not always, true. We’ll learn more about this in Lecture 31.



Last time we looked at an electric motor

and set it up as a second order system without being too specific

Make a vector equation out of this

or
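The motor equations are not reproduced in the transcript; a common model consistent with the later discussion (torque/back-EMF constant K, resistance R, disk inertia I_x, input voltage V; an assumption on my part) would be

\dot{\theta} = \omega, \qquad I_x \dot{\omega} = \frac{K}{R}\left(V - K\omega\right)

\frac{d}{dt}\begin{Bmatrix}\theta\\ \omega\end{Bmatrix}
= \begin{bmatrix} 0 & 1 \\ 0 & -K^2/(R I_x)\end{bmatrix}
\begin{Bmatrix}\theta\\ \omega\end{Bmatrix}
+ \begin{Bmatrix} 0 \\ K/(R I_x)\end{Bmatrix} V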


Suppose we want the angle to be fixed at π/3

The desired state satisfies the differential equations with no input voltage


We can write the differential equations for the primed quantities

We want the perturbations to go to zero

Is the homogeneous solution stable?

What are the eigenvalues of A?
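Under the model sketched above,

\det\!\left(s I - A\right) = s\left(s + \frac{K^2}{R I_x}\right) = 0 \;\Rightarrow\; s = 0 \;\text{ or }\; s = -\frac{K^2}{R I_x}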


There is one stable root and one marginally stable root (s = 0).

The homogeneous solution in terms of the eigenvectors is


If the initial value of θ' is not equal to its desired value (here 0)

then it will not ever get there.

The problem as posed is satisfied for everything equal to zero

but that’s not good enough

We need control.

If θ' is too big we want to make it smaller, and vice versa

Let’s look at this in block diagram mode



We can close the loop by feeding the θ' signal back to the input

[Block diagram, closed loop: the open loop motor with a feedback loop carrying θ' back, with a minus sign, to the voltage input; ω' and θ' are the two signals in the loop.]


So we have new equations governing the closed loop system

There’s no disturbance, so the equations are homogeneous
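Under the same assumed model, feeding back V = -g θ' gives

\dot{\theta}' = \omega', \qquad I_x \dot{\omega}' = -\frac{K}{R}\left(g\,\theta' + K\omega'\right)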


We’ve gone from an inhomogeneous set of equations to a homogeneous set

This is what closing the loop does; there’s no more undetermined external input.

We want θ' and ω' to go to zero,

and that will depend on the eigenvalues of the new system


This will converge to zero for any positive g

Let’s put in some numbers: K = 0.429, R = 2.71, I_x = 0.061261 (10 cm steel disk)

We are overdamped for small g and underdamped for large g

We can get at the behavior by applying what we know about homogeneous problems


The eigenvectors

I will select the gain g = 0.2379 (to make some things come out nicely)

This leads to the eigenvalues s = -0.544 ± 0.544j


The homogeneous solution for the closed loop system is

With the numbers we have


At t = 0, we have

A little algebra

Now we have the complete solution in terms of the initial conditions


Let’s plot this and see what happens for θ'_0 = π/3 and ω'_0 = 0


I did this in something of an ad hoc fashion that did not really illustrate the general principle

Let me go back and repeat it more formally

I can clean up the algebra a bit with a definition


Let me define a gain vector

If the output is the angle, then g_1 is a proportional gain and g_2 a derivative gain

The matrix

and the closed loop characteristic polynomial is the determinant of
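A sketch of those quantities under the assumed motor model, with the voltage feedback V = -g_1 θ' - g_2 ω':

g = \begin{Bmatrix} g_1 \\ g_2 \end{Bmatrix}, \qquad
s I - A + b\, g^{T} = \begin{bmatrix} s & -1 \\ \dfrac{K g_1}{R I_x} & \; s + \dfrac{K(K + g_2)}{R I_x} \end{bmatrix}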


which I can expand

Denote the roots of this by s_1 and s_2

from which
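A reconstruction of that expansion and the matching of coefficients, still under the assumed model:

s^2 + \frac{K(K + g_2)}{R I_x}\, s + \frac{K g_1}{R I_x}
= (s - s_1)(s - s_2) = s^2 - (s_1 + s_2)\, s + s_1 s_2

g_1 = \frac{R I_x}{K}\, s_1 s_2, \qquad g_2 = -\frac{R I_x}{K}(s_1 + s_2) - K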


I can choose any values of s_1 and s_2 and find g_1 and g_2 so that they will be the roots.


That’s what we needed to do today

It was, maybe, a lot.

Let it sink in and we’ll deal with questions as we go along

