

Feedback: Still the simplest and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Xi’an, May 2009.





Presentation Transcript


  1. Feedback: Still the simplest and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. Xi’an, May 2009

  2. [Photos: Trondheim, Norway, and Xi’an]

  3. [Map: Trondheim on the Norwegian coast, with Oslo, the North Sea, the Arctic circle, Sweden, Denmark, Germany and the UK for reference]

  4. NTNU, Trondheim

  5. Outline • I. Why feedback (and not feedforward) ? • The feedback amplifier • II. Self-optimizing control: • How do we link optimization and feedback? • What should we control? • III. Stabilizing feedback control: • Anti-slug control • Conclusion

  6. Example: AMPLIFIER. [Block diagram: r → amplifier G → y] • Want: y(t) = α r(t) • Solution 1 (feedforward): G = α (adjust amplifier gain) • Very difficult in practice: • Cannot get exact value of α • Cannot easily adjust α online • Do not get the same amplification at all frequencies • Problems with distortion and nonlinearity

  7. Black’s feedback amplifier (1927). [Block diagram: r → amplifier G → y, with measured y fed back through K2] Want: y(t) = α r(t). Solution 2 (feedback): G = k (any large amplifier gain, k >> α). K2 = 1/α (adjustable). Closed-loop response y/r = G/(1 + G K2) ≈ 1/K2 = α. MAGIC! Independent of G, provided GK2 >> 1
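The "magic" on this slide can be checked numerically. A minimal sketch (the numbers are illustrative, not from the talk): the static closed-loop gain y/r = G/(1 + G·K2) approaches 1/K2 = α as the forward gain grows, so the poorly known amplifier gain G hardly matters.

```python
def closed_loop_gain(G, K2):
    """Static closed-loop gain of Black's amplifier: y/r = G / (1 + G*K2)."""
    return G / (1.0 + G * K2)

alpha = 10.0       # desired amplification
K2 = 1.0 / alpha   # the only component that must be precise

# The forward gain G may be huge and poorly known -- it barely matters:
for G in (1e3, 1e4, 1e5):
    print(G, closed_loop_gain(G, K2))
```

Each factor-of-ten increase in G moves the closed-loop gain an order of magnitude closer to α, which is exactly the "independent of G, provided GK2 >> 1" claim.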

  8. Example: disturbance rejection. [Block diagram: disturbance d → Gd and input u → G, summed to output y; response plot with k = 10] Plant (uncontrolled system)

  9. 1. Feedforward control (measure d). [Block diagram: d → Gd, u → G, summed to y] "Perfect" feedforward control: u = -G^{-1} Gd d. Our case: G = Gd → use u = -d
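The feedforward law can be illustrated with a static sketch (illustrative gains, not from the slides): with a perfect model the disturbance is cancelled exactly, but the cancellation relies entirely on the model.

```python
def output(G, Gd, u, d):
    """Static plant response: y = G*u + Gd*d."""
    return G * u + Gd * d

Gd, d = 10.0, 1.0
# Perfect model (true G equals model Gd): u = -d cancels the disturbance
print(output(10.0, Gd, -d, d))   # 0.0
# Gain error (true G = 5): the same feedforward move leaves an offset
print(output(5.0, Gd, -d, d))    # 5.0
```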

  10. 1. Feedforward control: Nominal case (perfect model). [Block diagram and response plot]

  11. 2. Feedback control. [Block diagram: setpoint ys → error e → controller C → u → plant G, with disturbance d → Gd added at the output y]

  12. 2. Feedback PI control: Nominal case. [Plots of output y and input u] Feedback generates the inverse! Resulting output
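The PI loop on this slide can be sketched with a simple Euler simulation. This is a sketch with assumed values: a first-order plant with k = 10 and τ = 10 (the nominal values used in the robustness slides below) and SIMC-style PI tuning with τc = 1; none of the tuning numbers come from the talk itself.

```python
def simulate_pi(k=10.0, tau=10.0, d=1.0, ys=0.0, dt=0.01, t_end=60.0):
    """Euler simulation of a first-order plant tau*dy/dt = -y + k*u + k*d
    under PI feedback. Tuning: SIMC-style with tau_c = 1 (assumed values)."""
    Kc = tau / (k * 1.0)          # proportional gain, Kc = tau/(k*tau_c)
    tauI = min(tau, 4.0)          # integral time, tauI = min(tau, 4*tau_c)
    y, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = ys - y                # control error
        integral += e * dt
        u = Kc * (e + integral / tauI)
        y += dt * (-y + k * u + k * d) / tau
    return y

print(simulate_pi())   # close to the setpoint 0 despite the step disturbance
```

The integral term is what "generates the inverse": it keeps pushing u until y returns to the setpoint, with no plant inversion written anywhere in the controller.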

  13. Robustness comparison • Gain error, k = 5, 10 (nominal), 20 • Time constant error, τ = 5, 10 (nominal), 20 • Time delay error, θ = 0 (nominal), 1, 2, 3

  14. Robustness: Gain error, k = 5, 10 (nominal), 20 1. FEEDFORWARD 2. FEEDBACK

  15. Robustness: Time constant error, τ = 5, 10 (nominal), 20 1. FEEDFORWARD 2. FEEDBACK

  16. Robustness: Time delay error, θ = 0 (nominal), 1, 2, 3 1. FEEDFORWARD 2. FEEDBACK
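The gain-error comparison above can be reduced to a steady-state calculation (a sketch; the nominal gains k = kd = 10 match the example, and the feedback side is assumed to have integral action):

```python
def ff_ss(k, k_model=10.0, kd=10.0, d=1.0):
    """Steady-state output under model-based feedforward u = -(kd/k_model)*d,
    when the true plant gain is k (nominal k = 10, as in the example)."""
    u = -(kd / k_model) * d
    return k * u + kd * d

# Feedforward: the steady-state offset grows with the gain error
for k in (5.0, 10.0, 20.0):
    print(k, ff_ss(k))
# Feedback with integral action gives zero steady-state offset for any
# (sign-correct) k, since the loop gain is infinite at steady state.
```

This is the pattern the plots show: feedforward degrades in proportion to the model error, while feedback always returns to the setpoint.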

  17. Conclusion: Why feedback? (and not feedforward control) • Simple: High-gain feedback! • Counteract unmeasured disturbances • Reduce effect of changes / uncertainty (robustness) • Change system dynamics (including stabilization) • Linearize the behavior • No explicit model required • MAIN PROBLEM: Potential instability (may occur “suddenly”) with time delay / RHP zero. An unstable (RHP) zero is a fundamental problem with feedback! A detailed model + state estimator (Kalman filter) does not help…

  18. Outline • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: • How do we link optimization and feedback? • What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  19. Optimal operation (economics) • Define scalar cost function J(u0, x, d) • u0: degrees of freedom • d: disturbances • x: states (internal variables) • Optimal operation for given d. Dynamic optimization problem: min_u0 J(u0, x, d) subject to: Model: f(u0, x, d) = 0. Constraints: g(u0, x, d) ≤ 0. Here: How do we implement optimal operation?
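A toy instance of this optimization problem may help fix ideas. The cost function below is hypothetical, chosen only so the optimum has a closed form; it stands in for the general J(u0, x, d) with the model already substituted in.

```python
def J(u, d):
    """Hypothetical quadratic cost (illustrative, not from the talk)."""
    return (u - d) ** 2 + 0.5 * u ** 2

def u_opt(d):
    """Analytic optimum: dJ/du = 2*(u - d) + u = 3u - 2d = 0."""
    return 2.0 * d / 3.0

# The optimal input moves with the disturbance -- the implementation question
# is how to track u_opt(d) without solving this problem online.
for d in (1.0, 2.0):
    print(d, u_opt(d), J(u_opt(d), d))
```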

  20. ”Obvious” solution: Optimizing control = ”Feedforward”. Estimate d and compute new uopt(d). Problem: Complicated and sensitive to uncertainty

  21. 2. In Practice: Feedback implementation Issue: What should we control?

  22. Process control hierarchy. [Diagram: RTO (economics) passes setpoints y1 = c? down to MPC, which passes setpoints to PID]

  23. What should we control? • CONTROL ACTIVE CONSTRAINTS! • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • But what about the remaining unconstrained degrees of freedom? • Look for “self-optimizing” controlled variables!

  24. Self-optimizing Control • Definition: Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need for re-optimizing when disturbances occur). c = cs
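The "acceptable loss" in this definition can be made concrete with a toy quadratic cost (hypothetical, J = (u - d)² + 0.5 u²; all numbers are illustrative). The constant-setpoint policy keeps u at the value that is optimal for the nominal disturbance d = 1, and the loss is the cost increase relative to re-optimizing for the actual d.

```python
def J(u, d):
    """Hypothetical quadratic cost J = (u - d)**2 + 0.5*u**2 (illustrative)."""
    return (u - d) ** 2 + 0.5 * u ** 2

def loss(u_s, d):
    """Loss of the constant-setpoint policy u = u_s versus re-optimizing."""
    u_opt = 2.0 * d / 3.0          # analytic optimum of J for this d
    return J(u_s, d) - J(u_opt, d)

u_s = 2.0 / 3.0                    # setpoint optimal for the nominal d = 1
for d in (1.0, 1.2, 1.5):
    print(d, loss(u_s, d))         # zero at nominal, small nearby
```

If the loss stays acceptable over the expected disturbance range, the variable held constant qualifies as self-optimizing.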

  25. Optimal operation – Runner • Cost: J=T • One degree of freedom (u=power) • Optimal operation?

  26. Optimal operation - Runner. Solution 1: Optimizing control • Even getting a reasonable model requires > 10 PhDs … and the model has to be fitted to each individual… • Clearly impractical!

  27. Optimal operation - Runner. Solution 2 – Feedback (Self-optimizing control) • What should we control?

  28. Optimal operation - Runner Self-optimizing control: Sprinter (100m) • 1. Optimal operation of Sprinter, J=T • Active constraint control: • Maximum speed (”no thinking required”)

  29. Optimal operation - Runner. Self-optimizing control: Marathon (42 km) • Optimal operation of Marathon runner, J = T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  30. Optimal operation - Runner. Conclusion Marathon runner: select one measurement, c = heart rate • Simple and robust implementation • Disturbances are indirectly handled by keeping a constant heart rate • May have infrequent adjustment of setpoint (heart rate)

  31. Optimal operation: Unconstrained optimum. [Plot: cost J versus controlled variable c, with minimum Jopt at copt]

  32. Optimal operation: Unconstrained optimum. [Plot: cost J versus c; disturbance d shifts the optimum, implementation error n shifts the operating point] Two problems: • 1. Optimum moves because of disturbances d: copt(d) • 2. Implementation error, c = copt + n

  33. Candidate controlled variables c for self-optimizing control (unconstrained optimum). Intuitive: • The optimal value of c should be insensitive to disturbances (avoid problem 1) • Ideal self-optimizing variable is the gradient, c = Ju • Optimal value is always Ju = 0 (the gradient changes sign at the optimum) • Optimum should be flat (avoid problem 2 – implementation error). Equivalently: Value of c should be sensitive to the degrees of freedom u. • “Want large gain”, |G| • Or more generally: Maximize the minimum singular value of G
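The "ideal variable is the gradient" point can be sketched with a toy quadratic cost (hypothetical, J = (u - d)² + 0.5 u²; not from the talk): holding c = Ju at setpoint zero reproduces the optimal input for every disturbance, so the loss is zero by construction.

```python
def Ju(u, d):
    """Gradient of the hypothetical cost J = (u - d)**2 + 0.5*u**2."""
    return 2.0 * (u - d) + u       # = 3u - 2d

def u_grad_control(d):
    """Feedback implementation of c = Ju at setpoint 0: solve 3u - 2d = 0."""
    return 2.0 * d / 3.0

# Holding the gradient at zero recovers the optimum for every disturbance
for d in (0.5, 1.0, 2.0):
    print(d, Ju(u_grad_control(d), d))   # ~0 each time
```

In practice the gradient is not measurable, which is why the slides go on to look for measurable variables (or combinations) that behave like it.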

  34. Quantitative steady-state: Maximum gain rule (unconstrained optimum). Maximum gain rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the scaled gain, i.e. the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c
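For a scalar system the rule follows from the local loss expression L = ½ Juu (n/G)², where n is the implementation error in c and G = dc/du is the steady-state gain: the same error n costs less when the gain is large. A sketch with illustrative numbers (not from the talk):

```python
def loss_from_error(Juu, G, n):
    """Local loss from an implementation error n in c (scalar case):
    L = 0.5 * Juu * (n / G)**2, with G = dc/du the steady-state gain."""
    return 0.5 * Juu * (n / G) ** 2

Juu, n = 3.0, 0.1                      # illustrative curvature and error
print(loss_from_error(Juu, 1.0, n))    # low-gain candidate variable
print(loss_from_error(Juu, 10.0, n))   # 10x the gain -> 100x smaller loss
```

The multivariable version replaces |G| by the minimum singular value of the scaled gain matrix, which is the statement on the slide.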

  35. Proof: Local analysis (unconstrained optimum). [Plot: cost J versus u, minimum at uopt] c = G u

  36. Optimal measurement combinations (unconstrained optimum). Exact solutions for quadratic optimization problems: • 1. Nullspace method: no loss for disturbances (d) • 2. Generalized (with noise n) • c = Hy can be considered as linear invariants for the quadratic optimization problem – which can be used for feedback implementation of the optimal solution! • Application: Explicit MPC * V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009)
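The nullspace method is easy to sketch for the smallest non-trivial case, ny = 2 measurements and nd = 1 disturbance (the sensitivity F below is an illustrative number, not from the paper): any H with HF = 0, where F = dy_opt/dd is the optimal sensitivity, makes c = Hy invariant to d at the optimum.

```python
def nullspace_H(F):
    """Nullspace method for ny = 2 measurements, nd = 1 disturbance:
    return a 1x2 H with H @ F = 0, namely H = [F[1], -F[0]]."""
    return [F[1], -F[0]]

F = [2.0, 5.0]        # hypothetical optimal sensitivity F = dy_opt/dd
H = nullspace_H(F)
# c = H*y then satisfies c_opt(d) = H*(y* + F*d) = H*y*: independent of d
print(H, H[0] * F[0] + H[1] * F[1])   # H*F = 0.0
```

Holding c = Hy at the constant setpoint H·y* then gives zero disturbance loss, which is the "no loss for disturbances" claim; the generalized version trades this off against measurement noise n.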

  37. Example: CO2 refrigeration cycle. [Pressure-enthalpy (p-h) diagram] • J = Ws (work supplied) • DOF = u (valve opening, z) • Main disturbances: • d1 = TH • d2 = TCs (setpoint) • d3 = UAloss • What should we control?

  38. CO2 cycle: Maximum gain rule

  39. Conclusion CO2 refrigeration cycle. Self-optimizing c = “temperature-corrected high pressure”

  40. Outline • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • IV. Conclusion

  41. Application of stabilizing feedback control: Anti-slug control. Two-phase pipe flow (liquid and vapor). Slug (liquid) buildup

  42. Slug cycle (stable limit cycle) Experiments performed by the Multiphase Laboratory, NTNU

  43. Experimental mini-loop

  44. Experimental mini-loop: Valve opening (z) = 100%. [Diagram of loop with valve z and pressures p1, p2; response plot]

  45. Experimental mini-loop: Valve opening (z) = 25%. [Response plot]

  46. Experimental mini-loop: Valve opening (z) = 15%. [Response plot]

  47. Experimental mini-loop: Bifurcation diagram. [Plot: pressure versus valve opening z %; no-slug branch and slugging branch]

  48. Avoid slugging? • Operate away from optimal point • Design changes • Feedforward control? • Feedback control?

  49. Avoid slugging: 1. Close valve (design change; but increases pressure). [Bifurcation plot: no slugging when the valve is closed, versus valve opening z %]
