Presentation Transcript


  1. Feedback: The simple and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. Stuttgart, July 2006

  2. Abstract • Feedback: The simple and best solution • Applications to self-optimizing control and stabilization of new operating regimes • Sigurd Skogestad, NTNU, Trondheim, Norway • Most chemical engineers are (indirectly) trained to be “feedforward thinkers” and immediately think of “model inversion” when it comes to doing control. Thus, they prefer to rely on models instead of data, although simple feedback solutions are in many cases much simpler and certainly more robust. The seminar starts with a simple comparison of feedback and feedforward control and their sensitivity to uncertainty. Then two nice applications of feedback are considered: 1. Implementation of optimal operation by “self-optimizing control”. The idea is to turn optimization into a setpoint control problem, and the trick is to find the right variable to control. Applications include process control, pizza baking, marathon running, biology and the central bank of a country. 2. Stabilization of desired operating regimes. Here feedback control can lead to completely new and simple solutions. One example would be stabilization of laminar flow at conditions where we normally have turbulent flow. In the seminar a nice application to anti-slug control in multiphase pipeline flow is discussed.

  3. Outline • About Trondheim • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion • More information: Skogestad on google

  4. Trondheim, Norway

  5. (Map: Trondheim on the Norwegian coast north of Oslo, with the Arctic circle, North Sea, Sweden, Denmark, Germany and the UK marked)

  6. NTNU, Trondheim

  7. Outline • About Trondheim • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  8. Example 1: Plant (uncontrolled system). (Block diagram: input u through G and disturbance d through Gd give output y; response plot with k = 10, time axis to 25)

  9. (Block diagram: disturbance d through Gd and input u through G give output y)

  10. Model-based control = Feedforward (FF) control. ”Perfect” feedforward control: u = -G⁻¹Gd d. Our case: G = Gd → use u = -d
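The inversion idea above can be checked with a quick steady-state calculation. A minimal sketch: the gain k = 10 is taken from slide 8, while the 20% model-gain error is an assumed value chosen to illustrate the sensitivity shown on the following slides.

```python
# Steady-state effect of a gain error on "perfect" feedforward control.
# Assumption (from the slides): G = Gd, here both with gain k = 10.
k = 10.0          # true plant gain (for both G and Gd)
d = 1.0           # unit step disturbance

# Nominal feedforward u = -Ghat^{-1} Gd d with a perfect model (Ghat = G):
u = -d
y_nominal = k*u + k*d          # 0: disturbance perfectly cancelled

# Same law designed on a model whose gain is 20% too high (Ghat = 1.2 G):
u_err = -(1/1.2)*d
y_err = k*u_err + k*d          # k*d*(1 - 1/1.2) ≈ 1.67: offset remains

print(y_nominal, y_err)
```

The cancellation is only as good as the model: a 20% gain error leaves about 17% of the disturbance effect uncorrected, permanently.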

  11. Feedforward control: Nominal (perfect model)

  12. Feedforward: sensitive to gain error

  13. Feedforward: sensitive to time constant error

  14. Feedforward: moderately sensitive to delay (in G or Gd)

  15. Measurement-based correction = Feedback (FB) control. (Block diagram: setpoint ys, error e, controller C, input u, plant G, disturbance d through Gd, output y)

  16. Feedback PI-control: Nominal case. Feedback generates the inverse! (Plots: input u, output y and resulting output)

  17. Feedback PI control: insensitive to gain error
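The gain-error insensitivity can be reproduced in a minimal simulation sketch. All numbers below are assumed for illustration (not from the slides): a PI controller tuned on a model whose gain is 50% too low still removes the disturbance, because integral action keeps adjusting u until the error is zero.

```python
# PI feedback on a first-order plant with a large plant/model gain mismatch.
k_true, tau = 15.0, 1.0    # actual plant: dy/dt = (-y + k_true*(u + d))/tau
k_model = 10.0             # model gain assumed when tuning (50% too low)
Kc, tauI = 0.2, 1.0        # PI tuning based on k_model, with tauI = tau

dt, T = 0.001, 20.0        # Euler integration step and horizon
y, I, d, ys = 0.0, 0.0, 1.0, 0.0   # output, integral, disturbance, setpoint
for _ in range(int(T/dt)):
    e = ys - y
    I += e*dt
    u = Kc*(e + I/tauI)    # PI control law
    y += dt*(-y + k_true*(u + d))/tau

print(abs(y))              # essentially zero: offset removed despite the error
```

Unlike the feedforward case, the gain error only changes the closed-loop speed, not the final value: the integrator keeps pushing until y = ys.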

  18. Feedback: insensitive to time constant error

  19. Feedback control: sensitive to time delay

  20. Comment • Time delay error in disturbance model (Gd): No effect (!) with feedback (except time shift) • Feedforward: Similar effect as time delay error in G

  21. Conclusion: Why feedback? (and not feedforward control) • Simple: High gain feedback! • Counteract unmeasured disturbances • Reduce effect of changes / uncertainty (robustness) • Change system dynamics (including stabilization) • Linearize the behavior • No explicit model required • MAIN PROBLEM • Potential instability (may occur “suddenly”) with time delay/RHP-zero

  22. Problem with feedback: Effective delay θ • Effective delay with PI-control = ”original delay” + ”inverse response” + ”half of second time constant” + ”all smaller time constants” • Example: Series cascade of five first-order systems
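The rule above (the SIMC “half rule”) can be applied to the five-lag example directly. A sketch assuming five equal time constants of 1 s each, consistent with the 3.5 s quoted on slide 24:

```python
# SIMC "half rule" effective delay for a series of first-order lags.
# Assumed example: five equal lags with tau = 1 s each, no true delay.
taus = sorted([1.0]*5, reverse=True)   # time constants, largest first
theta0 = 0.0                           # no original delay or inverse response

# Keep the largest tau as the model time constant; add half of the second
# largest and all of the smaller ones to the effective delay.
theta_eff = theta0 + 0.5*taus[1] + sum(taus[2:])
print(theta_eff)   # 3.5
```

So for PI control the cascade behaves roughly like a first-order system with time constant 1 s plus an effective delay of 3.5 s.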

  23. PI-control with single measurement

  24. Improve control? • Some improvement possible with more complex controller • For example, add derivative action (PID-controller) • May reduce θeff from 3.5 s to 2.5 s • Problem: Sensitive to measurement noise • Does not remove the fundamental limitation (recall waterbed) • Add extra measurement and introduce local control • May remove the fundamental waterbed limitation • Waterbed limitation does not apply to first-order system • Cascade

  25. Add four local P-controllers (inner cascades)

  26. Further layers: Process control hierarchy (RTO → MPC → PID)

  27. Example amazing power of feedback: MIMO Diagonal plant • Simulations (and for tuning): Add delay 0.5 in each input • Simulations setpoint changes: r1=1 at t=0 and r2=1 at t=20 • Performance: Want |y1-r1| and |y2-r2| less than 1 • Clear that diagonal pairings are preferred

  28. Diagonal pairings Get two independent subsystems:

  29. Diagonal pairings.... Simulation with delay included:

  30. Off-diagonal pairings (!!?) Pair on two zero elements !! Loops do not work independently! But there is some effect when both loops are closed:

  31. Off-diagonal pairings for diagonal plant • Example: Want to control temperature in two completely different rooms (which may even be located in different countries). BUT: • Room 1 is controlled using heat input in room 2 (?!) • Room 2 is controlled using heat input in room 1 (?!)

  32. Off-diagonal pairings.... Controller design difficult. After some trial and error: • Performance quite poor, but it works because of the “hidden” feedback loop through g12, g21, k1 and k2!! • No failure tolerance

  33. Outline • About Trondheim • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  34. Optimal operation (economics) • Define scalar cost function J(u0, d) • u0: degrees of freedom • d: disturbances • Optimal operation for given d: min over u0 of J(u0, x, d) subject to the model equations f(u0, x, d) = 0 and operational constraints g(u0, x, d) ≤ 0
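This formulation can be made concrete with a toy cost and constraint, both assumed purely for illustration (the model f = 0 is taken as already eliminated, so x does not appear), solved by a crude grid search over u0:

```python
# Optimal operation as constrained minimization: min_u J(u,d) s.t. g(u,d) <= 0.
# J and g below are assumed toy functions, not from the seminar.
def J(u, d): return (u - d)**2 + 0.1*u   # toy operating cost
def g(u, d): return u - 2.0              # operational constraint: u <= 2

d = 1.0                                  # given disturbance
candidates = [i/1000 for i in range(-2000, 2001)]        # crude grid over u
feasible = [u for u in candidates if g(u, d) <= 0]       # enforce g <= 0
u_opt = min(feasible, key=lambda u: J(u, d))             # pick cheapest point

print(u_opt)   # 0.95, where the cost gradient 2*(u - d) + 0.1 vanishes
```

The point of the following slides is that recomputing u_opt like this whenever d changes (“optimizing control”) is exactly the complicated, uncertainty-sensitive route one would like to avoid.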

  35. ”Obvious” solution: Optimizing control = ”Feedforward”. Estimate d and compute new uopt(d). Problem: Complicated and sensitive to uncertainty

  36. Engineering systems • Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers • Commercial aircraft • Large-scale chemical plant (refinery) • 1000’s of loops • Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward Same in biological systems

  37. In Practice: Feedback implementation Issue: What should we control?

  38. In Practice: Feedback implementation Issue: What should we control?

  39. Further layers: Process control hierarchy (RTO → MPC → PID). What should the economic layer control: y1 = c ? (economics)

  40. Implementation of optimal operation • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • CONTROL ACTIVE CONSTRAINTS! • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • We here concentrate on the remaining unconstrained degrees of freedom.

  41. Optimal operation. (Sketch: cost J versus controlled variable c, with minimum Jopt at copt)

  42. Optimal operation. Two problems: • 1. Optimum moves because of disturbances d: copt(d). (Sketch: the cost curve shifts with d, so a fixed c gives an optimization error and a LOSS above Jopt)

  43. Optimal operation. Two problems: • 1. Optimum moves because of disturbances d: copt(d) • 2. Implementation error, c = copt + n. (Sketch: the offset n from copt gives a LOSS above Jopt)

  44. Optimal operation. WANT FLAT OPTIMUM TO REDUCE EFFECT OF IMPLEMENTATION ERROR. Two problems: • 1. Optimum moves because of disturbances d: copt(d) • 2. Implementation error, c = copt + n
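The value of a flat optimum can be quantified for a cost that is locally quadratic around copt: the loss caused by an implementation error n is L = ½·Juu·n², so a flatter optimum (smaller curvature Juu) directly shrinks the loss. The two Juu values below are assumed for illustration.

```python
# Loss from implementation error n for a locally quadratic cost,
# J ≈ Jopt + 0.5*Juu*(c - copt)**2.  Juu values are assumed examples.
n = 0.1                          # implementation error: c = copt + n
losses = {Juu: 0.5*Juu*n**2 for Juu in (100.0, 1.0)}
print(losses)                    # sharp optimum ≈ 0.5, flat optimum ≈ 0.005
```

With the same measurement error, the flat optimum loses 100 times less: the curvature, not the error itself, decides how much implementation error hurts.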

  45. Self-optimizing Control • Define loss: L = J(u, d) - Jopt(d) • Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need for re-optimizing when disturbances occur): c = cs
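The loss definition can be exercised on a toy cost, assumed for illustration (not from the slides): J(u, d) = (u - d)², so uopt(d) = d and Jopt(d) = 0. Holding the input constant incurs a loss when d moves, while controlling a variable whose optimal value is disturbance-independent (here the cost gradient, kept at zero) gives zero loss.

```python
# Loss L = J(u,d) - Jopt(d) for two candidate controlled variables.
def J(u, d): return (u - d)**2      # toy cost: u_opt(d) = d, Jopt(d) = 0
def Ju(u, d): return 2*(u - d)      # cost gradient: an ideal CV (setpoint 0)

d = 0.5                             # disturbance away from the nominal d = 0

# Candidate 1: keep the input constant at its nominal optimum, u = 0
loss1 = J(0.0, d) - 0.0

# Candidate 2: control c = Ju to setpoint 0 (here solved exactly: u = d)
u2 = d
loss2 = J(u2, d) - 0.0

print(loss1, loss2)                 # 0.25 versus 0.0
```

Candidate 2 is “self-optimizing”: a constant setpoint (c = 0) stays optimal for every d, so no re-optimization is needed when disturbances occur.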

  46. Self-optimizing Control – Sprinter • Optimal operation of Sprinter (100 m), J=T • Active constraint control: • Maximum speed (”no thinking required”)

  47. Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)?

  48. Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  49. Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race (Problem: Feasibility for d) • c2 = speed (Problem: Feasibility for d) • c3 = heart rate (Problem: Impl. Error n) • c4 = level of lactate in muscles (=Pain! Used in practice)

  50. Further examples • Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%) • Cake baking. J = nice taste, u = heat input. c = temperature (200 °C) • Business. J = profit. c = ”Key performance indicator” (KPI), e.g. • Response time to order • Energy consumption per kg or unit • Number of employees • Research spending. Optimal values obtained by ”benchmarking” • Investment (portfolio management). J = profit. c = fraction of investment in shares (50%) • Biological systems: • ”Self-optimizing” controlled variables c have been found by natural selection • Need to do ”reverse engineering”: • Find the controlled variables used in nature • From this possibly identify what overall objective J the biological system has been attempting to optimize
