

Feedback: The simple and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. University of Alberta, Edmonton, 29 April 2004.





Presentation Transcript


  1. Feedback: The simple and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. University of Alberta, Edmonton, 29 April 2004

  2. Outline • About myself • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  3. Trondheim, Norway

  4. (Map of Northern Europe showing Trondheim's location in NORWAY, with Oslo, the Arctic circle, the North Sea, SWEDEN, DENMARK, GERMANY and the UK marked.)

  5. NTNU, Trondheim

  6. Sigurd Skogestad • Born in 1955 • 1978: Siv.ing. Degree (MS) in Chemical Engineering from NTNU (NTH) • 1980-83: Process modeling group at the Norsk Hydro Research Center in Porsgrunn • 1983-87: Ph.D. student in Chemical Engineering at Caltech, Pasadena, USA. Thesis on “Robust distillation control”. Supervisor: Manfred Morari • 1987 - : Professor in Chemical Engineering at NTNU • Since 1994: Head of process systems engineering center in Trondheim (PROST) • Since 1999: Head of Department of Chemical Engineering • 1996: Book “Multivariable feedback control” (Wiley) • 2000,2003: Book “Prosessteknikk” (Norwegian) • Group of about 10 Ph.D. students in the process control area

  7. Research: Develop simple yet rigorous methods to solve problems of engineering significance. • Use of feedback as a tool to (i) reduce uncertainty (including robust control), (ii) change the system dynamics (including stabilization; anti-slug control), and (iii) generally make the system more well-behaved (including self-optimizing control) • Limitations on performance in linear systems (“controllability”) • Control structure design and plantwide control • Interactions between process design and control • Distillation column design, control and dynamics • Natural gas processes

  8. Outline • About myself • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  9. Example 1: Plant (uncontrolled system). (Block diagram: disturbance d enters through Gd and input u through G, giving output y; response plot with k = 10 over a 25-unit time axis.)

  10. (Block diagram: d → Gd, u → G, output y.)

  11. Model-based control = Feedforward (FF) control. (Block diagram: d → Gd, u → G, output y.) “Perfect” feedforward control: u = -G⁻¹Gd d. Our case: G = Gd → use u = -d
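
To make the feedforward idea concrete, here is a minimal simulation sketch. It assumes a first-order model G(s) = Gd(s) = k/(tau·s + 1) with k = 10 and tau = 10 chosen only to mimic the example; the slides do not state the exact transfer functions.

```python
# Minimal sketch (assumed model): first-order plant y' = (-y + k*(u + d))/tau,
# i.e. G(s) = Gd(s) = k/(tau*s + 1). k = 10, tau = 10 are illustrative guesses.
import numpy as np

k, tau = 10.0, 10.0
dt, T = 0.01, 25.0

def simulate(u_of_t, d=1.0):
    """Euler integration of the plant for a constant unit disturbance d."""
    y, ys = 0.0, []
    for ti in np.arange(0.0, T, dt):
        u = u_of_t(ti, d)
        y += dt * (-y + k * (u + d)) / tau   # G = Gd, so u and d enter alike
        ys.append(y)
    return np.array(ys)

# "Perfect" feedforward u = -G^{-1}*Gd*d reduces to u = -d when G = Gd:
y_ff = simulate(lambda ti, d: -d)
print("max |y| with perfect (nominal) feedforward:", np.abs(y_ff).max())  # 0.0
```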

  12. Feedforward control: Nominal (perfect model). (Response plot.)

  13. Feedforward: sensitive to gain error. (Response plot.)

  14. Feedforward: sensitive to time constant error. (Response plot.)

  15. Feedforward: Moderately sensitive to delay (in G or Gd). (Response plot.)
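
Slides 13-15 show the sensitivity graphically; the sketch below (same assumed first-order model as above) reproduces the gain-error case numerically: a feedforward computed from a model whose gain is 20% low leaves a large steady-state offset.

```python
# Sketch (assumed model as before): feedforward designed with an erroneous gain.
import numpy as np

k, tau = 10.0, 10.0          # true plant: G = Gd = k/(tau*s + 1)
k_model = 8.0                # model gain used by the feedforward (20% low)
dt, T = 0.01, 25.0
d = 1.0                      # unit step disturbance

u = -(k / k_model) * d       # u = -Gmodel^{-1} * Gd * d (only the gains differ)
y = 0.0
for _ in np.arange(0.0, T, dt):
    y += dt * (-y + k * u + k * d) / tau

print("offset after 25 time units:", round(y, 2))  # about -2.3, heading to -2.5
```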

  16. Measurement-based correction = Feedback (FB) control. (Block diagram: controller C acts on the error e = ys − y and sets u; d enters through Gd.)

  17. Feedback PI-control: Nominal case. Feedback generates the inverse! (Plots of input u and the resulting output y: y returns to the setpoint ys.)
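
A corresponding sketch of the feedback case (same assumed plant; the tuning Kc = 0.5, tauI = 2 is an illustrative guess, not the tuning used in the slides): the controller needs no model of Gd and drives y back to the setpoint, which is the sense in which feedback "generates the inverse".

```python
# Sketch (assumed plant and tuning): PI feedback u = Kc*(e + integral(e)/tauI),
# with e = ys - y, rejecting a constant unit disturbance on the same plant.
import numpy as np

k, tau = 10.0, 10.0
Kc, tauI = 0.5, 2.0
dt, T = 0.01, 25.0
d, ys = 1.0, 0.0

y, integ = 0.0, 0.0
for _ in np.arange(0.0, T, dt):
    e = ys - y
    integ += e * dt
    u = Kc * (e + integ / tauI)          # PI control law
    y += dt * (-y + k * u + k * d) / tau

print("|y - ys| after 25 time units:", round(abs(y - ys), 4))  # close to 0
```

In this sketch, moderate errors in k or tau change only the transient, because the integral action removes the offset; that mirrors the robustness shown on slides 18-19.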

  18. Feedback PI control: insensitive to gain error. (Response plot.)

  19. Feedback: insensitive to time constant error. (Response plot.)

  20. Feedback control: sensitive to time delay. (Response plot.)

  21. Comment • Time delay error in disturbance model (Gd): No effect (!) with feedback (except a time shift) • Feedforward: Similar effect to a time delay error in G

  22. Conclusion: Why feedback?(and not feedforward control) • Simple: High gain feedback! • Counteract unmeasured disturbances • Reduce effect of changes / uncertainty (robustness) • Change system dynamics (including stabilization) • Linearize the behavior • No explicit model required • MAIN PROBLEM • Potential instability (may occur suddenly) with time delay / RHP-zero

  23. Outline • About myself • Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • Stabilizing feedback control: Anti-slug control • Conclusion

  24. Optimal operation (economics) • Define scalar cost function J(u0,d) • u0: degrees of freedom • d: disturbances • Optimal operation for given d: minimize J(u0,x,d) over u0, subject to the model equations f(u0,x,d) = 0 and the operational constraints g(u0,x,d) ≤ 0
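
A toy numerical version of this optimization (the cost and constraint below are made-up stand-ins; the internal states x and model equations f = 0 are omitted for brevity) can be written with SciPy. At its optimum the inequality constraint is active, which is the point of the next slide.

```python
# Toy sketch of  min_u0 J(u0, d)  s.t.  g(u0, d) <= 0  (functions are illustrative only).
import numpy as np
from scipy.optimize import minimize

d = 1.5                                              # assumed disturbance value

def J(u, d):                                         # toy scalar cost
    return (u[0] - d) ** 2 + 0.1 * u[1] ** 2

# g(u, d) = u[0] + u[1] - 1 <= 0, written as a SciPy ">= 0" inequality:
cons = [{"type": "ineq", "fun": lambda u: 1.0 - u[0] - u[1]}]

res = minimize(J, x0=[0.0, 0.0], args=(d,), method="SLSQP", constraints=cons)
print("u_opt =", np.round(res.x, 3), " J_opt =", round(float(res.fun), 4))
# The constraint is active at the optimum (u_opt[0] + u_opt[1] = 1).
```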

  25. Implementation of optimal operation • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • CONTROL ACTIVE CONSTRAINTS! • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • We here concentrate on the remaining unconstrained degrees of freedom u.

  26. Optimal operation. (Sketch: cost J as a function of the remaining unconstrained independent variable u, with minimum Jopt at uopt.)

  27. Implementation: How do we deal with uncertainty? • 1. Disturbances d • 2. Implementation error n • Nominal optimization gives us = uopt(d0); the implemented input is u = us + n, and the resulting cost satisfies J ≥ Jopt(d)

  28. Problem no. 1: Disturbance d ≠ d0. (Sketch: cost J versus the independent variable u for d0 and for d ≠ d0; holding u constant at uopt(d0) gives a loss relative to Jopt.)

  29. Problem no. 2: Implementation error n. (Sketch: cost J versus the independent variable u for d0; the implemented input u = us + n, with us = uopt(d0), gives a loss due to the implementation error.)
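
The two loss sources on slides 27-29 can be made concrete with a toy quadratic cost (an illustrative stand-in, not from the slides): keeping u at us = uopt(d0) incurs a loss when d moves, and the implementation error n adds to it.

```python
# Toy sketch (assumed quadratic cost): loss from holding u constant at us = uopt(d0).
def J(u, d):
    return (u - d) ** 2 + 1.0               # so uopt(d) = d and Jopt(d) = 1

d0, d, n = 1.0, 1.4, -0.1                   # nominal d0, actual d, implementation error
us = d0                                     # "nominal optimization": us = uopt(d0)
u = us + n                                  # implemented input

loss_disturbance = J(us, d) - J(d, d)       # problem 1 only: 0.16
loss_total = J(u, d) - J(d, d)              # problems 1 + 2:  0.25
print(round(loss_disturbance, 3), round(loss_total, 3))
```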

  30. “Obvious” solution: Optimizing control = “Feedforward”. Estimate d and compute new uopt(d). Problem: Complicated and sensitive to uncertainty

  31. Alternative: Feedback implementation Issue: What should we control?

  32. Engineering systems • Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers • Commercial aircraft • Large-scale chemical plant (refinery): 1000s of loops • Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward • The same holds in biological systems

  33. Process control hierarchy. (Diagram: RTO layer (economics) above MPC and PID layers; the question is which variables y1 = c the RTO should pass down as setpoints.)

  34. Self-optimizing Control • Define loss: L = J(u, d) − Jopt(d) • Self-optimizing control is when acceptable loss can be achieved using constant set points (cs) for the controlled variables c (without re-optimizing when disturbances occur).

  35. Constant setpoint policy: Effect of disturbances (“problem 1”)

  36. Effect of implementation error (“problem 2”). (Sketch of three candidate cases, annotated “Good”, “Good” and “BAD”.)

  37. Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)?

  38. Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  39. Further examples • Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%) • Cake baking. J = nice taste, u = heat input. c = temperature (200 °C) • Business. J = profit. c = “key performance indicator” (KPI), e.g. • Response time to order • Energy consumption per kg or unit • Number of employees • Research spending. Optimal values obtained by “benchmarking” • Investment (portfolio management). J = profit. c = fraction of investment in shares (50%) • Biological systems: • “Self-optimizing” controlled variables c have been found by natural selection • Need to do “reverse engineering”: • Find the controlled variables used in nature • From this, possibly identify what overall objective J the biological system has been attempting to optimize

  40. Good candidate controlled variables c (for self-optimizing control). Requirements: • The optimal value of c should be insensitive to disturbances (avoids problem 1) • c should be easy to measure and control (the remaining requirements address problem 2) • The value of c should be sensitive to changes in the degrees of freedom (equivalently, J as a function of c should be flat) • For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Singular value rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c
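
A small numerical illustration of the singular value rule (the gain matrices below are invented for illustration; in practice they come from a scaled steady-state model of the plant):

```python
# Sketch of minimum-singular-value screening: compare two candidate sets of
# controlled variables via the smallest singular value of their scaled
# steady-state gain matrices from u to c.
import numpy as np

G_candidate_A = np.array([[1.0, 0.2],
                          [0.1, 0.8]])    # scaled gain, candidate set A
G_candidate_B = np.array([[1.0, 0.9],
                          [1.1, 1.0]])    # nearly dependent variables, set B

for name, G in [("A", G_candidate_A), ("B", G_candidate_B)]:
    sigma_min = np.linalg.svd(G, compute_uv=False).min()
    print(f"candidate set {name}: sigma_min = {sigma_min:.3f}")
# The rule prefers the set with the larger sigma_min (here set A).
```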

  41. Self-optimizing control: Recycle process. J = V (minimize energy). (Flowsheet with the five degrees of freedom numbered 1–5.) Given feedrate F0 and column pressure: Nm = 5 → 3 economic (steady-state) DOFs. Constraints: Mr < Mrmax, xB > xBmin = 0.98. DOF = degree of freedom

  42. Recycle process: Control active constraints. Active constraint: Mr = Mrmax. Active constraint: xB = xBmin. Remaining DOF: L. One unconstrained DOF left for optimization: What more should we control?

  43. Recycle process: Loss with constant setpoint cs. Large loss with c = F (Luyben rule); negligible loss with c = L/F or c = temperature

  44. Recycle process: Proposed control structure for the case with J = V (minimize energy). (Flowsheet with control loops.) Active constraint: Mr = Mrmax. Active constraint: xB = xBmin. Self-optimizing loop: Adjust L such that L/F is constant
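
The self-optimizing loop on this slide is essentially a ratio policy; a minimal sketch (the setpoint value and feed measurements are assumptions for illustration) is:

```python
# Minimal sketch of the "keep c = L/F constant" policy: adjust the reflux L
# whenever the measured feed F changes, so that L/F stays at its setpoint cs.
LF_setpoint = 2.0                 # assumed constant setpoint cs for c = L/F

def reflux_for_feed(F_measured: float) -> float:
    """Reflux L that holds L/F at its constant setpoint."""
    return LF_setpoint * F_measured

for F in (1.0, 1.2, 0.9):         # feed-rate disturbances
    print(f"F = {F:.1f}  ->  L = {reflux_for_feed(F):.2f}")
```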

  45. Outline • About myself • Why feedback (and not feedforward) ? • Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  46. Application of stabilizing feedback control: Anti-slug control. Two-phase pipe flow (liquid and vapor): slug (liquid) buildup.

  47. Slug cycle (stable limit cycle) Experiments performed by the Multiphase Laboratory, NTNU

  48. Experimental mini-loop

  49. Experimental mini-loop: Valve opening (z) = 100%. (Diagram with valve position z and pressure measurements p1 and p2.)
