
ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants


Presentation Transcript


  1. ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Mexico, 01 May 2012

  2. (Map: Trondheim, Norway and Mexico)

  3. (Map: Trondheim, Norway, near the Arctic circle and the North Sea; also shown: Oslo, Sweden, Denmark, Germany, UK)

  4. NTNU, Trondheim

  5. Outline • 1. Introduction to plantwide control (control structure design) • 2. Plantwide control procedure: I Top Down • Step 1: Define optimal operation • Step 2: Optimize for expected disturbances • Step 3: Select primary controlled variables c = y1 (CVs) • Step 4: Where to set the production rate? (Inventory control) II Bottom Up • Step 5: Regulatory / stabilizing control (PID layer): What more to control (y2)? Pairing of inputs and outputs • Step 6: Supervisory control (MPC layer) • Step 7: Real-time optimization (Do we need it?) (Figure: process with inputs MVs and outputs y1, y2)

  6. How do we design a control system for a complete chemical plant? Where do we start? What should we control? And why? etc. etc.

  7. Example: Tennessee Eastman challenge problem (Downs, 1991). (Flowsheet with controllers TC, PC, LC, AC, x, SRC.) Where should they be placed?

  8. Alan Foss (“Critique of chemical process control theory”, AIChE Journal, 1973): The central issue to be resolved ... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it. • Previous work on plantwide control: • Page Buckley (1964) – chapter on “Overall process control” (still industrial practice) • Greg Shinskey (1967) – process control systems • Alan Foss (1973) – control system structure • Bill Luyben et al. (1975– ) – case studies; “snowball effect” • George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes • Ruel Shinnar (1981– ) – “dominant variables” • Jim Downs (1991) – Tennessee Eastman challenge problem • Larsson and Skogestad (2000) – review of plantwide control

  9. Theory: Optimal operation. ALMIGHTY GOD? IDEAL COMMUNISM? PROCESS CONTROL? • Theory: • Model of overall system • Estimate present state • Optimize all degrees of freedom • Problems: • Model not available • Optimization complex • Not robust (difficult to handle uncertainty) • Slow response time • Process control: • Excellent candidate for centralized control. (Figure: CENTRALIZED OPTIMIZER uses the present state, a (physical) model of the system, and the degrees of freedom to meet the objectives)

  10. Practice: Process control • Hierarchical structure

  11. Process control: Hierarchical structure. (Hierarchy: Director → Process engineer → Operator → Logic / selectors / operator → PID control → u = valves)

  12. Dealing with complexity. Plantwide control objectives: the controlled variables (CVs) interconnect the layers. OBJECTIVE: Min J (economics). RTO sends cs = y1s; MPC follows the path (+ looks after other variables) and sends y2s; PID stabilizes + avoids drift using u (valves)

  13. Translate optimal operation into simple control objectives: What should we control? y1 = c? (economics), y2 = ? (stabilization)

  14. Example: Bicycle riding. y1 = distance to curb (1 m), y2 = bike tilt (stabilization), u = muscles

  15. Control structure design procedure I Top Down • Step 1: Define operational objectives (optimal operation) • Cost function J (to be minimized) • Operational constraints • Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances • Step 3: Select primary controlled variables c = y1 (CVs) • Step 4: Where to set the production rate? (Inventory control) II Bottom Up • Step 5: Regulatory / stabilizing control (PID layer) • What more to control (y2; local CVs)? • Pairing of inputs and outputs • Step 6: Supervisory control (MPC layer) • Step 7: Real-time optimization (Do we need it?) (Figure: process with inputs MVs and outputs y1, y2)

  16. Step 1. Define optimal operation (economics) • What are we going to use our degrees of freedom u (MVs) for? • Define scalar cost function J(u,x,d) • u: degrees of freedom (usually steady-state) • d: disturbances • x: states (internal variables) • Typical cost function: J = cost of feed + cost of energy – value of products • Optimize operation with respect to u for given d (usually steady-state): min_u J(u,x,d) subject to: Model equations: f(u,x,d) = 0; Operational constraints: g(u,x,d) ≤ 0
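
As a complement to Steps 1–2, the following is a minimal numerical sketch (not from the slides) of solving min_u J(u,d) subject to g(u,d) ≤ 0 with an off-the-shelf solver. The cost function, constraint and numbers are hypothetical placeholders for a plant model; scipy.optimize is simply one convenient tool for the offline optimization of Step 2.

```python
# Minimal sketch of Steps 1-2 (hypothetical cost and constraint, not a model
# of any plant in the talk): minimize J(u, d) over the degrees of freedom u
# for a given disturbance d, subject to operational constraints g(u) <= 0.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def J(u, d):
    # J = cost of feed + cost of energy - value of products (illustrative form)
    feed_cost   = 1.0 * d                                  # d: feed rate (disturbance)
    energy_cost = 0.1 * u[0]**2                            # u[0]: e.g. boilup / power
    product_val = 2.0 * np.sqrt(u[0]) * d / (1.0 + u[1])   # u[1]: second (toy) DOF
    return feed_cost + energy_cost - product_val

def g(u):
    # Operational constraint g(u) <= 0, e.g. a capacity limit u[0] <= 4
    return np.array([u[0] - 4.0])

d_nominal = 1.2
res = minimize(J, x0=[1.0, 1.0], args=(d_nominal,),
               constraints=NonlinearConstraint(g, -np.inf, 0.0),
               bounds=[(1e-3, None), (1e-3, None)])
print("u_opt(d*) =", res.x, " J_opt =", res.fun)
```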

  17. Step 2. Optimize • Identify degrees of freedom (u) • Optimize for expected disturbances (d) • Identify regions of active constraints • Need a model of the system • Time-consuming, but it is done offline

  18. Step 3: Implementation of optimal operation • Optimal operation for given d*: min_u J(u,x,d) subject to: Model equations: f(u,x,d) = 0; Operational constraints: g(u,x,d) ≤ 0 → uopt(d*) • Problem: Usually cannot keep uopt constant because the disturbances d change • How should we adjust the degrees of freedom (u)?

  19. Implementation (in practice): Local feedback control! “Self-optimizing control”: Constant setpoints for c give acceptable loss. Main issue: What should we control? (Figure alternatives for handling the disturbance d: local feedback control of c (CV), optimizing control, feedforward)

  20. Optimal operation - Runner Example: Optimal operation of runner • Cost to be minimized, J=T • One degree of freedom (u=power) • What should we control?

  21. Optimal operation - Runner Sprinter (100m) • 1. Optimal operation of Sprinter, J=T • Active constraint control: • Maximum speed (”no thinking required”)

  22. Optimal operation - Runner Marathon (40 km) • 2. Optimal operation of Marathon runner, J=T • Unconstrained optimum! • Any ”self-optimizing” variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  23. Optimal operation - Runner. Conclusion Marathon runner: select one measurement, c = heart rate • Simple and robust implementation • Disturbances are indirectly handled by keeping a constant heart rate • May have infrequent adjustment of the setpoint (heart rate)

  24. Further examples • Central bank. J = welfare, c = inflation rate (2.5%) • Cake baking. J = nice taste, c = temperature (200 °C) • Business. J = profit, c = ”Key performance indicator (KPI)”, e.g. • Response time to order • Energy consumption per kg or unit • Number of employees • Research spending. Optimal values obtained by ”benchmarking” • Investment (portfolio management). J = profit, c = fraction of investment in shares (50%) • Biological systems: • ”Self-optimizing” controlled variables c have been found by natural selection • Need to do ”reverse engineering”: • Find the controlled variables used in nature • From this, identify what overall objective J the biological system has been attempting to optimize

  25. Step 3. What should we control (c)? (primary controlled variables y1 = c) Selection of controlled variables c • Control active constraints! • Unconstrained variables: Control self-optimizing variables!

  26. Example / QUIZ 1: Operation of distillation columns in series. With given F (disturbance): 4 steady-state DOFs (e.g., L and V in each column). Data (from the flowsheet): Column 1: N = 41, αAB = 1.33; Column 2: N = 41, αBC = 1.5. Feed: F ≈ 1.2 mol/s, pF = 1 $/mol. Products: D1 > 95% A, pD1 = 1 $/mol; D2 > 95% B, pD2 = 2 $/mol; B2 > 95% C, pB2 = 1 $/mol. Boilup capacity constraints: V1 < 4 mol/s, V2 < 2.4 mol/s. Cost (J) = –Profit = pF F + pV(V1+V2) – pD1 D1 – pD2 D2 – pB2 B2. Energy price: pV = 0–0.2 $/mol (varies). QUIZ: What are the expected active constraints? 1. Always. 2. For low energy prices. DOF = Degree Of Freedom. Ref.: M.G. Jacobsen and S. Skogestad (2011)
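
Below is a minimal sketch (not part of the quiz material) of the economic objective for the two columns, coded as a function and evaluated at the two extremes of the energy price. The stream values are hypothetical; in the actual quiz, D1, D2 and B2 follow from the column models and the chosen degrees of freedom. The point is only that the size of the pV(V1+V2) term relative to the product terms is what decides whether running the boilups at their maximum pays off, consistent with the solution slides.

```python
# Minimal sketch: J = -Profit = pF*F + pV*(V1+V2) - pD1*D1 - pD2*D2 - pB2*B2.
# The flows below are hypothetical illustration values, not quiz results.
def cost_columns_in_series(F, V1, V2, D1, D2, B2, pV, pF=1.0, pD1=1.0, pD2=2.0, pB2=1.0):
    return pF*F + pV*(V1 + V2) - pD1*D1 - pD2*D2 - pB2*B2

flows = dict(F=1.2, V1=4.0, V2=2.4, D1=0.4, D2=0.4, B2=0.4)   # hypothetical
for pV in (0.01, 0.2):   # low vs high energy price [$/mol]
    print(f"pV = {pV:4.2f} $/mol  ->  J = {cost_columns_in_series(pV=pV, **flows):.3f} $/s")
```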

  27. QUIZ 2: Control of distillation columns in series. (Flowsheet: pressure (PC) and level (LC) loops on both columns; feed F given.) QUIZ: Assume low energy prices (pV = 0.01 $/mol). How should we control the columns? HINT: CONTROL ACTIVE CONSTRAINTS. Red: basic regulatory loops

  28. SOLUTION QUIZ 2: Control of distillation columns in series. (Flowsheet: F given; boilups at MAX V1 and MAX V2; composition controller (CC) keeps xB at xBS = 95%; PC and LC loops as before.) One unconstrained degree of freedom remains: CV = ? Red: basic regulatory loops

  29. SOLUTION QUIZ 1 (more details): Active constraint regions for two distillation columns in series. (Plot: regions in the plane of energy price pV [$/mol] versus feed rate F [mol/s]. BOTTLENECK: higher F is infeasible because all 5 constraints are reached.) CV = Controlled Variable

  30. Unconstrained degrees of freedom: Control “self-optimizing” variables • Old idea (Morari et al., 1980): “We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions.” • The ideal self-optimizing variable c is the gradient, c = ∂J/∂u = Ju • Keep the gradient at zero for all disturbances (c = Ju = 0) • Problem: no measurement of the gradient. (Figure: cost J versus u, with Ju = 0 at the optimum)
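
To make the idea “control the gradient” concrete, here is a minimal sketch with a toy quadratic cost (an assumption, not a model from the talk): Ju is approximated by a finite difference and driven to zero by a simple integrating update, which mimics feedback control of c = Ju and tracks the moving optimum as d changes.

```python
# Minimal sketch (toy cost, not from the talk): the ideal self-optimizing
# variable is c = dJ/du = Ju; here it is approximated numerically and driven
# to zero, which brings u to the optimum for each disturbance d.
def J(u, d):
    return (u - 2.0*d)**2 + 0.1*u          # toy cost; optimum moves with d

def Ju(u, d, h=1e-5):
    return (J(u + h, d) - J(u - h, d)) / (2*h)   # finite-difference gradient

u, gain = 1.0, 0.2
for d in (1.0, 1.0, 1.5, 1.5, 1.5):        # disturbance changes part-way
    for _ in range(200):                   # "controller" drives c = Ju to zero
        u -= gain * Ju(u, d)
    print(f"d = {d}: u = {u:.3f}, Ju = {Ju(u, d):+.2e}")
```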

  31. Ideal: c = Ju. In practice: c = Hy. Task: Determine H!

  32. Systematic approach: What to control? • Define optimal operation: Minimize cost function J • Each candidate c = Hy: ”Brute force approach”: With constant setpoints cs, compute the loss L for expected disturbances d and implementation errors n • Select the variable c with the smallest loss • Acceptable loss ⇒ self-optimizing control
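
A minimal sketch of the “brute force” evaluation, using a toy cost and two toy candidate measurements (all assumptions, not from the talk): each candidate CV is held at its nominal optimal setpoint, the loss L = J(u,d) – Jopt(d) is evaluated over a grid of disturbances d and implementation errors n, and the candidate with the smallest worst-case loss would be selected.

```python
# Brute-force loss evaluation on a toy problem (illustrative only).
import numpy as np
from scipy.optimize import brentq

def J(u, d):                     # toy cost (assumption, not a plant model)
    return (u - d)**2 + 0.2*u*d

def uopt(d):                     # analytic optimum: dJ/du = 2(u-d) + 0.2d = 0
    return 0.9*d

def y(u, d):                     # two candidate measurements
    return np.array([u, u - 0.5*d])

d0 = 1.0                         # nominal disturbance
for i, name in enumerate(["c = y1", "c = y2"]):
    cs = y(uopt(d0), d0)[i]      # nominal optimal setpoint for this candidate
    worst = 0.0
    for d in (0.8, 1.0, 1.2):            # expected disturbances
        for n in (-0.05, 0.0, 0.05):     # implementation errors
            # input u that results from keeping the measured c at cs (+ noise n)
            u = brentq(lambda u: y(u, d)[i] - (cs + n), -10.0, 10.0)
            worst = max(worst, J(u, d) - J(uopt(d), d))
    print(f"{name}: worst-case loss = {worst:.4f}")
```

In this toy case the combination y2 = u – 0.5d partially self-corrects for the disturbance and gives the smaller worst-case loss.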

  33. Control of distillation columns in series. (Flowsheet: F given; boilups at MAX V1 and MAX V2; composition controllers (CC) with setpoints xBS = 95% and xAS = 2.1%; PC and LC loops as before.) Cost (J) = –Profit = pF F + pV(V1+V2) – pD1 D1 – pD2 D2 – pB2 B2. Red: basic regulatory loops

  34. Optimal operation: unconstrained optimum. (Figure: cost J versus controlled variable c, with minimum Jopt at copt)

  35. Optimal operation: unconstrained optimum • Two problems: • 1. The optimum moves because of disturbances d: copt(d) • 2. Implementation error: c = copt + n. (Figure: cost J versus c, showing the shift of copt with d and the implementation error n)

  36. Good candidate controlled variables c (for self-optimizing control) • The optimal value of c should be insensitive to disturbances: small Fc = dcopt/dd • c should be easy to measure and control • Want a “flat” optimum → the value of c should be sensitive to changes in the degrees of freedom (“large gain”): large G = dc/du = HGy

  37. Optimal measurement combination c = Hy • Candidate measurements (y): include also the inputs u

  38. Optimal measurement combination: Nullspace method • Want the optimal value of c to be independent of disturbances ⇒ Δcopt = 0 · Δd • Find the optimal solution as a function of d: uopt(d), yopt(d) • Linearize this relationship: Δyopt = F · Δd • F – optimal sensitivity matrix • Want: HF = 0 • To achieve this for all values of Δd (nullspace method): select H in the left null space of F • Always possible if the number of (independent) measurements is at least nu + nd • Comment: The nullspace method is equivalent to Ju = 0

  39. Example: Nullspace method for the marathon runner. u = power, d = slope [degrees], y1 = hr [beats/min], y2 = v [m/s]. F = dyopt/dd = [0.25 -0.2]’, H = [h1 h2]. HF = 0 → h1 f1 + h2 f2 = 0.25 h1 – 0.2 h2 = 0. Choose h1 = 1 → h2 = 0.25/0.2 = 1.25. Conclusion: c = hr + 1.25 v. Control c = constant → hr increases when v decreases (OK uphill!)
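
The same result can be reproduced numerically; the short sketch below (an illustration, not from the slides) computes H as a basis for the left null space of F using the runner numbers above and rescales it so that h1 = 1.

```python
# Nullspace method check for the runner example: F = dy_opt/dd = [0.25, -0.2]'.
# null_space returns a normalized basis, so H is only defined up to scaling;
# divide by h1 to compare with the hand calculation c = hr + 1.25 v.
import numpy as np
from scipy.linalg import null_space

F = np.array([[0.25], [-0.2]])      # y1 = hr, y2 = v
H = null_space(F.T).T               # rows of H span the left null space of F
H = H / H[0, 0]                     # scale so that h1 = 1
print("H  =", H)                    # -> approximately [1.0, 1.25]
print("HF =", H @ F)                # ~0, as required
```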

  40. With measurement noise. (Block diagram: controller K drives c = Hy to the constant setpoint cs; H combines the measurements y.) Optimal measurement combination c = Hy: “= 0” in the nullspace method (no noise); with noise, “minimize” the loss via the maximum gain rule (maximize the scaled gain S1 G Juu^{-1/2}, with G = HGy and scaling S1)
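
A minimal sketch of this screening step. The gain matrix, Hessian and scaling below are toy numbers (assumptions), and the criterion is stated here as maximizing the minimum singular value of the scaled gain, which is one common formulation of the maximum gain rule.

```python
# Maximum-gain screening of candidate CVs (toy numbers, illustrative only):
# form the scaled gain G' = S1 * (H Gy) * Juu^(-1/2) and prefer the candidate
# with the largest minimum singular value (larger gain -> smaller loss).
import numpy as np
from scipy.linalg import sqrtm, svdvals

Gy  = np.array([[1.0], [0.5], [-0.8]])   # steady-state gains from u to y (toy)
Juu = np.array([[2.0]])                  # Hessian of J w.r.t. u (toy)
S1  = np.diag([1.0])                     # output scaling, e.g. 1/span of c (assumed)

candidates = {
    "c = y1":      np.array([[1.0, 0.0, 0.0]]),
    "c = y2 + y3": np.array([[0.0, 1.0, 1.0]]),
}
for name, H in candidates.items():
    G = H @ Gy                                        # gain from u to c
    Gscaled = S1 @ G @ np.linalg.inv(sqrtm(Juu))      # S1 * G * Juu^(-1/2)
    print(f"{name}: min singular value = {svdvals(Gscaled).min():.3f}")
```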

  41. Example: CO2 refrigeration cycle • J = Ws (work supplied) • DOF = u (valve opening, z) • Main disturbances: • d1 = TH • d2 = TCs (setpoint) • d3 = UAloss • What should we control? (Flowsheet: cycle with high pressure ph)

  42. CO2 refrigeration cycle Step 1. One (remaining) degree of freedom (u=z) Step 2. Objective function. J = Ws (compressor work) Step 3. Optimize operation for disturbances (d1=TC, d2=TH, d3=UA) • Optimum always unconstrained Step 4. Implementation of optimal operation • No good single measurements (all give large losses): • ph, Th, z, … • Nullspace method: Need to combine nu+nd=1+3=4 measurements to have zero disturbance loss • Simpler: Try combining two measurements. Exact local method: • c = h1 ph + h2 Th = ph + k Th; k = -8.53 bar/K • Nonlinear evaluation of loss: OK!
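
For completeness, a small sketch of the resulting combined CV, c = ph + k·Th with k = –8.53 bar/K as given on the slide. The setpoint value and the measurement samples are hypothetical; the error c – cs is what a standard controller would act on.

```python
# Combined self-optimizing CV for the CO2 cycle, c = ph + k*Th (k from slide).
# Nominal point and measurement samples below are hypothetical illustrations.
def cv_temperature_corrected_pressure(ph_bar, Th_K, k=-8.53):
    return ph_bar + k * Th_K

cs = cv_temperature_corrected_pressure(97.6, 303.15)           # hypothetical nominal point
for ph, Th in [(97.6, 303.15), (99.0, 303.90), (96.5, 302.40)]:  # hypothetical samples
    c = cv_temperature_corrected_pressure(ph, Th)
    print(f"ph = {ph:5.1f} bar, Th = {Th:6.2f} K  ->  error to controller = {c - cs:+.2f}")
```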

  43. Refrigeration cycle: Proposed control structure. Control c = “temperature-corrected high pressure”, k = -8.5 bar/K

  44. Case study: Control structure design using self-optimizing control for economically optimal CO2 recovery* Step S1. Objective function J = energy cost + cost (tax) of CO2 released to air • 4 equality and 2 inequality constraints: • stripper top pressure • condenser temperature • pump pressure of recycle amine • cooler temperature • CO2 recovery ≥ 80% • Reboiler duty < 1393 kW (nominal +20%) Step S2. (a) 10 degrees of freedom: 8 valves + 2 pumps; 4 levels without steady-state effect: absorber 1, stripper 2, make-up tank 1. Disturbances: flue gas flowrate, CO2 composition in flue gas + active constraints. (b) Optimization using the Unisim steady-state simulator. Region I (nominal feedrate): no inequality constraints active; 2 unconstrained degrees of freedom = 10 – 4 (equality constraints) – 4 (levels). Step S3 (Identify CVs). 1. Control the 4 equality constraints. 2. Identify 2 self-optimizing CVs: use the exact local method and select the CV set with minimum loss. *M. Panahi and S. Skogestad, “Economically efficient operation of CO2 capturing process, part I: Self-optimizing procedure for selecting the best controlled variables”, Chemical Engineering and Processing, 50, 247-253 (2011).

  45. Proposed control structure with given feed

  46. Step 4. Where to set the production rate? • Where to locate the TPM (throughput manipulator)? • The ”gas pedal” of the process • Very important! • Determines the structure of the remaining inventory (level) control system • Set the production rate at the (dynamic) bottleneck • Link between the Top-down and Bottom-up parts

  47. Production rate set at inlet: inventory control in direction of flow*. TPM. *Required to get “local-consistent” inventory control

  48. Production rate set at outlet: inventory control opposite to flow. TPM

  49. Production rate set inside the process. TPM. Radiating inventory control around the TPM (Georgakis et al.)

  50. Back-off = lost production. (Figure: flow versus time, showing the back-off from the maximum-flow constraint.) “Sellers’ market” = high product prices: optimal operation = maximum throughput (active constraint). Want tight bottleneck control to reduce the back-off! Rule for control of hard output constraints: “Squeeze and shift”! Reduce the variance (“squeeze”) and “shift” the setpoint cs closer to the constraint to reduce the back-off
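
A minimal back-of-the-envelope sketch of “squeeze and shift” (the constraint value, standard deviations and the 2-sigma back-off rule are illustrative assumptions): tighter control shrinks the variance, which allows the setpoint to be shifted closer to the hard constraint and reduces the lost production.

```python
# "Squeeze and shift" illustration (all numbers hypothetical): back-off is the
# margin from the setpoint to the hard constraint needed so the constraint is
# (almost) never violated; tighter control lets us shift the setpoint closer.
Fmax = 100.0        # hard maximum throughput [units/h], hypothetical
z    = 2.0          # ~97.7% one-sided confidence for a normal distribution

for label, sigma in [("loose control", 3.0), ("tight control", 0.5)]:
    backoff  = z * sigma
    setpoint = Fmax - backoff            # "shift" the setpoint toward the constraint
    print(f"{label}: sigma = {sigma:.1f} -> back-off = {backoff:.1f}, "
          f"setpoint = {setpoint:.1f} (lost production = {backoff:.1f} units/h)")
```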
