Plantwide control: Towards a systematic procedure

Outline
• About Trondheim and myself
• Control structure design (plantwide control)
• A procedure for control structure design
I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?
II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies
Optimal operation (economics)
• What are we going to use our degrees of freedom for?
• Define scalar cost function J(u0, x, d)
  – u0: degrees of freedom
  – d: disturbances
  – x: states (internal variables)
• Typical cost function:
  J = cost feed + cost energy – value products
• Optimal operation for given d:
  min_uss J(uss, x, d)
  subject to:
  Model equations: f(uss, x, d) = 0
  Operational constraints: g(uss, x, d) ≤ 0
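The optimization above can be sketched numerically. In this minimal sketch, the toy model (a scalar f(u, x, d) = 0 solved explicitly for x), the cost terms, and the bound u ≤ 2 are all invented stand-ins for the general f and g, not from the lecture:

```python
from scipy.optimize import minimize

d = 1.0                                   # given disturbance d* (e.g. feed rate)

def cost(u):
    x = d / (1.0 + u[0])                  # model f(u, x, d) = 0, solved for x
    return 0.5 * u[0] ** 2 + x            # energy cost + term involving the state

# Operational constraint g(u, x, d) <= 0, here simply u <= 2
cons = [{"type": "ineq", "fun": lambda u: 2.0 - u[0]}]

res = minimize(cost, x0=[0.5], method="SLSQP", constraints=cons)
u_opt = res.x[0]                          # u_opt(d*), to be implemented by the control layer
```

Solving this for each expected d gives the map u_opt(d) that the later slides ask how to implement.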
Optimal operation distillation column
• Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V
• Cost to be minimized (economics):
  J = –P where P = pD D + pB B – pF F – pV V
  (value products – cost feed – cost energy (heating + cooling))
• Constraints:
  Purity D: for example xD,impurity ≤ max
  Purity B: for example xB,impurity ≤ max
  Flow constraints: min ≤ D, B, L etc. ≤ max
  Column capacity (flooding): V ≤ Vmax, etc.
  Pressure: 1) p given, 2) p free: pmin ≤ p ≤ pmax
  Feed: 1) F given, 2) F free: F ≤ Fmax
Optimal operation: Minimize J with respect to steady-state DOFs
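The sign convention J = –P can be made concrete with a tiny numeric check; the prices and flows below are invented for illustration, not from the lecture:

```python
# Profit P = pD*D + pB*B - pF*F - pV*V, cost J = -P (invented numbers)
pD, pB, pF, pV = 20.0, 10.0, 5.0, 1.0   # prices: distillate, bottoms, feed, energy
F, D, B, V = 1.0, 0.5, 0.5, 3.0         # steady-state flows (D + B = F)

P = pD * D + pB * B - pF * F - pV * V   # value products - cost feed - cost energy
J = -P                                  # cost to be minimized
```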
Example: Paper machine drying section
[Figure: drying section – paper speed up to 30 m/s (100 km/h), residence time ~20 s, length ~10 m; water removed, with water recycle]
Paper machine: Overall operational objectives
• Degrees of freedom (inputs), drying section:
  – Steam flow to each drum (about 100)
  – Air inflow and outflow (2)
• Objective: Minimize cost (energy) subject to satisfying operational constraints:
  – Humidity paper ≤ 10% (active constraint: controlled!)
  – Air outflow T < dew point – 10°C (active – not always controlled)
  – ΔT along dryer (especially inlet) < bound (active?)
  – Remaining DOFs: minimize cost
Optimal operation
minimize J = cost feed + cost energy – value products

Two main cases (modes) depending on market conditions:
1. Given feed
   Amount of products is then usually indirectly given and J = cost energy.
   Optimal operation is then usually unconstrained: “maximize efficiency (energy)”
   Control: Operate at optimal trade-off (not obvious how to do this and what to control)
2. Feed free
   Products usually much more valuable than feed, and energy costs small.
   Optimal operation is then usually constrained: “maximize production”
   Control: Operate at bottleneck (“obvious”)
Comments on optimal operation
• Do not forget to include the feedrate as a degree of freedom!
  – For the paper machine it may be optimal to have maximum drying and adjust the speed of the paper machine!
• Control at bottleneck
  – see later: “Where to set the production rate”
Outline
• About Trondheim and myself
• Control structure design (plantwide control)
• A procedure for control structure design
I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?
II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies
Step 3. What should we control (c)?
(primary controlled variables y1=c)
Outline
• Implementation of optimal operation
• Self-optimizing control
• Uncertainty (d and n)
• Example: Marathon runner
• Methods for finding the “magic” self-optimizing variables:
A. Large gain: Minimum singular value rule
B. “Brute force” loss evaluation
C. Optimal combination of measurements
• Example: Recycle process
• Summary
Implementation of optimal operation
• Optimal operation for given d*:
  min_u J(u, x, d)
  subject to:
  Model equations: f(u, x, d) = 0
  Operational constraints: g(u, x, d) ≤ 0
  → uopt(d*)
Problem: Usually cannot keep uopt constant because disturbances d change.
How should we adjust the degrees of freedom (u)?
Implementation of optimal operation (Cannot keep u0opt constant)
”Obvious” solution: Optimizing control
Estimate d from measurements and recompute uopt(d)
Problem: Too complicated (requires detailed model and description of uncertainty)
In practice: Hierarchical
decomposition with separate layers
What should we control?
Self-optimizing control:
When constant setpoints are OK
Constant setpoint
Unconstrained variables:
Self-optimizing control
• Self-optimizing control: Constant setpoints cs give ”near-optimal operation” (= acceptable loss L for expected disturbances d and implementation errors n)
Acceptable loss ⇒ self-optimizing control
What c’s should we control?
• Optimal solution is usually at constraints, that is, most of the degrees
of freedom are used to satisfy “active constraints”, g(u,d) = 0
• CONTROL ACTIVE CONSTRAINTS!
– cs = value of active constraint
– Implementation of active constraints is usually simple.
• WHAT MORE SHOULD WE CONTROL?
– Find “self-optimizing” variables c for remaining
unconstrained degrees of freedom u.
What should we control? – Sprinter
• Optimal operation of Sprinter (100 m), J=T
– One input: ”power/speed”
– Active constraint control:
• Maximum speed (”no thinking required”)
What should we control? – Marathon
• Optimal operation of Marathon runner, J=T
– No active constraints
– Any self-optimizing variable c (to control at constant
setpoint)?
Self-optimizing Control – Marathon
• Optimal operation of Marathon runner, J=T
– Any self-optimizing variable c (to control at constant
setpoint)?
• c1 = distance to leader of race
• c2 = speed
• c3 = heart rate
• c4 = level of lactate in muscles
Further examples self-optimizing control
• Marathon runner
• Central bank
• Cake baking
• Business systems (KPIs)
• Investment portfolio
• Biology
• Chemical process plants: Optimal blending of gasoline

Define optimal operation (J) and look for a ”magic” variable (c) which, when kept constant, gives acceptable loss (self-optimizing control)
More on further examples
• Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%)
• Cake baking. J = nice taste. u = heat input. c = temperature (200°C)
• Business. J = profit. c = ”Key performance indicator” (KPI), e.g.
  – Response time to order
  – Energy consumption per kg or unit
  – Number of employees
  – Research spending
  Optimal values obtained by ”benchmarking”
• Investment (portfolio management). J = profit. c = fraction of investment in shares (50%)
• Biological systems:
  – ”Self-optimizing” controlled variables c have been found by natural selection
  – Need to do ”reverse engineering”:
    • Find the controlled variables used in nature
    • From this, possibly identify what overall objective J the biological system has been attempting to optimize

BREAK
Summary so far: Active constrains and
unconstrained variables
• Optimal operation: Minimize J with respect to DOFs
• General: Optimal solution with N DOFs:
  – Nactive: DOFs used to satisfy “active” constraints (≤ becomes =)
  – Nu = N – Nactive: remaining unconstrained variables
  Often: Nu is zero or small
• It is “obvious” how to control the active constraints
• Difficult issue: What should we use the remaining Nu degrees of freedom for, that is, what should we control?
Recall: Optimal operation distillation column
• Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V
• Cost to be minimized (economics):
  J = –P where P = pD D + pB B – pF F – pV V
  (value products – cost feed – cost energy (heating + cooling))
• Constraints:
  Purity D: for example xD,impurity ≤ max
  Purity B: for example xB,impurity ≤ max
  Flow constraints: min ≤ D, B, L etc. ≤ max
  Column capacity (flooding): V ≤ Vmax, etc.
  Pressure: 1) p given, 2) p free: pmin ≤ p ≤ pmax
  Feed: 1) F given, 2) F free: F ≤ Fmax
Optimal operation: Minimize J with respect to steady-state DOFs
Solution: Optimal operation distillation
• Cost to be minimized:
  J = –P where P = pD D + pB B – pF F – pV V
• N = 2 steady-state degrees of freedom
• Active constraints distillation:
  – Purity spec. valuable product is always active (“avoid giveaway of valuable product”).
  – Purity spec. “cheap” product may not be active (may want to overpurify to avoid loss of valuable product – but costs energy)
• Three cases:
  1. Nactive = 2: Two active constraints (for example, xD,impurity = max, xB,impurity = max: “TWO-POINT” COMPOSITION CONTROL)
  2. Nactive = 1: One constraint active (1 remaining DOF)
  3. Nactive = 0: No constraints active (2 remaining DOFs)
     Can happen if no purity specifications (e.g. byproducts or recycle)
  Problem (cases 2 and 3): WHAT SHOULD WE CONTROL (TO SATISFY THE UNCONSTRAINED DOFs)?
  Solution: Often compositions, but not always!
Unconstrained variables:
What should we control?
• Intuition: “Dominant variables” (Shinnar)
• Is there any systematic procedure?
What should we control?
Systematic procedure
• Systematic: Minimize cost J(u, d*) w.r.t. DOFs u.
  1. Control active constraints (constant setpoint is optimal)
  2. Remaining unconstrained DOFs: Control “self-optimizing” variables c for which constant setpoints cs = copt(d*) give small (economic) loss
     Loss = J – Jopt(d)
     when disturbances d ≠ d* occur
[Figure: control hierarchy – c = ? (economics), y2 = ? (stabilization)]
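The loss definition above can be made concrete with a toy quadratic cost; the cost function J(u, d) = (u – d)² and both candidate variables below are invented for illustration (here u_opt(d) = d and Jopt(d) = 0):

```python
# Loss = J - Jopt(d) for two constant-setpoint policies (toy example)
def J(u, d):
    return (u - d) ** 2

d_nom = 1.0
u_nom = d_nom                        # nominally optimal input

# Policy A: keep c = u constant at its nominal value
loss_A = [J(u_nom, d) - J(d, d) for d in (0.8, 1.0, 1.2)]

# Policy B: keep c = u - d at setpoint 0 (a self-optimizing choice here,
# since copt(d) = 0 for every d, so the controller gives u = d)
loss_B = [J(d, d) - J(d, d) for d in (0.8, 1.0, 1.2)]
```

Policy A loses (d – d*)² whenever d moves, while policy B has zero loss for all d: the point of choosing a good c.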
The difficult unconstrained variables
[Figure: cost J as a function of the selected controlled variable c (remaining unconstrained), with minimum Jopt at c = copt]
Example: Tennessee Eastman plant
[Figure: cost J vs. purge rate – the curve bends backwards (“Oops..”)]
c = purge rate
Nominal optimum setpoint is infeasible with disturbance 2.
Conclusion: Do not use purge rate as controlled variable.
Optimal operation
[Figure: cost J vs. controlled variable c for different disturbances d; keeping c constant gives a LOSS relative to Jopt when copt(d) moves, and the implementation error n shifts the operating point further]

Two problems:
• 1. Optimum moves because of disturbances d: copt(d)
• 2. Implementation error, c = copt + n
Effect of implementation error on cost (“problem 2”)
[Figure: effect of implementation error n on cost – flat optimum (good) vs. sharp optimum (BAD)]
Example, sharp optimum. High-purity distillation:
c = Ttop (temperature at top of column)
Water (L) – acetic acid (H), max 100 ppm acetic acid:
• 100°C: 100% water
• 100.01°C: 100 ppm
• 99.99°C: infeasible
Unconstrained degrees of freedom:
Candidate controlled variables
• We are looking for some “magic” variables c to control... What properties do they have?
• Intuitively 1: Should have a small optimal range Δcopt
  – since we are going to keep them constant!
• Intuitively 2: Should have a small “implementation error” n
  – span(c) = optimal variation + implementation error
• Intuitively 3: Should be sensitive to the inputs u (remaining unconstrained degrees of freedom), that is, the gain G0 from u to c should be large
  – G0: (unscaled) gain from u to c
  – large gain gives a flat optimum in c
  – Charlie Moore (1980s): Maximize minimum singular value when selecting temperature locations for distillation
• Will show shortly: Can combine everything into the “maximum gain rule”:
  – Maximize scaled gain G = G0 / span(c)
Unconstrained degrees of freedom:
Justification for “intuitively 2 and 3”
[Block diagram: optimizer computes cs = copt from J; a feedback controller adjusts u to keep cm = cs, where cm = c + n; plant maps (u, d) → c]
Want small n.
Want the slope (= gain G0 from u to c) large – corresponds to a flat optimum in c.
Mathematic local analysis
(Proof of “maximum gain rule”)
[Figure: cost J vs. input u, with minimum at uopt]
Minimum singular value of scaled gain
Maximum gain rule (Skogestad and Postlethwaite, 1996):
Look for variables that maximize the scaled gain σ(G), the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c.
σ(G) is called the Morari Resiliency Index (MRI) by Luyben.
Detailed proof: I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, “Optimal selection of controlled variables”, Ind. Eng. Chem. Res., 42 (14), 3273–3284 (2003).
Unconstrained degrees of freedom:
Maximum gain rule for scalar system
Juu: Hessian for the effect of u on the cost
Problem: Juu can be difficult to obtain
Fortunately, for a scalar system Juu does not matter: it scales the loss of all candidates equally, so the ranking is unchanged.
Maximum gain rule in words
Select controlled variables c for which
the gain G0 (=“controllable range”) is large compared to
its span (=sum of optimal variation and control error)
B. “Brute-force” procedure for selecting
(primary) controlled variables (Skogestad, 2000)
• Step 1: Determine DOFs for optimization
• Step 2: Definition of optimal operation J (cost and constraints)
• Step 3: Identification of important disturbances
• Step 4: Optimization (nominally and with disturbances)
• Step 5: Identification of candidate controlled variables (use active constraint control)
• Step 6: Evaluation of loss with constant setpoints for alternative controlled variables
• Step 7: Evaluation and selection (including controllability analysis)

Case studies: Tennessee-Eastman, propane-propylene splitter, recycle process, heat-integrated distillation
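Steps 4–6 of this brute-force procedure can be sketched for a toy unconstrained problem. The cost, the two candidate variables, and the disturbance scenarios below are invented for illustration; for each candidate, we hold c at its nominally optimal value, back out the input the control layer would deliver, and record the loss:

```python
from scipy.optimize import brentq

def J(u, d):
    return (u - d) ** 2 + 0.1 * u ** 2        # toy unconstrained cost

def u_opt(d):                                 # analytic optimum: dJ/du = 0
    return d / 1.1

candidates = {                                # candidate variables c(u, d)
    "c = u":     lambda u, d: u,
    "c = u - d": lambda u, d: u - d,
}

d_nom = 1.0
worst_loss = {}
for name, c in candidates.items():
    cs = c(u_opt(d_nom), d_nom)               # nominally optimal setpoint
    losses = []
    for d in (0.7, 1.0, 1.3):                 # disturbance scenarios (Step 3/4)
        # Input the control layer would deliver to keep c = cs (Step 6)
        u = brentq(lambda u: c(u, d) - cs, -10.0, 10.0)
        losses.append(J(u, d) - J(u_opt(d), d))
    worst_loss[name] = max(losses)            # worst-case loss per candidate
```

Here the combination c = u – d gives a far smaller worst-case loss than holding u itself constant, which is exactly what the loss table is meant to reveal.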
Unconstrained degrees of freedom:
C. Optimal measurement combination (Alstad, 2002)
Basis: Want the optimal value of c independent of disturbances ⇒ Δcopt = 0 · Δd
• Find the optimal solution as a function of d: uopt(d), yopt(d)
• Linearize this relationship: Δyopt = F Δd
• F – sensitivity matrix
• Want: c = Hy with Δcopt = H Δyopt = HF Δd = 0
• To achieve this for all values of Δd: HF = 0
• Always possible if there are enough measurements (#y ≥ #u + #d)
• Optimal when we disregard the implementation error (n)
Alstad method, continued
• To handle implementation error: use “sensitive” measurements, with information about all independent variables (u and d)
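Finding H with HF = 0 is a left null-space calculation. In this sketch the sensitivity matrix F is invented (3 measurements, 1 disturbance, so #y ≥ #u + #d holds):

```python
import numpy as np
from scipy.linalg import null_space

# Optimal sensitivity matrix F = d(y_opt)/d(d): 3 measurements, 1 disturbance
F = np.array([[1.0],          # dy1_opt / dd
              [2.0],          # dy2_opt / dd
              [0.5]])         # dy3_opt / dd

H = null_space(F.T).T          # rows of H span the left null space of F
assert np.allclose(H @ F, 0.0) # HF = 0: c = H y is (locally) insensitive to d
```

Any row of H then defines a combination c = H y whose optimal value does not move with (small) disturbances, which is the property the slide asks for.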
Summary unconstrained degrees of freedom:
Looking for “magic” variables to keep at constant setpoints.
How can we find them systematically?
Candidates
A. Start with: Maximum gain (minimum singular value) rule: maximize σ(G)
B. Then: “Brute force” evaluation of most promising alternatives: evaluate the loss when the candidate variables c are kept constant. In particular, there may be problems with feasibility.
C. More general candidates: Find the optimal linear combination c = Hy (matrix H)
Toy Example
EXAMPLE: Recycle plant (Luyben, Yu, etc.)
[Flowsheet: recycle plant with 5 manipulated valves]
Given feedrate F0 and column pressure:
• Dynamic DOFs: Nm = 5
• Column levels: N0y = 2
• Steady-state DOFs: N0 = 5 – 2 = 3
Recycle plant: Optimal operation
[Figure: optimal operation of the recycle plant]
1 remaining unconstrained degree of freedom
Control of recycle plant:
Conventional structure (“Two-point”: xD)
[Flowsheet: level controllers (LC) and composition controllers (XC) on xD and xB]
Control active constraints (Mr = max and xB = 0.015) + xD
Luyben rule (to avoid snowballing):
“Fix a stream in the recycle loop” (F or D)

Luyben rule: D constant
[Flowsheet: same plant with D fixed; level controllers (LC) and one composition controller (XC)]
A. Maximum gain rule: steady-state gain
• Conventional structure: looks good
• Luyben rule: not promising economically
How did we find the gains in the table?
1. Find the nominal optimum.
2. Find the (unscaled) gain G0 from the input to the candidate outputs: Δc = G0 Δu.
   • In this case there is only a single unconstrained input (DOF); choose u = L.
   • Obtain the gain G0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints are constant (bottom composition fixed in this case). IMPORTANT!
3. Find the span for each candidate variable:
   • For each disturbance di, make a typical change and reoptimize to obtain the optimal ranges Δcopt(di).
   • For each candidate output, obtain (estimate) the control error (noise) n.
   • span(c) = Σi |Δcopt(di)| + n
4. Obtain the scaled gain, G = G0 / span(c).
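Steps 2–4 above can be sketched numerically. The measurement model c(u, d), the reoptimized input u_opt(d), the disturbance range, and the noise level n below are all invented for illustration:

```python
# Scaled gain G = G0 / span(c) for one candidate variable (toy model)
def c_meas(u, d):                 # candidate variable as a function of (u, d)
    return 2.0 * u + 0.2 * d      # e.g. a tray temperature

def u_opt(d):                     # reoptimized input (would come from a model)
    return 0.9 * d

d_nom, du = 1.0, 1e-4

# Step 2: unscaled gain G0 = dc/du by finite difference at the nominal optimum
G0 = (c_meas(u_opt(d_nom) + du, d_nom) - c_meas(u_opt(d_nom), d_nom)) / du

# Step 3: span(c) = sum_i |delta c_opt(d_i)| + n, from reoptimized disturbances
c_nom = c_meas(u_opt(d_nom), d_nom)
dc_opt = [abs(c_meas(u_opt(d), d) - c_nom) for d in (0.8, 1.2)]
n = 0.1                           # assumed measurement/control error
span = sum(dc_opt) + n

# Step 4: scaled gain, used to rank candidate variables (larger is better)
G = G0 / span
```

Repeating this for each candidate output and ranking by G reproduces the table on this slide.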
B. “Brute force” loss evaluation: disturbance in F0
[Table: loss with nominally optimal setpoints for Mr, xB and c – Luyben rule vs. conventional structure]
B. “Brute force” loss evaluation: implementation error
[Table: loss with nominally optimal setpoints for Mr, xB and c – Luyben rule]
C. Optimal measurement combination
• 1 unconstrained variable (#c = 1)
• 1 (important) disturbance: F0 (#d = 1)
• “Optimal” combination requires 2 “measurements” (#y = #u + #d = 2)
  – For example, c = h1 L + h2 F
• BUT: Not much to be gained compared to control of a single variable (e.g. L/F or xD)
Conclusion: Control of recycle plant
• Active constraint: Mr = Mrmax
• Active constraint: xB = xBmin
• Self-optimizing: L/F constant (easier than “two-point” control)
Assumption: Minimize energy (V).
Recycle systems: Do not recommend Luyben’s rule of fixing a flow in each recycle loop (even to avoid “snowballing”).
Summary: Self-optimizing Control
Self-optimizing control is when acceptable operation can be achieved using constant setpoints (cs) for the controlled variables c (without the need to re-optimize when disturbances occur).
c = cs
Summary:
Procedure selection controlled variables
1. Define economics and operational constraints
2. Identify degrees of freedom and important disturbances
3. Optimize for various disturbances
4. Identify (and control) active constraints (off-line calculations)
   • May vary depending on operating region. For each operating region do step 5:
5. Identify “self-optimizing” controlled variables for the remaining degrees of freedom
   1. (A) Identify promising (single) measurements from the “maximum gain rule” (gain = minimum singular value)
      • (C) Possibly consider measurement combinations if no promising single measurements
   2. (B) “Brute force” evaluation of loss for promising alternatives
      • Necessary because the “maximum gain rule” is local.
      • In particular: look out for feasibility problems.
   3. Controllability evaluation for promising alternatives
Summary ”self-optimizing” control
• Operation of most real systems: constant setpoint policy (c = cs)
  – Central bank
  – Business systems: KPIs
  – Biological systems
  – Chemical processes
• Goal: Find controlled variables c such that the constant setpoint policy gives acceptable operation in spite of uncertainty
  ⇒ Self-optimizing control
• Method A: Maximize σ(G)
• Method B: Evaluate loss L = J – Jopt
• Method C: Optimal linear measurement combination: c = Hy where HF = 0
Outline
• Control structure design (plantwide control)
• A procedure for control structure design
I Top Down
• Step 1: Degrees of freedom
• Step 2: Operational objectives (optimal operation)
• Step 3: What to control? (self-optimizing control)
• Step 4: Where set production rate?
II Bottom Up
• Step 5: Regulatory control: What more to control?
• Step 6: Supervisory control
• Step 7: Real-time optimization
• Case studies