ADC: The analog signal is converted into digital form by an A/D conversion system. The conversion
system usually consists of an A/D converter preceded by a sample-and-hold device.
DAC: The digital signal coming from the digital device is converted back into an analog signal by a D/A converter.
EI6801 COMPUTER CONTROL OF PROCESSES Dept. of EIE and ICE
The state variable model of a SISO discrete system consists of a set of first-order difference equations:
i. relating the state variables x1(k), x2(k), ..., xn(k) of the discrete-time system to the input u(k), and
ii. relating the output y(k) algebraically to the state variables and the input.
Thus, the dynamics of a linear time-invariant system are described by the following two equations:

x(k+1) = F x(k) + g u(k)     ... (1)
y(k) = c x(k) + d u(k)       ... (2)

Equation (1) is called the 'state equation' and equation (2) the 'output equation'. Together they give the 'state variable model' of the system.
In equations (1) and (2):
x(k) is the state vector,
u(k) is the system input,
y(k) is the output,
'F' is a constant n x n matrix, 'g' a constant column matrix,
'c' is a constant row matrix, and
'd' is a scalar coupling between input and output.
g = [g1; g2; ...; gn] is an n x 1 column vector,
c = [c1  c2  ...  cn] is a 1 x n row vector, and
d is a scalar.
The solution of the state equation (free response) is

x(k) = Z^-1{ (zI - F)^-1 z } x(0)

Also, F^k = phi(k) = Z^-1{ (zI - F)^-1 z }.

For the example below, x(0) = [1; 0] and y(k) = x2(k).
Given
F = [0  1; -0.21  -1]

zI - F = [z  0; 0  z] - [0  1; -0.21  -1] = [z  -1; 0.21  z+1]

|zI - F| = z(z + 1) + 0.21 = z^2 + z + 0.21 = (z + 0.3)(z + 0.7)

(zI - F)^-1 = (1 / (z^2 + z + 0.21)) [z+1  1; -0.21  z]

Expanding (zI - F)^-1 z in partial fractions,

(zI - F)^-1 z =
[  1.75 z/(z+0.3) - 0.75 z/(z+0.7)        2.5 z/(z+0.3) - 2.5 z/(z+0.7)
  -0.525 z/(z+0.3) + 0.525 z/(z+0.7)     -0.75 z/(z+0.3) + 1.75 z/(z+0.7)  ]
By the similarity transformation method,

F^K = P Λ^K P^-1,   Λ^K = diag(λ1^K, λ2^K, ..., λn^K)

where 'P' is the transformation matrix that transforms 'F' into diagonal form.
For the previous example, F^K can be found as follows using the similarity transformation method.
Given
F = [0  1; -0.21  -1]
λI - F = [λ  0; 0  λ] - [0  1; -0.21  -1] = [λ  -1; 0.21  λ+1]

|λI - F| = λ^2 + λ + 0.21 = 0

λ1 = -0.3,  λ2 = -0.7
As ‘F’ is in the comparison form, the matrix ‘P’ can be written as
0 .3 0
P 1 FP
0 0.7
1 1 1 1
P
1 2 0.3 0.7
1.75 2. 5
P 1
0.75 2.5
Λ^K = [(-0.3)^K  0; 0  (-0.7)^K]

F^K = P Λ^K P^-1

phi(K) = F^K =
[  1.75(-0.3)^K - 0.75(-0.7)^K        2.5(-0.3)^K - 2.5(-0.7)^K
  -0.525(-0.3)^K + 0.525(-0.7)^K     -0.75(-0.3)^K + 1.75(-0.7)^K  ]
For the previous example, F^K can be found as follows using the Cayley-Hamilton theorem.
Given
F = [0  1; -0.21  -1]
The eigenvalues are -0.3 and -0.7. Since F is of second order, the polynomial will be of the form g(λ) = α0 + α1 λ.

(-0.3)^K = α0 - 0.3 α1
(-0.7)^K = α0 - 0.7 α1

Solving these two equations,

α0 = 1.75(-0.3)^K - 0.75(-0.7)^K
α1 = 2.5(-0.3)^K - 2.5(-0.7)^K

phi(K) = F^K = α0 I + α1 F
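All of the methods above must give the same F^K. A small numerical sketch (pure Python) checking the closed-form phi(K) obtained here against direct matrix powers of F:

```python
# Numerical check (sketch) of the closed form phi(k) = F^k derived above,
# against direct matrix powers, for F = [[0, 1], [-0.21, -1]].

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def phi(k):
    """Closed form from the similarity-transformation / Cayley-Hamilton result."""
    p, q = (-0.3) ** k, (-0.7) ** k
    return [[1.75 * p - 0.75 * q, 2.5 * p - 2.5 * q],
            [-0.525 * p + 0.525 * q, -0.75 * p + 1.75 * q]]

F = [[0.0, 1.0], [-0.21, -1.0]]
Fk = [[1.0, 0.0], [0.0, 1.0]]      # F^0 = I
for k in range(6):
    closed = phi(k)
    assert all(abs(Fk[i][j] - closed[i][j]) < 1e-12
               for i in range(2) for j in range(2))
    Fk = matmul(Fk, F)             # F^(k+1)
```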
Therefore, the state diagram, as shown in the figure, consists of a single branch with gain s^-1.
The necessary and sufficient condition for the system to be completely observable is that the n x n observability matrix

V = [c; cF; cF^2; ...; cF^(n-1)]

has rank equal to n, i.e., ρ(V) = n.
CONTINUOUS STATE SPACE SYSTEM TO DISCRETE STATE SPACE SYSTEM
The continuous state equation of the plant is

x'(t) = A x(t) + b u(t)
y(t) = c x(t)

The discrete state equation of the plant is

x(k+1) = F x(k) + g u(k)
y(k) = c x(k)

The transformation is carried out as follows:
1. Find e^(At) = L^-1{ (sI - A)^-1 }
2. Calculate F = e^(AT)
3. Calculate g = integral from 0 to T of e^(Aτ) b dτ
Example:
A = [0  1; 0  -1],  b = [0; 1],  c = [1  0],  T = 1 sec

For this system,

e^(At) = [1  1 - e^-t; 0  e^-t]

F = e^(AT) = [1  1 - e^-T; 0  e^-T]

g = integral from 0 to T of e^(Aτ) b dτ
  = integral from 0 to T of [1 - e^-τ; e^-τ] dτ
  = [T - 1 + e^-T; 1 - e^-T]
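The closed forms above can be cross-checked numerically; the midpoint-rule integration below is only an independent sanity check of g, not part of the design procedure.

```python
import math

# Sketch: verify the ZOH discretization for A = [[0,1],[0,-1]], b = [0,1], T = 1 s.
# Closed forms derived above: F = [[1, 1-e^-T], [0, e^-T]],
# g = [T - 1 + e^-T, 1 - e^-T].

T = 1.0
F = [[1.0, 1.0 - math.exp(-T)], [0.0, math.exp(-T)]]
g = [T - 1.0 + math.exp(-T), 1.0 - math.exp(-T)]

# Independent check of g by numerically integrating e^{A tau} b over [0, T].
# For this A, e^{A tau} b = [1 - e^-tau, e^-tau].
N = 100000
h = T / N
g_num = [0.0, 0.0]
for i in range(N):
    tau = (i + 0.5) * h          # midpoint rule
    g_num[0] += (1.0 - math.exp(-tau)) * h
    g_num[1] += math.exp(-tau) * h

assert abs(g_num[0] - g[0]) < 1e-6
assert abs(g_num[1] - g[1]) < 1e-6
```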
LOSS OF CONTROLLABILITY AND OBSERVABILITY DUE TO SAMPLING
The transfer function is

Y(s)/X(s) = ω^2 / (s^2 + ω^2)

For this system, sampled with period T, the controllability matrix U = [g  Fg] and the observability matrix V = [c; cF] both lose rank whenever sin ωT = 0, i.e., whenever T = nπ/ω.
In general, controllability and observability are preserved under sampling only if, whenever Re λi = Re λj,

Im(λi - λj) ≠ 2nπ/T,   n = 1, 2, ...
STABILITY TESTS OF DISCRETE-DATA SYSTEM
Jury's stability test
Consider the characteristic polynomial

F(z) = a_n z^n + a_(n-1) z^(n-1) + a_(n-2) z^(n-2) + ... + a_2 z^2 + a_1 z + a_0,   a_n > 0

Necessary conditions:
F(1) > 0 and (-1)^n F(-1) > 0

Sufficient conditions (the Jury table has a total of 2n - 3 rows): the k-th element of row 3, and of each subsequent odd row, is given by the determinants

b_k = | a_0    a_(n-k) |        c_k = | b_0      b_(n-1-k) |
      | a_n    a_k     |              | b_(n-1)  b_k       |

and the (n - 1) sufficient conditions are

|a_0| < a_n
|b_0| > |b_(n-1)|
|c_0| > |c_(n-2)|
...
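A sketch of the test as an algorithm (coefficients lowest order first; the example polynomials are illustrative, with roots chosen so the answer is known):

```python
# Sketch of Jury's stability test for F(z) = a_n z^n + ... + a_1 z + a_0.
# Coefficients are given lowest order first: [a_0, a_1, ..., a_n], a_n > 0.

def jury_stable(a):
    n = len(a) - 1
    F1 = sum(a)                                          # F(1)
    Fm1 = sum(c * (-1) ** i for i, c in enumerate(a))    # F(-1)
    if not (F1 > 0 and ((-1) ** n) * Fm1 > 0):
        return False
    if not abs(a[0]) < a[-1]:                            # |a0| < a_n
        return False
    row = a[:]
    while len(row) > 3:                                  # build b, c, ... rows
        m = len(row) - 1
        nxt = [row[0] * row[k] - row[m] * row[m - k] for k in range(m)]
        if not abs(nxt[0]) > abs(nxt[-1]):               # |b0| > |b_(n-1)|, ...
            return False
        row = nxt
    return True

# F(z) = z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3 (inside the unit circle).
assert jury_stable([0.06, -0.5, 1.0]) is True
# F(z) = z^2 - 3 z + 2 has roots 1 and 2 (on/outside the unit circle).
assert jury_stable([2.0, -3.0, 1.0]) is False
```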
STABILITY ANALYSIS USING BILINEAR TRANSFORMATION
The bilinear transformation

z = (1 + r) / (1 - r)

maps the interior of the unit circle in the z-plane into the left half of the r-plane.
STATE OBSERVER
Determination of the observer gain matrix (m)
Method 1
1. Check the observability (Qb).
2. Determine the desired characteristic polynomial:

(λ - μ1)(λ - μ2) ... (λ - μn) = λ^n + α1 λ^(n-1) + ... + α(n-1) λ + αn

3. Determine the original characteristic polynomial:

|λI - F| = λ^n + a1 λ^(n-1) + a2 λ^(n-2) + ... + a(n-1) λ + an

4. Determine the transformation matrix Po = (W Qb^T)^-1, where

Qb = [c; cF; cF^2; ...; cF^(n-1)]

W = [ a(n-1)  a(n-2)  ...  a1  1
      a(n-2)  a(n-3)  ...  1   0
      ...     ...     ...  ... ...
      a1      1       ...  0   0
      1       0       ...  0   0 ]

5. Determine the observer gain matrix (m):

m = [m1; m2; ...; mn] = Po [αn - an; α(n-1) - a(n-1); ...; α2 - a2; α1 - a1]
For state feedback design, the transformation matrix is

Pc = [P1; P1 F; ...; P1 F^(n-1)],   where P1 = [0  0  ...  1] Qc^-1

5. Determine the state feedback gain matrix

K = [αn - an   α(n-1) - a(n-1)   ...   α2 - a2   α1 - a1] Pc
Note:
If the given system state model is in controllable phase variable form then Pc=I (unit matrix)
Example
Design a state feedback controller which will give closed-loop poles at -2, -1±j.
x(k+1) = [0  1  0; 0  0  1; 0  -2  -3] x(k) + [0; 0; 10] u(k)
Solution:
Qc = [g  Fg  F^2 g] = [0  0  10; 0  10  -30; 10  -30  70],   |Qc| = -1000 ≠ 0

so the system is completely controllable.

Desired characteristic polynomial:

(λ + 2)(λ + 1 - j)(λ + 1 + j) = λ^3 + 4λ^2 + 6λ + 4

Here, α1 = 4, α2 = 6, α3 = 4.

Original characteristic polynomial:

|λI - F| = | λ  -1   0
             0   λ  -1
             0   2   λ+3 | = λ^3 + 3λ^2 + 2λ

Here, a1 = 3, a2 = 2, a3 = 0.

P1 = [0  0  1] Qc^-1 = [0.1  0  0]

Pc = [P1; P1 F; P1 F^2] = [0.1  0  0; 0  0.1  0; 0  0  0.1]

The state feedback gain matrix is

K = [α3 - a3   α2 - a2   α1 - a1] Pc
  = [4 - 0   6 - 2   4 - 3] Pc
  = [0.4  0.4  0.1]
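A quick numerical check of the result: because F and g are in controllable companion form, F - gK is again a companion matrix, so its last row directly exhibits the closed-loop characteristic polynomial.

```python
# Sketch: check that K = [0.4, 0.4, 0.1] gives the desired characteristic
# polynomial. With F, g in companion (controllable) form, F - g K is again a
# companion matrix, so its last row reads off the closed-loop polynomial.

F = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, -2.0, -3.0]]
g = [0.0, 0.0, 10.0]
K = [0.4, 0.4, 0.1]

Fcl = [[F[i][j] - g[i] * K[j] for j in range(3)] for i in range(3)]

# Desired polynomial lambda^3 + 4 lambda^2 + 6 lambda + 4 corresponds to a
# companion last row [-4, -6, -4].
assert Fcl[2] == [-4.0, -6.0, -4.0]
```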
BASICS
System identification is a methodology for building mathematical models of
dynamic systems using measurements of the system's input and output signals.
• White box
One could build a so-called white-box model based on first principles, e.g. a model for a
physical process from the Newton equations, but in many cases such models will be overly
complex and possibly even impossible to obtain in reasonable time due to the complex nature
of many systems and processes.
• Grey-box model: although the peculiarities of what is going on inside the system are not entirely known, a model is constructed based on both insight into the system and experimental data.
• Black-box model: no prior model is available. Most system identification algorithms are of this type.
Fit a straight line f(x) = ax + b to N data points (x_i, y_i). The error is

Error = Σ (i=1 to N) [y_i - f(x_i)]^2 = Σ (i=1 to N) [y_i - (a x_i + b)]^2

• The 'best' line has minimum error between the line and the data points.
• This is called the least squares approach, since the square of the error is minimized.

Minimize   Error = Σ (i=1 to N) [y_i - (a x_i + b)]^2
Take the derivative of the error with respect to a and b, set each to zero
∂Error/∂a = ∂/∂a Σ (i=1 to N) [y_i - (a x_i + b)]^2 = 0
∂Error/∂b = ∂/∂b Σ (i=1 to N) [y_i - (a x_i + b)]^2 = 0

∂Error/∂a = Σ (i=1 to N) -2 x_i [y_i - (a x_i + b)] = 0
∂Error/∂b = Σ (i=1 to N) -2 [y_i - (a x_i + b)] = 0

Solve for the a and b so that the previous two equations both equal zero:

a Σ x_i^2 + b Σ x_i = Σ x_i y_i
a Σ x_i + b N = Σ y_i
In matrix form:

[ Σ x_i^2   Σ x_i ] [ a ]   [ Σ x_i y_i ]
[ Σ x_i     N     ] [ b ] = [ Σ y_i     ]

Solving,

a = [ N Σ x_i y_i - Σ x_i Σ y_i ] / [ N Σ x_i^2 - (Σ x_i)^2 ]

b = [ Σ y_i Σ x_i^2 - Σ x_i Σ x_i y_i ] / [ N Σ x_i^2 - (Σ x_i)^2 ]
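A minimal sketch of these closed-form formulas, applied to noise-free data from an assumed line y = 2x + 1 (so the fit should recover a = 2, b = 1 exactly):

```python
# Sketch: the closed-form slope/intercept formulas derived above, applied to
# noise-free data from y = 2x + 1. The data are illustrative.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

N = len(xs)
Sx = sum(xs)                           # sum of x_i
Sy = sum(ys)                           # sum of y_i
Sxx = sum(x * x for x in xs)           # sum of x_i^2
Sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i y_i

den = N * Sxx - Sx ** 2
a = (N * Sxy - Sx * Sy) / den
b = (Sy * Sxx - Sx * Sxy) / den

assert abs(a - 2.0) < 1e-12
assert abs(b - 1.0) < 1e-12
```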
FOPDT system
The process is modeled as a first-order-plus-dead-time (FOPDT) transfer function between the unit step input u(t) and the output y(t):

G(s) = K e^(-θs) / (Ts + 1)
The gain K is given by the final value (after convergence) as shown in Figure 4. The maxima and minima of the step response occur at times

t_n = nπ / (ω0 sqrt(1 - ξ^2)),   n = 1, 2, ...   (3)

and the peak overshoot is

M = exp( -ξπ / sqrt(1 - ξ^2) )   (5)

Note: the first maximum occurs at n = 1, the first minimum at n = 2, the second maximum at n = 3, and so on.
From the step response shown in Figure 4, the first peak time t1 and first peak overshoot M can be determined. From (5),

ξ = |ln M| / [ π^2 + (ln M)^2 ]^(1/2)   (6)

ω0 = π / (t1 sqrt(1 - ξ^2))   (since n = 1)   (7)

From (6) and (7) the parameters ξ and ω0 can be determined.
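A sketch of eqs (6) and (7) in code (natural logarithms). The "measured" M and t1 are generated here from assumed true values ξ = 0.5 and ω0 = 2 rad/s, so the estimates should return exactly those values.

```python
import math

# Sketch of eqs (6)-(7): recover the damping ratio and natural frequency from
# the first peak. M and t1 are generated from assumed xi = 0.5, w0 = 2 rad/s.

xi_true, w0_true = 0.5, 2.0
wd = w0_true * math.sqrt(1.0 - xi_true ** 2)                       # damped freq.
t1 = math.pi / wd                                                  # first peak time
M = math.exp(-xi_true * math.pi / math.sqrt(1.0 - xi_true ** 2))   # peak overshoot

# Eq (6): damping ratio from the peak overshoot.
lnM = math.log(M)
xi = -lnM / math.sqrt(math.pi ** 2 + lnM ** 2)
# Eq (7): natural frequency from the first peak time (n = 1).
w0 = math.pi / (t1 * math.sqrt(1.0 - xi ** 2))

assert abs(xi - xi_true) < 1e-12
assert abs(w0 - w0_true) < 1e-12
```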
FREQUENCY ANALYSIS
For a frequency analysis, it is convenient to use the transfer function model

Y(s) = G(s) U(s)   (1)

where
Y(s) = Laplace transform of the output signal y(t)
U(s) = Laplace transform of the input signal u(t)
G(s) = transfer function of the system

Apply the following sinusoidal input u(t) to the system, as shown in Figure 2:

u(t) = a sin(ωt)

where
a = amplitude of the sinusoidal input u(t)
ω = frequency of the sinusoidal input u(t) in rad/sec
If the system G(s) is asymptotically stable, then the output y(t) is also a sinusoidal signal.
G(s) = integral from 0 to ∞ of h(τ) e^(-sτ) dτ   (5)

Since

sin(ωt) = (e^(iωt) - e^(-iωt)) / 2i   (6)

from equations (1), (4), (5) and (6),

y(t) = integral from 0 to t of h(τ) u(t - τ) dτ
     = integral from 0 to t of h(τ) a sin(ω(t - τ)) dτ
     = (a/2i) integral from 0 to t of h(τ) [ e^(iω(t-τ)) - e^(-iω(t-τ)) ] dτ
     = (a/2i) [ e^(iωt) integral from 0 to t of h(τ) e^(-iωτ) dτ
              - e^(-iωt) integral from 0 to t of h(τ) e^(iωτ) dτ ]

As t → ∞,

G(iω) = integral from 0 to ∞ of h(τ) e^(-iωτ) dτ
G(-iω) = integral from 0 to ∞ of h(τ) e^(iωτ) dτ

so that

y(t) = (a/2i) [ e^(iωt) G(iω) - e^(-iωt) G(-iω) ]

Since we can represent G(iω) = r e^(iφ), where

r = magnitude of G(iω) = |G(iω)|
φ = argument of G(iω), i.e., e^(iφ) = e^(i arg G(iω))

and |G(-iω)| = |G(iω)|,

y(t) = (a/2i) |G(iω)| [ e^(iωt) e^(i arg G(iω)) - e^(-iωt) e^(-i arg G(iω)) ]
     = a |G(iω)| sin( ωt + arg G(iω) )
CORRELATION ANALYSIS
The form of model used in correlation analysis is
y(t) = Σ (K=0 to ∞) h(K) u(t - K) + v(t)   (1)
where
y(t) = output signal
u(t) = input signal
h(K) = weighting sequence
v(t) = disturbance term
Assume that the input u(t) is a stationary stochastic process (white noise) which is independent of the disturbance v(t). Then the following relation holds for the cross covariance function:

r_yu(τ) = Σ (K=0 to ∞) h(K) r_u(τ - K)   (2)

where r_yu(τ) is the cross covariance function between the output y(t) and the input u(t).
The covariance functions are estimated from N data points as

r̂_yu(τ) = (1/N) Σ_t y(t) u(t - τ),   τ = 0, ±1, ±2, ...   (3)

r̂_u(τ) = (1/N) Σ_t u(t) u(t - τ),   with r̂_u(-τ) = r̂_u(τ),   τ = 0, 1, 2, ...

where each sum runs over those t in 1, ..., N for which both arguments are available.
Then an estimate ĥ(K) of the weighting function h(K) can be determined by solving the following equation:

r̂_yu(τ) = Σ (K=0 to ∞) ĥ(K) r̂_u(τ - K)   (4)

Solving this infinite-dimensional equation is very difficult. The problem can be simplified by using white noise (mean value = 0, variance σ^2 = 1) as the input.
For white noise,

r̂_u(τ) = 0   for τ = 1, 2, ...

so

ĥ(K) = r̂_yu(K) / r̂_u(0),   K = 0, 1, ...   (5)
ĥ(0) = r̂_yu(0) / r̂_u(0)
ĥ(1) = r̂_yu(1) / r̂_u(0)
...
ĥ(τ) = r̂_yu(τ) / r̂_u(0)

From the above equations the weighting function h(K), i.e., h(0), h(1), ..., h(τ), can be easily estimated.
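A simulation sketch of eq. (5): a zero-mean, unit-variance white-noise input is applied to a system with an assumed (illustrative) weighting sequence, and ĥ(K) = r̂_yu(K)/r̂_u(0) is formed from the records. With finite N the estimates are only approximate.

```python
import random

# Sketch of eq (5): with a zero-mean white-noise input, the weighting sequence
# can be estimated as h_hat(K) = r_yu(K) / r_u(0). The true h below is an
# illustrative choice; with finite N the estimate is only approximate.

random.seed(1)
N = 20000
h_true = [1.0, 0.5, 0.25]

u = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [sum(h_true[k] * u[t - k] for k in range(len(h_true)) if t - k >= 0)
     for t in range(N)]

r_u0 = sum(ui * ui for ui in u) / N                      # r_u(0)

def r_yu(tau):                                           # cross covariance
    return sum(y[t] * u[t - tau] for t in range(tau, N)) / N

h_hat = [r_yu(K) / r_u0 for K in range(3)]
assert all(abs(h_hat[K] - h_true[K]) < 0.05 for K in range(3))
```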
The model used is the linear regression

y(t) = φ^T(t) θ   (1)

where
y(t) = measured quantity,
φ(t) = n-vector of known quantities, with
φ^T(t) = [ -y(t-1)  -y(t-2)  ...  -y(t-n_a)  u(t-1)  ...  u(t-n_b) ]
θ = n-vector of unknown parameters.

The following two examples show how a model can be represented using the linear regression model form.
Example 1:
Consider the following first-order linear discrete model:

y(t) = -a y(t-1) + b u(t-1)   (2)

The model represented in eq. (2) can be written in linear regression form as

y(t) = [ -y(t-1)  u(t-1) ] [a; b] = φ^T(t) θ

where

φ(t) = [ -y(t-1); u(t-1) ],   θ = [a; b]

The elements of φ(t) are often called regression variables or regressors, while y(t) is called the regressed variable. The vector θ is called the parameter vector. The variable t takes integer values.
Example 2:
Consider a truncated weighting function model:

y(t) = h(0) u(t) + h(1) u(t-1) + ... + h(M-1) u(t-M+1)

The input signals u(t), u(t-1), ..., u(t-M+1) are recorded during the experiment, and hence form the regression variables. Stacking the measurements,

y(1) = φ^T(1) θ
y(2) = φ^T(2) θ
...
y(N) = φ^T(N) θ

or, in matrix form,

Y = Φ θ   (4)
where

Y = [Y(1); ...; Y(N)], an (N x 1) vector   (5)

Φ = [φ^T(1); ...; φ^T(N)], an (N x n) matrix   (6)
The equation error is

ε(t) = y(t) - φ^T(t) θ̂   (7)

In statistical literature the equation errors are often called residuals. The least squares estimate of θ is defined as the vector θ̂ that minimizes the loss function

V(θ) = (1/2) Σ (t=1 to N) ε^2(t) = (1/2) Σ (t=1 to N) [ y(t) - φ^T(t) θ ]^2   (8)
Note:
Other forms of the loss function are

V(θ) = (1/2) ε^T ε   (9)

V(θ) = (1/2) ||ε||^2   (10)

where ||·|| denotes the Euclidean vector norm.
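A sketch of the batch least-squares estimate θ̂ = (Φ^T Φ)^-1 Φ^T Y for the first-order model of Example 1. The data are generated noise-free from assumed parameters a = 0.5, b = 2.0, so the loss V is zero at the estimate and the true parameters are recovered.

```python
# Sketch: batch least squares theta_hat = (Phi^T Phi)^{-1} Phi^T Y for the
# model y(t) = -a y(t-1) + b u(t-1). Data generated noise-free from assumed
# a = 0.5, b = 2.0, so the true parameters should be recovered.

a_true, b_true = 0.5, 2.0
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 1.0, -2.0]
y = [0.0]
for t in range(1, len(u)):
    y.append(-a_true * y[t - 1] + b_true * u[t - 1])

# Regressors phi(t) = [-y(t-1), u(t-1)]^T for t = 1..N
Phi = [[-y[t - 1], u[t - 1]] for t in range(1, len(u))]
Y = [y[t] for t in range(1, len(u))]

# Normal equations (Phi^T Phi) theta = Phi^T Y, solved for the 2x2 case.
s11 = sum(p[0] * p[0] for p in Phi)
s12 = sum(p[0] * p[1] for p in Phi)
s22 = sum(p[1] * p[1] for p in Phi)
r1 = sum(p[0] * yi for p, yi in zip(Phi, Y))
r2 = sum(p[1] * yi for p, yi in zip(Phi, Y))
det = s11 * s22 - s12 * s12
a_hat = (s22 * r1 - s12 * r2) / det
b_hat = (s11 * r2 - s12 * r1) / det

assert abs(a_hat - a_true) < 1e-9
assert abs(b_hat - b_true) < 1e-9
```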
In many cases φ(t) is known as a function of t. Then (13) may be easier to implement than (11), since the matrix of large dimension is not needed in eq. (13). The form of eq. (13) is also the starting point for deriving several recursive estimates.
The counterparts of on-line methods are the so-called off-line or batch methods, in which all the recorded data are used simultaneously to find the parameter estimates.
Recursive identification methods have the following general features:
• They are a central part of adaptive systems (used, for example, for control and signal processing) where the action is based on the most recent model.
• Their requirement on primary memory is much smaller than that of off-line identification methods, which must store the entire data record.
• They can be easily modified into real-time algorithms, aimed at tracking time-varying parameters.
• They can be the first step in a fault detection algorithm, which is used to find out whether the system has changed significantly.
Most adaptive systems, for example the adaptive control system shown in Fig. 1, are based (explicitly or implicitly) on recursive identification.
A current estimated model of the process is then available at all times. This time-varying model is used to determine the parameters of the (also time-varying) regulator (also called controller).
In this way the regulator depends on the previous behavior of the process through the information flow: process --- model --- regulator.
If an appropriate principle is used to design the regulator, then the regulator should adapt to the changing characteristics of the process.
The various identification methods are:
• Recursive least squares method
• Real-time identification method
• Recursive instrumental variable method
• Recursive prediction error method
The argument t has been used to stress the dependence of θ̂ on time. Eq. (3) can be computed in a recursive fashion.
Introduce the notation

P(t) = [ Σ (s=1 to t) φ(s) φ^T(s) ]^-1   (4)

so that

P^-1(t) = Σ (s=1 to t) φ(s) φ^T(s)
        = Σ (s=1 to t-1) φ(s) φ^T(s) + φ(t) φ^T(t)
        = P^-1(t-1) + φ(t) φ^T(t)

i.e.,

P^-1(t-1) = P^-1(t) - φ(t) φ^T(t)   (5)

Then using eq. (3) and eq. (4), the estimate can be written as

θ̂(t) = P(t) Σ (s=1 to t) φ(s) y(s)   (6)

Note:
If we replace t by t-1 in eq. (6), we get

θ̂(t-1) = P(t-1) Σ (s=1 to t-1) φ(s) y(s)   (7)

so that

P^-1(t-1) θ̂(t-1) = Σ (s=1 to t-1) φ(s) y(s)

Eq. (6) can be written as

θ̂(t) = P(t) [ Σ (s=1 to t-1) φ(s) y(s) + φ(t) y(t) ]   (8)

By substituting eq. (7) in eq. (8), we get

θ̂(t) = P(t) [ P^-1(t-1) θ̂(t-1) + φ(t) y(t) ]

and by substituting eq. (5),

θ̂(t) = P(t) [ (P^-1(t) - φ(t) φ^T(t)) θ̂(t-1) + φ(t) y(t) ]
     = θ̂(t-1) - P(t) φ(t) φ^T(t) θ̂(t-1) + P(t) φ(t) y(t)
     = θ̂(t-1) + P(t) φ(t) [ y(t) - φ^T(t) θ̂(t-1) ]
Hence the term ε(t) = y(t) - φ^T(t) θ̂(t-1) should be interpreted as a prediction error: it is the difference between the measured output y(t) and the one-step-ahead prediction ŷ(t | t-1; θ̂(t-1)) = φ^T(t) θ̂(t-1) of y(t) made at time t-1, based on the model corresponding to the estimate θ̂(t-1). If ε(t) is small, the estimate θ̂(t-1) is 'good' and should not be modified very much. The vector K(t) in eq. (10b) should be interpreted as a weighting or gain factor showing how much the value of ε(t) will modify the different elements of the parameter vector.
To complete the algorithm, eq. (5) must be used to compute P(t), which is needed in eq. (10b). However, the use of eq. (5) requires a matrix inversion at each time step, which would be a time-consuming procedure. Using the matrix inversion lemma, eq. (5) can be rewritten in updating-equation form as

K(t) = P(t-1) φ(t) / [ 1 + φ^T(t) P(t-1) φ(t) ]   (12)
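A sketch of the resulting recursive least-squares update (eqs. (10) and (12)) for a two-parameter model; the data and true parameters below are illustrative.

```python
# Sketch of the recursive least-squares update derived above:
#   eps(t)   = y(t) - phi(t)^T theta(t-1)
#   K(t)     = P(t-1) phi(t) / (1 + phi(t)^T P(t-1) phi(t))
#   theta(t) = theta(t-1) + K(t) eps(t)
#   P(t)     = P(t-1) - K(t) phi(t)^T P(t-1)
# The 2-parameter case keeps the linear algebra explicit.

def rls_update(theta, P, phi, y):
    # phi, theta are length-2 lists; P is a 2x2 list of lists (symmetric).
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    eps = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta = [theta[0] + K[0] * eps, theta[1] + K[1] * eps]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    return theta, P

# Noise-free data from y(t) = 1.5 x1(t) - 0.7 x2(t); RLS should converge to
# theta = [1.5, -0.7]. A large initial P expresses low confidence in theta(0).
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]
data = [([1.0, 0.0], 1.5), ([0.0, 1.0], -0.7),
        ([1.0, 1.0], 0.8), ([2.0, -1.0], 3.7)]
for phi, y in data:
    theta, P = rls_update(theta, P, phi, y)

assert abs(theta[0] - 1.5) < 1e-3
assert abs(theta[1] + 0.7) < 1e-3
```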
-n α^n u(-n-1)   ↔   α z^-1 / (1 - α z^-1)^2,                       |z| < |α|
cos(nω0) u(n)    ↔   (1 - cos ω0 z^-1) / (1 - 2 cos ω0 z^-1 + z^-2),   |z| > 1
sin(nω0) u(n)    ↔   (sin ω0 z^-1) / (1 - 2 cos ω0 z^-1 + z^-2),       |z| > 1
Problems:
1. f(n) = a^n
By the definition of the z-transform, Z{f(n)} = Σ (n=0 to ∞) f(n) z^-n = F(z).

Z{f(n)} = Σ (n=0 to ∞) a^n z^-n = Σ (n=0 to ∞) (a/z)^n
        = 1 + a/z + (a/z)^2 + ...
        = 1 / (1 - a/z)            if |a/z| < 1, i.e., if |z| > |a|
        = z / (z - a)

Now, if in place of a we substitute 1 or -1, the transform becomes

Z{1} = z / (z - 1),         |z| > 1
Z{(-1)^n} = z / (z + 1),    |z| > 1
2. f(n) = n^2
Using the property Z{n f(n)} = -z d/dz Z{f(n)},

Z{n^2} = Z{n · n} = -z d/dz Z{n} = -z d/dz [ z / (z - 1)^2 ]
       = -z [ (z - 1)^2 - 2z(z - 1) ] / (z - 1)^4
       = z (z + 1) / (z - 1)^3
3. f(n) = n

Z{n} = Σ (n=0 to ∞) n z^-n = 1/z + 2/z^2 + 3/z^3 + ...
     = (1/z) [ 1 + 2/z + 3/z^2 + ... ]
     = (1/z) (1 - 1/z)^-2
     = z / (z - 1)^2,   |z| > 1
4. z-transform of cos ωt
The continuous-time function is f(t) = cos ωt; put t = kT, where T = sampling time period.
By the definition of the one-sided z-transform,

Z{f(k)} = F(z) = Σ (k=0 to ∞) f(k) z^-k = Σ (k=0 to ∞) cos(ωkT) z^-k

We know that cos θ = (e^(iθ) + e^(-iθ)) / 2, so

F(z) = (1/2) Σ (k=0 to ∞) e^(iωkT) z^-k + (1/2) Σ (k=0 to ∞) e^(-iωkT) z^-k

We know that Z{e^(iωkT)} = z / (z - e^(iωT)); hence

F(z) = (1/2) z / (z - e^(iωT)) + (1/2) z / (z - e^(-iωT))
     = (z^2 - z cos ωT) / (z^2 - 2z cos ωT + 1)
     = z (z - cos ωT) / (z^2 - 2z cos ωT + 1)
5. f(t) = e^(-at) sin ωt
The continuous function is f(t) = e^(-at) sin ωt.
Substitute t = kT:
f(kT) = e^(-akT) sin ωkT
By the definition of the one-sided z-transform,

F(z) = Σ (k=0 to ∞) e^(-akT) sin(ωkT) z^-k
     = (1/2i) Σ (k=0 to ∞) (e^(-aT) e^(iωT) z^-1)^k - (1/2i) Σ (k=0 to ∞) (e^(-aT) e^(-iωT) z^-1)^k

From the infinite geometric series Σ (k=0 to ∞) C^k = 1 / (1 - C),

F(z) = (1/2i) [ 1 / (1 - e^(-aT) e^(iωT) z^-1) - 1 / (1 - e^(-aT) e^(-iωT) z^-1) ]
     = (1/2i) [ z e^(aT) / (z e^(aT) - e^(iωT)) - z e^(aT) / (z e^(aT) - e^(-iωT)) ]

Using sin θ = (e^(iθ) - e^(-iθ)) / 2i,

F(z) = z e^(aT) sin ωT / (z^2 e^(2aT) - 2 z e^(aT) cos ωT + 1)
Modified Z-transform:
To obtain values of the response between sampling instants, modified z-transforms are useful. The modified z-transform is mainly useful in analysing sampled-data control systems containing transportation lag.
Evaluation of Modified Z-transforms:
Suppose that the transfer function of a process with dead time is represented by the following expression:

Gp(s) = G(s) e^(-θd s)

where G(s) contains no dead time and θd = dead time. Substitute θd = NT + θ, where N = largest integer number of sampling intervals in θd and T = sampling period. Then

Gp(s) = G(s) e^(-(NT + θ)s)

Taking the z-transform,

Gp(z) = Z{ G(s) e^(-(NT + θ)s) } = z^-N Z{ G(s) e^(-θs) }

The quantity Z{G(s) e^(-θs)} is defined as the modified z-transform of G(s) and is denoted by Zm{G(s)} or G(z, m). Thus

G(z, m) = Zm{G(s)} = Z{ G(s) e^(-θs) }
We know that

Zm{ a / (s(s + a)) } = z^-1 [ 1 / (1 - z^-1) - e^(-amT) / (1 - e^(-aT) z^-1) ]

With a = 5 and m = 1 - θ/T = 0.5, the loop transfer function works out to

Gho Gp(z) = 2 (0.918 z^-4 + 0.075 z^-5) / (5 - 0.033 z^-1)

For unity feedback, the closed-loop transfer function is

C(z)/R(z) = Gho Gp(z) / (1 + Gho Gp(z))
          = (1.836 z^-4 + 0.15 z^-5) / (5 - 0.033 z^-1 + 1.836 z^-4 + 0.15 z^-5)

For a unit step input R(z) = 1 / (1 - z^-1),

C(z) = (1.836 z^-4 + 0.15 z^-5) / [ (1 - z^-1)(5 - 0.033 z^-1 + 1.836 z^-4 + 0.15 z^-5) ]
     = (1.836 z^-4 + 0.15 z^-5) / (5 - 5.033 z^-1 + 0.033 z^-2 + 1.836 z^-4 - 1.686 z^-5 - 0.15 z^-6)

c(k) is calculated by the long-division method:

C(z) = 0 + 0 z^-1 + 0 z^-2 + 0 z^-3 + 0.3672 z^-4 + 0.398 z^-5 + 0.398 z^-6 + ...
c(k) = {0, 0, 0, 0, 0.3672, 0.398, 0.398, ...}
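The long-division step can be mechanized: expand C(z) = N(z^-1)/D(z^-1) as a power series in z^-1 and read off c(k). The coefficient lists below are the ones obtained above.

```python
# Sketch of the long-division step: expand C(z) = N(z^-1)/D(z^-1) into a
# power series in z^-1 to read off the samples c(k).

def long_division(num, den, terms):
    """Power-series coefficients of num/den, both in ascending powers of z^-1."""
    num = num + [0.0] * (terms - len(num))
    c = []
    for k in range(terms):
        acc = num[k] - sum(den[j] * c[k - j]
                           for j in range(1, min(k, len(den) - 1) + 1))
        c.append(acc / den[0])
    return c

num = [0.0, 0.0, 0.0, 0.0, 1.836, 0.15]
den = [5.0, -5.033, 0.033, 0.0, 1.836, -1.686, -0.15]
c = long_division(num, den, 8)
# c(k) starts {0, 0, 0, 0, 0.3672, ...} as obtained above by long division.
```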
Pulse Transfer function:
The transfer function of a linear discrete-time system (LDS) is called the pulse transfer function of the system.
Let h(k) = impulse response of an LDS system. The z-transform of h(k) is Z{h(k)} = H(z), the transfer function of the LDS system. The input-output relationship of an LDS system is governed by the convolution sum

y(k) = Σ (m=0 to ∞) x(m) g(k - m) = x(k) * g(k)

By taking the z-transform of this convolution sum it can be shown that G(z) is given by the ratio Y(z)/X(z), where Y(z) is the z-transform of the output y(k) of the LDS system and X(z) is the z-transform of the input x(k) to the LDS system.
The pulse transfer function is thus defined as the ratio of the z-transform of the output of the system to the z-transform of the input to the system, with zero initial conditions:

Pulse transfer function = Y(z) / X(z)
The input-output relation of an LDS system is governed by the constant-coefficient difference equation

y(k) = - Σ (m=1 to N) a_m y(k - m) + Σ (m=0 to M) b_m x(k - m)

Since Z{y(k - m)} = z^-m Y(z) and Z{x(k - m)} = z^-m X(z) by the shifting property,

Y(z) = - Σ (m=1 to N) a_m z^-m Y(z) + Σ (m=0 to M) b_m z^-m X(z)

so that

Y(z) / X(z) = [ Σ (m=0 to M) b_m z^-m ] / [ 1 + Σ (m=1 to N) a_m z^-m ]
a) Gp(s) = 4 / ((s + 10)(s + 0.2)(s + 1)),  T = 1 sec

H Gp(z) = Z{ (1 - e^(-sT))/s · Gp(s) } = (1 - z^-1) Z{ Gp(s)/s }
        = (1 - z^-1) Z{ 4 / (s(s + 10)(s + 0.2)(s + 1)) }

Taking partial fractions we get

4 / (s(s + 10)(s + 0.2)(s + 1)) = 2/s - 0.0045/(s + 10) - 2.55/(s + 0.2) + 0.55/(s + 1)

After applying z-transforms and substituting T = 1 sec we get

H Gp(z) = (1 - z^-1) [ 2z/(z - 1) - 0.0045 z/(z - 0.0000453) - 2.55 z/(z - 0.818) + 0.55 z/(z - 0.368) ]
b) Gp(s) = 5 e^(-θd s) / ((3s + 1)(s + 1)),   θd = 2T,  T = 1 sec

H Gp(z) = Z{ (1 - e^(-sT))/s · 5 e^(-2Ts) / ((3s + 1)(s + 1)) }
        = (1 - z^-1) z^-2 Z{ 1.66 / (s(s + 0.33)(s + 1)) }

After taking partial fractions, we get

1.66 / (s(s + 0.33)(s + 1)) = 5/s - 7.5/(s + 0.33) + 2.5/(s + 1)

Taking the z-transform and substituting T = 1 sec (e^-0.33 = 0.718, e^-1 = 0.368),

H Gp(z) = (1 - z^-1) z^-2 [ 5z/(z - 1) - 7.5 z/(z - 0.718) + 2.5 z/(z - 0.368) ]
        = z^-2 [ 5 - 7.5 (z - 1)/(z - 0.718) + 2.5 (z - 1)/(z - 0.368) ]
Digital PID Controller:
The figure shows a continuous-data PID controller acting on an error signal e(t). The proportional control simply multiplies the error signal e(t) by a constant Kp. The integral control multiplies the time integral of e(t) by Ki, and the derivative control generates a signal equal to Kd times the time derivative of e(t). The function of the integral control is to provide action to reduce the area under e(t), which leads to a reduction of the steady-state error. The derivative control provides anticipatory action to reduce the overshoots and oscillations in the time response. In digital control, P-control is still implemented by a proportional constant Kp. The integrator and differentiator can be implemented by various schemes.
Numerical Integration:
On a digital computer, continuous integration must be performed by numerical methods, and this is the basic mathematical operation in simulating a control system that contains sample-and-hold devices at strategic locations. The block X(s)/R(s) = 1/s represents the integrator, where r(t) is the input; the output x(t) represents the area under the input curve, accumulated over each sampling interval from t = 0 to t = T.
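A minimal sketch of one such scheme: a positional digital PID that approximates the integral by the trapezoidal rule and the derivative by a backward difference. The gains and sampling period below are illustrative assumptions, not values from the text.

```python
# Sketch of a positional digital PID: P term Kp*e(n), integral term by the
# trapezoidal (numerical-integration) rule, derivative term by a backward
# difference. Gains and sample time T are illustrative.

def make_pid(Kp, Ki, Kd, T):
    state = {"integral": 0.0, "e_prev": 0.0}
    def pid(e):
        state["integral"] += 0.5 * T * (e + state["e_prev"])   # trapezoidal rule
        deriv = (e - state["e_prev"]) / T                      # backward difference
        state["e_prev"] = e
        return Kp * e + Ki * state["integral"] + Kd * deriv
    return pid

pid = make_pid(Kp=2.0, Ki=1.0, Kd=0.1, T=0.1)
u0 = pid(1.0)   # first call: error jumps from 0 to 1
u1 = pid(1.0)   # constant error: integral keeps accumulating, derivative = 0
```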
DEADBEAT ALGORITHM:
Obtain the deadbeat control law for the given Gp(s) = 1 / (0.4s + 1), where the sampling period T = 1 sec.

The deadbeat control algorithm is

D(z) = (1 / G(z)) · z^-1 / (1 - z^-1)

G(z) = Z{ Gho(s) Gp(s) } = Z{ (1 - e^(-sT))/s · 1/(0.4s + 1) } = (1 - z^-1) Z{ 2.5 / (s(s + 2.5)) }

By the partial fraction method we get

2.5 / (s(s + 2.5)) = 1/s - 1/(s + 2.5)

Taking the z-transform (e^-2.5 ≈ 0.08),

G(z) = (1 - z^-1) [ z/(z - 1) - z/(z - 0.08) ] = 1 - (z - 1)/(z - 0.08) = 0.92 / (z - 0.08)
D(z) = (1 / G(z)) · z^-1 / (1 - z^-1)
     = [ (z - 0.08) / 0.92 ] · z^-1 / (1 - z^-1)
     = (1 - 0.08 z^-1) / (0.92 - 0.92 z^-1)

To invert the deadbeat algorithm into the time domain, the following steps are carried out:

D(z) = M(z) / E(z) = (1 - 0.08 z^-1) / (0.92 - 0.92 z^-1)

After inverse z-transforms, we get

m(n) = m(n-1) + 1.087 e(n) - 0.087 e(n-1)

where
M(z) = z-transform of the controller output
E(z) = z-transform of the error [R(z) - C(z)]
m(n) = controller output at the nth sampling instant
m(n-1) = controller output at the (n-1)th sampling instant
e(n) = error (set point - measurement) at the nth sampling instant
e(n-1) = error at the (n-1)th sampling instant
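A closed-loop simulation sketch of this deadbeat design. The plant G(z) = 0.92/(z - 0.08) gives y(n) = 0.08 y(n-1) + 0.92 m(n-1), and the controller uses the exact coefficients 1/0.92 and 0.08/0.92; for a unit step set point the output should reach the set point in one sample and stay there.

```python
# Sketch: simulate the deadbeat loop. Plant G(z) = 0.92/(z - 0.08) gives
# y(n) = 0.08 y(n-1) + 0.92 m(n-1); the controller difference equation is
# m(n) = m(n-1) + (1/0.92) e(n) - (0.08/0.92) e(n-1).

r = 1.0                       # unit step set point
y = [0.0]                     # plant output, y(0) = 0
m_prev, e_prev = 0.0, 0.0
for n in range(8):
    e = r - y[-1]             # e(n)
    m = m_prev + (1.0 / 0.92) * e - (0.08 / 0.92) * e_prev
    y.append(0.08 * y[-1] + 0.92 * m)   # y(n+1)
    m_prev, e_prev = m, e

assert abs(y[1] - 1.0) < 1e-9           # set point reached in one sample
assert all(abs(yk - 1.0) < 1e-9 for yk in y[1:])
```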
DAHLIN ALGORITHM
Design a Dahlin's controller algorithm for Gp(s) = e^(-0.8s) / (0.6s + 1), with T = 0.4 sec.
D(z) = M(z) / E(z) = (0.6321 - 0.322 z^-1) / (0.49 - 0.18 z^-1 - 0.31 z^-3)

After inverse z-transforms, we get

m(n) = 1.29 e(n) - 0.657 e(n-1) + 0.3673 m(n-1) + 0.6326 m(n-3)
Where, M(z)- Z-transform of controller output
E(z)- Z-transform of error [R(z)-C(z)]
m(n)- controller output at the nth sampling instant
m(n-1)- controller output at the (n-1) sampling instant
e(n)- Error (set point-measurement) at the nth sampling instant
e(n-1)- Error at the (n-1) sampling instant
Smith Predictor scheme:
As shown in the figure, the process is conceptually split into a pure lag and a pure dead time. If the fictitious variable b could be measured somehow, it could be connected to the controller as shown in fig. 7.40(b). This would move the dead time outside the loop. The controlled variable c would repeat whatever b did after a delay of θd. Since there is no delay in the feedback signal b, the response of the system would be greatly improved. The scheme, of course, cannot be implemented, because b is an unmeasurable signal. Now a model of the process is developed and the manipulated variable m is applied to the model as shown in the figure. If the model were perfect and the disturbance L = 0, then the controlled variable c would become equal to the model output cm, and em = c - cm = 0. The arrangement reveals that although the fictitious process variable b is unavailable, the value of bm can be derived, which will be equal to b unless modelling errors or load upsets are present. It is used as the feedback signal. The difference (c - cm) is the error which arises because of modelling errors or load upsets. To compensate for these errors, a second feedback loop is
implemented using em. This is called the Smith predictor control strategy. Gc(s) is a conventional PI or PID controller which can be tuned much more tightly because of the elimination of dead time from the loop. Thus the system consists of a feedback PI algorithm Gc that controls a simulated process Gm(s), which is easier to control than the real process.
2. Difference approximation is accurate only for execution times that are small compared to lead
and lag times.
For the general feed-forward controller design,

Gff(s) = MV(s) / Dm(s) = Gd(s) / Gp(s) = Kff · (Tld s + 1)/(Tlg s + 1) · e^(-θff s)

where
(Tld s + 1)/(Tlg s + 1) = lead/lag algorithm
Kff = feedforward controller gain = -Kd / Kp
θff = dead time = θd - θp
Tld = lead time = τp,   Tlg = lag time = τd
The internal model principle and structure
The IMC philosophy relies on the internal model principle, which states: accurate control can be achieved only if the control system encapsulates (either implicitly or explicitly) some representation of the process to be controlled.
Suppose G̃(s) is a model of G(s).
• By setting C(s) as the inverse of the model, C(s) = G̃^-1(s), and assuming that G̃(s) = G(s), the output y(t) will track the reference input yd(t) perfectly.
• However, in this example no feedback is shown.
One good thing about the IMC procedure is that it results in a controller with a single tuning parameter, the IMC filter (λ). For a system which is minimum phase, λ is equivalent to a closed-loop time constant (the "speed of response" of the closed-loop system). Although the IMC procedure is clear and IMC is easily implemented, the most common industrial controller is still the PID controller.
Why do we care about IMC, if we can show that it can be rearranged into the standard
feedback structure?
The process model is explicitly used in the control system design procedure. The standard feedback structure uses the process model in an implicit fashion; that is, PID tuning parameters are "tweaked" on a transfer function model, but it is not always clear how the process model affects the tuning decision. In the IMC formulation, the controller, q(s), is based directly on the "good" part of the process transfer function. The IMC formulation generally results in only one tuning parameter, the closed-loop time constant (λ, the IMC filter factor). The PID tuning parameters are then a function of this closed-loop time constant. The selection of the closed-loop time constant is directly related to the robustness (sensitivity to model error) of the closed-loop system.
THE EQUIVALENT FEEDBACK FORM TO IMC:
In this section, we derive the feedback equivalence to IMC by using block diagram manipulation.
Begin with the IMC structure shown in Figure.
Notice that r(s) − y(s) is simply the error term used by a standard feedback controller. Therefore,
we have found that the IMC structure can be rearranged to the feedback control (FBC) structure,
as shown in Figure. This reformulation is advantageous because we will find that a PID controller
often results when the IMC design procedure is used. Also, the standard IMC block diagram cannot
be used for unstable systems, so this feedback form must be used for those cases.
We can use the IMC design procedure to help us design a standard feedback controller. The standard feedback controller is a function of the internal model g̃p(s) and the internal model controller q(s), as shown in the equation below:

gc(s) = q(s) / (1 - g̃p(s) q(s))
We will refer to the above equation as the IMC-based PID relationship because the form of gc(s)
is often that of a PID controller. One major difference is that the IMC-Based procedure will, many
times, not require that the controller be proper. Also, the process dead time will be approximated
using the Padé procedure, in order to arrive at an equivalent PID-type controller. Because of the
Padé approximation for dead time, the IMC-based PID controller will not perform as well as IMC
for processes with time-delays.
The IMC-Based PID Control Design Procedure
The following steps are used in the IMC-based PID control system design
1. Find the IMC controller transfer function, q(s), which includes a filter, f(s), to make q(s) semi-proper or to give it derivative action (the order of the numerator of q(s) is one greater than that of the denominator). Notice that this is a major difference from the IMC procedure. Here, in the IMC-based procedure, we may allow q(s) to be improper in order to find an equivalent PID controller. The bad news is that you must know the answer you are looking for before you can decide whether to make q(s) proper or improper in this procedure.
2. Find the equivalent standard feedback controller using the transformation

gc(s) = q(s) / (1 - g̃p(s) q(s))

and write this in the form of a ratio between two polynomials.
3. Put this in PID form and find kc, τI, τD. Sometimes this procedure results in a PID controller cascaded with a lag term τ.
4. Perform closed-loop simulations for both the perfect-model case and cases with model mismatch. Choose the desired value of λ as a trade-off between performance and robustness.
Step 3: Rearrange the above two equations by multiplying and dividing by τp:

gc(s) = [ τp / (kp λ) ] · (τp s + 1) / (τp s)

so that kc = τp / (kp λ) and τI = τp.
The IMC-based PID design procedure for a first-order process has resulted in a PI control law.
The major difference is that there are no longer two degrees of freedom in the tuning parameters
kc, τI - the IMC-based procedure shows that only the proportional gain needs to be adjusted. The
integral time is simply set equal to the process time constant. Notice that the proportional gain is
inversely related to λ, which makes sense. If λ is small (closed loop is “fast”) the controller gain
must be large. Similarly, if λ is large (closed loop is “slow”) the controller gain must be small.
LQG CONTROL:
OPTIMAL CONTROL
The main objective of optimal control is to determine control signals that will cause a
process (plant) to satisfy some physical constraints and at the same time extremize (maximize or
minimize) a chosen performance criterion (performance index or cost function). The main interest
is to find the optimal control u*(t) (* indicates optimal condition) that will drive the plant P from
initial state to final state with some constraints on controls and states and at the
same time extremizing the given performance index J.
The formulation of optimal control problem requires
1. a mathematical description (or model) of the process to be controlled (generally in state variable
form),
2. a specification of the performance index, and
3. a statement of boundary conditions and the physical constraints on the states and/or controls.
Plant
For the purpose of optimization, a physical plant is described by a set of linear or nonlinear
differential or difference equations.
Performance Index
Classical control design techniques have been successfully applied to linear, time-
invariant, single-input, single-output (SISO) systems. Typical performance criteria are system time
response to step or ramp input characterized by rise time, settling time, peak overshoot, and steady
state accuracy; and the frequency response of the system characterized by gain and phase margins,
and bandwidth. In modern control theory, the optimal control problem is to find a control which
causes the dynamical system to reach a target or follow a state variable (or trajectory) and at the
same time extremize a performance index which may take several forms as described below.
1. Performance Index for Time-Optimal Control System:
The main aim is to transfer a system from an arbitrary initial state x(t0) to a specified final
state x(tf) in minimum time. The corresponding performance index (PI) is
J = ∫_{t0}^{tf} dt = tf − t0
or, in general, for minimum control effort,
J = ∫_{t0}^{tf} u'(t) R u(t) dt
where R is a positive definite matrix and the prime (') denotes the transpose. Similarly, we can think
of minimization of the integral of the squared error of a tracking system. We then have
J = ∫_{t0}^{tf} x'(t) Q x(t) dt
where x_d(t) is the desired value, x_a(t) is the actual value, and x(t) = x_a(t) − x_d(t) is the error. Here,
Q is a weighting matrix, which can be positive semi-definite.
4. Performance Index for Terminal Control System:
In a terminal target problem, we are interested in minimizing the error between the desired
target position x_d(tf) and the actual target position x_a(tf) at the end of the maneuver, i.e., at the final
time tf. The terminal (final) error is x(tf) = x_a(tf) − x_d(tf). Taking care of positive and negative
values of the error and of the weighting factors, we structure the cost function as
J = x'(tf) F x(tf)
which is also called the terminal cost function. Here, F is a positive semi-definite matrix.
5. Performance Index for General Optimal Control System:
Combining the above formulations, we have a performance index in the general form
J = x'(tf) F x(tf) + ∫_{t0}^{tf} [x'(t) Q x(t) + u'(t) R u(t)] dt
where R is a positive definite matrix and Q and F are positive semi-definite matrices; the
matrices Q and R may be time-varying. This particular form of the performance index is called
quadratic (in terms of the states and controls).
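For concreteness, the general quadratic index can be evaluated numerically for sampled trajectories; the trajectory data below are illustrative:

```python
import numpy as np

def quadratic_cost(t, X, U, Q, R, F):
    """J = x'(tf) F x(tf) + integral of (x'Qx + u'Ru) dt, trapezoidal rule.
    t: (N,) sample times, X: (N, n) state samples, U: (N, r) control samples."""
    integrand = np.einsum('ki,ij,kj->k', X, Q, X) + np.einsum('ki,ij,kj->k', U, R, U)
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2.0
    return X[-1] @ F @ X[-1] + integral

t = np.linspace(0.0, 2.0, 201)
X = np.column_stack([np.exp(-t), -np.exp(-t)])  # illustrative decaying states
U = (-np.exp(-t)).reshape(-1, 1)                # illustrative control history
Q, R, F = np.eye(2), np.eye(1), 0.5 * np.eye(2)
J = quadratic_cost(t, X, U, Q, R, F)
print(J)
```

For this trajectory the integrand is 3e^{-2t}, so J can be checked against the closed form 1.5(1 − e^{-4}) + e^{-4}.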
The problems arising in optimal control are classified based on the structure of the
performance index J. If the PI contains only the terminal cost term S(x(tf), tf), it is called
the Mayer problem; if the PI has only the integral cost term, it is called the Lagrange problem; and
the problem is of the Bolza type if the PI contains both the terminal cost term and the integral cost
term. There are many other forms of cost functions depending on the
performance specifications. However, the above-mentioned performance indices (with quadratic
forms) lead to some very elegant results in optimal control systems.
Constraints
The control u( t) and state x( t) vectors are either unconstrained or constrained depending
upon the physical situation. The unconstrained problem is less involved and gives rise to some
elegant results. From physical considerations, we often have the controls and states, such as
currents and voltages in an electrical circuit, the speed of a motor, or the thrust of a rocket, constrained as
u− ≤ u(t) ≤ u+,  x− ≤ x(t) ≤ x+
where the superscripts + and − indicate the maximum and minimum values the variables can attain.
with some constraints on the control variables u( t) and/or the state variables x(t). The final time tf
may be fixed, or free, and the final (target) state may be fully or partially fixed or free. The entire
problem statement is also shown pictorially.
where, e( t) is the error, and E {x} represents the expected value of the random variable x. For a
deterministic case, the above error criterion is generalized as an integral quadratic term as
where, x(t) and u(t) are the n- and r- dimensional state and control unconstrained variables
respectively, and the performance index
The important stages in obtaining the optimal control for the previous system are
1. the formulation of the Hamiltonian H(x, u, λ, t) = V(x, u, t) + λ'(t) f(x, u, t), and
2. the solution of the resulting state, costate, and control equations for the optimal values x*(t), u*(t), and λ*(t), respectively, along with the general boundary
conditions.
Usually in problem formulation, we assume that the control u(t) and the state x( t) are
unconstrained, that is, there are no limitations (restrictions or bounds) on the magnitudes of the
control and state variables. But, in reality, the physical systems to be controlled in an optimum
manner have some constraints on their inputs (controls), internal variables (states) and/or outputs
due to considerations mainly regarding safety, cost and other inherent limitations. For example,
consider the following cases :
1. In a D.C. motor used in a typical positional control system, the input voltage to the field or
armature circuit is limited to certain standard values, say, 110 or 220 volts. Also, the
magnetic flux in the field circuit saturates beyond a certain value of the field current.
2. Thrust of a rocket engine used in a space shuttle launch control system cannot exceed a
certain designed value.
3. Speed of an electric motor used in a typical speed control system cannot exceed a certain
value without damaging some of the mechanical components such as bearings and shaft.
This optimal control problem for a system with constraints was addressed by Pontryagin et al.,
and the results were enunciated in the celebrated Pontryagin Minimum Principle. In their original
works, the previous optimization problem was addressed to maximize the Hamiltonian
Previously, for finding the optimal control u*(t), performance index, and boundary conditions, we
used arbitrary variations in control u(t) = u*(t)+δu(t) to define the increment ΔJ and the (first)
variation δJ in J as
Also, in order to obtain the optimal control of unconstrained systems, we applied the fundamental
theorem of the calculus of variations, i.e., the necessary condition for a minimum is that the first
variation δJ must be zero for an arbitrary variation δu(t). But now we place restrictions on the
control u(t) such as
u− ≤ u(t) ≤ u+
or, component-wise,
u_j− ≤ u_j(t) ≤ u_j+
where u_j− and u_j+ are the lower and upper bounds or limits on the control function u_j(t). Then
we can no longer assume that the control variation δu(t) is arbitrary for all t ∈ [t0, tf]. In other
words, the variation δu(t) is not arbitrary if the extremal control u*(t) lies on the boundary
or reaches a limit. If, for example, an extremal control u*(t) lies on the boundary during some
interval [ta, tb] of the entire interval [t0, tf], as shown in Figure 6.1(a), then the negative admissible
control variation −δu(t) is not allowable. The reason for taking +δu(t) and −δu(t) the way it is shown
will become apparent later. Then, assuming that every admissible variation ‖δu(t)‖ is small enough
that the sign of the increment ΔJ is determined by the sign of the variation δJ, the necessary
condition for u*(t) to minimize J is that the first variation δJ ≥ 0.
Summarizing, the first variation satisfies δJ ≥ 0 if u*(t) lies on
the boundary (i.e., a constraint is active) during any portion of the time interval [t0, tf], and the first
variation δJ = 0 if u*(t) lies within the boundary (no constraint active) during the entire time
interval [t0, tf]. Next, let us see how the constraint affects the necessary conditions, which were
derived under the assumption that the admissible control values u(t) are unconstrained. Using
the results, we have the first variation as
In the above,
1. if the optimal state x* (t) equations are satisfied, it results in the state relation,
2. if the costate λ* (t) is selected so that the coefficient of the dependent variation δx( t) in the
integrand is identically zero, it results in the costate condition and
3. the boundary condition is selected such that it results in the auxiliary boundary condition.
When the previous items are satisfied, then the first variation becomes,
The integrand in the previous relation is the first order approximation to change in the
Hamiltonian H due to a change in u(t) alone. This means that by definition
for all admissible δu(t) less than a small value. The relation becomes
The variations +δu(t) and −δu(t) are chosen in such a way that the negative variation −δu(t) is not
admissible, and thus we get the condition. On the other hand, by taking the variations +δu(t) and
−δu(t) in such a way that the positive variation +δu(t) is not admissible, we get the corresponding
condition as
2. If the final time tf is free, i.e., not specified a priori, and the Hamiltonian does not depend
explicitly on time t, then the Hamiltonian must be identically zero when evaluated along
the optimal trajectory; that is,
H(x*(t), u*(t), λ*(t)) = 0 for all t ∈ [t0, tf].
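The necessary conditions described above can be collected in the standard form of the Minimum Principle; this is a summary sketch (λ denotes the costate, H the Hamiltonian):

```latex
\begin{aligned}
\dot{x}^*(t) &= \left.\frac{\partial H}{\partial \lambda}\right|_{*}, \qquad
\dot{\lambda}^*(t) = -\left.\frac{\partial H}{\partial x}\right|_{*},\\
H\bigl(x^*(t),u^*(t),\lambda^*(t),t\bigr) &\le H\bigl(x^*(t),u(t),\lambda^*(t),t\bigr)
\quad \text{for all admissible } u(t),\\
H\bigl(x^*(t),u^*(t),\lambda^*(t)\bigr) &= 0 \ \text{on } [t_0,t_f]
\quad \text{(free } t_f,\ H \text{ not explicitly time-dependent)}.
\end{aligned}
```

The inequality replaces the unconstrained condition ∂H/∂u = 0 when the control is bounded.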
The poles of the controlled system are the eigenvalues λ(A + BK). The question is whether or not we can
relocate the poles to arbitrary locations in the complex plane as we desire. The answer is that this
can be done if and only if (A, B) is controllable. To see this, let us first consider single-input,
single-output systems with a transfer function of the form.
We can realize this system in state space representation in the following form:
The above state space representation is called the controllable canonical form. The
characteristic equation of the system is
Our goal is to find a state feedback u = Kx
so that the poles of the controlled system are at the desired locations represented by the desired
characteristic equation
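Matching the coefficients of the desired characteristic equation in the controllable canonical form is equivalent to Ackermann's formula, sketched below for single-input systems (the double-integrator example is illustrative, and the sign convention matches the text's A + BK closed loop):

```python
import numpy as np

def ackermann(A, B, desired_poles):
    """State feedback u = K x placing eig(A + B K) at desired_poles
    (single-input systems only; a minimal sketch of Ackermann's formula)."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^{n-1}B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial evaluated at the matrix A: phi_d(A)
    coeffs = np.poly(desired_poles)          # [1, a_{n-1}, ..., a_0]
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return -(e_last @ np.linalg.inv(C) @ phi)   # minus sign: loop is A + B K

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
K = ackermann(A, B, [-2.0, -3.0])
print(np.sort(np.linalg.eigvals(A + B @ K)))   # approx [-3., -2.]
```

The formula fails (singular C) exactly when (A, B) is not controllable, mirroring the statement above.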
Example
Consider the inverted pendulum
We want to find a state feedback to place the poles at −5, −10, −2+j2, −2−j2. Let us first check if
the system is controllable.
1. Randomly pick
Example
Consider the following multi-input system
We want to find a state feedback to place the poles at −5, −10, −20. Let us first check if the
system is controllable.
If a system is controllable, then state feedback can be used to place poles to arbitrary
locations in the complex plane to achieve stability and optimality. However, in practice, not all
states can be directly measured. Therefore, in most applications, it is not possible to use state
feedback directly. What we can do is to estimate the state variables and to use feedback based on
estimates of states. In this section, we present the following results. (1) We can estimate the state
of a system if and only if the system is observable. (2) If the system is observable, we can construct
an observer to estimate the state. (3) Pole placement for state feedback and observer design can be
done separately.
Let us first consider a naïve approach to state estimation. Suppose that a system (A, B) is
driven by an input u. We cannot measure its state x. To estimate x, we can build a duplicate system
(say, in a computer) and let it be driven by the same input, as shown in the figure, where x
denotes the actual state and x̂ the estimate. To see how good the estimates are, we investigate
the estimation error x̃ = x − x̂. To correct the duplicate system, its output error is fed back
through an observer matrix G:
dx̂/dt = A x̂ + B u + G (C x̂ − y)
Now the estimation error x̃ satisfies the following differential equation:
dx̃/dt = (A + G C) x̃
The dynamics of x̃ are determined by the eigenvalues of A + GC. Since the observer matrix
G can be selected in the design process, we can place the poles of the observer at locations
that meet the desired performance requirements. The pole placement problem for the observer can
be solved as a 'dual' of the pole placement problem for the state feedback. Note that a matrix and
its transpose have the same eigenvalues.
Hence, if we view (Aᵀ, Cᵀ) as (A, B) and Gᵀ as K, then the pole placement problem for the
observer is transformed into the pole placement problem for the state feedback. By duality and our
previous results, the poles of A + GC can be arbitrarily assigned
⟺ (Aᵀ, Cᵀ) is controllable
⟺ (A, C) is observable.
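The duality argument translates directly into code: reuse state-feedback placement on (Aᵀ, Cᵀ) and transpose the result. The A and C data below are illustrative:

```python
import numpy as np

def ackermann(A, B, desired_poles):
    """Single-input state feedback u = K x placing eig(A + B K); a sketch."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(desired_poles)
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e = np.zeros((1, n)); e[0, -1] = 1.0
    return -(e @ np.linalg.inv(C) @ phi)

A = np.array([[0.0, 1.0], [-0.21, -1.0]])   # illustrative plant matrix
Cm = np.array([[1.0, 0.0]])                  # measure x1 only
# Duality: treat (A^T, Cm^T) as (A, B), then G = K^T places eig(A + G Cm)
G = ackermann(A.T, Cm.T, [-4.0, -5.0]).T
print(np.sort(np.linalg.eigvals(A + G @ Cm)))   # approx [-5., -4.]
```

Observer poles are usually chosen faster than the state-feedback poles so the estimate converges before the controller acts on it.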
Multi-loop Control - Introduction - Process Interaction - Pairing of Inputs and Outputs - The
Relative Gain Array (RGA) - Properties and Application of RGA - Multi-loop PID Controller -
Biggest Log Modulus Tuning Method - Decoupler.
In matrix form,
Y(s) = [Y1(s), Y2(s)]',  U(s) = [U1(s), U2(s)]'
H(s) = [H11(s) H12(s); H21(s) H22(s)]
For example, let us consider a jacketed CSTR. The following are the controlled variables,
manipulated variables, and disturbances associated with this application.
MULTI-LOOP CONTROL
Each manipulated variable depends only on a single controlled variable, and the system is controlled by a set of
conventional feedback controllers.
MULTIVARIABLE CONTROL
Each manipulated variable can depend on two or more of the controlled variables. Examples: MPC and
decoupling control.
Consistency constraints:
Σ_{i=1}^{N} x_i = 1;  Σ_{i=1}^{N} y_i = 1
All the above relations of the system form 2N+3 equations in 4N+14 variables.
GENERATION OF ALTERNATIVE LOOP CONFIGURATION
Once the manipulated variables and controlled variables are identified, it is required
to specify which manipulated variable affects which controlled variable.
Fig. 6a) Block diagram of the process with two controlled outputs and two manipulations (open loop)
Fig. 6b) Block diagram of the process with two controlled outputs and two manipulations (closed loop)
Thus, the input-output relations of the process are given by
Y1(s) = H11(s) U1(s) + H12(s) U2(s)   (1)
Y2(s) = H21(s) U1(s) + H22(s) U2(s)   (2)
Fig. 7a) Interaction among control loops: one loop closed
Fig. 7b) Interaction among control loops: both loops closed
REMARKS:
1. Equations (9) and (10) describe the response of the outputs Y1 and Y2 when both loops are
closed.
2. When H12 = H21 = 0 there is no interaction between the two control loops, and the closed-loop outputs
are given by the following equations:
Y1 = [H11 Gc1 / (1 + H11 Gc1)] Y1,sp
Y2 = [H22 Gc2 / (1 + H22 Gc2)] Y2,sp
The closed-loop stability of the two non-interacting loops depends on the roots of their
characteristic equations. Thus, if the roots of the two equations
1 + H11 Gc1 = 0
1 + H22 Gc2 = 0
have negative real parts, the two non-interacting loops are stable.
3. The stability of the closed-loop outputs of two interacting loops is determined by the roots of the
characteristic equation
Q(s) = (1 + H11 Gc1)(1 + H22 Gc2) − H12 H21 Gc1 Gc2 = 0
Thus, if the roots of the above equation have negative real parts, the two interacting loops are
stable.
4. Suppose that the two feedback controllers Gc1 and Gc2 are tuned separately. Then we cannot
guarantee stability of the overall control system when both loops are closed.
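Remark 3 can be checked numerically: clearing denominators in the characteristic equation yields an ordinary polynomial whose roots decide stability. The gains and time constants below are illustrative, and H12, H21 are assumed to share the same first-order lags so the polynomial algebra stays simple:

```python
import numpy as np

# With proportional controllers Gc1 = Kc1, Gc2 = Kc2 and first-order elements,
# clearing denominators in (1 + H11*Gc1)(1 + H22*Gc2) - H12*H21*Gc1*Gc2 = 0 gives
# (tau1*s + 1 + Kc1*k11)(tau2*s + 1 + Kc2*k22) - Kc1*Kc2*k12*k21 = 0.
k11, k12, k21, k22 = 1.0, 0.5, 0.5, 1.0
tau1, tau2 = 1.0, 2.0
Kc1, Kc2 = 2.0, 2.0

p1 = np.array([tau1, 1.0 + Kc1 * k11])     # tau1*s + 1 + Kc1*k11
p2 = np.array([tau2, 1.0 + Kc2 * k22])     # tau2*s + 1 + Kc2*k22
char_poly = np.polymul(p1, p2)
char_poly[-1] -= Kc1 * Kc2 * k12 * k21     # subtract the interaction term
roots = np.roots(char_poly)
print(roots, all(r.real < 0 for r in roots))
```

Raising k12·k21 (stronger interaction) pushes the constant term down and can drive a root into the right half plane even though each loop is stable on its own, which is exactly remark 4.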
Step 1: Determine the ultimate gain Ku,i and the ultimate frequency ωu,i of each diagonal
process transfer function Gii(s) by the classical SISO method.
Step 2: Calculate the corresponding Ziegler-Nichols settings (KcZN,i and τIZN,i) for each loop.
Step 3: Assume a detuning factor F; typical values range from 2 to 5.
Step 4: Calculate the gains and reset times for the feedback controllers:
Kc,i = KcZN,i / F
τI,i = τIZN,i · F
It must be noted that F remains the same for all the loops.
Step 5: Compute the closed-loop log modulus Lcm using the above designed controllers over a
specified frequency range:
Lcm = 20 log |W / (1 + W)|
where
W = −1 + det[I + G(s) K(s)]
Step 6: Find the biggest log modulus, max Lcm, from the data of Lcm versus frequency.
Step 7: Check whether max Lcm = 2N, where N is the order of the process matrix.
(a) If max Lcm = 2N, then stop.
(b) Otherwise, return to Step 3.
The larger the detuning factor F, the more stable the system will be, but set-point and load
responses become more sluggish. The BLT algorithm yields a value of F which gives a
reasonable compromise between stability and performance in multivariable systems. The Lcm = 2N
criterion suggests that the higher the order of the system, the more underdamped the closed-loop
system must be to achieve reasonable responses.
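Steps 5 and 6 can be sketched numerically. This hedged illustration uses the well-known Wood-Berry column model and its Ziegler-Nichols PI settings as example data (the ZN identification of Steps 1-2 is not repeated here):

```python
import numpy as np

def Lcm_max(F, omegas):
    """Biggest closed-loop log modulus for a given detuning factor F."""
    worst = -np.inf
    for w in omegas:
        s = 1j * w
        # Wood-Berry 2x2 transfer matrix (gains/taus/delays in min units)
        G = np.array([[12.8 * np.exp(-s) / (16.7 * s + 1), -18.9 * np.exp(-3 * s) / (21.0 * s + 1)],
                      [6.6 * np.exp(-7 * s) / (10.9 * s + 1), -19.4 * np.exp(-3 * s) / (14.4 * s + 1)]])
        # Detuned diagonal PI controllers: Kc = KcZN/F, tauI = tauIZN*F
        KcZN, tauIZN = np.array([0.96, -0.19]), np.array([3.25, 9.2])
        K = np.diag((KcZN / F) * (1 + 1 / (tauIZN * F * s)))
        W = -1 + np.linalg.det(np.eye(2) + G @ K)
        worst = max(worst, 20 * np.log10(abs(W / (1 + W))))
    return worst

omegas = np.logspace(-2, 1, 400)
for F in (1.0, 2.0, 3.0):
    print(F, round(Lcm_max(F, omegas), 2))
```

Increasing F flattens the log-modulus peak, which is the trade-off between stability and sluggishness described above; in practice F is iterated until the peak hits 2N (= 4 dB for a 2x2 system).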
1. With all loops open (M2 held constant), introduce a step change in M1. The resulting
steady-state change in Y1 gives the open-loop gain between Y1 and M1, (∂Y1/∂M1)_M2.
2. A change in M1 affects not only Y1 but also the output Y2. Loop 2 then acts through its
feedback, varying M2 to keep its output Y2 constant, which in turn affects the steady state
of Y1. Let this steady-state output be Y′1. The open-loop gain between Y1 and M1 when
Y2 is kept constant is then
(∂Y′1/∂M1)_Y2
3. The ratio of the two open-loop gains gives the relative gain λ11 between output Y1
and input M1:
λ11 = (∂Y1/∂M1)_M2 / (∂Y1/∂M1)_Y2
The relative gain provides a useful measure of interaction.
If λ11 = 0, then Y1 does not respond to M1, and M1 should not be used to control Y1.
If λ11 = 1, then M2 does not affect Y1, and the control loop between Y1 and M1 does not
interact with the loop of Y2 and M2. This is called COMPLETELY DECOUPLED
LOOPS.
If 0 < λ11 < 1, then interaction exists, and as M2 varies it affects the steady-state value of Y1. The
smaller the value of λ11, the larger the interaction becomes.
If λ11 < 0, then M2 has a strong effect on Y1, in the opposite direction from that caused
by M1. This interaction effect is very dangerous.
In the same way we can define the other relative gains λ12, λ21, λ22.
The relative gain λij between input j and output i is defined as follows:
λij = (∂yi/∂uj)|_{uk, k≠j} / (∂yi/∂uj)|_{yk, k≠i}
    = (gain between input j and output i with all other loops open)
      / (gain between input j and output i with all other loops closed)
The relative-gain array provides a methodology by which we select pairs of input and output variables
so as to minimize the interaction among the resulting loops. It is a square matrix that contains the
individual relative gains λij as elements. For a 2x2 system, the RGA is
Λ = [λ11 λ12; λ21 λ22]
and its elements satisfy the following relationships:
λ11 + λ12 = 1;
λ11 + λ21 = 1;
λ12 + λ22 = 1;
λ21 + λ22 = 1.
Hence, for a 2x2 system only one relative gain must be calculated to determine the entire array:
Λ = [λ11, 1 − λ11; 1 − λ11, λ11]
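For systems larger than 2x2, the whole array can be computed at once from the steady-state gain matrix K via the standard identity Λ = K ⊙ (K⁻¹)ᵀ (element-by-element product); the gain matrix below is illustrative:

```python
import numpy as np

def rga(K):
    """Relative gain array: elementwise product of K with inverse-transpose.
    For 2x2 this reproduces lambda11 = 1/(1 - k12*k21/(k11*k22))."""
    return K * np.linalg.inv(K).T

K = np.array([[2.0, 1.5],
              [1.0, 2.0]])            # illustrative steady-state gains
L = rga(K)
print(L)                              # [[1.6 -0.6], [-0.6  1.6]]
print(L.sum(axis=0), L.sum(axis=1))   # each row and column sums to 1
```

The unit row/column sums stated above hold for any square K, which is a quick sanity check on a computed array.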
Consider the relative gain array
Λ = [0.05 0.95; 0.95 0.05]
We pair Y1 with U2 and Y2 with U1, since these pairings have relative gains close to 1.
RGA implications:
1. Pair loops on λij values that are positive and close to 1.
2. Reasonable pairings: 0.5 < λij < 4.0.
3. Pairing on negative λij values results in at least one of the following: a. the closed-loop system is
unstable; b. the loop with negative λij is unstable; c. the closed-loop system becomes unstable if the loop
with negative λij is turned off.
4. Plants with large RGA-elements are always ill-conditioned (the converse does not hold: a plant with a large
condition number γ(G) may have small RGA-elements).
5. Plants with large RGA-elements around the crossover frequency are fundamentally difficult to
control because of sensitivity to input uncertainties; decouplers or other inverse-based controllers
should not be used for plants with large RGA-elements.
6. A large RGA-element implies sensitivity to element-by-element uncertainty.
7. If the sign of an RGA-element changes from s = 0 to s = ∞, then there is a RHP-zero in G or in some
subsystem of G.
8. The RGA-number ‖Λ(G) − I‖sum can be used to measure diagonal dominance: for decentralized
control, pairings with an RGA-number close to zero at the crossover frequency are preferred.
9. For the integrity of the whole plant, we should avoid input-output pairing on negative RGA-elements.
10. For stability, pairing on an RGA-number close to zero at the crossover frequency is preferred.
APPLICATIONS OF RGA
Consider the whiskey blending problem, which has the steady-state gain matrix and RGA
[Y1; Y2] = K [U1; U2],  K = [0.025 0.075; −1 1],  Λ = [0.25 0.75; 0.75 0.25]
indicating that the output-input pairings should be Y1-U2 and Y2-U1. In order to achieve this
pairing, we could use the following block diagram. The difference between R2 and Y2 is used to
adjust U1 using a PID controller (Gc1); hence we refer to this pairing as Y2-U1. The difference
between R1 and Y1 is used to adjust U2 using a PID controller (Gc2); hence we refer to this pairing
as Y1-U2. This corresponds to the following diagram. The same result can also be achieved by redefining the variables.
Problem:
Consider a process with the following input-output relationships:
y1 = [1/(s + 1)] m1 − [1/(0.1s + 1)] m2   (1)
y2 = [0.2/(0.5s + 1)] m1 + [0.8/(s + 1)] m2   (2)
1. Make a unit step change in m1 while keeping m2 constant (i.e., m1 = 1/s). Then from the above
equations,
y1 = [1/(s + 1)] (1/s)
According to the final value theorem, the new steady-state value of y1 is
y1,ss = lim_{s→0} [s y1(s)] = lim_{s→0} 1/(s + 1) = 1
Therefore (∂y1/∂m1)_m2 = 1/1 = 1.
2. Keep y2 constant (under control) by varying m2, and introduce a unit step in m1. Since y2 must
remain constant, equation (2) gives how much m2 must change:
m2 = −(0.2/0.8) [(s + 1)/(0.5s + 1)] m1
Substituting this value into equation (1),
y1 = [1/(s + 1)] m1 + [1/(0.1s + 1)] (0.2/0.8) [(s + 1)/(0.5s + 1)] m1
Then the resulting new steady state of y1 is given by
y1,ss = lim_{s→0} [s y1(s)] = lim_{s→0} s [1/(s + 1) + (0.25/(0.1s + 1)) (s + 1)/(0.5s + 1)] (1/s) = 1 + 0.25 = 1.25
Therefore (∂y1/∂m1)_y2 = 1.25/1 = 1.25, and
λ11 = (∂y1/∂m1)_m2 / (∂y1/∂m1)_y2 = 1/1.25 = 0.8
Since the rows and columns of a relative gain array must each sum to 1, we get
λ12 = λ21 = 0.2 and λ22 = 0.8
It is easy to conclude that we should pair m1 with y1 and m2 with y2 to form two loops with
minimum interaction. If either of the other two loop configurations had been chosen, the
interaction would have been four times larger.
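A quick numerical cross-check of this example, assuming the sign convention that reproduces the reported result (steady-state gains k11 = 1, k12 = −1, k21 = 0.2, k22 = 0.8, read off equations (1)-(2) at s = 0):

```python
import numpy as np

K = np.array([[1.0, -1.0],
              [0.2,  0.8]])   # assumed steady-state gain matrix of the example

gain_open = K[0, 0]                                  # (dy1/dm1) with m2 constant
gain_closed = K[0, 0] - K[0, 1] * K[1, 0] / K[1, 1]  # (dy1/dm1) with y2 held at 0
lam11 = gain_open / gain_closed
print(gain_closed, lam11)   # -> 1.25 0.8
```

The closed-loop gain expression k11 − k12·k21/k22 is just the steady-state limit of substituting the m2 compensation into equation (1), so it matches the final-value-theorem calculation above without any Laplace algebra.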
To eliminate the interaction of loop 1 on loop 2, let us construct a decoupler whose transfer
function is given as
D2(s) = −H21(s)/H22(s)   (3)
From Fig. 8(b), with the two feedback loops it is possible to obtain an input-output relationship
for the two closed loops. A second decoupler,
D1(s) = −H12(s)/H11(s)   (4)
uses M2 as input and provides as output the amount by which M1 should be varied to keep
Y1 constant, cancelling the effect of M2 on Y1. In this way the decouplers cancel the effect of loop 2
on loop 1. Equations (3) and (4) show that the outputs of loop 1 and loop 2 each depend only on their
own set point and not on that of the other loop.
This method is based on the direct synthesis approach, in which the multi-loop PI/PID
controller is designed based on the desired closed-loop transfer function.
The resulting analytical design rule includes a frequency-dependent relative gain array that
provides information on dynamic interactions, useful for estimating the controller parameters.
Introduction:
A chemical plant usually has multiple-input, multiple-output (MIMO) variables. The
MIMO control problem is handled by two approaches:
1. Centralized (Multivariable) control
2. Decentralized (Multi-loop) control
Centralized control:
It uses all process input and output measurements simultaneously to determine all the manipulated
variables.
Decentralized control:
It uses multiple single-loop controllers. Each single output is connected to a single input, forming
a network of multiple loops.
In a diagonal controller, the diagonal elements are controllers and the off-diagonal elements are zero; in
this case a decoupler is used to minimize or remove interaction. When interactions are
significant, a full-matrix controller is used. This centralized control is expressed mathematically as
follows:
[y1(s); y2(s)] = [g11(s) g12(s); g21(s) g22(s)] · [gc1(s) gc2(s); gc3(s) gc4(s)] · [e1(s); e2(s)]
(process) (full-matrix controller) (error)
The closed-loop transfer function follows from Y(s) = Gp(s) U(s) with U(s) = Gc(s) (R(s) − Y(s)):
Y(s) = (I + G(s) Gc(s))^{-1} G(s) Gc(s) R(s)
where
Y(s) = [y1(s), y2(s)]'
G(s) = [g11(s) g12(s); g21(s) g22(s)]
Gc(s) = [gc1(s) gc2(s); gc3(s) gc4(s)]
R(s) = [r1(s), r2(s)]'
The stability of the closed-loop system is determined by the poles of the characteristic equation
|I + G(s) Gc(s)| = 0
A process with an equal number of inputs and outputs controlled by the above methods is
called a square system. Non-square systems often arise in the process industry, e.g., the 2x3
mixing tank system, the 5x7 Shell standard control problem, and the 4x5 crude distillation system.
Two simple methods to control non-square systems are Davison's method and the Tanttu and Lieslehto
method.
Davison’s method of designing centralized PID controller:
Davison has proposed a centralized multivariable PID controller tuning method for square systems.
Here the proportional, integral and derivative gain matrices are given by
Kc = δ [G(s=0)]^{-1}
KI = ε [G(s=0)]^{-1}
KD = η [G(s=0)]^{-1}
where [G(s=0)]^{-1} is called the rough tuning matrix, and δ and ε (with a corresponding parameter η
for the derivative term) are the fine-tuning parameters, which generally range from 0 to 1. This method
can be extended to non-square systems. As the inverse does not exist for a non-square matrix, a
Moore-Penrose pseudo-inverse is used.
A† = A^H (A A^H)^{-1}, where A^H is the Hermitian (conjugate) transpose of A.
So for a non-square system, the PID controller gains are
Kc = δ [G(s=0)]†
KI = ε [G(s=0)]†
KD = η [G(s=0)]†
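A minimal sketch of Davison's rule for a non-square plant, with an illustrative 2x3 steady-state gain matrix and assumed fine-tuning values:

```python
import numpy as np

# Davison-style centralized PI gains for a non-square plant: the Moore-Penrose
# pseudo-inverse replaces the inverse. Gains and tuning values are illustrative.
G0 = np.array([[2.0, 0.5, 1.0],
               [0.4, 1.0, 0.8]])        # 2x3 process: 2 outputs, 3 inputs
delta, eps = 0.4, 0.15                  # assumed fine-tuning parameters in (0, 1)

G0_pinv = np.linalg.pinv(G0)            # equals G0^H (G0 G0^H)^{-1} for full row rank
Kc = delta * G0_pinv
KI = eps * G0_pinv
print(Kc.shape, np.allclose(G0 @ G0_pinv, np.eye(2)))   # (3, 2) True
```

Because G0 has full row rank, the pseudo-inverse is a right inverse, so at steady state the controller inverts the plant up to the scalar tuning factors.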
4. The filter factor is adjusted to vary the response and robustness characteristics. In the
limit of a perfect model (G̃p(s) = Gp(s)), the response will be
y(s) = Gp(s) Q(s) r(s) = Gp(s) [G̃p−(s)]^{-1} F(s) r(s) = G̃p+(s) F(s) r(s)
where G̃p−(s) is the invertible part of the model and G̃p+(s) the remaining noninvertible part
(time delays and right-half-plane zeros).
DYNAMIC MATRIX CONTROL:
Dynamic matrix control (DMC) was developed by Shell Oil Company in the 1960s and 1970s. It is based on a step response model,
which has the form
ŷ(k) = s1 Δu(k−1) + s2 Δu(k−2) + … + s(N−1) Δu(k−N+1) + sN u(k−N)   (1)
which is of the form
ŷ(k) = Σ_{i=1}^{N−1} si Δu(k−i) + sN u(k−N)   (2)
where ŷ(k) is the model prediction at time step k and u(k−N) is the manipulated input N steps in the
past. The difference between the measured output y(k) and the model prediction is called the
additive disturbance:
d(k) = y(k) − ŷ(k)   (3)
The corrected prediction is then equal to the actual measured output at step k:
ŷc(k) = ŷ(k) + d(k)   (4)
Similarly, the corrected predicted output at the first time step into the future can be
found from
ŷc(k+1) = ŷ(k+1) + d̂(k+1)
ŷc(k+1) = Σ_{i=1}^{N−1} si Δu(k+1−i) + sN u(k+1−N) + d̂(k+1)   (5)
ŷc(k+1) = s1 Δu(k) + Σ_{i=2}^{N−1} si Δu(k+1−i) + sN u(k+1−N) + d̂(k+1)
So for the j-th step into the future, we find
ŷc(k+j) = ŷ(k+j) + d̂(k+j)
ŷc(k+j) = Σ_{i=1}^{j} si Δu(k+j−i) + Σ_{i=j+1}^{N−1} si Δu(k+j−i) + sN u(k+j−N) + d̂(k+j)   (6)
(effect of future control moves) (effect of past control moves) (correction term)
and we can separate the effect of past and future control moves as shown in the above equation.
The most common assumption is that the correction term is constant in the future (this is the
constant additive disturbance assumption):
d̂(k+j) = d̂(k+j−1) = … = d̂(k) = y(k) − ŷ(k)   (7)
Also, realize that there are no control moves beyond the control horizon of M steps, so
Δu(k+M) = Δu(k+M+1) = … = Δu(k+P−1) = 0   (8)
In matrix-vector form, a prediction horizon of P steps and a control horizon of M steps yield
Yc = Sf Δuf + Spast Δupast + sN upast + d̂   (9)
where the Px1 corrected output prediction and the Mx1 vector of current and future control moves are
Yc = [ŷc(k+1), ŷc(k+2), …, ŷc(k+P)]',  Δuf = [Δu(k), Δu(k+1), …, Δu(k+M−1)]'
Sf is the PxM dynamic matrix built from the step response coefficients,
Sf = [s1 0 0 … 0
      s2 s1 0 … 0
      …
      sj s(j−1) s(j−2) … s(j−M+1)
      …
      sP s(P−1) s(P−2) … s(P−M+1)]
Spast is the Px(N−2) matrix acting on the past control moves Δupast = [Δu(k−1), Δu(k−2), …, Δu(k−N+2)]',
Spast = [s2 s3 s4 … s(N−2) s(N−1)
         s3 s4 s5 … s(N−1) 0
         …
         s(j+1) s(j+2) … s(N−1) 0 … 0
         …
         s(P+1) s(P+2) … 0 0]
and the last two terms collect the Px1 vector of past inputs, upast = [u(k−N+1), u(k−N+2), …, u(k−N+P)]',
multiplied by sN, and the Px1 vector of predicted disturbances, d̂ = [d̂(k+1), d̂(k+2), …, d̂(k+P)]'.
The least-squares objective function is
Φ = Σ_{i=1}^{P} (r(k+i) − ŷc(k+i))² + w Σ_{i=0}^{M−1} Δu(k+i)²   (12)
Defining the corrected error ec(k+i) = r(k+i) − ŷc(k+i) and stacking the P future errors into the
vector Ec = [ec(k+1), ec(k+2), …, ec(k+P)]', the first term can be written as
Σ_{i=1}^{P} (ec(k+i))² = (Ec)' Ec   (13)
and, with the MxM diagonal move-suppression weight matrix W = diag(w, w, …, w), the second term is
w Σ_{i=0}^{M−1} Δu(k+i)² = (Δuf)' W Δuf   (14)
Therefore the objective function can be written in the form
Φ = (Ec)' Ec + (Δuf)' W Δuf
subject to the modelling equality constraint
Ec = E − Sf Δuf   (15)
where E is the unforced error vector, i.e., the predicted error if no current or future control moves
are made. Substituting (15), the objective function becomes
Φ = (E − Sf Δuf)' (E − Sf Δuf) + (Δuf)' W Δuf   (16)
The solution for the minimization of this objective function is
Δuf = (Sf' Sf + W)^{-1} Sf' E = K E   (17)
The current and future control move vector is proportional to the unforced error vector E.
Because only the current control move is actually implemented, we use the first row K1 of the
matrix K and obtain
Δu(k) = K1 E   (18)
where K1 represents the first row of K = (Sf' Sf + W)^{-1} Sf'.
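The construction of Sf and the unconstrained control law (17)-(18) can be sketched as follows; the step-response data, horizons, and weight are illustrative:

```python
import numpy as np

def dmc_gain(s, P, M, w):
    """Unconstrained DMC gain K = (Sf'Sf + W)^{-1} Sf' (a minimal sketch).
    s: step-response coefficients s1..sN, P: prediction horizon,
    M: control horizon, w: move-suppression weight."""
    Sf = np.zeros((P, M))
    for j in range(M):
        Sf[j:, j] = s[:P - j]                 # shifted step-response columns
    W = w * np.eye(M)                          # diagonal move-suppression matrix
    return np.linalg.solve(Sf.T @ Sf + W, Sf.T)   # K, shape (M, P)

# Step response of a first-order process (gain 1, tau = 5) sampled at dt = 1
s = 1 - np.exp(-np.arange(1, 31) / 5.0)
K = dmc_gain(s, P=10, M=3, w=0.1)
E = np.ones(10)           # unforced error: setpoint one unit above prediction
du_now = K[0] @ E         # only the first control move is implemented
print(float(du_now))
```

Increasing w shrinks the computed moves, which is the usual way to trade response speed for smoother manipulated-variable action.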
MPC is not generally needed to provide adequate control of simple systems, which are often controlled well
by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers
include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be
caused by changes in the independent variables. In a chemical process, independent variables that
can be adjusted by the controller are often either the set-points of regulatory PID controllers
(pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent
variables that cannot be adjusted by the controller are used as disturbances. Dependent variables
in these processes are other measurements that represent either control objectives or process
constraints. MPC uses the current plant measurements, the current dynamic state of the process,
the MPC models, and the process variable targets and limits to calculate future changes in the
dependent variables. These changes are calculated to hold the dependent variables close to target
while honoring constraints on both independent and dependent variables. The MPC typically sends
out only the first change in each independent variable to be implemented, and repeats the
calculation when the next change is required. While many real processes are not linear, they can
often be considered to be approximately linear over a small operating range. Linear MPC
approaches are used in the majority of applications with the feedback mechanism of the MPC
compensating for prediction errors due to structural mismatch between the model and the process.
In model predictive controllers that consist only of linear models, the superposition
principle of linear algebra enables the effect of changes in multiple independent variables to be
added together to predict the response of the dependent variables. This simplifies the control
problem to a series of direct matrix algebra calculations that are fast and robust.
When linear models are not sufficiently accurate to represent the real process nonlinearities,
several approaches can be used. In some cases, the process variables can be transformed before
and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with
nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model
may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity
dynamic model based on fundamental mass and energy balances. The nonlinear model may be
linearized to derive a Kalman filter or specify a model for linear MPC. An algorithmic study by
El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide
a significant reduction in online computations while maintaining performance comparable to an
unaltered implementation. The proposed algorithm solves N convex optimization problems in
parallel based on the exchange of information among controllers. MPC is based on iterative,
finite-horizon optimization of a plant model. At time t the current plant state is sampled and a
cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a
relatively short time horizon [t, t + T] in the future. Specifically, an online or on-the-fly
calculation is used to explore state trajectories that emanate from the current state and to find a
cost-minimizing control strategy until time t + T. Only the first step of the control strategy is
implemented; then the plant state is sampled again and the calculations are repeated starting from
the new current state, yielding a new control input and a new predicted state path. The prediction
horizon keeps being shifted forward, and for this reason MPC is also called receding horizon
control. Although this approach is not optimal, in
practice it has given very good results. Much academic research has been done to find fast methods
of solution of Euler–Lagrange type equations, to understand the global stability properties of
MPC's local optimization, and in general to improve the MPC method. To some extent the
theoreticians have been trying to catch up with the control engineers when it comes to MPC.
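The receding-horizon loop described above can be sketched for an unconstrained quadratic cost. The first-order plant x(k+1) = a·x(k) + b·u(k) and all numerical values below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative first-order plant x(k+1) = a*x(k) + b*u(k).
a, b = 0.9, 0.5
N = 5            # prediction horizon
lam = 0.1        # control-effort weight in the quadratic cost
r = 1.0          # setpoint

def mpc_move(x, steps=N):
    # Free response f (effect of the current state with no input)
    # and forced-response matrix G (effect of the future input moves).
    f = np.array([a**j * x for j in range(1, steps + 1)])
    G = np.zeros((steps, steps))
    for j in range(steps):
        for i in range(j + 1):
            G[j, i] = a**(j - i) * b
    # Minimize ||f + G u - r||^2 + lam*||u||^2  ->  normal equations.
    u = np.linalg.solve(G.T @ G + lam * np.eye(steps), G.T @ (r - f))
    return u[0]          # receding horizon: implement only the first move

x = 0.0
for k in range(20):
    u = mpc_move(x)      # re-optimize over the shifted horizon
    x = a * x + b * u    # apply the first move, sample the new state
print(round(x, 3))       # settles near the setpoint (small offset from lam)
```

At every sample the whole finite-horizon problem is solved again from the newly measured state, which is exactly the "shifting horizon" behaviour that gives the method its name.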
Principles of MPC
Model Predictive Control (MPC) is a multivariable control algorithm that uses:
i. an internal dynamic model of the process,
ii. a cost function J defined over the receding prediction horizon, and
iii. an optimization algorithm that minimizes J by computing the future control moves.
Generalized Predictive Control (GPC)
The GPC method was proposed by Clarke et al. and has become one of the most popular
MPC methods both in industry and academia. It has been successfully implemented in many
industrial applications, showing good performance and a certain degree of robustness.
The basic idea of GPC is to calculate a sequence of future control signals in such a way
that it minimizes a multistage cost function defined over a prediction horizon. The index to be
optimized is the expectation of a quadratic function measuring the distance between the
predicted system output and some reference sequence over the horizon plus a quadratic function
measuring the control effort.
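In Clarke's formulation, the multistage cost function described here is usually written as

```latex
J(N_1, N_2, N_u) = \sum_{j=N_1}^{N_2} \delta(j)\,\bigl[\hat{y}(t+j \mid t) - w(t+j)\bigr]^2
                 + \sum_{j=1}^{N_u} \lambda(j)\,\bigl[\Delta u(t+j-1)\bigr]^2
```

where N1 and N2 are the minimum and maximum costing horizons, Nu is the control horizon, δ(j) and λ(j) are weighting sequences, ŷ(t+j|t) is the output predicted at time t, w(t+j) is the future reference sequence, and Δu(t+j−1) are the control increments.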
Generalized Predictive Control has many ideas in common with the other predictive
controllers, since it is based upon the same concepts, but it also has some differences. As will be
seen later, it provides an analytical solution (in the absence of constraints), it can deal with
unstable and non-minimum-phase plants, and it incorporates the concept of a control horizon as well
as the consideration of weighting of control increments in the cost function. The general set of
choices available for GPC leads to a greater variety of control objectives compared to other
approaches, some of which can be considered as subsets or limiting cases of GPC. The GPC
scheme can be seen in the figure below. It consists of the plant to be controlled, a
reference model that specifies the desired performance of the plant, a linear model of
the plant, and the Cost Function Minimization (CFM) algorithm that determines the input
needed to produce the plant’s desired performance. The GPC algorithm consists of the CFM
block.
The GPC system starts with the input signal, r(t), which is presented to the reference
model. This model produces a tracking reference signal, w(t), that is used as an input to the
CFM block. The CFM algorithm produces an output, which is used as an input to the plant.
Between samples, the CFM algorithm uses the plant model to calculate the next control input,
u(t+1), from predictions of the response of the plant’s model. Once the cost function is
minimized, this input is passed to the plant. This algorithm is outlined below.
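The signal flow just described, r(t) → reference model → w(t) → CFM → plant, can be sketched under hedged assumptions: the first-order plant model, the reference-model pole, and the one-step cost below are all illustrative, not from the text:

```python
import numpy as np

# Illustrative parameters, not taken from the text.
a_m = 0.7                      # reference-model pole shaping w(t)
a_p, b_p = 0.9, 0.4            # assumed linear plant model x(k+1) = a_p*x + b_p*u
lam = 0.05                     # control-effort weight in the cost function

def reference_model(w, r):
    # Produces the tracking reference signal w(t) from the input signal r(t).
    return a_m * w + (1 - a_m) * r

def cfm(x, w_next):
    # Cost Function Minimization over one step:
    # J(u) = (a_p*x + b_p*u - w_next)^2 + lam*u^2, set dJ/du = 0.
    return b_p * (w_next - a_p * x) / (b_p**2 + lam)

r, w, x = 1.0, 0.0, 0.0
for k in range(40):
    w = reference_model(w, r)          # tracking reference from r(t)
    u = cfm(x, w)                      # next control input u(t+1)
    x = a_p * x + b_p * u              # input passed to the plant
print(round(x, 3))
```

The loop mirrors the block diagram: the reference model filters r(t) into w(t), the CFM block minimizes the cost between predicted output and w(t), and only the resulting input is passed to the plant each sample.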