
EI6801 COMPUTER CONTROL OF PROCESSES
Dept. of EIE and ICE

UNIT I DISCRETE STATE-VARIABLE TECHNIQUE 9


State equation of discrete data system with sample and hold – State transition equation – Methods
of computing the state transition matrix – Decomposition of discrete data transfer functions – State
diagrams of discrete data systems – System with zero-order hold – Controllability and
observability of linear time invariant discrete data system–Stability tests of discrete-data system –
State Observer - State Feedback Control.

STATE EQUATION OF DISCRETE DATA SYSTEM WITH SAMPLE AND HOLD


If the time space is continuous, the system is known as a continuous-time system. However, if the
input and state vectors are defined only for discrete instants of time k, where k ranges over the
integers, the time space is discrete and the system is referred to as a discrete-time system. We shall
denote a continuous-time function at time t by f(t). Similarly, a discrete time function at time k
shall be denoted by f(k). We shall make no distinction between scalar and vector functions.

Figure. Illustration of discrete-time functions and quantized functions: a) discrete-time function, b) quantized function, c) quantized, discrete-time function

BLOCK DIAGRAM OF A TYPICAL SAMPLED DATA CONTROLLED SYSTEM

ADC: The analog signal is converted into digital form by the A/D conversion system. The conversion system usually consists of an A/D converter preceded by a sample-and-hold device.
DAC: The digital signal coming from the digital device is converted into an analog signal.


Sample and Hold Device:


Sampler: It is a device which converts an analog signal into a train of amplitude modulated pulses.
Hold device: A hold device simply maintains the value of the pulse for a prescribed time duration.
Sampling: sampling is the conversion of a continuous-time signal into a discrete time signal
obtained by taking samples of the continuous time signal (or analog signal) at discrete time
instants.
Sampling frequency: The sampling frequency should be at least twice the highest signal frequency:
Fs >= 2 Fin
Types of sampling:
Periodic sampling: In this sampling, samples are obtained uniformly at intervals of T seconds.
Multiple-order sampling: A particular sampling pattern is repeated periodically.
Multiple-rate sampling: In this type two simultaneous sampling operations with different time
periods are carried out on the signal to produce the sampled output.
Final Control Element: A final control element changes a process in response to a change in controller output. Examples of final control elements (actuators) include valves, dampers, fluid couplings, gates, and burner tilts, to name a few.
Sensor: A sensor is a device that detects and responds to some type of input from the physical
environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a
great number of other environmental phenomena.

STATE TRANSITION EQUATION – METHODS OF COMPUTING THE STATE TRANSITION MATRIX

State variable model of SISO discrete system consists of a set of first-order difference equations
i. Relating state variables x1(k), x2(k) ,…. xn(k) of the discrete time system to the input u(k).
ii. Relating output y(k) algebraically to state variables and the input.

Thus, the dynamics of a linear time-invariant system are described by the following two equations:

x(k+1) = F x(k) + g u(k)    (1)
y(k) = c x(k) + d u(k)      (2)

Equation (1) is called the 'state equation' and equation (2) is called the 'output equation'. Together they give the 'state variable model' of the system.
In equations (1) and (2):
x(k) is the state vector,
u(k) is the system input,
y(k) is the output,
'F' is a constant n x n matrix, 'g' is a constant column matrix,
'c' is a constant row matrix, and
'd' is a scalar coupling between input and output.

2
EI6801 COMPUTER CONTROL OF PROCESSES Dept. of EIE and ICE

 x1 (k )   f11 f12 .. .. f1n 


   
 x ( k ) 
2  f 21 f 22 .. .. f2n 
x(k )   ..  F  .. 
  ,  
 ..   .. 
 x (k )  f f nn nxn
 n   n1 f n 2 ..

 g1 
 
 g2 
g   .. 
  ,
 .. 
g 
 n

c   c1 c2 .. .. cn  nx1
Where d is a scalar

SOLUTION OF STATE DIFFERENCE EQUATIONS


The state equation is given by

x(k+1) = F x(k) + g u(k)    (1)

where x is an n x 1 state vector, u is a scalar input, F is an n x n real constant matrix and g is an n x 1 real constant vector.
The solution can be obtained by means of a recursion procedure.
Put k = 0 in equation (1):
x(1) = F x(0) + g u(0)    (2)
Put k = 1 in equation (1):
x(2) = F x(1) + g u(1)    (3)
Put k = 2 in equation (1):
x(3) = F x(2) + g u(2)    (4)
Put k = k-1 in equation (1):
x(k) = F x(k-1) + g u(k-1)    (5)
Substituting equation (2) in equation (3):
x(2) = F[F x(0) + g u(0)] + g u(1)
x(2) = F^2 x(0) + F g u(0) + g u(1)    (6)
Substituting equation (6) in equation (4):
x(3) = F[F^2 x(0) + F g u(0) + g u(1)] + g u(2)
x(3) = F^3 x(0) + F^2 g u(0) + F g u(1) + g u(2)    (7)
Thus we have
x(k) = F^k x(0) + F^(k-1) g u(0) + F^(k-2) g u(1) + .... + F^0 g u(k-1)

x(k) = F^k x(0) + Σ_{i=0}^{k-1} F^(k-1-i) g u(i)    (8)

Equation (8) has two parts, one representing the contribution of the initial state x(0) and the other representing the contribution of the input u(i), i = 0, 1, ..., (k-1).
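The recursion above can be checked numerically. Below is a minimal sketch (Python with plain lists; the 2x2 system and the input u(k) = (-1)^k are borrowed from the worked example later in this unit, and the helper names are mine) that propagates the state equation step by step and compares the result with the closed-form sum of equation (8).

```python
# Compare the recursion x(k+1) = F x(k) + g u(k) with the closed-form
# solution x(k) = F^k x(0) + sum_{i=0}^{k-1} F^(k-1-i) g u(i).

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, k):
    R = [[float(i == j) for j in range(len(M))] for i in range(len(M))]
    for _ in range(k):
        R = mat_mat(R, M)
    return R

F = [[0.0, 1.0], [-0.21, -1.0]]
g = [0.0, 1.0]
x0 = [1.0, 0.0]

def u(k):
    return (-1.0) ** k

K = 6
x = x0                                   # recursive solution
for k in range(K):
    Fx = mat_vec(F, x)
    x = [Fx[i] + g[i] * u(k) for i in range(2)]

xc = mat_vec(mat_pow(F, K), x0)          # closed-form solution, equation (8)
for i in range(K):
    Fg = mat_vec(mat_pow(F, K - 1 - i), g)
    xc = [xc[j] + Fg[j] * u(i) for j in range(2)]

assert all(abs(x[j] - xc[j]) < 1e-9 for j in range(2))
```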

STATE TRANSITION MATRIX


The state equation is given by

x(k+1) = F x(k) + g u(k)    (1)

When u(k) = 0, equation (1) can be rewritten as

x(k+1) = F x(k)    (2)

The above equation is called the homogeneous state equation, whose solution is obtained by setting the input terms in equation (1) to zero.
The solution of equation (2) is given by

x(k) = F^k x(0)    (3)

From equation (3) it is observed that the initial state x(0) at k = 0 is driven to the state x(k) at the sampling instant 'k'.
This transition is carried out by the matrix F^k, which is therefore called the state transition matrix and is denoted by

Φ(k) = F^k

and Φ(0) = I (identity matrix).
Methods of evaluating State Transition Matrix
There are three methods of evaluating the state transition matrix.
1. Evaluation using Inverse Z-Transform
2. Evaluation using similarity transformation
3. Evaluation using Cayley Hamilton Technique

Evaluation of state transition matrix using Inverse Z-Transform


The homogeneous state equation is given by

x(k+1) = F x(k)

Taking the Z-transform on both sides, we get

z X(z) - z x(0) = F X(z)
z X(z) - F X(z) = z x(0)
(zI - F) X(z) = z x(0)
X(z) = (zI - F)^(-1) z x(0)

Taking the inverse Z-transform,

x(k) = Z^(-1)[(zI - F)^(-1) z] x(0)

Also, F^k = Φ(k) = Z^(-1)[(zI - F)^(-1) z]    (9)

Compute F^K for the following system using 3 different techniques:

x(k+1) = [  0      1 ] x(k) + [ 0 ] (-1)^k,    x(0) = [ 1 ],    y(k) = x2(k)
         [ -0.21  -1 ]        [ 1 ]                   [ 0 ]

so that

F = [  0      1 ]
    [ -0.21  -1 ]

Evaluation of F^K using the Inverse Z-Transform

From the inverse Z-transform relation,

F^K = Φ(k) = Z^(-1)[(zI - F)^(-1) z]

zI - F = [ z     -1    ]
         [ 0.21  z + 1 ]

(zI - F)^(-1) = [ (z+1)/(z^2 + z + 0.21)     1/(z^2 + z + 0.21) ]
                [ -0.21/(z^2 + z + 0.21)     z/(z^2 + z + 0.21) ]

Since z^2 + z + 0.21 = (z + 0.3)(z + 0.7), partial fraction expansion of each entry of (zI - F)^(-1) z gives

(zI - F)^(-1) z = [ 1.75 z/(z+0.3) - 0.75 z/(z+0.7)         2.5 z/(z+0.3) - 2.5 z/(z+0.7)   ]
                  [ -0.525 z/(z+0.3) + 0.525 z/(z+0.7)      -0.75 z/(z+0.3) + 1.75 z/(z+0.7) ]

Taking the inverse Z-transform entry by entry (using Z^(-1)[z/(z+a)] = (-a)^K):

F^K = Z^(-1)[(zI - F)^(-1) z]
    = [ 1.75(-0.3)^K - 0.75(-0.7)^K          2.5(-0.3)^K - 2.5(-0.7)^K   ]
      [ -0.525(-0.3)^K + 0.525(-0.7)^K       -0.75(-0.3)^K + 1.75(-0.7)^K ]
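The entries of F^K above can be cross-checked by direct matrix multiplication. A small Python sketch (plain lists, no libraries; helper names are mine):

```python
# Check the closed-form F^K from the inverse Z-transform against
# repeated multiplication of F, for K = 0 .. 5.

def closed_form(K):
    p, q = (-0.3) ** K, (-0.7) ** K
    return [[1.75 * p - 0.75 * q,    2.5 * p - 2.5 * q],
            [-0.525 * p + 0.525 * q, -0.75 * p + 1.75 * q]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

F = [[0.0, 1.0], [-0.21, -1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]            # running product, starts at F^0 = I
for K in range(6):
    C = closed_form(K)
    assert all(abs(P[i][j] - C[i][j]) < 1e-12
               for i in range(2) for j in range(2))
    P = mat_mul(P, F)                   # advance to F^(K+1)
```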

Evaluation of FK using Similarity Transformation Method

In this method, the state transition matrix F^K is given by

F^K = P Λ^K P^(-1),    Λ^K = [ λ1^K  0     ..  ..  0
                               0     λ2^K  ..  ..  0
                                       :
                               0     0     ..  ..  λn^K ]

where 'P' is the transformation matrix that transforms 'F' into diagonal form.
For the previous example, F^K can be found as follows using the similarity transformation method.

Given
F = [  0      1 ]
    [ -0.21  -1 ]

λI - F = [ λ     -1    ],    |λI - F| = λ^2 + λ + 0.21 = 0
         [ 0.21  λ + 1 ]

The eigenvalues are λ1 = -0.3 and λ2 = -0.7.
As 'F' is in companion form, the matrix 'P' can be written as the Vandermonde matrix

P = [ 1    1  ] = [ 1     1    ],    P^(-1) = [ 1.75   2.5  ]
    [ λ1   λ2 ]   [ -0.3  -0.7 ]              [ -0.75  -2.5 ]

so that

P^(-1) F P = Λ = [ -0.3   0    ]
                 [  0    -0.7  ]

Λ^K = [ (-0.3)^K   0        ]
      [  0         (-0.7)^K ]

F^K = P Λ^K P^(-1) = [ 1     1    ] [ (-0.3)^K   0        ] [ 1.75   2.5  ]
                     [ -0.3  -0.7 ] [  0         (-0.7)^K ] [ -0.75  -2.5 ]

    = [ 1.75(-0.3)^K - 0.75(-0.7)^K          2.5(-0.3)^K - 2.5(-0.7)^K   ]
      [ -0.525(-0.3)^K + 0.525(-0.7)^K       -0.75(-0.3)^K + 1.75(-0.7)^K ]

EVALUATION OF F^K USING CAYLEY HAMILTON THEOREM

For the previous example, F^K can be found as follows using the Cayley Hamilton theorem.

Given
F = [  0      1 ]
    [ -0.21  -1 ]

The eigenvalues are -0.3 and -0.7.

Since F is of second order, the polynomial will be of the form

g(λ) = β0 + β1 λ

Setting g(λi) = λi^K at each eigenvalue:

(-0.3)^K = β0 - 0.3 β1
(-0.7)^K = β0 - 0.7 β1

Solving for β0 and β1, we get

β0 = 1.75(-0.3)^K - 0.75(-0.7)^K
β1 = 2.5(-0.3)^K - 2.5(-0.7)^K

Φ(K) = F^K = β0 I + β1 F

Φ(K) = [ 1.75(-0.3)^K - 0.75(-0.7)^K          2.5(-0.3)^K - 2.5(-0.7)^K   ]
       [ -0.525(-0.3)^K + 0.525(-0.7)^K       -0.75(-0.3)^K + 1.75(-0.7)^K ]


Decomposition of discrete data transfer functions


a) Direct decomposition
b) Cascade decomposition
c) Parallel decomposition

STATE DIAGRAMS OF DISCRETE DATA SYSTEMS


Consider a discrete time system described by the following state difference equations.
x1(k+1) = -x1(k) + x2(k)
x2(k+1) = -x1(k) + u(k)
y(k) = x1(k) + x2(k)


SYSTEM WITH ZERO-ORDER HOLD


The state diagram of the zero-order hold is important for sampled-data control systems. Let the input to and output of a ZOH be e*(t) and h(t) respectively. Then, for the interval kT <= t < (k + 1)T,

h(t) = e(kT)

Therefore, the state diagram, as shown in the figure, consists of a single branch with gain s^(-1).
Figure: State Diagram of Zero Order Hold

CONTROLLABILITY AND OBSERVABILITY OF LINEAR TIME INVARIANT DISCRETE DATA SYSTEM
CONTROLLABILITY
A control system is said to be completely state controllable if it is possible to transfer the system
from any arbitrary initial state to any desired state (also an arbitrary state) in a finite time period.
That is, a control system is controllable if every state variable can be controlled in a finite time
period by some unconstrained control signal.
The discrete state variable model of the system is given by

x(k+1) = F x(k) + g u(k)
y(k) = c x(k) + d u(k)

F : n x n,  g : n x 1,  c : 1 x n,  d : 1 x 1
x - state vector, u - input, y - output
The necessary and sufficient condition for the system to be completely controllable is that the n x n controllability matrix

U = [ g  Fg  F^2 g  .....  F^(n-1) g ]

has rank equal to n, i.e., ρ(U) = n.
OBSERVABILITY
The system is said to be completely observable if every initial state x(0) can be determined from
the observations of y(kT) over a finite number of sampling periods. The system, therefore, is
completely observable if every transition of the states eventually affects every element of the
output vector.

The necessary and sufficient condition for the system to be completely observable is that the n x n observability matrix

V = [ c
      cF
      cF^2
      ....
      cF^(n-1) ]

has rank equal to n, i.e., ρ(V) = n.
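For the square matrices that arise in this SISO case, the rank condition can be checked with a determinant: the rank equals n exactly when the determinant is nonzero. The sketch below (Python, plain lists; it uses the third-order system from the pole-placement example later in this unit, and the output row c = [1 0 0] is my assumption for illustration) builds U and V and evaluates their determinants.

```python
# Controllability matrix U = [g Fg F^2 g] and observability matrix
# V = [c; cF; cF^2] for a 3rd-order system; full rank <=> det != 0.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(3)) for j in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

F = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, -2.0, -3.0]]
g = [0.0, 0.0, 10.0]
c = [1.0, 0.0, 0.0]          # assumed output row, for illustration

Fg  = mat_vec(F, g)
F2g = mat_vec(F, Fg)
U = [[g[i], Fg[i], F2g[i]] for i in range(3)]    # columns g, Fg, F^2 g

cF  = vec_mat(c, F)
cF2 = vec_mat(cF, F)
V = [c, cF, cF2]                                 # rows c, cF, cF^2

print(det3(U), det3(V))   # both nonzero: controllable and observable
```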
CONTINUOUS STATE SPACE SYSTEM TO DISCRETE STATE SPACE SYSTEM
The continuous state equation of the plant is

xdot(t) = A x(t) + b u(t)
y = c x(t)

The discrete state equation of the plant is

x(k+1) = F x(k) + g u(k)
y(k) = c x(k)

The transformation is:
1. Find e^(At) = L^(-1)[(sI - A)^(-1)]
2. Calculate F = e^(AT)
3. Calculate g = ∫_0^T e^(Aτ) b dτ

Example:
A = [ 0  1 ],    b = [ 0 ],    c = [ 1  0 ],    T = 1 sec
    [ 0 -1 ]         [ 1 ]

The system gives

e^(At) = [ 1   1 - e^(-t) ],    so    F = e^(AT) = [ 1   1 - e^(-T) ]
         [ 0   e^(-t)     ]                        [ 0   e^(-T)     ]

g = ∫_0^T e^(Aτ) b dτ = [ ∫_0^T (1 - e^(-τ)) dτ ] = [ T - 1 + e^(-T) ]
                        [ ∫_0^T e^(-τ) dτ       ]   [ 1 - e^(-T)     ]
LOSS OF CONTROLLABILITY AND OBSERVABILITY DUE TO SAMPLING
Consider the harmonic oscillator with transfer function

Y(s)/X(s) = ω^2 / (s^2 + ω^2)

For the sampled system, the controllability and observability matrices are

U = [ g  Fg ],    V = [ c
                        cF ]

and their determinants contain the factor sin ωT, so both matrices become singular whenever sin ωT = 0, i.e. when the sampling interval T is a multiple of the half period. In general, controllability and observability are preserved under sampling only if eigenvalues with equal real parts,

Re λi = Re λj,

satisfy

Im(λi - λj) ≠ 2nπ/T,    n = ±1, ±2, .......
STABILITY TESTS OF DISCRETE-DATA SYSTEM
Jury's stability test

F(z) = a_n z^n + a_(n-1) z^(n-1) + a_(n-2) z^(n-2) + ..... + a_2 z^2 + a_1 z + a_0,    a_n > 0

Necessary conditions:
F(1) > 0 and (-1)^n F(-1) > 0
The sufficient conditions are checked with the Jury table, which has a total of 2n-3 rows:

Row    z^0      z^1      z^2      .....  z^(n-k)  ....  z^(n-2)  z^(n-1)  z^n
1      a0       a1       a2       ....   a_(n-k)  ....  a_(n-2)  a_(n-1)  a_n
2      a_n      a_(n-1)  a_(n-2)  ....   a_k      ....  a_2      a_1      a_0
3      b0       b1       b2       ....   .....    ....  b_(n-2)  b_(n-1)
4      b_(n-1)  b_(n-2)  b_(n-3)  ....   .....    ....  b_1      b_0
5      c0       c1       c2       ....   .....    ....  c_(n-2)
6      c_(n-2)  c_(n-3)  c_(n-4)  ....   .....    ....  c0
......
2n-5
2n-4
2n-3

The k-th elements of rows 3 and 5 are given by the determinants

b_k = | a0       a_(n-k) |        c_k = | b0        b_(n-1-k) |
      | a_n      a_k     |              | b_(n-1)   b_k       |

Sufficient conditions ((n-1) conditions):

|a0| < a_n
|b0| > |b_(n-1)|
|c0| > |c_(n-2)|
....
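The tabulation is mechanical, so it is easy to implement. A hedged sketch in Python (the function name jury_stable is mine; a_n > 0 is assumed): it checks the necessary conditions, then reduces the table row by row.

```python
def jury_stable(a):
    """Jury test for F(z) = a[n] z^n + ... + a[1] z + a[0], with a[n] > 0."""
    n = len(a) - 1
    # Necessary conditions: F(1) > 0 and (-1)^n F(-1) > 0
    if sum(a) <= 0:
        return False
    if (-1) ** n * sum(c * (-1) ** i for i, c in enumerate(a)) <= 0:
        return False
    if not abs(a[0]) < a[-1]:                # |a0| < an
        return False
    row = list(a)
    while len(row) > 3:                      # build rows 3, 5, ... of the table
        m = len(row) - 1
        row = [row[0] * row[k] - row[-1] * row[m - k] for k in range(m)]
        if not abs(row[0]) > abs(row[-1]):   # |b0| > |b_(n-1)|, etc.
            return False
    return True

# z^2 + z + 0.21 has roots -0.3 and -0.7 (inside the unit circle): stable
print(jury_stable([0.21, 1.0, 1.0]))   # prints True
# z^2 - 3z + 2 has roots 1 and 2: unstable
print(jury_stable([2.0, -3.0, 1.0]))   # prints False
```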
STABILITY ANALYSIS USING BILINEAR TRANSFORMATION
The bilinear transformation maps the interior of unit circle in the z-plane into the left half
of the r-plane.
z = (1 + r) / (1 - r)

STATE OBSERVER
Determination of the observer gain matrix (m)
Method 1
1. Check the observability (Qb).
2. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
3. Determine the original characteristic polynomial:
|λI - F| = λ^n + a1 λ^(n-1) + a2 λ^(n-2) + .... + a_(n-1) λ + a_n
4. Determine the transformation matrix (Po):

Po = W Qb^T

Qb = [ c
       cF
       cF^2
       ....
       cF^(n-1) ]

W = [ a_(n-1)  a_(n-2)  ....  a2   a1   1
      a_(n-2)  a_(n-3)  ....  a1   1    0
      ...      ...      ....  ...  ...  ...
      a1       1        ....  0    0    0
      1        0        ....  0    0    0 ]

5. Determine the observer gain matrix (m):

m = [ m1, m2, ...., mn ]^T = Po^(-1) [ α_n - a_n,  α_(n-1) - a_(n-1),  ....,  α2 - a2,  α1 - a1 ]^T

Method-2 Observer design

1. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
2. Determine the characteristic polynomial of the observer error dynamics:
|λI - (F - mc)| = 0
3. Equate the coefficients of the two polynomials.


Method-3 Observer design (Ackermann's formula)
1. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
2. Determine the matrix Φ(F) using the coefficients of the desired characteristic polynomial:
Φ(F) = F^n + α1 F^(n-1) + .... + α_(n-1) F + α_n I
3. Calculate the observer gain matrix (m) using Ackermann's formula:

m = Φ(F) Qb^(-1) [ 0  0  ....  0  1 ]^T

STATE FEEDBACK CONTROL


The discrete state variable model of the system is given by

x(k+1) = F x(k) + g u(k)
y(k) = c x(k) + d u(k)

F : n x n,  g : n x 1,  c : 1 x n,  d : 1 x 1
x - state vector, u - input, y - output
Pole Placement by State Feedback, Method 1
1. Check the controllability (Qc).
2. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
3. Determine the original characteristic polynomial:
|λI - F| = λ^n + a1 λ^(n-1) + a2 λ^(n-2) + .... + a_(n-1) λ + a_n
4. Determine the transformation matrix (Pc):

Pc = [ P1
       P1 F
       ....
       P1 F^(n-1) ],    P1 = [ 0  0  ...  0  1 ] Qc^(-1)

5. Determine the state feedback gain matrix

K = [ α_n - a_n   α_(n-1) - a_(n-1)   ....   α2 - a2   α1 - a1 ] Pc

Note:
If the given system state model is in controllable phase variable form, then Pc = I (unit matrix).


Example
Design a state feedback controller which will give closed-loop poles at -2, -1 ± j.

[ x1(k+1) ]   [ 0   1   0 ] [ x1(k) ]   [  0 ]
[ x2(k+1) ] = [ 0   0   1 ] [ x2(k) ] + [  0 ] u(k)
[ x3(k+1) ]   [ 0  -2  -3 ] [ x3(k) ]   [ 10 ]

Solution:
Qc = [ g  Fg  F^2 g ]

   = [  0    0   10 ],    |Qc| = -1000 ≠ 0
     [  0   10  -30 ]
     [ 10  -30   70 ]

Qc^(-1) = [ 0.2  0.3  0.1 ]
          [ 0.3  0.1  0   ]
          [ 0.1  0    0   ]

Desired characteristic polynomial:
(λ - λ1)(λ - λ2)(λ - λ3) = (λ + 2)(λ + 1 - j)(λ + 1 + j)
= λ^3 + 4λ^2 + 6λ + 4
Here α1 = 4, α2 = 6, α3 = 4.
Original characteristic polynomial:

|λI - F| = | λ   -1    0   | = λ^3 + 3λ^2 + 2λ + 0
           | 0    λ   -1   |
           | 0    2   λ+3  |

Here a1 = 3, a2 = 2, a3 = 0.

P1 = [ 0  0  1 ] Qc^(-1) = [ 0.1  0  0 ]

Pc = [ P1     ]   [ 0.1  0    0   ]
     [ P1 F   ] = [ 0    0.1  0   ]
     [ P1 F^2 ]   [ 0    0    0.1 ]

The state feedback gain matrix is
K = [ α3 - a3   α2 - a2   α1 - a1 ] Pc
  = [ 4 - 0   6 - 2   4 - 3 ] Pc
  = [ 0.4  0.4  0.1 ]


METHOD-2 POLE PLACEMENT

1. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
2. Determine the characteristic polynomial of the system with state feedback:
|λI - (F - gK)| = 0
3. Equate the coefficients of the two polynomials.

METHOD-3 (ACKERMANN'S FORMULA)

1. Determine the desired characteristic polynomial:
(λ - λ1)(λ - λ2)......(λ - λn) = λ^n + α1 λ^(n-1) + .... + α_(n-1) λ + α_n
2. Determine the matrix Φ(F) using the coefficients of the desired characteristic polynomial:
Φ(F) = F^n + α1 F^(n-1) + .... + α_(n-1) F + α_n I
3. Calculate the state feedback gain matrix (K) using Ackermann's formula:

K = [ 0  0  ....  0  1 ] Qc^(-1) Φ(F)

Concept of Eigen Values and Eigen Vectors


The roots of the characteristic equation described above are known as the eigenvalues of matrix A. Some properties of eigenvalues are listed below:
 Any square matrix A and its transpose A^T have the same eigenvalues.
 The sum of the eigenvalues of a matrix A is equal to the trace of A.
 The product of the eigenvalues of a matrix A is equal to the determinant of A.
 If matrix A is multiplied by a scalar, its eigenvalues are multiplied by the same scalar.
 The eigenvalues of A^(-1) are the reciprocals of the eigenvalues of A.
 If all the elements of the matrix are real, the eigenvalues are either real or occur in complex conjugate pairs.
There exists one eigenvector Pk corresponding to each eigenvalue λk, satisfying the condition (λk I - A) Pk = 0, where k = 1, 2, 3, ........, n.

UNIT II SYSTEM IDENTIFICATION 9


Non-parametric methods: Transient analysis – Frequency analysis – Correlation analysis – Spectral analysis. Parametric methods: Least squares method – Recursive least squares method.

BASICS
System identification is a methodology for building mathematical models of
dynamic systems using measurements of the system's input and output signals.
• White box
One could build a so-called white-box model based on first principles, e.g. a model for a
physical process from the Newton equations, but in many cases such models will be overly
complex and possibly even impossible to obtain in reasonable time due to the complex nature
of many systems and processes.


• Grey box model: although the peculiarities of what is going on inside the system are not
entirely known, a certain model based on both insight into the system and experimental
data is constructed.
• black box model: No prior model is available. Most system identification algorithms are
of this type.

Different approaches to system identification depending on model class


• Linear / Non-linear
• Parametric/Non-parametric
• Non-parametric methods try to estimate a generic model. (step responses, impulse
responses, frequency responses)
• Parametric methods estimate parameters in a user-specified model. (transfer functions,
state-space matrices)
System identification includes the following steps
• Experiment design: its purpose is to obtain good experimental data, and it includes the
choice of the measured variables and of the character of the input signals
• Selection of model structure: A suitable model structure is chosen using prior knowledge
and trial and error.
• Choice of the criterion to fit: A suitable cost function is chosen, which reflects how well
the model fits the experimental data.
• Parameter estimation: An optimization problem is solved to obtain the numerical values
of the model parameters.
•Model validation: The model is tested in order to reveal any inadequacies.


Different mathematical models


• Model description: transfer functions, state-space models, block diagrams (e.g. Simulink)
Experiments and data collection
Preliminary experiments
• Step/impulse response tests to get a basic understanding of system dynamics
• Linearity, static gains, time delays, time constants, sampling interval

Data collection for model estimation
• Carefully designed experiment to enable good model fit
• Operating point, input signal type, number of data points to collect
Preliminary experiments- step response
Useful for obtaining qualitative information about the system
• Dead time (delay)
• Static gain
• Time constants (rise time)
• Resonance frequency

1. NON PARAMETRIC METHODS


TRANSIENT ANALYSIS
The input is taken as a step or an impulse and the recorded output constitutes the model.
1.1 STEP RESPONSE

• Time domain equation



y(t) = K (1 - e^(-t/τ))
as t → ∞, y(∞) = K = Yss
y(τ) = 0.632 K

1.2 IMPULSE RESPONSE

• Time domain equation


y(t) = K e^(-t/τ)
at t = 0, y(0) = K
y(τ) = 0.368 K
FREQUENCY ANALYSIS
The input is a sinusoid. For a linear system in steady state the output will also be sinusoidal.
The change in amplitude and phase will give the frequency response for the used frequency.
IDENTIFICATION FROM FREQUENCY RESPONSE
First order system
The frequency response of the first order system is given by

T(jω) = Y(jω)/R(jω) = K / (1 + jωτ),    |T(jω)| = K / √(1 + ω^2 τ^2)


• Correlation analysis. The input is white noise. A normalized cross-covariance function between output and input provides an estimate of the weighting function.
• Spectral analysis. The frequency response can be estimated for arbitrary inputs by dividing the cross-spectrum between output and input by the input spectrum.

LEAST SQUARES METHOD

f x   ax  b
N N
Error   y i  f x i    y i  ax i  b 
2 2

i 1 i 1
• The ‘best’ line has minimum error between line and data points
• This is called the least squares approach, since square of the error is minimized.
 N
2
Minimize Error   y i  ax i  b  
 i 1 
Take the derivative of the error with respect to a and b, set each to zero

 N 2
  y i  ax i  b  
 Error 
  i 1  0
a a
 N 2
  y i  ax i  b  
 Error 
  i 1  0
b b
 Error  N
 2 x i y i  ax i  b   0
a i 1

 Error  N
 2 y i  ax i  b   0
b i 1
Solve for the a and b so that the previous two equations both = 0
N N N
a  x  b x i   x i y i
2
i
i 1 i 1 i 1
N N
a  x i  bN   y i
i 1 i 1

19
EI6801 COMPUTER CONTROL OF PROCESSES Dept. of EIE and ICE

Put these into matrix form

 N
 N 
N  xi 
b   y 
     i 1
i
 i 1

N N

2   a  N

 x i  x i   i i 
x y
 i 1 i 1   i 1 
N N N
N x i yi   x i  yi
a i 1 i 1 i 1
2
N 
N
N  x   x i  2
i
i 1  i 1 
N N N N

y x x x y
i
2
i i i i
b i 1 i 1 i 1 i 1
2
N
N 
N  x   x i 
2
i
i 1  i 1 

NON PARAMETRIC METHODS OF SYSTEM IDENTIFICATION


TRANSIENT ANALYSIS
In this method, the model is estimated from the step response or impulse response of the process. This section explains how a model can be estimated from the step response using the transient analysis technique.

Identifying a First-Order Plus Dead Time (FOPDT) model

Let us consider a FOPDT system described by the transfer function model shown below:

G(s) = Y(s)/U(s) = K e^(-θs) / (Ts + 1)    (1)

Where,
Y(s) = Laplace transform of the output signal y(t)
U(s) = Laplace transform of the input signal u(t)
G(s) = transfer function of the FOPDT system
T = time constant
θ = dead time
K = steady state gain
Apply a unit step input to the FOPDT system as shown in Figure 1, and obtain the step response shown in Figure 2.

Figure 1. Unit step input to FOPDT system

Figure 2 demonstrates a graphical method for determining the FOPDT parameters K, T, and θ from the step response. The gain K is given by the final value of the response. By drawing the steepest tangent, T and θ can be obtained as shown in Figure 2. The tangent crosses the t-axis at t = θ.

Figure 2. Step response of FOPDT system to a unit step input

Identifying Second Order Model


Let us consider a second order system described by the transfer function model shown below:

G(s) = Y(s)/U(s) = K ω0^2 / (s^2 + 2ζω0 s + ω0^2)    (2)

Where,
Y(s) = Laplace transform of the output signal y(t)
U(s) = Laplace transform of the input signal u(t)
G(s) = transfer function of the second order system
K = steady state gain
ω0 = undamped natural frequency
ζ = damping ratio
Apply a unit step input to the second order system as shown in Figure 3, and obtain the step response shown in Figure 4.

Figure 3. Unit step input to a second order system

Figure 4. Step response of second order system to a unit step input

The gain K is given by the final value (after convergence) as shown in Figure 4. The maxima and minima of the step response occur at times

t_n = nπ / (ω0 √(1 - ζ^2)),    n = 1, 2, ...    (3)

and

y(t_n) = K [1 - (-1)^n M^n]    (4)

where the overshoot M is given by

M = exp(-πζ / √(1 - ζ^2))    (5)

Note: the first maximum occurs at n = 1, the first minimum at n = 2, the second maximum at n = 3, and so on.
From the step response shown in Figure 4, the first peak time t1 and first peak overshoot M can be determined. Inverting (5),

ζ = -log M / [π^2 + (log M)^2]^(1/2)    (6)

Then ω0 can be determined from (3) at t1:

ω0 = π / (t1 √(1 - ζ^2))    (since n = 1)    (7)

From (6) and (7) the parameters ζ and ω0 can be determined.
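Equations (5) to (7) form a round trip: from assumed values of ζ and ω0 one can generate M and t1, and (6) and (7) must recover them. A quick numerical check (Python; the helper names are mine):

```python
import math

def overshoot(zeta):
    # equation (5): M = exp(-pi*zeta / sqrt(1 - zeta^2))
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

def estimate_zeta(M):
    # equation (6): zeta = -ln(M) / sqrt(pi^2 + ln(M)^2)
    lnM = math.log(M)
    return -lnM / math.sqrt(math.pi ** 2 + lnM ** 2)

def estimate_w0(t1, zeta):
    # equation (7): w0 = pi / (t1 * sqrt(1 - zeta^2))
    return math.pi / (t1 * math.sqrt(1.0 - zeta ** 2))

zeta, w0 = 0.5, 2.0
M  = overshoot(zeta)
t1 = math.pi / (w0 * math.sqrt(1.0 - zeta ** 2))   # equation (3) with n = 1

assert abs(estimate_zeta(M) - zeta) < 1e-12
assert abs(estimate_w0(t1, estimate_zeta(M)) - w0) < 1e-9
```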

FREQUENCY ANALYSIS
For frequency analysis, it is convenient to use the following transfer function model:

Y(s) = G(s) U(s)    (1)

Where,
Y(s) = Laplace transform of the output signal y(t)
U(s) = Laplace transform of the input signal u(t)
G(s) = transfer function of the system

Apply the following sinusoidal input u(t) to the system described in (1), as shown in Figure 2:

u(t) = a sin(ωt)

Where,
a = amplitude of the sinusoidal input u(t)
ω = frequency of the sinusoidal input u(t) in rad/sec

Figure 2. Sinusoidal input to G(s)

If the system G(s) is asymptotically stable, then in steady state the output y(t) is also a sinusoidal signal:

y(t) = b sin(ωt + φ)    (2)

Where,
b = amplitude of the output y(t)
φ = phase difference between the input u(t) and the output y(t) (shown in Figure 3)


Figure 3. Input and Output waveforms of G(s)

From (2), we can write

b = a |G(iω)|    (3a)
φ = arg G(iω)    (3b)

This can be proved as follows. Assume the system is initially at rest. Then the system G(s) can be represented using a weighting function h(t) as follows:

y(t) = ∫_0^t h(τ) u(t - τ) dτ    (4)

where h(t) is the function whose Laplace transform equals G(s):

G(s) = ∫_0^∞ h(τ) e^(-sτ) dτ    (5)

Since

sin(ωt) = (e^(iωt) - e^(-iωt)) / (2i)    (6)

combining equations (1), (4), (5) and (6) gives

y(t) = ∫_0^t h(τ) u(t - τ) dτ
     = ∫_0^t h(τ) a sin(ω(t - τ)) dτ
     = ∫_0^t h(τ) a (e^(iω(t-τ)) - e^(-iω(t-τ))) / (2i) dτ
     = (a/2i) [ e^(iωt) ∫_0^t h(τ) e^(-iωτ) dτ - e^(-iωt) ∫_0^t h(τ) e^(iωτ) dτ ]

As t → ∞ the two integrals tend to G(iω) and G(-iω) respectively (compare with (5)), so in steady state

y(t) = (a/2i) [ e^(iωt) G(iω) - e^(-iωt) G(-iω) ]

Since we can represent

G(iω) = r e^(iφ),    r = |G(iω)|,    φ = arg G(iω)

and G(-iω) is the complex conjugate of G(iω), i.e. G(-iω) = |G(iω)| e^(-i arg G(iω)),

y(t) = (a/2i) |G(iω)| [ e^(iωt) e^(i arg G(iω)) - e^(-iωt) e^(-i arg G(iω)) ]
     = a |G(iω)| sin(ωt + arg G(iω))
     = b sin(ωt + φ)    (7)

with b = a |G(iω)| and φ = arg G(iω), which proves (3a) and (3b).
By measuring the amplitudes a and b as well as the phase difference φ, one can draw a Bode plot (or Nyquist or equivalent plot) for different values of ω. From the Bode plot, one can easily estimate the transfer function model G(s) of the system.

CORRELATION ANALYSIS
The form of model used in correlation analysis is

y(t) = Σ_{k=0}^{∞} h(k) u(t - k) + v(t)    (1)

Where,
y(t) = output signal
u(t) = input signal
h(k) = weighting sequence
v(t) = disturbance term

Figure 1. System used in correlation analysis

Assume that the input u(t) is a stationary stochastic process (white noise) which is independent of the disturbance v(t). Then the following relation holds for the cross covariance function:

r_yu(τ) = Σ_{k=0}^{∞} h(k) r_u(τ - k)    (2)

Where,
r_yu(τ) = cross covariance function between output y(t) and input u(t)
        = E[y(t + τ) u(t)]
r_u(τ) = covariance function of the input u(t)
       = E[u(t + τ) u(t)],    with r_u(-τ) = r_u(τ)
Note: E denotes the expected value (mean value).
Conduct an experiment and collect input u(t) and output y(t) data.
The covariance functions in (2) can be estimated from the input and output data as

r̂_yu(τ) = (1/N) Σ_t y(t + τ) u(t),    τ = 0, ±1, ±2, ...    (3)

r̂_u(τ) = (1/N) Σ_t u(t + τ) u(t),    r̂_u(-τ) = r̂_u(τ),    τ = 0, ±1, ±2, ...

where each sum runs over the values of t for which both samples lie inside the record of length N.

Where,
N = number of experimental data
r̂_yu(τ) = estimated covariance function between y(t) and u(t), from input and output data
r̂_u(τ) = estimated covariance function of u(t), from input data

Then an estimate ĥ(k) of the weighting function h(k) can be determined by solving the following equation:

r̂_yu(τ) = Σ_{k=0}^{∞} ĥ(k) r̂_u(τ - k)    (4)

In matrix form:

[ r̂_yu(0) ]   [ r̂_u(0)  ...  r̂_u(-τ) ] [ ĥ(0) ]
[ .       ] = [ r̂_u(1)  ...  .        ] [ .    ]
[ r̂_yu(τ) ]   [ r̂_u(τ)  ...  r̂_u(0)  ] [ ĥ(τ) ]

Solving this infinite dimensional equation is very difficult. The problem can be simplified by using white noise (mean value = 0, variance λ^2) as the input.
For white noise,
r̂_u(τ) = 0 for τ = ±1, ±2, ...
r̂_u(0) = constant (non-zero) value

Since the input is white noise, equation (4) can be written as

ĥ(k) = r̂_yu(k) / r̂_u(0),    k = 0, 1, ...    (5)

i.e.,

ĥ(0) = r̂_yu(0) / r̂_u(0)
ĥ(1) = r̂_yu(1) / r̂_u(0)
.
.

From the above equation the weighting sequence h(0), h(1), ..... can be easily estimated.
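Equation (5) can be exercised on simulated data. The sketch below (Python standard library only; the three-tap weighting sequence is an assumed example) drives the system with seeded Gaussian white noise and recovers h(k) from the estimated covariances; with N = 20000 samples the estimates land within a few percent of the true values.

```python
import random

# Correlation-analysis estimate of a weighting sequence h(k):
# with white-noise input, h_hat(k) = r_yu(k) / r_u(0)   (equation (5)).

random.seed(0)
h = [1.0, 0.5, 0.25]                  # assumed true weighting sequence
N = 20000
u = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [sum(h[k] * u[t - k] for k in range(len(h)) if t - k >= 0)
     for t in range(N)]

r_u0 = sum(ui * ui for ui in u) / N                  # r_u(0)
h_hat = []
for k in range(len(h)):
    r_yu = sum(y[t] * u[t - k] for t in range(k, N)) / N
    h_hat.append(r_yu / r_u0)

print(h_hat)   # estimates of h(0), h(1), h(2)
assert max(abs(h_hat[k] - h[k]) for k in range(3)) < 0.05
```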

PARAMETRIC METHODS OF SYSTEM IDENTIFICATION


INTRODUCTION
A parametric method can be characterized as a mapping from the recorded data to the
estimated parameter vector. The estimated parameters do not have any physical insight of the
process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method

LEAST SQUARES ESTIMATION


The method of least squares estimates parameters by minimizing the squared
error between the observed data and their expected values. Linear regression is the simplest type
of parametric model. This model structure can be written as

Y(t) = φ^T(t) θ        (1)

Where,
Y(t) = measured quantity
φ(t) = n-vector of known quantities (the regressors)
     = [-Y(t-1), -Y(t-2), ..., -Y(t-n_a), u(t-1), ..., u(t-n_b)]^T
θ = n-vector of unknown parameters
The following two examples show how a model can be represented using linear regression model
form.
Example 1:
Consider the following first-order linear discrete model
Y(t) + a y(t-1) = b u(t-1)        (2)
The model represented in eq. (2) can be written in linear regression form as

Y(t) = -a y(t-1) + b u(t-1)        (3)


Y(t) = [-y(t-1)  u(t-1)] [a  b]^T = φ^T(t) θ

Where,
φ(t) = [-y(t-1)  u(t-1)]^T
θ = [a  b]^T
The elements of φ(t) are often called regression variables or regressors,
while y(t) is called the regressed variable. θ is called the parameter vector. The variable t
takes integer values.
Example 2:
Consider a truncated weighting function model.

Y(t) = h(0) u(t) + h(1) u(t-1) + ... + h(M-1) u(t-M+1)

The input signals u(t), u(t-1), ..., u(t-M+1) are recorded during the experiment. Hence
the regression vector
φ(t) = [u(t)  u(t-1)  ...  u(t-M+1)]^T is an M-vector of known quantities, and
θ = [h(0)  h(1)  ...  h(M-1)]^T is an M-vector of unknown parameters to be estimated.
The problem is to find an estimate θ̂ of the parameter vector θ, as shown in Fig. 1, from the
experimental measurements Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N). Here
'N' represents the number of experimental data and 'n' represents the number of unknown
quantities in φ(t), i.e. the number of unknown parameters in θ.

Y(1) = φ^T(1) θ
Y(2) = φ^T(2) θ
  ...
Y(N) = φ^T(N) θ

This can be written in matrix notation as

Y = Φ θ        (4)
Where,
Where,

Y = [Y(1) ... Y(N)]^T, an (N x 1) vector        (5)

Φ = [φ^T(1); ... ; φ^T(N)], an (N x n) matrix        (6)

Define the equation errors
ε(t) = y(t) - φ^T(t) θ̂        (7)

y(t) = observed value
φ^T(t) θ̂ = expected value
and stack these in a vector defined as
ε = [ε(1) ... ε(N)]^T

In the statistical literature the equation errors are often called residuals. The least squares
estimate of θ is defined as the vector θ̂ that minimizes the loss function

V(θ) = (1/2) Σ_{t=1}^N ε²(t) = (1/2) Σ_{t=1}^N [Y(t) - φ^T(t) θ]²        (8)

Note:
Other forms of the loss function are
V(θ) = (1/2) ε^T ε        (9)
V(θ) = (1/2) ||ε||²        (10)
Where ||·|| denotes the Euclidean vector norm.


The estimate θ̂ is obtained from the experimental measurements
Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N), by minimizing the loss function V(θ) in (8) and
(7). The solution to this optimization problem is

θ̂ = (Φ^T Φ)^-1 Φ^T Y        (11)

For this solution, the minimum value of V is

min V(θ) = V(θ̂) = (1/2) [ Y^T Y - Y^T Φ (Φ^T Φ)^-1 Φ^T Y ]        (12)

Note:
The matrix Φ^T Φ is positive definite.
The form (11) of the least squares estimate can be rewritten in the equivalent form

θ̂(t) = [ Σ_{s=1}^t φ(s) φ^T(s) ]^-1 [ Σ_{s=1}^t φ(s) Y(s) ]        (13)

In many cases φ(t) is known as a function of t. Then (13) might be easier to implement
than (11), since the matrix Φ of large dimension is not needed in eq. (13). Also the form (13)
is the starting point in deriving several recursive estimates.
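As a concrete sketch of eq. (11), the following example (assumed, not from the notes; the true parameters and signals are made up) fits the first-order model of Example 1 by solving the normal equations directly. The data are simulated noise-free, so the estimates recover the true parameters:

```python
import random

# Sketch (assumed example): batch least-squares fit of the first-order model
# Y(t) = -a*y(t-1) + b*u(t-1), theta = [a, b]^T, by solving the normal
# equations theta_hat = (Phi^T Phi)^-1 Phi^T Y of eq. (11) for the 2x2 case.
random.seed(2)
a_true, b_true = -0.8, 0.5

N = 500
u = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [0.0] * N
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1]

# Regressors phi(t) = [-y(t-1), u(t-1)]^T for t = 1..N-1
Phi = [(-y[t - 1], u[t - 1]) for t in range(1, N)]
Y = [y[t] for t in range(1, N)]

# Normal equations, solved with Cramer's rule (2 parameters only)
s11 = sum(p[0] * p[0] for p in Phi)
s12 = sum(p[0] * p[1] for p in Phi)
s22 = sum(p[1] * p[1] for p in Phi)
c1 = sum(p[0] * yt for p, yt in zip(Phi, Y))
c2 = sum(p[1] * yt for p, yt in zip(Phi, Y))
det = s11 * s22 - s12 * s12
a_hat = (c1 * s22 - c2 * s12) / det
b_hat = (s11 * c2 - s12 * c1) / det
print(a_hat, b_hat)   # noise-free data, so the true values are recovered
```

For more than two parameters one would solve the same normal equations with a general linear solver instead of Cramer's rule.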

RECURSIVE IDENTIFICATION METHOD


In recursive (also called on-line) identification methods, the parameter estimates are
computed recursively in time. This means that if there is an estimate θ̂(t-1) based on data up to
time t-1, then θ̂(t) is computed by some 'simple modification' of θ̂(t-1).

The counterparts of on-line methods are the so-called off-line or batch methods, in
which all the recorded data are used simultaneously to find the parameter estimates.
 Recursive identification methods have the following general features:
 They are a central part of adaptive systems (used, for example, for control and signal
processing), where the action is based on the most recent model.
 Their requirement on primary memory is quite small compared to offline identification
methods, which require a large memory to store the entire data set.
 They can be easily modified into real-time algorithms, aimed at tracking time-
varying parameters.
 They can be the first step in a fault detection algorithm, which is used to find out whether the
system has changed significantly.


Fig.1. A general scheme for adaptive control

Most adaptive systems, for example adaptive control systems as shown in Fig. 1, are based
(explicitly or implicitly) on recursive identification.
A current estimated model of the process is then available at all times. This time-varying
model is used to determine the parameters of the (also time-varying) regulator
(also called controller).
In this way the regulator depends on the previous behavior of the process
(through the information flow: process to model to regulator).

If an appropriate principle is used to design the regulator, then the regulator should adapt
to the changing characteristics of the process.
The various identification methods are
 Recursive least squares method
 Real time identification method
 Recursive instrumental variable method
 Recursive prediction error method.

RECURSIVE LEAST SQUARES ESTIMATION


The linear time-invariant system can be represented as

A(q^-1) y(t) = B(q^-1) u(t) + ε(t)        (1)

Where,
A(q^-1) = 1 + a_1 q^-1 + ... + a_na q^-na
B(q^-1) = b_1 q^-1 + ... + b_nb q^-nb
ε(t) = equation error

This model can be expressed in regression model form as

y(t) = φ^T(t) θ + e(t)        (2)

Where,
φ(t) = [-y(t-1), ..., -y(t-n_a), u(t-1), ..., u(t-n_b)]^T


θ = [a_1 a_2 ... a_na b_1 b_2 ... b_nb]^T

Then the least squares parameter estimate is given by

θ̂(t) = [ Σ_{s=1}^t φ(s) φ^T(s) ]^-1 [ Σ_{s=1}^t φ(s) Y(s) ]        (3)

The argument t has been used to stress the dependence of θ̂ on time. Eq. (3) can be
computed in recursive fashion.
Introduce the notation

P(t) = [ Σ_{s=1}^t φ(s) φ^T(s) ]^-1        (4)

P^-1(t) = Σ_{s=1}^t φ(s) φ^T(s)

P^-1(t) = Σ_{s=1}^{t-1} φ(s) φ^T(s) + φ(t) φ^T(t)

P^-1(t) = P^-1(t-1) + φ(t) φ^T(t)

P^-1(t-1) = P^-1(t) - φ(t) φ^T(t)        (5)

Then using eq. (4), eq. (3) can be written as

θ̂(t) = P(t) Σ_{s=1}^t φ(s) Y(s)        (6)
Note:
If we replace t by t-1 in eq. (6), we get

θ̂(t-1) = P(t-1) Σ_{s=1}^{t-1} φ(s) Y(s)        (7)

i.e.  P^-1(t-1) θ̂(t-1) = Σ_{s=1}^{t-1} φ(s) Y(s)

Equation (6) can be written as

θ̂(t) = P(t) [ Σ_{s=1}^{t-1} φ(s) Y(s) + φ(t) y(t) ]        (8)

By substituting eq. (7) in eq. (8) we get
θ̂(t) = P(t) [ P^-1(t-1) θ̂(t-1) + φ(t) y(t) ]
By substituting eq. (5),
θ̂(t) = P(t) [ (P^-1(t) - φ(t) φ^T(t)) θ̂(t-1) + φ(t) y(t) ]
θ̂(t) = P(t) P^-1(t) θ̂(t-1) - P(t) φ(t) φ^T(t) θ̂(t-1) + P(t) φ(t) y(t)
θ̂(t) = θ̂(t-1) - P(t) φ(t) φ^T(t) θ̂(t-1) + P(t) φ(t) y(t)


θ̂(t) = θ̂(t-1) + P(t) φ(t) [ y(t) - φ^T(t) θ̂(t-1) ]        (9)
Thus eq. (9) can be written as

θ̂(t) = θ̂(t-1) + K(t) ε(t)        (10 a)
K(t) = P(t) φ(t)        (10 b)
ε(t) = y(t) - φ^T(t) θ̂(t-1)        (10 c)

Hence the term ε(t) should be interpreted as a prediction error. It is the difference between the
measured output y(t) and the one-step-ahead prediction ŷ(t | t-1; θ̂(t-1)) = φ^T(t) θ̂(t-1)
of y(t) made at time t-1 based on the model corresponding to the estimate θ̂(t-1). If ε(t) is small,
the estimate θ̂(t-1) is 'good' and should not be modified very much. The vector K(t) in eq. (10 b)
should be interpreted as a weighting or gain factor showing how much the value of ε(t) will
modify the different elements of the parameter vector.
To complete the algorithm, eq. (5) must be used to compute P(t), which is needed in eq. (10 b).
However, the use of eq. (5) needs a matrix inversion at each time step. This would be a time-
consuming procedure. Using the matrix inversion lemma, however, eq. (5) can be rewritten in
updating equation form as

P(t) = P(t-1) - [ P(t-1) φ(t) φ^T(t) P(t-1) ] / [ 1 + φ^T(t) P(t-1) φ(t) ]        (11)

Note that in eq. (11) there is now a scalar division (scalar inversion) instead of a matrix
inversion. From eq. (10 b) and eq. (11),

K(t) = P(t-1) φ(t) / [ 1 + φ^T(t) P(t-1) φ(t) ]        (12)

The recursive least squares algorithm (RLS) consists of

1. ε(t) = y(t) - φ^T(t) θ̂(t-1)
2. K(t) = P(t-1) φ(t) / [ 1 + φ^T(t) P(t-1) φ(t) ]
3. P(t) = P(t-1) - [ P(t-1) φ(t) φ^T(t) P(t-1) ] / [ 1 + φ^T(t) P(t-1) φ(t) ]
4. θ̂(t) = θ̂(t-1) + K(t) ε(t)
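The steps above can be sketched as follows. This is a minimal illustration (assumed example; the true parameters and signals are made up) using plain Python lists for the 2-parameter model y(t) = -a·y(t-1) + b·u(t-1):

```python
import random

# Sketch (assumed example): the recursive least squares steps above applied
# to y(t) = -a*y(t-1) + b*u(t-1), with phi(t) = [-y(t-1), u(t-1)]^T.
random.seed(3)
a_true, b_true = -0.7, 1.2

N = 400
u = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [0.0] * N
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1]

theta = [0.0, 0.0]                      # initial estimate theta_hat(0)
P = [[1000.0, 0.0], [0.0, 1000.0]]      # large P(0) = low initial confidence

for t in range(1, N):
    phi = [-y[t - 1], u[t - 1]]
    # P(t-1)*phi and the scalar denominator 1 + phi^T P(t-1) phi
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    eps = y[t] - (phi[0] * theta[0] + phi[1] * theta[1])   # prediction error
    K = [Pphi[0] / denom, Pphi[1] / denom]                 # gain vector K(t)
    theta = [theta[0] + K[0] * eps, theta[1] + K[1] * eps]
    # P(t) = P(t-1) - P(t-1) phi phi^T P(t-1) / denom  (P stays symmetric)
    P = [[P[i][j] - Pphi[i] * Pphi[j] / denom for j in range(2)]
         for i in range(2)]

print(theta)   # converges toward [a_true, b_true]
```

With noise-free data the estimate matches the batch least-squares solution after a few samples; with noisy data it converges in the mean.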


UNIT –III DIGITAL CONTROLLER DESIGN 9


Review of z-transform – Modified z-transform – Pulse transfer function – Digital PID controller
– Dead-beat control and Dahlin's control – Smith predictor – Digital feed-forward controller –
IMC – State feedback controller – LQG control
Z-Transforms:
- There are two approaches to analyse and design control systems: conventional (classical) and modern
- Classical methods are characterised by transform techniques and transfer functions
- Modern techniques are based on modelling of systems by state variables and state equations
- The Laplace transform is the basic tool in the conventional analysis and design of analog control
systems.
- The Z-transform describes the behaviour of a signal at the sampling instants
Procedure to obtain the z-transform of a signal:
1. Determine the values of f(t) at the sampling instants to obtain f(kT). f(kT) is now in the form
of an impulse-modulated train.
2. Take the Laplace transform of the succession of impulses to obtain F*(s). F*(s) now contains
terms of the form e^(-kTs).
3. Make the substitution z = e^(Ts) in F*(s) to obtain F(z).
Note: F(s) ≠ F(z). [F(z) contains information about the signal at the sampling
instants only; F(s) also carries information between two sampling instants.]

Definition of Z-transform:  Z{f(t)} = Σ_{n=0}^∞ f(nT) z^(-n) = F(z)
Properties of Z-transform:
S.No  Property                          Sequence       Z-Transform      ROC
1     Linearity                         ax(n)+by(n)    aX(z)+bY(z)      Contains Rx ∩ Ry
2     Shifting                          x(n-n0)        z^(-n0) X(z)     Rx
3     Time reversal                     x(-n)          X(z^(-1))        1/Rx
4     Multiplication by an exponential  α^n x(n)       X(α^(-1) z)      |α| Rx
5     Convolution theorem               x(n)*y(n)      X(z)Y(z)         Contains Rx ∩ Ry
6     Conjugation                       x*(n)          X*(z*)           Rx
7     Derivative                        n x(n)         -z dX(z)/dz      Rx
8     Initial value theorem             x(0) = lim_{z→∞} X(z) (for a causal sequence)
Common z-transform pairs:

Sequence          Z-transform                    ROC
δ(n)              1                              all z
α^n u(n)          1/(1 - αz^(-1))                |z| > |α|
-α^n u(-n-1)      1/(1 - αz^(-1))                |z| < |α|
n α^n u(n)        αz^(-1)/(1 - αz^(-1))^2        |z| > |α|

-n α^n u(-n-1)    αz^(-1)/(1 - αz^(-1))^2        |z| < |α|
cos(nω0) u(n)     (1 - cos ω0 z^(-1)) / (1 - 2 cos ω0 z^(-1) + z^(-2))    |z| > 1
sin(nω0) u(n)     sin ω0 z^(-1) / (1 - 2 cos ω0 z^(-1) + z^(-2))          |z| > 1

Problems:
1. f(n) = a^n

By the definition of the Z-transform, Z{f(n)} = Σ_{n=0}^∞ f(n) z^(-n) = F(z)

Z{a^n} = Σ_{n=0}^∞ a^n z^(-n) = Σ_{n=0}^∞ (a/z)^n = 1 + (a/z) + (a/z)^2 + ...
       = 1/(1 - a/z) = z/(z - a),   if |a/z| < 1, i.e. |z| > |a|

Now, in place of a, if we substitute 1 or -1, the transform becomes
Z{(1)^n} = z/(z - 1),   |z| > 1
Z{(-1)^n} = z/(z - (-1)) = z/(z + 1),   |z| > 1
2. f(n) = n^2
Using the derivative property, Z{n f(n)} = -z (d/dz) Z{f(n)}:

Z{n^2} = Z{n · n} = -z (d/dz) Z{n} = -z (d/dz) [ z/(z - 1)^2 ]
       = -z [ (z - 1)^2 - 2z(z - 1) ] / (z - 1)^4 = z(z + 1)/(z - 1)^3

3. f(n) = n

Z{n} = Σ_{n=0}^∞ n z^(-n) = 1/z + 2/z^2 + 3/z^3 + ... = z/(z - 1)^2
4. z-transform of cos ωt
The continuous-time function is f(t) = cos ωt; put t = kT, where T = sampling time period.
By the definition of the one-sided z-transform,

Z{f(k)} = F(z) = Σ_{k=0}^∞ f(k) z^(-k) = Σ_{k=0}^∞ cos(kωT) z^(-k)

We know that cos θ = (e^(iθ) + e^(-iθ))/2, so

F(z) = (1/2) Σ_{k=0}^∞ e^(ikωT) z^(-k) + (1/2) Σ_{k=0}^∞ e^(-ikωT) z^(-k)

We know that Z{e^(±ikωT)} = z/(z - e^(±iωT))

F(z) = (1/2) z/(z - e^(iωT)) + (1/2) z/(z - e^(-iωT))
     = (z^2 - z cos ωT) / (z^2 - 2z cos ωT + 1)

F(z) = z(z - cos ωT) / (z^2 - 2z cos ωT + 1)

5. f(t) = e^(-at) sin ωt
The continuous function is f(t) = e^(-at) sin ωt.
Substitute t = kT:
f(kT) = e^(-akT) sin ωkT
By the definition of the one-sided Z-transform we get,

Z[f(k)] = F(z) = Σ_{k=0}^∞ e^(-akT) sin(ωkT) z^(-k)

Using sin θ = (e^(iθ) - e^(-iθ))/(2i),

F(z) = (1/2i) Σ_{k=0}^∞ (e^(-aT) e^(iωT) z^(-1))^k - (1/2i) Σ_{k=0}^∞ (e^(-aT) e^(-iωT) z^(-1))^k

From the infinite geometric sum series, Σ_{k=0}^∞ C^k = 1/(1 - C):

F(z) = (1/2i) [ 1/(1 - e^(-aT) e^(iωT) z^(-1)) - 1/(1 - e^(-aT) e^(-iωT) z^(-1)) ]

Combining the two terms and using cos ωT = (e^(iωT) + e^(-iωT))/2,

F(z) = z e^(-aT) sin ωT / (z^2 - 2z e^(-aT) cos ωT + e^(-2aT))
Modified Z-transform:
To obtain the values of the response between sampling instants, modified z-transforms are useful.
The modified z-transform is mainly useful in analysing sampled-data control systems containing
transportation lag.
Evaluation of Modified Z-transforms:
Suppose that the transfer function of a process with dead time is represented by the following
expression:
G_p(s) = G(s) e^(-θd s), where G(s) contains no dead time and θd = dead time.
Substitute θd = NT + Δ, where
N = largest integer number of sampling intervals in θd; T = sampling period.
G_p(s) = G(s) e^(-(NT + Δ)s)
Taking the Z-transform:  G_p(z) = Z{G(s) e^(-NTs) e^(-Δs)}
We know that Z{e^(-NTs) F(s)} = z^(-N) Z{F(s)}, so  G_p(z) = z^(-N) Z{G(s) e^(-Δs)}
The quantity Z{G(s) e^(-Δs)} is defined as the modified z-transform of G(s) and is denoted by
Z_m{G(s)} or G(z, m), with m = 1 - Δ/T. Thus G(z, m) = Z_m{G(s)} = Z{G(s) e^(-Δs)}

1. Unit step function

f(t) = u(t), t ≥ 0;  f(t) = 0, t < 0

Z_m{F(s)} = Z_m{1/s} = Z{e^(-Δs)/s} = Z{u(t - Δ)}
          = Σ_{n=0}^∞ u(nT - Δ) z^(-n) = z^(-1) (1 + z^(-1) + z^(-2) + ...)
          = z^(-1)/(1 - z^(-1)) = 1/(z - 1)

Therefore, Z_m[u(t)] = 1/(z - 1)
2. Z_m{e^(-at)}

Z_m{e^(-at)} = Z_m{1/(s + a)} = Z{e^(-Δs)/(s + a)} = Z{u(t - Δ) e^(-a(t - Δ))}
             = Σ_{n=0}^∞ u(nT - Δ) e^(-a(nT - Δ)) z^(-n)
             = z^(-1) e^(-amT) (1 + z^(-1) e^(-aT) + z^(-2) e^(-2aT) + ...)

Therefore, Z_m(e^(-at)) = z^(-1) e^(-amT) / (1 - z^(-1) e^(-aT))
Determine the response of the system shown in figure to a unit step change in set point.
Assume T = 1 sec and D(z) is a control algorithm with the value of 2.
The closed-loop pulse transfer function of the system is

C(z)/R(z) = D(z) G_ho G_p(z) / (1 + D(z) G_ho G_p(z))

To find G_ho G_p(z) by the modified Z-transform:

G_ho G_p(z) = Z{ (1 - e^(-sT))/s · e^(-3.5s)/(s + 5) } = (1 - z^(-1)) Z{ e^(-3.5s)/(s(s + 5)) }

θd = 3.5 sec; T = 1 sec
θd = NT + Δ, so 3.5 = 3(1) + Δ, giving N = 3 and Δ = 0.5

Thus, G_ho G_p(z) = (1 - z^(-1)) z^(-3) Z_m{ 1/(s(s + 5)) }

We know that Z_m{ a/(s(s + a)) } = z^(-1) [ 1/(1 - z^(-1)) - e^(-amT)/(1 - e^(-aT) z^(-1)) ]
with a = 5 and m = 1 - Δ/T = 0.5:

G_ho G_p(z) = (1 - z^(-1)) z^(-3) (1/5) z^(-1) [ 1/(1 - z^(-1)) - e^(-2.5)/(1 - e^(-5) z^(-1)) ]

G_ho G_p(z) = (0.918 z^(-4) + 0.075 z^(-5)) / (5 - 0.033 z^(-1))

C(z)/R(z) = D(z) G_ho G_p(z) / (1 + D(z) G_ho G_p(z))
          = (1.836 z^(-4) + 0.15 z^(-5)) / (5 - 0.033 z^(-1) + 1.836 z^(-4) + 0.15 z^(-5))

With R(z) = z/(z - 1) = 1/(1 - z^(-1)):

C(z) = (1.836 z^(-4) + 0.15 z^(-5)) / (5 - 5.033 z^(-1) + 0.033 z^(-2) + 1.836 z^(-4) - 1.686 z^(-5) - 0.15 z^(-6))

c(k) is calculated by the long division method:
C(z) = 0 + 0 z^(-1) + 0 z^(-2) + 0 z^(-3) + 0.3672 z^(-4) + 0.398 z^(-5) + 0.398 z^(-6) + ...
∴ c(k) = {0, 0, 0, 0, 0.3672, 0.398, 0.398, ...}
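The same response can be recovered by recursion instead of long division. The sketch below (an assumed reconstruction of this example) closes the loop around the plant difference equation implied by G_ho G_p(z); it reproduces c(4) = 0.3672 and gives c(5) ≈ 0.3996, consistent with the rounded 0.398 above:

```python
# Assumed reconstruction of the worked example: close the loop around the
# plant 5 c(k) - 0.033 c(k-1) = 0.918 m(k-4) + 0.075 m(k-5) with D(z) = 2
# and a unit step set point, recovering c(k) by recursion.
n = 10
c = [0.0] * n
m = [0.0] * n
for k in range(n):
    c[k] = (0.033 * (c[k - 1] if k >= 1 else 0.0)
            + 0.918 * (m[k - 4] if k >= 4 else 0.0)
            + 0.075 * (m[k - 5] if k >= 5 else 0.0)) / 5.0
    e = 1.0 - c[k]        # error for the unit step set point
    m[k] = 2.0 * e        # controller D(z) = 2
print([round(ck, 4) for ck in c])   # 0, 0, 0, 0, 0.3672, 0.3996, ...
```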
Pulse Transfer function:
The transfer function of a Linear Discrete-Time system (LDS) is called the pulse transfer function of
the system.
Let h(k) = impulse response of an LDS system. Now the Z-transform of h(k) = Z{h(k)} = H(z).
H(z) = transfer function of the LDS system. The input-output relationship of an LDS system is
governed by the convolution sum

y(k) = Σ_{m=0}^∞ x(m) g(k - m) = x(k) * g(k)

By taking the Z-transform of this convolution sum it can be shown that G(z) is given by the ratio
Y(z)/X(z), where Y(z) is the z-transform of the output y(k) of the LDS system and X(z) is the Z-transform
of the input x(k) to the LDS system.
The pulse transfer function is defined as the transfer function of the LDS system: the ratio of the Z-
transform of the output of a system to the Z-transform of the input to the system, with zero initial
conditions.

Pulse transfer function = Y(z)/X(z)

The input-output relation of an LDS system is governed by the constant coefficient difference
equation

y(k) + Σ_{m=1}^N a_m y(k - m) = Σ_{m=0}^M b_m x(k - m)

Where N is the order of the system and M ≤ N. On taking the Z-transform we get,


Z { y (k  m)}  z  mY ( z ) and
Z {x(k  m)}  z m X ( z ) by shifting property
N M
Y ( z)   
m 1
a m z  mY ( z )  b
m 0
mz
m
X ( z)

N M
Y ( z)  
m 1
a m z  mY ( z )  b
m 0
mz
m
X ( z)

On expanding, the above equation with M=N, we get,


Y ( z)  a1 z 1Y ( z)  a2 z 2Y ( z)  ...  a N z  N Y ( z)  b0 X ( z)  b1 z 1 X ( z)  b2 z 2 X ( z)  ...  bN z  N X ( z)
Y ( z ) b0  b1 z 1  b2 z 2  b3 z 3  ...  bN z  N

X ( z ) 1  a1 z 1  a 2 z 2  a3 z 3  ...  a N z  N
Solve the difference equation: c(k+3) + 4.5 c(k+2) + 5 c(k+1) + 1.5 c(k) = u(k)
Given that c(0) = 0; c(1) = 0; c(2) = 2; c(k) = 0, k < 0.
Let Z{c(k)} = C(z) and Z{u(k)} = U(z)
u(k) is the unit step signal, U(z) = z/(z - 1)
We know that, if F(z) = Z{f(k)}, then

Z{f(k + m)} = z^m F(z) - Σ_{i=0}^{m-1} f(i) z^(m-i)

On taking the z-transform of the difference equation:

z^3 C(z) - z^3 c(0) - z^2 c(1) - z c(2) + 4.5 [z^2 C(z) - z^2 c(0) - z c(1)] + 5 [z C(z) - z c(0)] + 1.5 C(z) = z/(z - 1)

Substituting the initial conditions we get,

C(z)/z = (2z - 1) / ((z - 1)(z + 3)(z + 0.5)(z + 1))

By the partial fraction technique we get,

C(z) = (1/12) z/(z - 1) + (7/20) z/(z + 3) + 1.067 z/(z + 0.5) - (3/2) z/(z + 1)

On taking the inverse Z-transform of C(z) we get,

c(k) = (1/12) u(k) + (7/20) (-3)^k + 1.067 (-0.5)^k - (3/2) (-1)^k
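A quick check (not in the notes) that the closed-form solution satisfies the original difference equation and initial conditions:

```python
# Check (not in the notes) that the closed-form solution matches the original
# difference equation c(k+3) + 4.5 c(k+2) + 5 c(k+1) + 1.5 c(k) = u(k)
# with c(0) = c(1) = 0, c(2) = 2 and a unit step input u(k) = 1.
n = 12
c = [0.0, 0.0, 2.0] + [0.0] * (n - 3)
for k in range(n - 3):
    c[k + 3] = -4.5 * c[k + 2] - 5.0 * c[k + 1] - 1.5 * c[k] + 1.0

# Closed form: (1/12) + (7/20)(-3)^k + (16/15)(-0.5)^k - (3/2)(-1)^k
closed = [1.0 / 12 + (7.0 / 20) * (-3.0) ** k
          + (16.0 / 15) * (-0.5) ** k - 1.5 * (-1.0) ** k
          for k in range(n)]
print(max(abs(x - y) for x, y in zip(c, closed)))   # ~0 (floating error only)
```

Note that the partial fraction coefficient 1.067 is exactly 16/15, and the (-3)^k mode makes the solution grow without bound, i.e. the difference equation is unstable.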
Find the pulse transfer functions in the z-domain of the following continuous transfer functions in the
Laplace domain. Assume a ZOH element.

a) G_p(s) = 2 / ((0.1s + 1)(5s + 1)(s + 1))

Pulse transfer function H(z) = C(z)/R(z)
To find HG_p(z), where H(s) = zero-order hold:

Z{H(s) G_p(s)} = HG_p(z) = Z{ (1 - e^(-sT))/s · 2/((0.1s + 1)(5s + 1)(s + 1)) }

We know that,

1  e  st   G p (s) 
Z G p ( s )  (1  Z 1 ) Z  
 s   s 
 4 
 (1  Z 1 ) Z  
 s ( s  10)( s  0.2)( s  1) 
Taking Partial fraction we get,
 4   2 0.0045 2.55 0.55 
Z    Z    _ 
 s ( s  10)( s  0.2)( s  1)   s s  10 s  0.2 s  1 
After applying Z-transforms and substituting T=1sec we get,
 z 1 z 1 z 1 
HG p ( z )  2  0.0045  2.55  0.55
 z  0.0000453 z  0.0818 z  0.368 
5e td
b) G p ( S )  t d  2T , T  1sec
(3s  1)( s  1)
1  e  st 5e td 
HG p ( Z )  Z  
 s (3s  1)( s  1) 
 1.66e 2T 
We know that,  (1  z 1 ) Z  

 s ( s  0.33)( s  1) 
 1.66 
 (1  z 1 ) z 2 Z  
 s ( s  0.33)( s  1) 
5 7.5 2.5 
After taking Partial fractions, we get, Z    
 s s  0. 33 s 1
Taking z-transform and substituting T=1sec;
 5z 7 .5 z 2. 5 
HG p ( Z )  (1  z 1 ) z 2   0.33
 
 z 1 z  e z  e T 
  z 1   z  1 
 z 2 5  7.5   2.5 
  z  0.718   z  0.368 
Digital PID Controller:

The figure shows a continuous-data PID controller acting on an error signal e(t). The controller
simply multiplies the error signal e(t) with a constant Kp. The integral control multiplies the time
integral of e(t) by Ki, and the derivative control generates a signal equal to Kd times the time derivative
of e(t). The function of the integral control is to provide action to reduce the area under e(t), which
leads to reduction of the steady-state error. The derivative control provides anticipatory action to

reduce the overshoots and oscillations in time response. In digital control, P-control is still
implemented by a proportional constant Kp. The integrator and differentiator can be implemented
by various schemes.
Numerical Integration:
Since integration is the most time-consuming and difficult of the basic mathematical operations
to perform on a digital computer, its simulation is important. Continuous integration operations
are performed by numerical methods. This amounts to replacing the sample-and-hold devices at
strategic locations in a control system.

X(s)/R(s) = 1/s represents the integrator, where r(t) is the input. The area under the curve is
represented by x(t), with the output evaluated between t = 0 and t = T.

Rectangular Integration - forward rectangular integration and backward rectangular integration

These are equivalent to inserting a ZOH in front of an integration.

The z-transfer function of backward rectangular integration is X(z)/R(z) = (1 - z^(-1)) Z(1/s^2) = T/(z - 1)
The state equation is x[(k+1)T] = x(kT) + T r(kT)
The z-transfer function of forward rectangular integration is X(z)/R(z) = Tz/(z - 1)
The state equation is x[(k+1)T] = x(kT) + T r[(k+1)T]
The most common method of approximating the derivative of e(t) at t = kT is

de(t)/dt |_(t=kT) ≈ [ e(kT) - e((k-1)T) ] / T

Taking the Z-transform on both sides of the last equation, and including the proportional
constants K_D and K_I, the derivative and integral operators become

D_D(z) = K_D (z - 1)/(Tz),   D_I(z) = K_I T/(z - 1) or K_I Tz/(z - 1) or K_I (T/2)(z + 1)/(z - 1)

DEADBEAT ALGORITHM:
Obtain the deadbeat control law for the given G_p(s) = 1/(0.4s + 1), where the sampling period T = 1 sec.

Deadbeat control algorithm:  D(z) = (1/G(z)) · z^(-1)/(1 - z^(-1))

G(z) = Z{G_ho(s) G_p(s)} = G_ho G_p(z) = Z{ (1 - e^(-sT))/s · 1/(0.4s + 1) }
     = (1 - z^(-1)) Z{ 2.5/(s(s + 2.5)) }

By the partial fraction method we get,  2.5/(s(s + 2.5)) = 1/s - 1/(s + 2.5)

Taking the z-transform we get (e^(-2.5T) ≈ 0.08 for T = 1 sec),

G(z) = (1 - z^(-1)) [ z/(z - 1) - z/(z - 0.08) ]
     = 0.92/(z - 0.08)


1  z 1 
D( z )   
G ( z )  1  z 1 
1  0.08 z 1

0.92  0.92 z 1
To invert deadbeat algorithm into time domain, the following steps are carried out;
M ( z) 1  0.98 z 1
D( z )  
E ( z ) 0.92  0.92 z 1
After inverse z-transforms, we get, m(n)  m(n  1)  1.08e(n)  0.086e(n  1)
Where, M(z)- Z-transform of controller output
E(z)- Z-transform of error [R(z)-C(z)]
m(n)- controller output at the nth sampling instant
m(n-1)- controller output at the (n-1) sampling instant
e(n)- Error (set point-measurement) at the nth sampling instant
e(n-1)- Error at the (n-1) sampling instant
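Simulating this loop (an assumed check, using the exact value e^(-2.5) instead of the rounded 0.08) shows the deadbeat property: with D(z) = z^(-1)/(G(z)(1 - z^(-1))) the closed loop is C/R = z^(-1), so the output reaches the set point in one sample and stays there:

```python
import math

# Assumed check of the deadbeat property: simulate the loop for a unit step
# in set point. The closed loop is C/R = z^-1, so c(k) = 1 for all k >= 1.
a = math.exp(-2.5)          # pole of G(z) = (1 - a)/(z - a), about 0.082
n = 8
c = [0.0] * n
m = [0.0] * n
e = [0.0] * n
for k in range(n):
    # plant: c(k) = a*c(k-1) + (1-a)*m(k-1)
    c[k] = a * (c[k-1] if k >= 1 else 0.0) + (1 - a) * (m[k-1] if k >= 1 else 0.0)
    e[k] = 1.0 - c[k]
    # controller: m(n) = m(n-1) + (e(n) - a*e(n-1))/(1-a)
    m[k] = ((m[k-1] if k >= 1 else 0.0)
            + (e[k] - a * (e[k-1] if k >= 1 else 0.0)) / (1 - a))
print([round(ck, 4) for ck in c])   # [0.0, 1.0, 1.0, ...]
```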

DAHLIN ALGORITHM

Design a Dahlin's controller algorithm for G_p(s) = e^(-0.8s)/(0.6s + 1), T = 0.4 sec.

Dahlin's control algorithm:

D(z) = (1/G(z)) · (1 - σ) z^(-(N+1)) / [ 1 - σ z^(-1) - (1 - σ) z^(-(N+1)) ]

G(z) = Z{ (1 - e^(-sT))/s · e^(-0.8s)/(0.6s + 1) } = (1 - z^(-1)) Z{ e^(-0.8s)/(s(0.6s + 1)) }

θd = NT, where θd = 0.8 sec, so N = 2 when T = 0.4 sec

We know that Z{e^(-NTs) F(s)} = z^(-N) Z{F(s)}, so

G(z) = (1 - z^(-1)) z^(-2) Z{ 1.67/(s(s + 1.67)) }

After the partial fraction method,

G(z) = (1 - z^(-1)) z^(-2) Z{ 1/s - 1/(s + 1.67) }

Applying z-transforms (e^(-1.67 T) = e^(-0.667) ≈ 0.51 for T = 0.4 sec),

G(z) = (1 - z^(-1)) z^(-2) [ z/(z - 1) - z/(z - 0.51) ]

G_ho G_p(z) = 0.49 z^(-2) / (z - 0.51) = 0.49 z^(-3) / (1 - 0.51 z^(-1))

With σ = e^(-T/λ) and λ = T, σ = e^(-1) = 0.3678, so

D(z) = [ (1 - 0.51 z^(-1)) / (0.49 z^(-3)) ] · 0.6321 z^(-3) / (1 - 0.368 z^(-1) - 0.632 z^(-3))

D(z) = (0.6321 - 0.322 z^(-1)) / (0.49 - 0.18 z^(-1) - 0.31 z^(-3))

M ( z) 0.6321  0.322 z 1
D( z )  
E ( z ) 0.49  0.18 z 1  0.31z 3
After inverse z-transforms, we get,
m(n)  1.29e(n)  0.657 e(n  1)  0.3673m(n  1)  0.6326 m(n  3)
Where, M(z)- Z-transform of controller output
E(z)- Z-transform of error [R(z)-C(z)]
m(n)- controller output at the nth sampling instant
m(n-1)- controller output at the (n-1) sampling instant
e(n)- Error (set point-measurement) at the nth sampling instant
e(n-1)- Error at the (n-1) sampling instant
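Simulating the Dahlin loop with the exact constants (an assumed check, not from the notes) shows the intended closed-loop behaviour C/R = (1 - σ) z^(-3)/(1 - σ z^(-1)): a first-order rise toward the set point after the three-sample delay:

```python
import math

# Assumed check: simulate the Dahlin loop with exact constants.
# Expected closed loop: C/R = (1 - sigma) z^-3 / (1 - sigma z^-1).
a = math.exp(-0.4 / 0.6)        # plant pole, about 0.513
s = math.exp(-1.0)              # sigma = 0.368 for lambda = T
n = 10
c = [0.0] * n
m = [0.0] * n
e = [0.0] * n
for k in range(n):
    # plant: c(k) = a*c(k-1) + (1-a)*m(k-3)
    c[k] = a * (c[k-1] if k >= 1 else 0.0) + (1 - a) * (m[k-3] if k >= 3 else 0.0)
    e[k] = 1.0 - c[k]
    # controller: m(k) = s*m(k-1) + (1-s)*m(k-3) + (1-s)*(e(k) - a*e(k-1))/(1-a)
    m[k] = (s * (m[k-1] if k >= 1 else 0.0)
            + (1 - s) * (m[k-3] if k >= 3 else 0.0)
            + (1 - s) * (e[k] - a * (e[k-1] if k >= 1 else 0.0)) / (1 - a))
print([round(ck, 4) for ck in c])   # [0, 0, 0, 0.6321, 0.8647, ...]
```

Unlike the deadbeat loop, the Dahlin loop trades settling speed for smoother controller action, governed by λ.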
Smith Predictor scheme:


As shown in the figure, the process is conceptually split into a pure lag and a pure dead time. If
the fictitious variable (b) could be measured somehow, it could be connected to the controller as
shown in fig. 7.40(b). This would move the dead time outside the loop. The controlled variable (c)
would repeat whatever b did after a delay of θd. Since there is no delay in the feedback signal (b),
the response of the system would be greatly improved. The scheme, of course, cannot be
implemented, because b is an unmeasurable signal. Now a model of the process is developed and the
manipulated variable (m) is applied to the model as shown in the figure. If the model were perfect
and the disturbance L = 0, then the controlled variable c would become equal to the model output cm,
and em = c - cm = 0. The arrangement reveals that although the fictitious process variable b is
unavailable, the value of bm can be derived, which will be equal to b unless modelling errors or load
upsets are present. It is used as the feedback signal. The difference (c - cm) is the error which arises
because of modelling errors or load upsets. To compensate for these errors, a second feedback loop is

implemented using em. This is called the Smith Predictor control strategy. Gc(s) is a
conventional PI or PID controller which can be tuned much more tightly because of the elimination
of dead time from the loop. Thus the system consists of a feedback PI algorithm (Gc) that controls
a simulated process Gm(s), which is easier to control than the real process.

Digital Feed forward Controller:


Feedforward control is desired when,
1. Feedback control does not provide satisfactory control performance
2. A measured feed forward variable is available
Feedforward variable must satisfy the following criteria
1. The variable must indicate the occurrence of an important disturbance
2. There must not be a causal relationship between the manipulated and feedforward variables
Analog implementation of the feedforward control requires an electrical circuit that closely
approximates the transfer function. Such a circuit is costly and is seldom made for a range of model
structures, but it is available for lead/lag with gain.
Digital implementation: the programming of the controller is as follows;
The gain is simply a multiplication.
The dead time can be implemented by using a table of data whose length times the sample
period equals the dead time.
The data location (or pointer) is shifted one step every time the controller is executed.
The lead/lag element must be transformed into a digital algorithm. One way to do this is to convert
the lead/lag element into a differential equation, by remembering that multiplication of the Laplace
transform by the variable s is equivalent to differentiation.
The resulting equation is  T_lg dY(t)/dt + Y(t) = T_ld dX(t)/dt + X(t)
with x = input to the lead/lag, y = output from the lead/lag algorithm.
Converting the differential equation into a difference equation,

dY(t)/dt ≈ (Y_n - Y_(n-1))/Δt,   dX(t)/dt ≈ (X_n - X_(n-1))/Δt

The resulting equation can be rearranged to yield the following equation, which can be used in a
digital computer to implement the digital lead/lag:

Y_n = [ (T_lg/Δt) Y_(n-1) + (T_ld/Δt + 1) X_n - (T_ld/Δt) X_(n-1) ] / (T_lg/Δt + 1)

Where Y_n = output signal from the lead/lag, X_n = input signal to the lead/lag
This can be combined with the gain and dead time for the digital form of a feedforward controller
with lead/lag:

MV_n = [ (T_lg/Δt) MV_(n-1) + K_ff (T_ld/Δt + 1) (D_m)_(n-Γ) - K_ff (T_ld/Δt) (D_m)_(n-Γ-1) ] / (T_lg/Δt + 1)

where Γ = θ/Δt
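The lead/lag difference equation can be sketched directly. The following is an assumed implementation (parameter values made up); for a unit step the first output sample jumps to (T_ld/Δt + 1)/(T_lg/Δt + 1) and the output then settles to the steady-state gain of 1:

```python
# Assumed implementation (made-up parameters) of the digital lead/lag
# difference equation:
# Y_n = [(T_lg/dt) Y_(n-1) + (T_ld/dt + 1) X_n - (T_ld/dt) X_(n-1)] / (T_lg/dt + 1)
def lead_lag(xs, T_ld, T_lg, dt):
    ys, y_prev, x_prev = [], 0.0, 0.0
    for x in xs:
        y = ((T_lg / dt) * y_prev + (T_ld / dt + 1) * x
             - (T_ld / dt) * x_prev) / (T_lg / dt + 1)
        ys.append(y)
        y_prev, x_prev = y, x
    return ys

ys = lead_lag([1.0] * 50, T_ld=2.0, T_lg=4.0, dt=1.0)
print(round(ys[0], 3), round(ys[-1], 3))   # 0.6, then approaches 1.0
```

Dead time and gain would be added exactly as the MV_n equation above shows: scale by K_ff and index the disturbance table Γ samples back.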

This method has several limitations, though;

1. Limitations are presented by the delay table, which requires the dead time divided by the
execution period to be an integer.

2. Difference approximation is accurate only for execution times that are small compared to lead
and lag times.
General feed-forward controller design:

G_ff(s) = - G_d(s) / G_p(s)

G_ff(s) = MV(s)/D_m(s) = - G_d(s)/G_p(s) = K_ff [ (T_ld s + 1)/(T_lg s + 1) ] e^(-θ_ff s)

lead/lag algorithm = (T_ld s + 1)/(T_lg s + 1)
feedforward controller gain K_ff = - K_d / K_p
Dead time θ_ff = θ_d - θ_p ≥ 0
Lead time T_ld = τ_p;  Lag time T_lg = τ_d
The internal model principle and structure
The IMC philosophy relies on the internal model principle, which states: accurate control can be
achieved only if the control system encapsulates (either implicitly or explicitly) some
representation of the process to be controlled.

Suppose G̃(s) is a model of G(s).
• By setting C(s) as the inverse of the model:
C(s) = G̃^(-1)(s), and assuming that G̃(s) = G(s),
then the output y(t) will track the reference input yd(t) perfectly.
• However, in this example no feedback is shown.

One good thing about the IMC procedure is that it results in a controller with a single tuning
parameter, the IMC filter (λ). For a system which is "minimum phase", λ is equivalent to a closed-
loop time constant (the "speed of response" of the closed-loop system). Although the IMC
procedure is clear and IMC is easily implemented, the most common industrial controller is still
the PID controller.
Why do we care about IMC, if we can show that it can be rearranged into the standard
feedback structure?
The process model is explicitly used in the control system design procedure. The standard
feedback structure uses the process model in an implicit fashion; that is, PID tuning parameters
are "tweaked" on a transfer function model, but it is not always clear how the process model affects
the tuning decision. In the IMC formulation, the controller, q(s), is based directly on the "good"
part of the process transfer function. The IMC formulation generally results in only one tuning
parameter, the closed-loop time constant (λ, the IMC filter factor). The PID tuning parameters
are then a function of this closed-loop time constant. The selection of the closed-loop time constant
is directly related to the robustness (sensitivity to model error) of the closed-loop system.
THE EQUIVALENT FEEDBACK FORM TO IMC:
In this section, we derive the feedback equivalence to IMC by using block diagram manipulation.
Begin with the IMC structure shown in Figure.

Notice that r(s) − y(s) is simply the error term used by a standard feedback controller. Therefore,
we have found that the IMC structure can be rearranged to the feedback control (FBC) structure,
as shown in Figure. This reformulation is advantageous because we will find that a PID controller

often results when the IMC design procedure is used. Also, the standard IMC block diagram cannot
be used for unstable systems, so this feedback form must be used for those cases.

We can use the IMC design procedure to help us design a standard feedback controller. The
~
standard feedback controller is a function of the internal model g p (s ) and internal model controller
q(s) as shown in the equation below.
g_c(s) = q(s) / (1 - g̃_p(s) q(s))
We will refer to the above equation as the IMC-based PID relationship because the form of gc(s)
is often that of a PID controller. One major difference is that the IMC-Based procedure will, many
times, not require that the controller be proper. Also, the process dead time will be approximated
using the Padé procedure, in order to arrive at an equivalent PID-type controller. Because of the
Padé approximation for dead time, the IMC-based PID controller will not perform as well as IMC
for processes with time-delays.
The IMC-Based PID Control Design Procedure
The following steps are used in the IMC-based PID control system design
1. Find the IMC controller transfer function, q(s), which includes a filter, f(s), that makes q(s)
semi-proper or gives it derivative action (the order of the numerator of q(s) is one greater than
that of the denominator). Notice that this is a major difference from the IMC procedure.

Here, in the IMC-based procedure, we may allow q(s) to be improper in order to find an equivalent
PID controller. The catch is that you must know the form of the controller you are looking for
before you can decide whether to make q(s) proper or improper in this procedure.

2. Find the equivalent standard feedback controller using the transformation

gc(s) = q(s) / (1 - g̃p(s) q(s))

and write this in the form of a ratio between two polynomials.
3. Rewrite gc(s) in PID form and find kc, τI, τD. Sometimes this procedure results in a PID
controller cascaded with a first-order lag term.

4. Perform closed-loop simulations for both the perfect model case and cases with model
mismatch. Choose the desired value for λ as a trade-off between performance and
robustness.


IMC-BASED PID DESIGN FOR A FIRST-ORDER PROCESS


To find the PID equivalent to IMC for a first-order process, g̃p(s) = kp / (τp s + 1).

Step 1: Find the IMC controller transfer function q(s), which includes a filter to make q(s)
semi-proper:

q(s) = q̃(s) f(s) = g̃p⁻¹(s) f(s) = [(τp s + 1)/kp] · [1/(λs + 1)]

q(s) = (τp s + 1) / [kp(λs + 1)]
Step 2: Find the equivalent standard feedback controller using the transformation:

gc(s) = q(s) / (1 - g̃p(s) q(s)) = (τp s + 1) / (kp λ s)

Step 3: Rearrange the above by multiplying and dividing by τp:

gc(s) = [τp/(kp λ)] · (τp s + 1)/(τp s)  =>  kc = τp/(kp λ),  τI = τp
The IMC-based PID design procedure for a first-order process has resulted in a PI control law.
The major difference is that there are no longer two independent degrees of freedom in the tuning
parameters kc and τI: the IMC-based procedure shows that only the proportional gain needs to be
adjusted. The
integral time is simply set equal to the process time constant. Notice that the proportional gain is
inversely related to λ, which makes sense. If λ is small (closed loop is “fast”) the controller gain
must be large. Similarly, if λ is large (closed loop is “slow”) the controller gain must be small.
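As a quick numerical sketch of these relations (the process gain, time constant, and λ values below are illustrative assumptions, not from the text):

```python
# IMC-based PI tuning for a first-order process gp(s) = kp/(tau_p*s + 1):
# kc = tau_p/(kp*lambda), tau_i = tau_p. All numbers are illustrative.
def imc_pi_first_order(kp, tau_p, lam):
    """Return (kc, tau_i) for the IMC-based PI controller."""
    kc = tau_p / (kp * lam)  # proportional gain: inversely proportional to lambda
    tau_i = tau_p            # integral time: equal to the process time constant
    return kc, tau_i

# a small lambda ("fast" closed loop) gives a larger controller gain
print(imc_pi_first_order(kp=2.0, tau_p=10.0, lam=2.5))   # (2.0, 10.0)
print(imc_pi_first_order(kp=2.0, tau_p=10.0, lam=10.0))  # (0.5, 10.0)
```

Quadrupling λ cuts kc by a factor of four while τI stays fixed at τp, matching the robustness trade-off described above.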

IMC-BASED PID DESIGN FOR A SECOND-ORDER PROCESS


To find the PID equivalent to IMC for a second-order process, g̃p(s) = kp / [(τ1 s + 1)(τ2 s + 1)].

Step 1: Find the IMC controller transfer function q(s) (improper):

q(s) = q̃(s) f(s) = g̃p⁻¹(s) f(s) = (τ1 s + 1)(τ2 s + 1) / [kp(λs + 1)]
Step 2: Find the equivalent standard feedback controller using the transformation:

gc(s) = q(s) / (1 - g̃p(s) q(s)) = [τ1τ2 s² + (τ1 + τ2)s + 1] / (kp λ s)
Step 3: Rearrange by multiplying and dividing by (τ1 + τ2); we find

gc(s) = [(τ1 + τ2)/(kp λ)] · [τ1τ2 s² + (τ1 + τ2)s + 1] / [(τ1 + τ2)s]

We find the following relationships:

kc = (τ1 + τ2)/(kp λ),  τI = τ1 + τ2,  τD = τ1τ2/(τ1 + τ2)


LQG CONTROL:
OPTIMAL CONTROL
The main objective of optimal control is to determine control signals that will cause a
process (plant) to satisfy some physical constraints and at the same time extremize (maximize or
minimize) a chosen performance criterion (performance index or cost function). The main interest
is to find the optimal control u*(t) (* indicates optimal condition) that will drive the plant P from
initial state to final state with some constraints on controls and states and at the
same time extremizing the given performance index J.
The formulation of optimal control problem requires
1. a mathematical description (or model) of the process to be controlled (generally in state variable
form),
2. a specification of the performance index, and
3. a statement of boundary conditions and the physical constraints on the states and/or controls.
Plant
For the purpose of optimization, a physical plant is described by a set of linear or nonlinear
differential or difference equations.
Performance Index
Classical control design techniques have been successfully applied to linear, time-invariant,
single-input, single-output (SISO) systems. Typical performance criteria are the system time
response to step or ramp input characterized by rise time, settling time, peak overshoot, and steady
state accuracy; and the frequency response of the system characterized by gain and phase margins,
and bandwidth. In modern control theory, the optimal control problem is to find a control which
causes the dynamical system to reach a target or follow a state variable (or trajectory) and at the
same time extremize a performance index which may take several forms as described below.
1. Performance Index for Time-Optimal Control System:
The main aim is to transfer a system from an arbitrary initial state x(t0) to a specified final
state x(tf) in minimum time. The corresponding performance index (PI) is

J = ∫[t0, tf] dt = tf - t0
2. Performance Index for Fuel-Optimal Control System:


Consider a spacecraft problem. Let u(t) be the thrust of a rocket engine and assume that the
magnitude |u(t)| of the thrust is proportional to the rate of fuel consumption. In order to minimize
the total expenditure of fuel, we may formulate the performance index as

J = ∫[t0, tf] |u(t)| dt

For several controls, we may write it as

J = ∫[t0, tf] Σi Ri |ui(t)| dt

where Ri is a weighting factor.


3. Performance Index for Minimum-Energy Control System:
Consider ui(t) as the current in the ith loop of an electric network. Then Σi ui²(t) ri
(where ri is the resistance of the ith loop) is the total power, or the total rate of energy
expenditure, of the network. Then, for minimization of the total expended energy, we have the
performance criterion

J = ∫[t0, tf] Σi ui²(t) ri dt

or, in general,

J = ∫[t0, tf] u'(t) R u(t) dt

where R is a positive definite matrix and the prime (') denotes the transpose. Similarly, we can
think of minimizing the integral of the squared error of a tracking system. We then have

J = ∫[t0, tf] x'(t) Q x(t) dt

where xd(t) is the desired value, xa(t) is the actual value, and x(t) = xa(t) - xd(t) is the error.
Here, Q is a weighting matrix, which can be positive semi-definite.
4. Performance Index for Terminal Control System:
In a terminal target problem, we are interested in minimizing the error between the desired
target position xd(tf) and the actual target position xa(tf) at the end of the maneuver, i.e., at
the final time tf. The terminal (final) error is x(tf) = xa(tf) - xd(tf). Taking care of positive
and negative values of the error and of weighting factors, we structure the cost function as

J = x'(tf) F x(tf)

which is also called the terminal cost function. Here, F is a positive semi-definite matrix.
5. Performance Index for General Optimal Control System:
Combining the above formulations, we have a performance index in general form as

J = x'(tf) F x(tf) + ∫[t0, tf] [x'(t) Q x(t) + u'(t) R u(t)] dt

where R is a positive definite matrix, and Q and F are positive semi-definite matrices. The
matrices Q and R may be time varying. This particular form of performance index is called the
quadratic (in terms of the states and controls) form.
The problems arising in optimal control are classified based on the structure of the
performance index J. If the PI contains only the terminal cost S(x(tf), tf), the problem is called
the Mayer problem; if the PI has only the integral cost term, it is called the Lagrange problem;
and the problem is of the Bolza type if the PI contains both the terminal cost term and the
integral cost term. There are many other forms of cost functions depending on our
performance specifications. However, the above mentioned performance indices (with quadratic
forms) lead to some very elegant results in optimal control systems.
Constraints
The control u( t) and state x( t) vectors are either unconstrained or constrained depending
upon the physical situation. The unconstrained problem is less involved and gives rise to some
elegant results. From physical considerations, we often have the controls and states, such as
currents and voltages in an electrical circuit, the speed of a motor, or the thrust of a rocket,
constrained as

u- ≤ u(t) ≤ u+,  x- ≤ x(t) ≤ x+

where + and - indicate the maximum and minimum values the variables can attain.

Formal Statement of Optimal Control System


Let us now state formally the optimal control problem even risking repetition of some of
the previous equations. The optimal control problem is to find the optimal control u*(t) (* indicates
extremal or optimal value) which causes the linear time-invariant plant (system)
x’(t) = Ax(t) + Bu(t)
to give the trajectory x* (t) that optimizes or extremizes (minimizes or maximizes) a performance
index

or which causes the nonlinear system


x’(t) = f(x(t), u(t), t)
to give the state x*(t) that optimizes the general performance index

with some constraints on the control variables u( t) and/or the state variables x(t). The final time tf
may be fixed, or free, and the final (target) state may be fully or partially fixed or free. The entire
problem statement is also shown pictorially.

Optimal Control Theory


The linear quadratic control problem has its origins in the celebrated work of N. Wiener on
mean-square filtering for weapon fire control during World War II (1940-45). Wiener solved the
problem of designing filters that minimize a mean-square-error criterion (performance measure)
of the form

where, e( t) is the error, and E {x} represents the expected value of the random variable x. For a
deterministic case, the above error criterion is generalized as an integral quadratic term as


where, Q is some positive definite matrix.

PONTRYAGIN MINIMUM PRINCIPLE


Constrained System
Let us consider the optimal control system

x'(t) = f(x(t), u(t), t)

where x(t) and u(t) are the n- and r-dimensional state and control variables, respectively
(unconstrained for now), and the performance index

J = S(x(tf), tf) + ∫[t0, tf] V(x(t), u(t), t) dt
with given boundary conditions

The important stages in obtaining optimal control for the previous system are
1. the formulation of the Hamiltonian

H(x(t), u(t), λ(t), t) = V(x(t), u(t), t) + λ'(t) f(x(t), u(t), t)

where λ(t) is the costate variable, and

2. the three relations for control, state and costate,

∂H/∂u = 0,  dx*/dt = +∂H/∂λ,  dλ*/dt = -∂H/∂x

to solve for the optimal values u*(t), x*(t), and λ*(t), respectively, along with the general
boundary condition.

Usually in problem formulation, we assume that the control u(t) and the state x(t) are
unconstrained, that is, there are no limitations (restrictions or bounds) on the magnitudes of the
control and state variables. But, in reality, the physical systems to be controlled in an optimum
manner have some constraints on their inputs (controls), internal variables (states) and/or outputs
due to considerations mainly regarding safety, cost and other inherent limitations. For example,
consider the following cases :
1. In a D.C. motor used in a typical position control system, the input voltage to the field or
armature circuit is limited to certain standard values, say, 110 or 220 volts. Also, the
magnetic flux in the field circuit saturates after a certain value of field current.
2. Thrust of a rocket engine used in a space shuttle launch control system cannot exceed a
certain designed value.
3. Speed of an electric motor used in a typical speed control system cannot exceed a certain
value without damaging some of the mechanical components such as bearings and shaft.
This optimal control problem for a system with constraints was addressed by Pontryagin et al.,
and the results were enunciated in the celebrated Pontryagin Minimum Principle. In their original
works, the previous optimization problem was addressed to maximize the Hamiltonian

which is equivalent to minimization of the Hamiltonian.


Previously, for finding the optimal control u*(t), performance index, and boundary conditions, we
used arbitrary variations in control u(t) = u*(t)+δu(t) to define the increment ΔJ and the (first)
variation δJ in J as

where, the first variation

Also, in order to obtain the optimal control of unconstrained systems, we applied the fundamental
theorem of the calculus of variations, i.e., the necessary condition for a minimum is that the
first variation δJ must be zero for an arbitrary variation δu(t). But now we place restrictions on
the control u(t), such as

u- ≤ u(t) ≤ u+

or, component-wise, uj- ≤ uj(t) ≤ uj+, where uj- and uj+ are the lower and upper bounds (limits)
on the control component uj(t). Then we can no longer assume that the control variation δu(t) is
arbitrary for all t ∈ [t0, tf]. In other words, the variation δu(t) is not arbitrary if the
extremal control u*(t) lies on the boundary or reaches a limit. If, for example, an extremal
control u*(t) lies on the boundary during some interval [ta, tb] of the entire interval [t0, tf],
as shown in Figure 6.1(a), then the negative admissible control variation -δu(t) is not allowable.
The reason for taking +δu(t) and -δu(t) the way it is shown will become apparent later. Then,
assuming that all admissible variations ||δu(t)|| are small enough that the sign of the increment
ΔJ is determined by the sign of the variation δJ, the necessary condition for u*(t) to minimize J
is that the first variation δJ ≥ 0 for all admissible δu(t).

Summarizing, the first-variation relation δJ ≥ 0 is valid if u*(t) lies on the boundary (i.e.,
has an active constraint) during any portion of the time interval [t0, tf], and the first
variation δJ = 0 if u*(t) lies within the boundary (no active constraint) during the entire time
interval [t0, tf]. Next, let us see how the constraint affects the necessary conditions, which were
derived by using the assumption that the admissible control values u(t) are unconstrained. Using
the results, we have the first variation as

In the above,
1. if the optimal state x* (t) equations are satisfied, it results in the state relation,
2. if the costate λ* (t) is selected so that the coefficient of the dependent variation δx( t) in the
integrand is identically zero, it results in the costate condition and
3. the boundary condition is selected such that it results in the auxiliary boundary condition.
When the previous items are satisfied, then the first variation becomes,


The integrand in the previous relation is the first order approximation to change in the
Hamiltonian H due to a change in u(t) alone. This means that by definition

Then, using the above equation in the first variation, we have

Now, using the above, the necessary condition becomes

for all admissible δu(t) less than a small value. The relation becomes

Replacing u*(t) + δu(t) by u(t), the necessary condition becomes

H(x*(t), u*(t), λ*(t), t) ≤ H(x*(t), u(t), λ*(t), t)

for all admissible u(t); in other words, the optimal control u*(t) minimizes the Hamiltonian over
the set of admissible controls.

The previous relation, which means that the necessary condition for the constrained optimal
control system is that the optimal control should minimize the Hamiltonian, is the main
contribution of the Pontryagin Minimum Principle. We note that this is only the necessary
condition and is not in general sufficient for optimality.
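As a brief worked illustration of the minimum principle (a standard time-optimal example added here for concreteness; it is not taken from the derivation above): for the double integrator x1' = x2, x2' = u with |u(t)| ≤ 1 and J = tf - t0, minimizing the Hamiltonian pointwise over the admissible set yields a bang-bang control.

```latex
% Time-optimal control of a double integrator, |u| \le 1, V = 1:
H(x, u, \lambda) = 1 + \lambda_1 x_2 + \lambda_2 u
% Minimizing H over the admissible set |u| \le 1 gives
u^*(t) = \arg\min_{|u|\le 1} H = -\,\mathrm{sgn}\,\lambda_2^*(t)
% The costate equations \dot{\lambda}^* = -\partial H/\partial x are
\dot{\lambda}_1^* = 0, \qquad \dot{\lambda}_2^* = -\lambda_1^*
% so \lambda_2^*(t) is linear in t and changes sign at most once:
% u^* switches at most once between +1 and -1 (bang-bang control).
```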


The variations +δu(t) and -δu(t) are taken in such a way that the negative variation -δu(t) is not
admissible, and thus we get the condition above. On the other hand, by taking the variations
+δu(t) and -δu(t) in such a way that the positive variation +δu(t) is not admissible, we get the
corresponding condition as

It should be noted that


1. the optimality condition is valid for both constrained and unconstrained control systems,
whereas the control relation is valid for unconstrained systems only,
2. the results given in the Table provide the necessary conditions only, and
3. the sufficient condition for unconstrained control systems is that the second derivative of the
Hamiltonian must be positive definite.

Additional Necessary Conditions


In their celebrated works, Pontryagin and his co-workers also obtained additional
necessary conditions for constrained optimal control systems. These are stated below without
proof.
1. If the final time tf is fixed and the Hamiltonian H does not depend on time t explicitly,
then the Hamiltonian must be constant when evaluated along the optimal trajectory; that is,
H(x*(t), u*(t), λ*(t)) = constant for t ∈ [t0, tf].

2. If the final time tf is free (not specified a priori) and the Hamiltonian does not depend
explicitly on time t, then the Hamiltonian must be identically zero when evaluated along
the optimal trajectory; that is, H(x*(t), u*(t), λ*(t)) = 0 for t ∈ [t0, tf].

POLE PLACEMENT BY STATE FEEDBACK


Stability and optimality of a system are closely related to the location of poles or
eigenvalues of the system. If the poles are not in the desired locations, can we move and place
them in the right places? This is the problem to be solved by pole placement. Pole placement can
be achieved by feedback control. Here, we assume that all states are available for feedback control.
Hence, we consider only the state equation

x'(t) = Ax(t) + Bu(t)

with state feedback of the form u = Kx + v. The poles of the controlled system are then the
eigenvalues λ(A + BK). The question is whether or not we can
relocate the poles to arbitrary locations in the complex plane as we desire. The answer is that this
can be done if and only if (A,B) is controllable. To see this, let us first consider single-input–single-
output systems with transfer function of the form.

We can realize this system in state space representation in the following form:


The above state space representation is called the controllable canonical form. The
characteristic equation of the system is
Our goal is to find a state feedback

so that the poles of the controlled system is in the desired locations represented by the desired
characteristic equation

This can be achieved by letting the feedback matrix be

To prove this, let us substitute u = Kx+v into the state equation

The characteristic equation of the above controlled system is

Therefore, if the system is represented in the controllable canonical form, it is


straightforward to design a state feedback to place its poles in arbitrary locations in the complex
plane to achieve stability or optimality.

Procedure 1 (Pole placement of single-input systems)


Given: a controllable system (A,B) and a desired characteristic polynomial

1. Find the characteristic polynomial of A

2. Write the controllable canonical form (Ac,Bc)


3. Find the transform matrix

4. Determine the feedback matrix for (Ac,Bc)

5. Determine the feedback matrix for (A,B)

6. The state feedback is given by u = Kx+v.
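Steps 1-5 above can be collapsed numerically by Ackermann's formula, which is equivalent to the canonical-form transformation for single-input systems. The sketch below uses an illustrative double-integrator model; with this text's u = Kx + v convention, the gain picks up a minus sign relative to the usual u = -Kx form:

```python
import numpy as np

def acker(A, B, poles):
    """Single-input pole placement via Ackermann's formula.
    Returns K for the convention u = K x + v, i.e. eig(A + B K) = poles."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^(n-1) B]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    alpha = np.poly(poles)  # desired characteristic polynomial, highest power first
    # evaluate alpha(A) = A^n + a1 A^(n-1) + ... + an I
    alpha_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(alpha))
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0
    # minus sign converts the usual u = -K x result to this text's u = +K x
    return -e_n @ np.linalg.solve(ctrb, alpha_A)

# illustrative double integrator: place closed-loop poles at -1 +/- j
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-1 + 1j, -1 - 1j])
print(K)                              # [[-2. -2.]]
print(np.linalg.eigvals(A + B @ K))   # poles at -1 +/- j
```

For multi-input systems or better numerical conditioning, a library pole-placement routine would normally be preferred over explicit inversion of the controllability matrix.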

Example
Consider the inverted pendulum


We want to find a state feedback to place the poles at −5, −10, −2+j2, −2−j2. Let us first check if
the system is controllable.

Since rank(C) = 4, the system is controllable. The characteristic polynomial of A is

The controllable canonical form is

The controllability matrix of (Ac,Bc) is

The transform matrix is

The desirable characteristic equation is

The feedback matrix for (Ac,Bc) is

The feedback matrix for (A,B) is

Procedure 2 (Pole placement of multi-input systems)


Given: a controllable system (A,B) and a desired characteristic polynomial

1. Randomly pick

2. Check if the resulting single-input pair is controllable. If yes, continue; otherwise, go back to Step 1.


3. Determine the feedback matrix [k1 . . . kn ] for
4. Determine the feedback matrix for (A,B)


Example
Consider the following multi-input system

We want to find a state feedback to place the poles at −5, −10, −20. Let us first check if the
system is controllable.

Since rank(C) = 3, the system is controllable. Let us pick

The converted system is

which is controllable. The characteristic polynomial of A is

The controllable canonical form is

The controllability matrix of (Ac,Bc)is

The transform matrix is

The desirable characteristic equation is

The feedback matrix for (Ac,Bc)is

The feedback matrix for (A,B)is

If a system is controllable, then state feedback can be used to place poles to arbitrary
locations in the complex plane to achieve stability and optimality. However, in practice, not all
states can be directly measured. Therefore, in most applications, it is not possible to use state
feedback directly. What we can do is to estimate the state variables and to use feedback based on
estimates of states. In this section, we present the following results. (1) We can estimate the state

of a system if and only if the system is observable. (2) If the system is observable, we can construct
an observer to estimate the state. (3) Pole placement for state feedback and observer design can be
done separately.
Let us first consider a naive approach to state estimation. Suppose that a system (A, B) is
driven by an input u. We cannot measure its state x. To estimate x, we can build a duplicate
system (say, in a computer) and let it be driven by the same input, as shown in Figure. In the
figure, x denotes the actual state and x̂ the estimate. To see how good the estimate is, let us
investigate the estimation error x̃ = x - x̂, which satisfies the following differential equation.

Clearly, the dynamics of x̃ are determined by the eigenvalues of A, which may not be
satisfactory. One way to get a better estimate is to find the error of estimation and feed it back
to improve the estimation. However, we cannot measure x̃ directly, but we can measure the output
error ỹ = y - Cx̂. So, let us use this feedback and construct an 'observer', as shown in the
figure below.
The dynamics of the observer in the figure below are given by

x̂'(t) = Ax̂(t) + Bu(t) + G(y(t) - Cx̂(t))

where G is the observer matrix that feeds back the output error. Now, the estimation error x̃
satisfies the following differential equation:

x̃'(t) = (A - GC)x̃(t)


The dynamics of x̃ are determined by the eigenvalues of A - GC. Since the observer matrix
G can be selected in the design process, we can place the poles of the observer in the right
locations to meet the desired performance requirements. The pole placement problem for the
observer can be solved as a 'dual' of the pole placement problem for state feedback. Note that a
matrix and its transpose have the same eigenvalues, and (A - GC)ᵀ = Aᵀ - CᵀGᵀ.

Hence, if we view (Aᵀ, Cᵀ) as (A, B) and -Gᵀ as K, then the pole placement problem for the
observer is transformed into the pole placement problem for state feedback. By duality and our
previous results:

the poles of A - GC can be arbitrarily assigned
⟺ (Aᵀ, Cᵀ) is controllable
⟺ (A, C) is observable.
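A minimal numerical sketch of observer pole placement (the double-integrator model, output matrix, and pole choices are illustrative assumptions; the estimator form x̂' = Ax̂ + Bu + G(y - Cx̂) is used, so the error dynamics are governed by A - GC):

```python
import numpy as np

# Observer gain selection for an illustrative double-integrator model.
# With xhat' = A xhat + B u + G (y - C xhat), the estimation error obeys
# xtilde' = (A - G C) xtilde, so we pick G to place eig(A - G C).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# desired observer poles -> desired polynomial s^2 + a1 s + a0
a1, a0 = np.poly([-3.0, -4.0])[1:]      # s^2 + 7 s + 12

# For this (A, C), A - G C = [[-g1, 1], [-g2, 0]] has characteristic
# polynomial s^2 + g1 s + g2, so coefficient matching gives G directly.
G = np.array([[a1], [a0]])              # g1 = 7, g2 = 12

print(np.linalg.eigvals(A - G @ C))     # approximately [-3, -4]
```

Choosing the observer poles faster than the state-feedback poles is the usual practice, so that estimation errors die out before they disturb the controlled dynamics.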

UNIT-IV MULTILOOP REGULATORY CONTROL

Multi-loop Control - Introduction – Process Interaction – Pairing of Inputs and Outputs -The
Relative Gain Array (RGA) – Properties and Application of RGA – Multi – loop PID Controller–
Biggest Log Modulus Tuning Method – Decoupler.

MULTI-LOOP CONTROL -INTRODUCTION:


When more than one input and output exist in a process, several control configurations are possible
depending on which output is controlled by manipulating which input.
Interactions in a process take place between controlled and manipulated variables.
It is important to specify the degrees of freedom of a process, i.e., the number of independent
variables available to identify or describe the process:
F = V - E

where


V = the number of independent variables describing the process
E = the number of equations physically relating the variables

SINGLE INPUT SINGLE OUTPUT PROCESS (SISO)


Control systems that have only one controlled variable and one manipulated variable.
 Single-input, single-output (SISO) control system
 Single-loop control system
In practical control problems there typically are a number of process variables which must be
controlled and a number of variables which can be manipulated.
 Multi-input, multi-output (MIMO) control system

Fig1: Single input single output process with multiple disturbances

MULTIPLE INPUT MULTIPLE OUTPUT PROCESS (MIMO)

Fig 2: Multiple input multiple output process (2×2)

Fig 3: Multiple input multiple output process (n×n)


In a 2×2 process, two control pairings are possible:

U1 with Y1; U2 with Y2

or

U1 with Y2; U2 with Y1

Transfer function model

In matrix form, Y(s) = H(s) U(s), where

Y(s) = [Y1(s); Y2(s)],  U(s) = [U1(s); U2(s)]

H(s) = [H11(s)  H12(s); H21(s)  H22(s)]

For example, let us consider a jacketed CSTR. The following are the controlled variables,
manipulated variables, and disturbances associated with this application.

Fig 4: Jacketed CSTR


In this system, the controlled variable is the temperature (T), the manipulated variable is the
coolant flow rate (Fc), and the disturbances are the inlet temperature (Ti) and the coolant
temperature (Tc).

MULTI-LOOP CONTROL
Each manipulated variable depends on only a single controlled variable, so the system can be
controlled by a set of conventional feedback controllers.

Fig4: Alternative loop configurations for a 2x2 process


MULTIVARIABLE CONTROL
Each manipulated variable can depend on two or more of the controlled variables. Examples are
MPC and decoupling control.

FLASH DRUM UNIT

Fig 5:Setup of flash drum unit


Here the feed, in liquid form, is fed into the drum; the liquid at pressure Pf is flashed into the
drum at a lower pressure P. The vapor in the drum reaches equilibrium with the liquid. Steam
flowing through the coil provides the necessary heat for maintaining the desired temperature in
the drum.
The control problem is to identify the controlled variables, identify the manipulated inputs, and
generate all possible loop configurations. The degrees of freedom of the flash drum are given by
the following model equations.
Total mass balance:

A dh/dt = Ff - (FV + FL)

Component mass balances:

A d(h xi)/dt = Ff zi - (FV yi + FL xi),  i = 1, 2, ..., N - 1

Heat balance:

cpL A d(hT)/dt = cpf Ff Tf - (cpV FV T + cpL FL T) + U As (Ts - T)

Equilibrium relations:

yi = ki(T, P) xi,  i = 1, 2, ..., N

Consistency constraints:

Σ(i=1..N) xi = 1;  Σ(i=1..N) yi = 1

All the above relations of the system are 2N+3 equations with 4N+14 variables
GENERATION OF ALTERNATIVE LOOP CONFIGURATION
 Once the manipulated variables and controlled variables are identified, now it is required
to specify which manipulated variable affects the controlled variable.

CONFIGURATION OF CONTROL LOOPS


• As the number of variables in a process increases, the number of possible control loop
configurations increases correspondingly.

If N = 3, then 3! = 6 loop configurations are possible.

If N = 5, then 5! = 120 loop configurations are possible.
Selection of the best loop configuration among all of these is a difficult problem. Various
criteria can be used to select the best couplings between the controlled and manipulated variables:
1. Choose the manipulations that have a fast and direct effect on the controlled variable.
2. Choose the loop which has the least dead time between the manipulated and controlled variables.
3. Choose the couplings that have minimal interactions among the various loops.
Consider the control operation of a flash drum unit with two controlled outputs and two
manipulations

Fig 6a) Block diagram of the process with two controlled outputs and two manipulations
open loop

Fig 6b) Block diagram of the process with two controlled outputs and two manipulations
closed loop
Thus, the input-output relations of the process are given by

Y1(s) = H11(s) M1(s) + H12(s) M2(s)    (1)

Y2(s) = H21(s) M1(s) + H22(s) M2(s)    (2)

where H11(s), H12(s), H21(s), H22(s) are the four transfer functions relating the inputs and
outputs. If Gc1(s) and Gc2(s) are the transfer functions of the two controllers, then the values
of the manipulations are given by

M1(s) = Gc1(s)[Y1,sp(s) - Y1(s)]    (3)

M2(s) = Gc2(s)[Y2,sp(s) - Y2(s)]    (4)
INTERACTION OF CONTROL LOOPS
Consider a 2x2 process with two control loops
ONE LOOP CLOSED
• Assume loop 1 is closed and loop 2 is open.
• The setpoints to the two loops are Y1,sp and Y2,sp.
• Assume M2 = constant. A step change in M1 of magnitude ΔM1 produces a new steady-state
value of Y1, with change ΔY1.
• The open-loop static gain between M1 and Y1, keeping M2 constant, is (∂Y1/∂M1)M2 = ΔY1/ΔM1.
• A change in M1 also causes a change in the uncontrolled output Y2.
• The way in which the setpoint Y1,sp affects both the outputs Y1 and Y2 can be observed in
the figure below.
Similar conclusions can be drawn if loop 2 is closed and loop 1 is open.

Fig 7a) Interaction among control loops (a) one loop closed
7b) both loops closed


BOTH LOOPS CLOSED


 Initially the process is at steady state with both outputs at their desired values. Consider a
change in the setpoint Y1s.p and keeping setpoint of loop 2 the same.
 The controller of loop1 will change value of m1 so as to bring the output of y1 to a new set
point value. This is known as direct effect of M1 on Y1.
 The control action of M1 not only affects Y1 but also disturbs Y2 from its steady state. So
in order to bring Y2 to its new steady state M2 is altered.
 But altering M2 in turns affects the steady state output Y1. This is called as indirect effect
of M2 on Y1.
• NOTE: The regulatory action of a control loop deregulates the output of the other loop,
which in turn takes control action to compensate for the variations in its controlled output,
disturbing at the same time the output of the first loop.
Substituting equation (3) into (1) and (2), with loop 2 open (M2 constant):

Y1 = [H11Gc1 / (1 + H11Gc1)] Y1,sp    (5)

Y2 = [H21Gc1 / (1 + H11Gc1)] Y1,sp    (6)

Substituting (3) and (4) into (1) and (2), with both loops closed:

(1 + H11Gc1) Y1 + H12Gc2 Y2 = H11Gc1 Y1,sp + H12Gc2 Y2,sp    (7)

H21Gc1 Y1 + (1 + H22Gc2) Y2 = H21Gc1 Y1,sp + H22Gc2 Y2,sp    (8)

Solving (7) and (8):

Y1 = P11(s) Y1,sp + P12(s) Y2,sp    (9)

Y2 = P21(s) Y1,sp + P22(s) Y2,sp    (10)

REMARKS:
1. Equations (9) and (10) describe the response of the outputs Y1 and Y2 when both loops are
closed.
2. When H12 = H21 = 0 there is no interaction between the two control loops, and the closed-loop
outputs are given by the following equations:

Y1 = [H11Gc1 / (1 + H11Gc1)] Y1,sp

Y2 = [H22Gc2 / (1 + H22Gc2)] Y2,sp

The closed-loop stability of the two non-interacting loops depends on the roots of their
characteristic equations. Thus, if the roots of the two equations

1 + H11Gc1 = 0
1 + H22Gc2 = 0

have negative real parts, the two non-interacting loops are stable.
3. The stability of the closed-loop outputs of two interacting loops is determined by the roots of
the characteristic equation

Q(s) = (1 + H11Gc1)(1 + H22Gc2) - H12H21Gc1Gc2 = 0


Thus, if the roots of the above equation have negative real parts, the two interacting loops are
stable.
4. Suppose that the two feedback controllers Gc1 and Gc2 are tuned separately. Then we cannot
guarantee stability for the overall control system when both loops are closed.
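Remark 4 can be made concrete with a small numerical sketch (the transfer functions and gains are illustrative assumptions): take H11 = H22 = 1/(s + 1), H12 = H21 = 2/(s + 1), and pure proportional controllers Gc1 = Gc2 = kc. Each loop alone has characteristic equation s + 1 + kc = 0 and is stable for any kc > 0, yet the interacting characteristic equation Q(s) = 0 loses stability for kc > 1:

```python
import numpy as np

# Q(s) = (1 + H11 Gc1)(1 + H22 Gc2) - H12 H21 Gc1 Gc2 = 0, multiplied
# through by (s + 1)^2, becomes (s + 1 + kc)^2 - 4 kc^2 = 0.
def interacting_roots(kc):
    """Roots of the interacting closed-loop characteristic polynomial."""
    return np.roots([1.0, 2.0 * (1.0 + kc), (1.0 + kc) ** 2 - 4.0 * kc ** 2])

print(interacting_roots(0.5))  # both roots negative: stable
print(interacting_roots(2.0))  # one root positive: interaction destabilizes
```

This is exactly the situation Remark 4 warns about: loops that are stable when tuned one at a time can become unstable once both are closed.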

BIGGEST LOG MODULUS TUNING METHOD.


This method proposed by Luyben is an extension of the SISO Nyquist stability criterion to
multivariable systems. The essential steps of the algorithm are as follows:

Step 1: Determine the ultimate gain Ku,i and the ultimate frequency ωu,i of each diagonal
process transfer function Gii(s) by the classical SISO method.
Step 2: Calculate the corresponding Ziegler-Nichols settings (KcZN,i and τIZN,i) for each loop.
Step 3: Assume a detuning factor F; typical values range from 2 to 5.
Step 4: Calculate the gains and reset times for the feedback controllers:

Kc,i = KcZN,i / F
τI,i = τIZN,i × F

It must be noted that F remains the same for all the loops.
Step 5: Compute the closed-loop log modulus Lcm using the above-designed controllers over a
specified frequency range:

Lcm = 20 log |W / (1 + W)|

where

W = -1 + det[I + G(s)K(s)]

Step 6: Find the biggest log modulus, Lcm(max), from the data of Lcm versus frequency.
Step 7: Check if Lcm(max) = 2N, where N is the order of the process matrix.
(a) If Lcm(max) = 2N, then stop.
(b) Otherwise, return to Step 3.

The larger the detuning factor F, the more stable the system will be, but the set point and load
responses become more sluggish. The BLT algorithm yields a value of F which gives a
reasonable compromise between stability and performance in multivariable systems. The
Lcm(max) = 2N criterion suggests that the higher the order of the system, the more underdamped the
closed loop system must be to achieve reasonable responses.
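The steps above can be sketched numerically. The routines below are a simplified reading of the BLT algorithm (PI controllers, a shared detuning factor F increased until Lcm(max) ≤ 2N); the frequency grid, the search strategy, and the Wood-Berry column used in the usage note are assumptions for illustration:

```python
import numpy as np

def ultimate(g, wgrid):
    """Ultimate gain and frequency of a SISO loop g(s), sign-aware for negative-gain loops."""
    sgn = np.sign(g(0.0).real)
    ph = np.unwrap(np.angle(np.array([sgn * g(1j * w) for w in wgrid])))
    i = int(np.argmin(np.abs(ph + np.pi)))   # phase crossover (-180 deg) of sgn*g
    return sgn / abs(g(1j * wgrid[i])), wgrid[i]

def lcm_max(G, kc, ti, wgrid, n):
    """Biggest closed-loop log modulus: Lcm = 20 log|W/(1+W)|, W = -1 + det(I + G K)."""
    worst = -np.inf
    for w in wgrid:
        s = 1j * w
        K = np.diag([kc[i] * (1 + 1 / (ti[i] * s)) for i in range(n)])
        W = -1 + np.linalg.det(np.eye(n) + G(s) @ K)
        worst = max(worst, 20 * np.log10(abs(W / (1 + W))))
    return worst

def blt_tune(G, n, wgrid, F=2.0, step=0.1, Fmax=10.0):
    """Detune Ziegler-Nichols PI settings by a common factor F until Lcm(max) <= 2N dB."""
    zn = [ultimate(lambda s, i=i: G(s)[i, i], wgrid) for i in range(n)]
    while F < Fmax:
        kc = [ku / (2.2 * F) for ku, _ in zn]               # Kc,i = KcZN,i / F
        ti = [F * (2 * np.pi / wu) / 1.2 for _, wu in zn]   # tauI,i = tauIZN,i * F
        if lcm_max(G, kc, ti, wgrid, n) <= 2.0 * n:
            return F, kc, ti
        F += step
    return F, kc, ti
```

Applied to the well-known 2x2 Wood-Berry distillation column, this sketch recovers a detuning factor near Luyben's reported F ≈ 2.55, with a positive gain for loop 1 and a negative gain for loop 2.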

RELATIVE GAIN ARRAY


 N controlled outputs and N manipulated variables can form control loops in N! ways. Among
these, the configuration with minimal interaction must be chosen; the Relative Gain Array (RGA)
is used to make this choice.
 The RGA provides a way to choose those inputs and outputs which will reduce the interaction
among the resulting loops formed with them.
Consider a process with two outputs and two inputs.
1. Assume that M2 remains constant. Introduce a step change of magnitude ΔM1 in the input M1
and record the new steady-state value of the output Y1. Let ΔY1 be the change in the output. The
open loop static gain between Y1 and M1 with M2 constant is

(ΔY1 / ΔM1) at constant M2

2. A change in M1 affects not only Y1 but also the output Y2. To keep the loop 2 output Y2
constant, M2 must be varied, which in turn affects the steady state
of Y1. Let us take this steady-state output as Y′1. Now the open loop gain between Y1 and M1 when
Y2 is kept constant is given by

(ΔY′1 / ΔM1) at constant Y2

3. The ratio of the two open loop gains computed above gives the relative gain λ11 between output Y1
and input M1:

λ11 = (ΔY1 / ΔM1)M2 / (ΔY′1 / ΔM1)Y2
The relative gain provides a useful measure of interaction.
 If 11 =0,then Y1 does not respond to M1 and M1 should not be used to control Y1.
 If 11 =1,then M2 does not affect Y1,and the control loop between Y1and M1 does not
interact with the loop of Y2 and M2.This is called as COMPLETELY DECOUPLED
LOOPS.

 If 0< 11 <1,then the interaction exists and as M2 varies it affects the S.S value of Y1.The
smaller the value of 11 ,the larger the interaction becomes.
 If 11 <0,then M2 causes strong effect on Y1,and in the opposite direction from that caused
by M1.Interaction effect is very dangerous.
In this way we can relate the other relative gains 12 , 21 , 22 .
The relative gain λij between input j and output i is defined as follows:

λij = (∂yi/∂uj) with uk constant, k ≠ j   ÷   (∂yi/∂uj) with yk constant, k ≠ i

that is, the gain between input j and output i with all other loops open, divided by the gain
between input j and output i with all other loops closed.
The relative-gain array provides a methodology by which we select pairs of input and output variables
in order to minimize the interaction among the resulting loops. It is a square matrix that contains the
individual relative gains as elements, that is Λ = [λij]. For a 2x2 system, the RGA is

Λ = | λ11  λ12 |
    | λ21  λ22 |

and its rows and columns each sum to one:

λ11 + λ12 = 1
λ11 + λ21 = 1
λ12 + λ22 = 1
λ21 + λ22 = 1

Thus, for a 2x2 system only one relative gain must be calculated for the entire array:

Λ = |   λ11     1 − λ11 |
    | 1 − λ11     λ11   |

71
EI6801 COMPUTER CONTROL OF PROCESSES Dept. of EIE and ICE

0.05 0.95 
Consider a relative gain array,    
 0.95 0.05 
We pair Y1 with U1 and Y2 with U2
RGA-implications:
1. Pair loops on λij values that are positive and close to 1.
2. Reasonable pairings: 0.5 < λij < 4.0.
3. Pairing on negative λij values results in at least one of the following: a. the closed loop system is
unstable; b. the loop with the negative λij is unstable; c. the closed loop system becomes unstable if
the loop with the negative λij is turned off.
4. Plants with large RGA-elements are always ill-conditioned (the converse does not hold: a plant
with a large condition number γ(G) may have small RGA-elements).
5. Plants with large RGA-elements around the crossover frequency are fundamentally difficult to
control because of sensitivity to input uncertainty; decouplers or other inverse-based controllers
should not be used for plants with large RGA-elements.
6. A large RGA-element implies sensitivity to element-by-element uncertainty.
7. If the sign of an RGA-element changes from s=0 to s=∞, then there is a RHP-zero in G or in some
subsystem of G.
8. The RGA-number can be used to measure diagonal dominance: RGA-number = ||Λ(G) − I||sum.
For decentralized control, pairings with an RGA-number close to zero at the crossover frequency
are preferred.
9. For integrity of the whole plant, we should avoid input-output pairing on negative RGA-elements.
10. For stability, pairing on an RGA-number close to zero at the crossover frequency is preferred.

APPLICATIONS OF RGA
Consider the whiskey blending problem, which has the steady-state process gain matrix and RGA

| Y1 |   | 0.025  0.075 | | U1 |          | 0.025  0.075 |        | 0.25  0.75 |
| Y2 | = |  −1      1   | | U2 | ,   K =  |  −1      1   | ,  Λ = | 0.75  0.25 |

indicating that the output-input pairings should be Y1-U2 and Y2-U1. In order to achieve this
pairing, we could use the following block diagram. The difference between R2 and Y2 is used to
adjust U1 using a PID controller (Gc1); hence, we refer to this pairing as Y2-U1. The difference
between R1 and Y1 is used to adjust U2 using a PID controller (Gc2); hence we refer to this pairing
as Y1-U2. This can also be done by redefining variables.
Problem:
Consider a process with the following input-output relationships:

y1 = [1/(s + 1)] m1 − [1/(0.1s + 1)] m2          (1)

y2 = [0.2/(0.5s + 1)] m1 + [0.8/(s + 1)] m2      (2)

1. Make a unit step change in m1 while keeping m2 constant (i.e., m1 = 1/s). Then from equation (1)

y1 = [1/(s + 1)] · (1/s)

According to the final value theorem, the new steady state value of y1 is

y1,ss = lim(s→0) [s · y1(s)] = lim(s→0) [1/(s + 1)] = 1
Therefore (Δy1/Δm1) at constant m2 = 1/1 = 1.

2. Keep y2 constant under control by varying m2, and introduce a unit step in m1. Since y2 must
remain constant, equation (2) gives us how much m2 must change:

m2 = −(0.2/0.8) · [(s + 1)/(0.5s + 1)] m1

Substituting this value in equation (1):

y1 = [1/(s + 1)] m1 + [1/(0.1s + 1)] · (0.2/0.8) · [(s + 1)/(0.5s + 1)] m1

Then the resulting new steady state for y1 is given by

y1,ss = lim(s→0) [s · y1(s)] = lim(s→0) [s {1/(s + 1) + (0.2/0.8)(s + 1)/((0.1s + 1)(0.5s + 1))} (1/s)] = 1.25

Therefore (Δy1/Δm1) at constant y2 = 1.25/1 = 1.25, and

λ11 = (Δy1/Δm1)m2 / (Δy1/Δm1)y2 = 1/1.25 = 0.8

Since the rows and columns of a relative gain array must each sum to 1, we get

λ12 = λ21 = 0.2
λ22 = 0.8
It is easy to conclude that we should pair m1 with y1 and m2 with y2 to form two loops with
minimum interaction. If either of the other two loop configurations had been chosen, the
interaction would have been four times larger.
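The two-experiment procedure above can be replayed on the steady-state gains alone. Taking the gain matrix of equations (1) and (2) at s = 0, K = [[1, −1], [0.2, 0.8]], the open and "other loop closed" gains and the resulting relative gain follow in a few lines:

```python
import numpy as np

K = np.array([[1.0, -1.0],    # steady-state gains of (1): y1 from m1, m2
              [0.2,  0.8]])   # steady-state gains of (2): y2 from m1, m2

g_open = K[0, 0]                        # (dy1/dm1) with m2 constant = 1
m2_move = -K[1, 0] / K[1, 1]            # m2 change holding y2 at zero = -0.25
g_closed = K[0, 0] + K[0, 1] * m2_move  # (dy1/dm1) with y2 constant = 1.25
lam11 = g_open / g_closed               # relative gain = 0.8, as in the worked example
```

This reproduces the worked result: the gain with y2 held constant rises to 1.25, so λ11 = 1/1.25 = 0.8.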

To eliminate the interaction of loop 1 on loop 2, let us construct another decoupler whose transfer
function is given as

D2(s) = −H21(s) / H22(s)          (3)
From Fig 8(b), with the two feedback loops it is possible to obtain an input-output relationship
for the two closed loops.

DESIGN OF NON-INTERACTING CONTROL LOOPS USING DECOUPLERS


The purpose of decouplers is to cancel the interaction effects between the two loops and thus render
the two loops non-interacting. Let us consider the input-output relationships of a process with two
inputs and two outputs, and two control loops: M1 with Y1 and M2 with Y2. To keep Y1 constant,
M1 should change by the following amount:

M1(s) = −[H12(s)/H11(s)] M2(s)          (1)

We introduce a dynamic element

D1(s) = −H12(s) / H11(s)          (2)

This element uses M2 as input and provides an output which gives the amount by which M1 should be
varied to keep Y1 constant, cancelling the effect of M2 on Y1. In this way the decoupler cancels the
effect of loop 2 on loop 1.

Fig 8(a): 2x2 process with one decoupler

Y1(s) = {Gc1 [H11 − H12 H21 / H22] / (1 + Gc1 [H11 − H12 H21 / H22])} Y1,sp          (4)

Y2(s) = {Gc2 [H22 − H12 H21 / H11] / (1 + Gc2 [H22 − H12 H21 / H11])} Y2,sp          (5)

Equations (4) and (5) show that the outputs of loop 1 and loop 2 each depend only on their own
set point and not on that of the other loop.
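The cancellation claimed by equations (4) and (5) is easy to verify numerically at steady state. The gain values below are illustrative assumptions; the decoupling matrix D combines D1 = −H12/H11 and D2 = −H21/H22 with unity direct paths:

```python
import numpy as np

H = np.array([[2.0, 1.0],    # assumed steady-state gains H11, H12
              [0.5, 4.0]])   # assumed steady-state gains H21, H22

# Decoupling matrix built from D1 = -H12/H11 and D2 = -H21/H22
D = np.array([[1.0,              -H[0, 1] / H[0, 0]],
              [-H[1, 0] / H[1, 1], 1.0            ]])

Heff = H @ D   # apparent process seen by the two controllers
```

The off-diagonal elements of Heff vanish, and the (1,1) element equals H11 − H12·H21/H22, exactly the effective process appearing in equation (4).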
Fig 8(c): Complete decoupling block diagram


NOTE:
1. Two interacting control loops are perfectly decoupled only if H11, H12, H21 and H22 are perfectly
known. Practically this is not possible, so only partial decoupling is achievable.
2. For non-linear processes, such as many chemical processes, the loops may be perfectly decoupled
initially, but as the process parameters change the interaction increases. The solution is to use
adaptive decouplers.
3. Perfect decoupling allows independent tuning of each controller.
4. Decouplers are feedforward control elements.

MULTI-LOOP PID CONTROLLER


 Multivariable or multi-input multi-output (MIMO) systems are frequently encountered in the
chemical and process industries.
 Despite the considerable work that has been done on advanced multivariable controllers for
MIMO systems, multi-loop PID controllers (sometimes known as decentralized PID
controllers) are still much more favoured in most commercial process control applications,
because of their satisfactory performance along with their simple, failure tolerant, and easy to
understand structure.
 Multi-loop PID controllers are made up of individual PID controllers acting in a multi-loop
fashion and tuned mainly on a single loop basis.
 However, due to the process interactions in MIMO systems, this approach cannot guarantee
stability when all of the loops are closed simultaneously.
 This is because the closing of one loop affects the dynamics of the other loops and can make
them worse or even unstable. This complex interactive nature of MIMO systems makes the
proper tuning of multi-loop PID controllers quite difficult.
 Despite the wide popularity of multi-loop PID controllers, the number of applicable tuning
methods is relatively limited.
 Furthermore, many of the existing methods of designing multi-loop PID controllers are
computationally intensive and/or require solving a large scale optimization problem and,
therefore, are less appealing to the practitioners.
 Most existing tuning methods for multi-loop PID controllers are similar in that they first use
the single loop tuning rules by ignoring process interactions and then detune the individual
loops to preserve stability or adjust them to meet some performance specification
 However, since these approaches are based on single loop tuning rules, they often lead to local
optimum solutions or too conservative responses.
 This method is based on the direct synthesis approach, in which the multi-loop PI/PID
controller is designed based on the desired closed-loop transfer function.
 The resulting analytical design rule includes a frequency-dependent relative gain array that
provides information on dynamic interactions useful for estimating the controller parameters.

THE MULTI-LOOP FEEDBACK CONTROLLER DESIGN FOR DESIRED SET-POINT RESPONSES

Fig 9: Multi loop control


The figure above shows the generalized block diagram for multi-loop feedback control.
Consider a general transfer function matrix for stable, square, multiple-delay MIMO processes,
represented as the following matrix:

       | g11(s)  g12(s)  ...  g1n(s) |
G(s) = | g21(s)  g22(s)  ...  g2n(s) |          (1)
       | gn1(s)  gn2(s)  ...  gnn(s) |

From the standard block diagram of multi-loop feedback control shown in the figure, the closed-loop
transfer function matrix can be written as

H(s) = G(s) Gc(s) [I + G(s) Gc(s)]^−1          (2)
Consider a transfer function H~(s) of diagonal structure for the desired closed-loop response. Then
the feedback controller that gives the desired closed-loop response can be found straightforwardly by
rearranging equation (2). However, the resulting controller is generally not of diagonal (or
decentralized) form. By taking off all off-diagonal elements from the resulting centralized
controller, one can obtain a multi-loop (or decentralized) feedback controller as follows:

Gc(s) = diag{ G^−1(s) H~(s) [I − H~(s)]^−1 }          (3)

It is clear that the controller given by (3) yields a closed-loop response closer to the desired one when
the process interactions are insignificant. Since multi-loop controllers are usually applied to processes
with modest interactions, this approach has validity. Note that the multi-loop controller given by
equation (3) is not of standard PID form. The controller above consists of two parts,
G^−1(s) and H~(s)[I − H~(s)]^−1.
Where G 1 ( s) can be written as


adjG ( s)
G 1 ( s) = (4)
G( s)
Where adjG = [G ji ] where Gij is the cofactor corresponding to g ij in G; G ( s) is the determinant
of G(s)
Furthermore H ~ (s)[ I  H ~ (s)]}1 can be expressed in terms of diagonal element as
h (s)
H ~ (s)[ I  H ~ (s)]}1 = [ H ~ 1 ( s)  I]1  diag[ ii ] (5)
1  hii ( s )
where hii is each diagonal element of H(s) and corresponds to the desired servo closed-loop
transfer function for each loop.
Substituting equations (4) and (5) in (3):

Gc(s) = diag{ [adj G(s)/|G(s)|] diag[ hii(s)/(1 − hii(s)) ] }          (6)

Therefore, each element of the multi-loop controller can be derived as

gci(s) = (Gii(s)/|G(s)|) · (hii(s)/(1 − hii(s)))          (7)

From Bristol, the diagonal element of the frequency-dependent relative gain array for G(s) is
calculated by

λii(s) = gii(s) Gii(s)/|G(s)|          (8)

Hence, by substituting (8) into (7), each element of the multi-loop controller can be obtained as

gci(s) = λii(s) gii^−1(s) (hii(s)/(1 − hii(s)))          (9)
It is noted that λii(0) corresponds to the diagonal element of the steady-state relative gain array
(RGA) of Bristol.
For processes with multiple delays, the proposed multi-loop controller can be written as

gci(s) = s^−1 pii(s)          (10)

Furthermore, (10) can be expanded using the Maclaurin series as

gci(s) = (1/s)[pii(0) + s p′ii(0) + s² p″ii(0)/2 + ...]          (11)

The standard form of the multi-loop PID controller is given by

Gci(s) = (1/s)[KIi + s KCi + s² KDi]          (12)

The proposed PID controller is found by comparison between (11) and (12):

KCi = diag{p′ii(0)}          (13)
KIi = diag{pii(0)}          (14)
KDi = diag{p″ii(0)/2}          (15)

From (13), (14), and (15), it is straightforward to design the multi-loop PI/PID controller for
various multivariable processes with delays.
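The Maclaurin extraction in (13)-(15) only needs p(0), p′(0) and p″(0) for each loop, which can be approximated with central differences when pii(s) is available only as a callable. A minimal sketch; the quadratic test function in the usage note is an assumption chosen so the expected gains are known exactly:

```python
def pid_from_p(p, h=1e-4):
    """PID gains from the Maclaurin expansion of p(s) = s*gc(s), eqs (13)-(15).

    Returns (K_C, K_I, K_D) = (p'(0), p(0), p''(0)/2), using central differences.
    """
    p0 = p(0.0)
    kc = (p(h) - p(-h)) / (2 * h)              # K_C = p'(0)
    kd = (p(h) - 2 * p0 + p(-h)) / h**2 / 2    # K_D = p''(0)/2
    return kc, p0, kd
```

For p(s) = 2 + 3s + 4s², the routine returns (K_C, K_I, K_D) ≈ (3, 2, 4), matching a direct reading of equations (13)-(15).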
UNIT V MULTIVARIABLE REGULATORY CONTROL 9


Introduction to Multivariable control –Multivariable PID Controller - Multivariable IMC -
Multivariable Dynamic Matrix Controller – Multivariable Model Predictive Control –
Generalized Predictive Controller – Implementation Issues

Introduction:
The chemical plant usually involves multiple input and multiple output (MIMO) variables. The
MIMO control problem is handled by two approaches:
1. Centralized (Multivariable) control
2. Decentralized (Multi-loop) control

Centralized control:
It uses all process inputs and outputs measurement simultaneously to determine all manipulated
variables.

S.no.  Advantages                                   Disadvantages
1.     It can handle cross-loop interaction.        The control algorithm and calculations may be
                                                    complex.
2.     It can incorporate input constraints         It may be difficult for plant operators to
       directly.                                    understand.
3.     It can provide optimal usage of the          Failure of the central control system may affect
       manipulated variables.                       all loops.

Decentralized control:
It uses multiple single loop controllers. Each single output is connected to a single input forming
a network of multiple loops.

S.no.  Advantages                                   Disadvantages
1.     Usually a simple algorithm.                  Does not handle cross-loop interaction.
2.     Easy for plant personnel to understand.      Does not optimize the use of the manipulated
                                                    variables.
3.     Standard control designs have been           There are many possible control configurations.
       developed for common unit operations.
4.     Maintaining a failed loop does not affect    Does not handle input constraints efficiently.
       the other loops.

Multivariable PID controller:


The two input-two output (TITO) system is one of the most prevalent categories of multivariable
systems, either because the process itself is 2x2 or because a complex process has been decomposed
into 2x2 blocks with non-negligible interactions between its inputs and outputs. When the interactions
in the different loops of the process are modest, a diagonal or decentralized controller is often
adequate. The diagonal controller can be expressed mathematically as follows:
| y1(s) |   | g11(s)  g12(s) | | d11(s)  d12(s) | | gc1(s)    0    | | e1(s) |
| y2(s) | = | g21(s)  g22(s) | | d21(s)  d22(s) | |   0     gc2(s) | | e2(s) |
              process           decoupler          diagonal controller  error
In the diagonal controller, the diagonal elements are controllers and the off-diagonal elements are
zero. With this controller, a decoupler is used to minimize or remove interaction. When interactions
are significant, a full matrix controller is used. This centralized control is shown mathematically as
follows:

| y1(s) |   | g11(s)  g12(s) | | gc1(s)  gc2(s) | | e1(s) |
| y2(s) | = | g21(s)  g22(s) | | gc3(s)  gc4(s) | | e2(s) |
              process           full matrix controller  error

The closed loop transfer function is obtained from Y(s) = Gp(s)U(s), where U(s) = Gc(s)(R(s) − Y(s)):

Y(s) = (I + G(s)Gc(s))^−1 G(s)Gc(s) R(s)

where

Y(s) = | y1(s) |          G(s) = | g11(s)  g12(s) |
       | y2(s) |                 | g21(s)  g22(s) |

Gc(s) = | gc1(s)  gc2(s) |       R(s) = | r1(s) |
        | gc3(s)  gc4(s) |              | r2(s) |
The stability of the closed loop system is determined by the roots of the characteristic equation

|I + G(s)Gc(s)| = 0
A process with an equal number of inputs and outputs controlled by the above methods is
called a square system. Non-square systems often arise in the process industry, e.g. the 2x3
mixing tank system, the 5x7 Shell standard control problem, and the 4x5 crude distillation system.
Two simple methods to control non-square systems are Davison's method and the Tanttu and
Lieslehto method.
Davison's method of designing a centralized PID controller:
Davison proposed a centralized multivariable PID controller tuning method for square systems.
Here the proportional, integral and derivative gain matrices are given by

Kc = δ [G(s=0)]^−1
KI = ε [G(s=0)]^−1
KD = ρ [G(s=0)]^−1

where [G(s=0)]^−1 is called the rough tuning matrix, and δ and ε (and the corresponding derivative
factor ρ) are the fine tuning parameters, which generally range from 0 to 1. This method can be
extended to non-square systems. As the inverse does not exist for a non-square system, the
Moore-Penrose pseudo-inverse is used:

A⁺ = A^H (A A^H)^−1, where A^H is the Hermitian (conjugate) transpose of A.

So for a non-square system, the PID controller gains are

Kc = δ [G(s=0)]⁺
KI = ε [G(s=0)]⁺
KD = ρ [G(s=0)]⁺
Multivariable Internal Model Control:


The general multivariable IMC procedure is as follows:
1. Factor the process into invertible and noninvertible elements,

G~p(s) = G~p+(s) G~p−(s)
The difficulty is that there are many ways to factor the process transfer function matrix. One way
is to place the RHP transmission zero z on the diagonal as an all-pass element, so that every diagonal
entry of G~p+(s) is (−s/z + 1)/(s/z + 1) and the off-diagonal entries are zero:

G~p+(s) = diag[ (−s/z + 1)/(s/z + 1), ..., (−s/z + 1)/(s/z + 1) ]

Though this is the simplest method, it does not offer the best performance.
2. Form the idealized controller:

Q~(s) = G~p−^−1(s)

3. Add a filter to make all the elements in the controller matrix proper:

Q(s) = Q~(s) F(s) = G~p−^−1(s) F(s)

where F(s) is normally a diagonal matrix,

F(s) = diag[ 1/(λi s + 1)^n ]
4. The filter factors λi are adjusted to vary the response and robustness characteristics. In the
limit of a perfect model (G~p(s) = Gp(s)), the response will be

y(s) = Gp(s) Q(s) r(s) = G~p+(s) G~p−(s) G~p−^−1(s) F(s) r(s)

y(s) = G~p+(s) F(s) r(s)
DYNAMIC MATRIX CONTROL:
Dynamic Matrix Control (DMC) was developed by the Shell Oil Company in the 1960s and 1970s. It is
based on a step response model, which has the form

ŷk = s1 Δu(k−1) + s2 Δu(k−2) + ... + s(N−1) Δu(k−N+1) + sN u(k−N)          (1)

which can be written as

ŷk = Σ(i=1 to N−1) si Δu(k−i) + sN u(k−N)          (2)

where ŷk is the model prediction at time step k, and u(k−N) is the manipulated input N steps in the
past. The difference between the measured output yk and the model prediction is called the
additive disturbance:

dk = yk − ŷk          (3)

The corrected prediction is then equal to the actual measured output at step k:

ŷck = ŷk + dk          (4)

Similarly, the corrected predicted output at the first time step in the future can be found from

ŷc(k+1) = ŷ(k+1) + d̂(k+1)

ŷc(k+1) = Σ(i=1 to N−1) si Δu(k−i+1) + sN u(k−N+1) + d̂(k+1)          (5)

ŷc(k+1) = s1 Δu(k) + Σ(i=2 to N−1) si Δu(k−i+1) + sN u(k−N+1) + d̂(k+1)
So for the jth step into the future, we find

ŷc(k+j) = ŷ(k+j) + d̂(k+j)

ŷc(k+j) = Σ(i=1 to j) si Δu(k−i+j) + Σ(i=j+1 to N−1) si Δu(k−i+j) + sN u(k−N+j) + d̂(k+j)          (6)
          [effect of future moves]   [effect of past moves]                     [correction term]

and we can separate the effect of past and future control moves as shown in the above equation.
The most common assumption is that the correction term is constant in the future (the
constant additive disturbance assumption):

d̂(k+j) = d̂(k+j−1) = ... = d̂k = yk − ŷk          (7)

Also, realize that there are no control moves beyond the control horizon of M steps, so

Δu(k+M) = Δu(k+M+1) = ... = Δu(k+P−1) = 0          (8)
In matrix-vector form, a prediction horizon of P steps and a control horizon of M steps yields

Ŷc = Sf Δuf + Spast Δupast + sN uP + d̂          (9)

where
Ŷc = [ŷc(k+1), ..., ŷc(k+P)]ᵀ is the Px1 corrected output prediction;
Δuf = [Δu(k), ..., Δu(k+M−1)]ᵀ is the Mx1 vector of current and future control moves;
Δupast = [Δu(k−1), ..., Δu(k−N+2)]ᵀ is the (N−2)x1 vector of past control moves;
uP = [u(k−N+1), ..., u(k−N+P)]ᵀ is the Px1 vector of past inputs;
d̂ = [d̂(k+1), ..., d̂(k+P)]ᵀ is the Px1 vector of predicted disturbances;
Sf is the PxM dynamic matrix

     | s1     0       ...  0        |
     | s2     s1      ...  0        |
Sf = | ...                          |
     | sj     s(j−1)  ...  s(j−M+1) |
     | ...                          |
     | sP     s(P−1)  ...  s(P−M+1) |

and Spast is the Px(N−2) matrix of past-move coefficients

        | s2      s3      ...  s(N−2)  s(N−1) |
        | s3      s4      ...  s(N−1)  0      |
Spast = | ...                                 |
        | s(j+1)  s(j+2)  ...  0       0      |
        | ...                                 |
        | s(P+1)  s(P+2)  ...  0       0      |

In equation (9), the corrected predicted output response is naturally composed of a "forced
response" (contributions of the current and future control moves) and a "free response" (the output
changes that are predicted if there are no future control moves). The difference between the
setpoint trajectory r and the future predictions is the corrected predicted error

Ec = r − [Spast Δupast + sN uP + d̂] − Sf Δuf          (10)

in which r together with the bracketed terms gives the unforced error E (the error if no current and
future control moves are made), so this can be written

Ec = E − Sf Δuf          (11)

where the future predicted errors are composed of "free response" (E) and "forced response"
(Sf Δuf) contributions.

The least-squares objective function is

Φ = Σ(i=1 to P) (ec(k+i))² + w Σ(i=0 to M−1) (Δu(k+i))²          (12)

Notice that the quadratic terms can be written in matrix-vector form as

Σ(i=1 to P) (ec(k+i))² = (Ec)ᵀ Ec          (13)

and

w Σ(i=0 to M−1) (Δu(k+i))² = (Δuf)ᵀ W Δuf          (14)

where W is the MxM diagonal weighting matrix with w on the diagonal. Therefore the objective
function can be written as

Φ = (Ec)ᵀ Ec + (Δuf)ᵀ W Δuf

subject to the modelling equality constraint (11):

Ec = E − Sf Δuf          (15)

Substituting (15) into the objective function,

Φ = (E − Sf Δuf)ᵀ (E − Sf Δuf) + (Δuf)ᵀ W Δuf          (16)

The solution that minimizes this objective function is

Δuf = (Sfᵀ Sf + W)^−1 Sfᵀ E = K E          (17)
The current and future control move vector is proportional to the unforced error vector.
Because only the current control move is actually implemented, we use the first row of the
K matrix:

Δuk = K1 E          (18)

where K1 represents the first row of K = (Sfᵀ Sf + W)^−1 Sfᵀ.
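The chain (1)-(18) can be exercised end to end on a small numerical example. The first-order step response, the horizons P and M, and the weight w below are illustrative assumptions:

```python
import numpy as np

N, P, M, w = 40, 10, 4, 0.1
s = 1 - np.exp(-np.arange(1, N + 1) / 5.0)   # assumed unit step response coefficients s_1..s_N

# Dynamic matrix Sf (P x M): Sf[i, j] = s_{i-j+1} for i >= j, zero above the diagonal band
Sf = np.zeros((P, M))
for i in range(P):
    for j in range(min(i + 1, M)):
        Sf[i, j] = s[i - j]

W = w * np.eye(M)
K = np.linalg.solve(Sf.T @ Sf + W, Sf.T)     # eq (17): K = (Sf'Sf + W)^-1 Sf'
E = np.ones(P)    # unforced error: output at 0, setpoint at 1, no past-move contributions
du = K @ E        # current and future moves; per eq (18), only du[0] is implemented
```

The first element du[0] is the move actually sent to the plant, and the predicted corrected error E − Sf·du is strictly smaller than the unforced error E, as the least-squares solution guarantees.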

MODEL PREDICTIVE CONTROL


Model predictive control (MPC) is an advanced method of process control that has been in use
in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has
also been used in power system balancing models. Model predictive controllers rely on dynamic
models of the process, most often linear empirical models obtained by system identification. The
main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking
future timeslots into account. This is achieved by optimizing over a finite time-horizon, but only
implementing the current timeslot and then optimizing again, repeatedly, thus differing from LQR.
Also MPC has the ability to anticipate future events and can take control actions
accordingly. PID controllers do not have this predictive ability. MPC is nearly universally
implemented as a digital control, although there is research into achieving faster response times
with specially designed analog circuitry.
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples
of MPC. The models used in MPC are generally intended to represent the behavior of
complex dynamical systems. The additional complexity of the MPC control algorithm is not
generally needed to provide adequate control of simple systems, which are often controlled well
by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers
include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be
caused by changes in the independent variables. In a chemical process, independent variables that
can be adjusted by the controller are often either the set-points of regulatory PID controllers
(pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent
variables that cannot be adjusted by the controller are used as disturbances. Dependent variables
in these processes are other measurements that represent either control objectives or process
constraints. MPC uses the current plant measurements, the current dynamic state of the process,
the MPC models, and the process variable targets and limits to calculate future changes in the
dependent variables. These changes are calculated to hold the dependent variables close to target
while honoring constraints on both independent and dependent variables. The MPC typically sends
out only the first change in each independent variable to be implemented, and repeats the
calculation when the next change is required. While many real processes are not linear, they can
often be considered to be approximately linear over a small operating range. Linear MPC
approaches are used in the majority of applications with the feedback mechanism of the MPC
compensating for prediction errors due to structural mismatch between the model and the process.
In model predictive controllers that consist only of linear models, the superposition
principle of linear algebra enables the effect of changes in multiple independent variables to be
added together to predict the response of the dependent variables. This simplifies the control
problem to a series of direct matrix algebra calculations that are fast and robust.
When linear models are not sufficiently accurate to represent the real process nonlinearities,
several approaches can be used. In some cases, the process variables can be transformed before
and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with
nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model
may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity
dynamic model based on fundamental mass and energy balances. The nonlinear model may be
linearized to derive a Kalman filter or specify a model for linear MPC. An algorithmic study by
El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide
significant reduction in online computations while maintaining comparative performance to a non-
altered implementation. The proposed algorithm solves N convex optimization problems in
parallel based on exchange of information among controllers. MPC is based on iterative, finite-
horizon optimization of a plant model. At time t the current plant state is sampled and a cost-
minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively
short time horizon in the future. Specifically, an online or on-the-fly calculation is used to explore
state trajectories that emanate from the current state and to find a cost-minimizing control strategy
until time t + T. Only the first step of the control strategy is implemented; then the plant state is
sampled again and the calculations are repeated starting from the new current state, yielding a new
control and new predicted state path. The prediction horizon keeps being shifted forward and for this
reason MPC is also called receding horizon control. Although this approach is not optimal, in
practice it has given very good results. Much academic research has been done to find fast methods
of solution of Euler–Lagrange type equations, to understand the global stability properties of
MPC's local optimization, and in general to improve the MPC method. To some extent the
theoreticians have been trying to catch up with the control engineers when it comes to MPC.

THEORY BEHIND MPC

Principles of MPC
Model Predictive Control (MPC) is a multivariable control algorithm that uses:

 an internal dynamic model of the process


 a history of past control moves and
 an optimization cost function J over the receding prediction horizon,
to calculate the optimum control moves.
An example of a quadratic cost function for optimization is given by:

J = (r(k+1) − ŷ(k+1))² + (r(k+2) − ŷ(k+2))² + (r(k+3) − ŷ(k+3))² + w Δu(k)² + w Δu(k+1)²

where
ŷ = model predicted output
r = setpoint
Δu = change in the manipulated input
w = weight on the changes in the manipulated input
and the parenthesized indices indicate the sample time.
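The receding-horizon principle — optimize over the horizon, implement only the first move, re-optimize at the next sample — can be sketched on an assumed first-order plant y(k+1) = a·y(k) + b·u(k), reusing the step-response least-squares gain from the DMC section; all numbers are illustrative assumptions:

```python
import numpy as np

a, b = 0.8, 0.2            # assumed plant y_{k+1} = a*y_k + b*u_k (model = plant here)
P, M, w = 8, 3, 0.05       # prediction horizon, control horizon, move weight (assumed)
r = 1.0                    # setpoint

# Step response coefficients s_{j+1} = b*(1 - a^(j+1))/(1 - a)
s = np.array([b * (1 - a**(j + 1)) / (1 - a) for j in range(P)])
Sf = np.zeros((P, M))
for i in range(P):
    for j in range(min(i + 1, M)):
        Sf[i, j] = s[i - j]
K = np.linalg.solve(Sf.T @ Sf + w * np.eye(M), Sf.T)

y, u = 0.0, 0.0
for _ in range(200):
    # Free response over the horizon if u is simply held at its current value
    free = np.array([a**(j + 1) * y + s[j] * u for j in range(P)])
    du = K @ (r - free)    # optimize the whole horizon...
    u += du[0]             # ...but implement only the first move
    y = a * y + b * u      # plant advances one sample; then re-optimize
```

With an exact model the loop settles at the setpoint, and the input settles at the steady-state value r(1 − a)/b = 1; only the first move of each optimized sequence is ever applied, which is exactly the receding-horizon behavior described above.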

GENERALIZED PREDICTIVE CONTROL

The GPC method was proposed by Clarke et al. and has become one of the most popular
MPC methods both in industry and academia. It has been successfully implemented in many
industrial applications, showing good performance and a certain degree of robustness.
The basic idea of GPC is to calculate a sequence of future control signals in such a way
that it minimizes a multistage cost function defined over a prediction horizon. The index to be
optimized is the expectation of a quadratic function measuring the distance between the
predicted system output and some reference sequence over the horizon plus a quadratic function
measuring the control effort.
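In the standard GPC literature this multistage index is commonly written as below (sketched here from Clarke et al.'s usual notation; δ(j) and λ(j) are the output and control weighting sequences, N₁ and N₂ the lower and upper prediction-horizon limits, and N_u the control horizon):

```latex
J(N_1, N_2, N_u) = \mathrm{E}\Bigl\{
    \sum_{j=N_1}^{N_2} \delta(j)\,\bigl[\hat{y}(t+j \mid t) - w(t+j)\bigr]^2
  + \sum_{j=1}^{N_u} \lambda(j)\,\bigl[\Delta u(t+j-1)\bigr]^2
\Bigr\}
```

Here ŷ(t+j|t) is the j-step-ahead prediction of the output, w(t+j) the reference sequence, and Δu(t+j−1) the control increment, so the first sum is the tracking-error term and the second the control-effort term.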
Generalized Predictive Control has many ideas in common with the other predictive
controllers since it is based upon the same concepts but it also has some differences. As will be
seen later, it provides an analytical solution (in the absence of constraints), it can deal with
unstable and non-minimum phase plants and incorporates the concept of control horizon as well


as the consideration of weighting of control increments in the cost function. The general set of
choices available for GPC leads to a greater variety of control objectives compared to other
approaches, some of which can be considered as subsets or limiting cases of GPC. The GPC
scheme can be seen in the figure below. It consists of the plant to be controlled, a
reference model that specifies the desired performance of the plant, a linear model of
the plant, and the Cost Function Minimization (CFM) algorithm that determines the input
needed to produce the plant’s desired performance. The GPC algorithm consists of the CFM
block.

The GPC system starts with the input signal, r(t), which is presented to the reference
model. This model produces a tracking reference signal, w(t), that is used as an input to the
CFM block. The CFM algorithm produces an output, which is used as an input to the plant.
Between samples, the CFM algorithm uses the plant model to calculate the next control input,
u(t+1), from predictions of the plant model's response. Once the cost function is
minimized, this input is passed to the plant. This algorithm is outlined below.

GPC Implementation Issues:


Many of the classical GPC approaches used in industry have performance limitations, such as:
1. Model Structure
The finite step and impulse response models limit applications to open loop stable processes and
require many model coefficients to describe the response. Integrating systems have been handled
by formulating the derivative of an integrating output as a controlled output.
2. Disturbance Assumption
DMC uses a constant output disturbance assumption. This may not yield good performance if the
real disturbance occurs at the plant input.
3. Finite Horizons
Control performance could deteriorate if the prediction or control horizons were not formulated
correctly even if the model was perfect.
4. Model Type
The step and impulse response models are all linear. For some processes where the process
operating conditions are changed frequently, a single linear model may not describe the dynamic
behavior of the process over the wide range of conditions. Batch processes also operate over a
wide range of conditions. For these systems, better control performance may be achieved if non-
linear models are used.
