
SYSTEM IDENTIFICATION

The System Identification Problem is to estimate a model of a system based on input-output data.

Basic Configuration

[Block diagram: continuous-time system with observed input u(t), unobserved disturbance v(t) and observed output y(t); the sampled (discrete-time) version relates the sequences {u(k)}, {v(k)} and {y(k)}]

We observe an input sequence (a sampled signal)

{u(k)} = {u(0), u(1), ..., u(k), ..., u(N)}

and an output sequence

{y(k)} = {y(0), y(1), ..., y(k), ..., y(N)}

If we assume the system is linear we can write:

Y(z) = G(z) U(z) + V(z)

using standard z-transform notation.

[Block diagram: U(z) passes through G(z) and is summed with the disturbance V(z) to give Y(z)]

The disturbance v(k) is often considered as being generated by filtered white noise e(k):

[Block diagram: the white noise E(z) passes through the filter H(z) to give the disturbance V(z); the input U(z) passes through the process G(z); their sum is the output Y(z)]

giving the description:

Y(z) = G(z) U(z) + H(z) E(z)
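As a rough illustration (not from the slides), data consistent with the description Y(z) = G(z)U(z) + H(z)E(z) can be generated in MATLAB with filter; the polynomials below are assumed example values only:

% Minimal sketch: generate input-output data from Y(z) = G(z)U(z) + H(z)E(z)
N = 500;
u = sign(randn(N,1));                 % test input (random binary signal)
e = 0.1*randn(N,1);                   % white noise
G_num = [0 0.5]; G_den = [1 -0.8];    % G(z): assumed example process
H_num = 1;       H_den = [1 -0.8];    % H(z): assumed example noise filter
y = filter(G_num, G_den, u) + filter(H_num, H_den, e);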

Parametric Models

ARX model (autoregressive with exogenous variables)

[Block diagram: the white noise E(z) passes through H(z) = 1/A(z^-1) to give V(z); the input U(z) passes through G(z) = z^-n B(z^-1)/A(z^-1); their sum is Y(z)]

where

A(z^-1) = 1 + a1 z^-1 + ... + a_na z^-na
B(z^-1) = b1 z^-1 + b2 z^-2 + ... + b_nb z^-nb

giving the difference equation:

y(k) + a1 y(k-1) + ... + a_na y(k-na)
   = b1 u(k-n-1) + b2 u(k-n-2) + ... + b_nb u(k-n-nb) + e(k)

and z^-n represents an extra delay of n sampling instants.

identification problem:
  determine n, na, nb                (structure)
  estimate a1, a2, ..., a_na
           b1, b2, ..., b_nb         (parameters)
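A minimal sketch (assumed example orders and coefficients, not from the slides) of simulating the ARX difference equation in MATLAB; note that the vector A holds [1 a1 ... a_na] in the convention used above:

% Simulate an ARX model with n = 1, na = 2, nb = 2 (assumed values)
n  = 1;                          % extra delay
A  = [1 -1.5 0.7];               % [1 a1 a2]
B  = [0.5 0.3];                  % [b1 b2]
N  = 300;
u  = sign(randn(N,1));
e  = 0.05*randn(N,1);
% input enters through z^-n * B(z^-1), noise through 1/A(z^-1)
y  = filter([zeros(1,n) 0 B], A, u) + filter(1, A, e);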

ARMAX model (autoregressive moving average with exogenous variables)

[Block diagram: the white noise E(z) passes through H(z) = C(z^-1)/A(z^-1) to give V(z); the input U(z) passes through G(z) = z^-n B(z^-1)/A(z^-1); their sum is Y(z)]

where

A(z^-1) = 1 + a1 z^-1 + ... + a_na z^-na
B(z^-1) = b1 z^-1 + b2 z^-2 + ... + b_nb z^-nb
C(z^-1) = 1 + c1 z^-1 + ... + c_nc z^-nc

giving the difference equation:

y(k) + a1 y(k-1) + ... + a_na y(k-na)
   = b1 u(k-n-1) + b2 u(k-n-2) + ... + b_nb u(k-n-nb)
   + e(k) + c1 e(k-1) + ... + c_nc e(k-nc)

identification problem:
  determine n, na, nb, nc            (structure)
  estimate a1, a2, ..., a_na
           b1, b2, ..., b_nb
           c1, c2, ..., c_nc         (parameters)
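For reference, a hedged sketch of estimating an ARMAX model with the MATLAB System Identification Toolbox; the data set z and the orders [na nb nc nk] below are assumptions:

na = 2; nb = 2; nc = 2; nk = 3;      % assumed structure
th = armax(z, [na nb nc nk]);        % z: measured input-output data
present(th)                          % display the estimated polynomials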

General Prediction Error Approach

[Block diagram: the input u(t) drives both the process (output y(t)) and a predictor with adjustable parameters; the prediction error e(t, θ) is fed to an algorithm that minimises some function of e(t, θ) and updates the predictor parameters]

Predictor based on a parametric model.

Algorithm often based on a least squares method:

min_θ  Σ_{k=0..N}  e²(k, θ)

Consistency
A desirable property of an estimate is that it converges to the true parameter value as the number of observations N increases towards infinity. This property is called consistency.
Consistency is exhibited by ARMAX model identification methods but not by ARX approaches (the parameter values exhibit bias).
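A minimal sketch (not from the slides) of the least squares step for an ARX model with na = nb = 2 and unit delay, assuming measured data vectors y and u already exist:

na = 2; nb = 2;
N  = length(y);
Phi = zeros(N-2, na+nb);
for k = 3:N
    Phi(k-2,:) = [-y(k-1) -y(k-2) u(k-1) u(k-2)];   % regression vector
end
theta = Phi \ y(3:N);          % least-squares estimate [a1 a2 b1 b2]'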

Example of MATLAB Identification Toolbox Session

Input and Output Data of Dryer Model

[Figure: OUTPUT #1 (top) and INPUT #1 (bottom) of the dryer data set, plotted against time]

MATLAB statements and results:

(ARX model: na = 2, nb = 2, delay n = 3)

th = arx(z2,[2 2 3]);    % z2 contains the data
th = sett(th,0.08);      % set the correct sampling interval
present(th)

Results:
Loss fcn: 0.001685
Akaike's FPE: 0.001731    Sampling interval 0.08
The polynomial coefficients are:
B = 0        0        0        0.0666   0.0445
A = 1.0000  -1.2737   0.3935

[Figure: ARX simulated (solid) and measured (dashed) outputs - error = 6.56]

ARX model:

G(z) = z^-3 (0.0666 + 0.0445 z^-1) / (1 - 1.2737 z^-1 + 0.3935 z^-2)
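A hedged sketch (not from the slides) of reproducing the simulated output from the estimated polynomials, assuming u holds the measured input and y the measured output:

B = [0 0 0 0.0666 0.0445];     % from present(th)
A = [1 -1.2737 0.3935];
ysim = filter(B, A, u);        % simulated (noise-free) model output
plot([ysim y])                 % compare with the measured output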

MATLAB Demo


ADAPTIVE CONTROL

[Block diagram (after K J Astrom): a performance assessment and updating mechanism adjusts the regulator parameters; the regulator acts on the reference and drives the process, which has slowly varying parameters, fast varying disturbances and fast varying outputs]

Adaptive control is a special type of nonlinear control in which the states of the process can be separated into two categories:

(i) slowly varying states (viewed as parameters)
(ii) fast varying states (compensated by standard feedback)

In adaptive control it is assumed that there is feedback from the system performance which adjusts the regulator parameters to compensate for the slowly varying process parameters.

Adaptive Control Problem

An adaptive controller will contain:

- characterization of the desired closed-loop performance (reference model or design specifications)
- a control law with adjustable parameters
- a design procedure
- parameter updating based on measurements
- implementation of the control law (discrete or continuous)

Overview of Some Adaptive Control Schemes

Gain Scheduling

[Block diagram: a gain schedule maps the measured operating conditions to regulator parameters; the regulator receives the command signal and produces the control signal u driving the process, whose output is y]

The regulator parameters are adjusted to suit different operating conditions. Gain scheduling is an open-loop compensation.
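As an illustration only (not from the slides), a gain schedule can be implemented as a lookup table interpolated at the measured operating condition; all values below are assumed:

op_points = [0 25 50 75 100];          % scheduling variable, e.g. load (%)
Kp_table  = [2.0 1.6 1.2 0.9 0.7];     % regulator gain at each operating point
op_now    = 62;                        % current measured operating condition
ref = 1.0; y = 0.8;                    % example reference and output
Kp = interp1(op_points, Kp_table, op_now);   % scheduled regulator gain
u  = Kp*(ref - y);                     % proportional control with scheduled gain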

Auto-tuning

[Block diagram: a PID controller with parameters K, Ti, Td in a feedback loop with the process; an auto-tuner adjusts the parameters]

PID controller:   K ( 1 + 1/(Ti s) + Td s )

PID controllers are traditionally tuned using simple experiments and empirical rules. Automatic methods can be applied to tune these controllers:

(i) experimental phase using test signals; then
(ii) use of standard rules to compute the PID parameters.
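A minimal sketch (not from the slides) of step (ii), using the classical Ziegler-Nichols ultimate-cycle rules; the ultimate gain Ku and period Tu are assumed to come from the experimental phase (e.g. a relay test):

Ku = 4.2;   Tu = 1.8;        % assumed experimental results
K  = 0.6*Ku;                 % proportional gain
Ti = 0.5*Tu;                 % integral time
Td = 0.125*Tu;               % derivative time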

MRAS

Model Reference Adaptive Systems

[Block diagram: the command uc drives both the reference model (ideal output ym) and the regulator-process loop (actual output y); an adjustment mechanism compares y with ym and updates the regulator parameters]

The parameters of the regulator are adjusted such that the error e = y - ym becomes small. The key problem is to determine an appropriate adjustment mechanism and a suitable control law.

MIT rule adjustment mechanism:

dθ/dt = -γ e ∂e/∂θ

where γ determines the adaptation rate. This rule changes the parameters θ in the direction of the negative gradient of e².
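A rough sketch (not from the slides) of the MIT rule applied to the classic problem of adapting a feedforward gain theta for a process kp*G(s) against a reference model km*G(s), with G(s) = 1/(s+1); all numerical values are assumptions:

kp = 2; km = 1; gamma = 0.5; dt = 0.01; T = 100;
y = 0; ym = 0; theta = 0;
for k = 1:round(T/dt)
    uc = sign(sin(0.2*k*dt));            % square-wave command
    u  = theta*uc;                       % adjustable control law
    y  = y  + dt*(-y  + kp*u);           % process  (Euler step of G)
    ym = ym + dt*(-ym + km*uc);          % reference model
    e  = y - ym;
    theta = theta - dt*gamma*ym*e;       % MIT rule: dtheta/dt = -gamma*e*de/dtheta
end
theta                                    % adapts toward km/kp = 0.5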

Combining the MIT rule with the control law u = θ(uc - y) and computing the sensitivity derivatives produces the scheme:

[Block diagram: the command uc drives the reference model (output ym); the error e = y - ym is multiplied by a filtered signal and integrated to give the parameter θ; a second multiplier forms the control u = θ(uc - y), which drives the process]

Note: steady-state will be achieved when the input to the integrator becomes zero, that is when y = ym.

STR

Self Tuning Regulators

[Block diagram: an estimator updates the process parameters from the process input and output; a design block computes the regulator parameters from them; the regulator acts on the command uc and the actual output y and drives the process]

The process parameters are updated and the regulator parameters are obtained from the solution of a design problem. The adaptive regulator consists of two loops:

(i) an inner loop consisting of the process and a linear feedback regulator
(ii) an outer loop composed of a parameter estimator (recursive) and a design calculation (to obtain good estimates it is usually necessary to introduce perturbation signals)

Two problems:

(i) the underlying design problem
(ii) the real-time parameter estimation problem
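One common choice for the recursive estimator in the outer loop is recursive least squares; a minimal sketch (not from the slides), with assumed initial values and a single example data point:

lambda = 0.98;                               % forgetting factor (assumed)
theta  = zeros(4,1);                         % initial parameter estimate
P      = 1000*eye(4);                        % initial covariance
phi    = [-0.9; -0.5; 1.0; 0.7];             % current regression vector (example)
y_k    = 0.3;                                % current output sample (example)
K      = P*phi / (lambda + phi'*P*phi);      % estimator gain
theta  = theta + K*(y_k - phi'*theta);       % update parameter estimate
P      = (P - K*phi'*P) / lambda;            % update covariance matrix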

Example - SIMULINK Simulation of MRAS


MODEL REFERENCE ADAPTIVE CONTROL

[SIMULINK diagram: reference model 2/(s+2); process 0.5/(s+1); adaptation gains g1 and g2, each with an integrator 1/s; multipliers and a filter implementing the MIT-rule parameter adjustment; the reference, output and command signals are logged via Mux blocks]

[Figure: input, reference and actual outputs of the MRAS simulation, time 0-150 s]

MATLAB Demo


INTRODUCTION TO THE KALMAN FILTER


State Estimation Problem

[Block diagram: the system, with input u(t), process noise w(t), measurement noise v(t), states x(t) and output y(t)]

ẋ = A x + B u + G w
y = C x + D u + v

Vectors w(t) and v(t) are noise terms, representing unmeasured system disturbances and measurement errors respectively. They are assumed to be independent, white, Gaussian, and to have zero mean. In mathematical terms:

E[ w(t) v'(τ) ] = 0             for all t and τ
E[ w(t) w'(τ) ] = Q δ(t - τ)    (Q assumed constant)
E[ v(t) v'(τ) ] = R δ(t - τ)    (R assumed constant)

where Q and R are symmetric, non-negative definite covariance matrices (E is the expectation operator).

Only u(t) and y(t) are accessible.

The state estimation problem is to estimate the states x(t) from a knowledge of u(t) and y(t) (assuming we know A, B, G, C, D, Q and R).

Construction of the Kalman-Bucy Filter

[Block diagram: the filter is a copy of the system (A, B, C, D) driven by u(t) and corrected by the gain L(t) acting on the output error y(t) - yhat(t); its state is the estimate xhat(t)]

Filter equation:   d(xhat)/dt = A xhat + B u + L(t) ( y - C xhat - D u )

L(t) is a time dependent matrix gain.

The estimation problem is now to find L(t) such that the error between the real states x(t) and the estimated states xhat(t) is minimized. This can be formulated as:

min over L(t) of   E[ (x(t) - xhat(t))' (x(t) - xhat(t)) ]

R E Kalman

Duality Between the Optimum State Estimation Problem and the Optimum Regulator Problem

It can be shown that the optimum state estimation problem:

min over L(t) of   E[ (x(t) - xhat(t))' (x(t) - xhat(t)) ]

subject to:

ẋ = A x + B u + G w
y = C x + D u + v
d(xhat)/dt = A xhat + B u + L(t) ( y - C xhat - D u )
E(ww') = Q,   E(vv') = R

is the dual of the optimum regulator problem:

min over L(t) of   (1/2) ∫ from 0 to T of ( x' G Q G' x + u' R u ) dt

subject to:

ẋ = A' x + C' u
u = -L'(t) x

Thus L(t) can be obtained by solving the matrix Riccati equation:

dS/dt = A S + S A' - S C' R^-1 C S + G Q G'

L(t) = S(t) C' R^-1

Furthermore, for large measurement times L(t) converges to a constant matrix gain:

lim L(t) = L∞   as t → ∞

Linear Quadratic Estimator Design Using MATLAB

LQE Linear quadratic estimator design. For the continuous-time system:
    .
    x = Ax + Bu + Gw        {State equation}
    z = Cx + Du + v         {Measurements}
with process noise and measurement noise covariances:
    E{w} = E{v} = 0, E{ww'} = Q, E{vv'} = R, E{wv'} = 0
L = LQE(A,G,C,Q,R) returns the gain matrix L such that the stationary Kalman filter:
    .
    x = Ax + Bu + L(z - Cx - Du)
produces an LQG optimal estimate of x.

Example:

ẋ1 = x2
ẋ2 = -x1 + w(t),   E(w²(t)) = 1
y  = x1 + v(t),    E(v²(t)) = 3

A=[0 1;-1 0];
G=[0;1];
C=[1 0];
Q=1;
R=3;
L=lqe(A,G,C,Q,R)

produces:

L =
    0.5562
    0.1547

giving the filter equations:

d(x1hat)/dt =  x2hat + l1 ( y - yhat )
d(x2hat)/dt = -x1hat + l2 ( y - yhat )
yhat = x1hat

where l1 = 0.5562, l2 = 0.1547
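A hedged sketch (not from the slides) of simulating the plant and this stationary filter with Euler integration; the step size and the discrete approximation of the continuous noise are assumptions:

l1 = 0.5562; l2 = 0.1547;
dt = 0.01; N = 5000;
Q = 1; R = 3;
x = [0; 0]; xhat = [0; 0];
X = zeros(N,2); Xhat = zeros(N,2);
for k = 1:N
    w = sqrt(Q/dt)*randn;                 % approximate continuous white noise
    v = sqrt(R/dt)*randn;
    y = x(1) + v;                         % measurement
    x    = x    + dt*[ x(2); -x(1) + w ];
    xhat = xhat + dt*[ xhat(2) + l1*(y - xhat(1));
                      -xhat(1) + l2*(y - xhat(1)) ];
    X(k,:) = x'; Xhat(k,:) = xhat';
end
plot((1:N)*dt, [X(:,1) Xhat(:,1)])        % actual vs estimated x1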

[Block diagram: the system consists of two integrators 1/s in series with feedback gain -1, driven by w(t) with u(t) = 0; v(t) is added to x1 to give the measurement y(t); the filter duplicates this structure with correction gains l1 and l2, producing the state estimates x1hat(t) and x2hat(t)]

SIMULINK SIMULATION

[SIMULINK diagram: the plant (integrators 1/s for x2 and x1 with feedback -1) is driven by white noise w(t) scaled by sqrt(Q) = 1, and the measurement y adds noise v(t) scaled by sqrt(R) = 1.7; the Kalman filter replicates the plant and corrects it with gains l1 = 0.556 and l2 = 0.155 acting on y - Cx; Mux blocks send x1/x1hat, x2/x2hat and the estimation errors to the workspace]

[Figure: comparison of the actual (solid) and estimated (dashed) state x1, time 265-280 s]

[Figure: comparison of the actual (solid) and estimated (dashed) state x2, time 265-280 s]

[Figure: measurement signal y(t), time 265-280 s]

MATLAB Demo

