
Lecture: Integral action in state feedback control

Automatic Control 1
Integral action in state feedback control
Prof. Alberto Bemporad
University of Trento
Academic year 2010-2011
Prof. Alberto Bemporad (University of Trento) Automatic Control 1 Academic year 2010-2011 1 / 15
Adjustment of DC-gain for reference tracking

Reference tracking

Assume the open-loop system is completely reachable and observable.
We know that with state feedback u(k) = Kx(k) we can bring the output y(k) to zero asymptotically.
How can we make the output y(k) track a generic constant set-point r(k) ≡ r?
Solution: set
u(k) = Kx(k) + v(k)
v(k) = Fr(k)
We need to choose the gain F properly to ensure reference tracking.
[Figure: block diagram of the controller (gains F and K) in closed loop with the dynamical process (A, B, C); v(k) = Fr(k) is added to the state feedback Kx(k) to form u(k)]
x(k +1) = (A +BK)x(k) +BFr(k)
y(k) = Cx(k)
Reference tracking
To have y(k) → r we need a unit DC-gain from r to y:

C(I - (A + BK))^-1 B F = I

Assume we have as many inputs as outputs (for example, scalar u and y).
Assume the DC-gain from u to y is invertible, that is, C Adj(I - A) B is invertible.
Since state feedback doesn't change the zeros in closed loop,

C Adj(I - A - BK) B = C Adj(I - A) B

so C Adj(I - A - BK) B is also invertible. Set

F = ( C(I - (A + BK))^-1 B )^-1
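As a quick numerical sketch of this formula (assuming NumPy is available, and borrowing the matrices and gain K from the example on the next slide):

```python
import numpy as np

# Example system and state-feedback gain (from the lecture's running example)
A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.13, -0.3]])  # places the closed-loop poles at 0.8 +/- 0.2j

# Unit DC-gain condition: C (I - (A+BK))^-1 B F = I  =>  F = (C (I - (A+BK))^-1 B)^-1
Acl = A + B @ K
dc_gain = C @ np.linalg.solve(np.eye(2) - Acl, B)
F = np.linalg.inv(dc_gain)
print(F)  # approximately [[0.08]]
```

With these numbers the DC-gain from u to y is 12.5, so F = 1/12.5 = 0.08, matching the example.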
Example
Poles placed in (0.8 ± 0.2j). Resulting closed-loop:

x(k+1) = [1.1 1; 0 0.8] x(k) + [0; 1] u(k)
y(k) = [1 0] x(k)
u(k) = [-0.13 -0.3] x(k) + 0.08 r(k)

The transfer function G(z) from r to y is

G(z) = 2 / (25z^2 - 40z + 17)

and G(1) = 1.
[Figure: unit step response of the closed-loop system over 40 sample steps, i.e., evolution of the system from initial condition x(0) = [0; 0] and reference r(k) ≡ 1, k ≥ 0; the output settles at 1]
Reference tracking
Problem: we have no direct feedback on the tracking error e(k) = y(k) - r(k).
Will this solution be robust with respect to model uncertainties and exogenous disturbances?
Consider an input disturbance d(k) (modeling, for instance, a non-ideal actuator, or an unmeasurable disturbance).
[Figure: block diagram of the controller (gains F and K) in closed loop with the dynamical process (A, B, C); the input disturbance d(k) enters additively at the plant input u(k)]
Example (cont'd)
Let the input disturbance be d(k) = 0.01, k = 0, 1, ...

[Figure: closed-loop response over 40 sample steps; the output converges to a value above 1]

The reference is not tracked!
The unmeasurable disturbance d(k) has modified the nominal conditions for which we designed our controller.
Integral action
Integral action for disturbance rejection
Consider the problem of regulating the output y(k) to r(k) ≡ 0 under the action of the input disturbance d(k).
Let's augment the open-loop system with the integral of the output vector:

q(k+1) = q(k) + y(k)    (integral action)
The augmented system is

[x(k+1); q(k+1)] = [A 0; C I] [x(k); q(k)] + [B; 0] u(k) + [B; 0] d(k)
y(k) = [C 0] [x(k); q(k)]

Design a stabilizing feedback controller for the augmented system:

u(k) = [K H] [x(k); q(k)]
Rejection of constant disturbances
[Figure: block diagram of the dynamical process (A, B, C) with input disturbance d(k); the controller combines the state feedback Kx(k) with the integral action Hq(k)]
Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = 0 for all constant disturbances d(k) ≡ d̄.
Proof:
The state-update matrix of the closed-loop system is

[A 0; C I] + [B; 0] [K H]
The matrix has asymptotically stable eigenvalues by construction.
For a constant excitation d(k) ≡ d̄, the extended state [x(k); q(k)] converges to a steady-state value; in particular, lim_{k→∞} q(k) = q̄.
Hence, lim_{k→∞} y(k) = lim_{k→∞} (q(k+1) - q(k)) = q̄ - q̄ = 0.
Example (cont'd): now with integral action

Poles placed in (0.8 ± 0.2j, 0.3) for the augmented system. Resulting closed-loop:
x(k+1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k+1) = q(k) + y(k)
y(k) = [1 0] x(k)
u(k) = [-0.48 -1] x(k) - 0.056 q(k)

Closed-loop simulation for x(0) = [0; 0], d(k) ≡ 1:
[Figure: closed-loop response over 40 sample steps; after a transient, the output converges to 0 despite the constant disturbance]
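The rejection can be checked in simulation; a minimal sketch assuming NumPy, with the matrices and gains above:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056

x = np.zeros((2, 1))
q = 0.0
d = 1.0  # constant input disturbance
for _ in range(300):
    y = (C @ x).item()
    u = (K @ x).item() + H * q  # u(k) = Kx(k) + Hq(k)
    x = A @ x + B * (u + d)
    q = q + y                   # integral action on the output
y_final = (C @ x).item()
print(y_final)  # close to 0: the constant disturbance is rejected
```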
Integral action for set-point tracking
[Figure: block diagram as before, but the integrator now acts on the tracking error y(k) - r(k) instead of on the output y(k)]
Idea: use the same feedback gains (K, H) designed earlier, but instead of feeding back the integral of the output, feed back the integral of the tracking error:

q(k+1) = q(k) + (y(k) - r(k))    (integral action)
Example (cont'd)
x(k+1) = [1.1 1; 0 0.8] x(k) + [0; 1] (u(k) + d(k))
q(k+1) = q(k) + (y(k) - r(k))    (tracking error)
y(k) = [1 0] x(k)
u(k) = [-0.48 -1] x(k) - 0.056 q(k)
[Figure: closed-loop response over 40 sample steps for x(0) = [0; 0], d(k) ≡ 1, r(k) ≡ 1; the output converges to 1]

Looks like it's working ... but why?
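Before answering, the behavior can be confirmed in simulation; a sketch assuming NumPy, with the same gains as in the disturbance-rejection example:

```python
import numpy as np

A = np.array([[1.1, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-0.48, -1.0]])
H = -0.056

x = np.zeros((2, 1))
q = 0.0
d, r = 1.0, 1.0  # constant disturbance and set-point
for _ in range(300):
    y = (C @ x).item()
    u = (K @ x).item() + H * q
    x = A @ x + B * (u + d)
    q = q + (y - r)  # integrate the tracking error instead of the output
y_final = (C @ x).item()
print(y_final)  # close to 1: the set-point is tracked despite d
```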
Tracking & rejection of constant disturbances/set-points
Theorem
Assume a stabilizing gain [K H] can be designed for the system augmented with integral action. Then lim_{k→+∞} y(k) = r̄ for all constant disturbances d(k) ≡ d̄ and set-points r(k) ≡ r̄.
Proof:
The closed-loop system

[x(k+1); q(k+1)] = [A+BK BH; C I] [x(k); q(k)] + [B 0; 0 -I] [d(k); r(k)]
y(k) = [C 0] [x(k); q(k)]

has input [d(k); r(k)] and is asymptotically stable by construction.
For a constant excitation [d(k); r(k)] ≡ [d̄; r̄], the extended state [x(k); q(k)] converges to a steady-state value; in particular, lim_{k→∞} q(k) = q̄.
Hence, lim_{k→∞} (y(k) - r(k)) = lim_{k→∞} (q(k+1) - q(k)) = q̄ - q̄ = 0.
Integral action for continuous-time systems
The same reasoning can be applied to continuous-time systems:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)

Augment the system with the integral of the output, q(t) = ∫₀ᵗ y(τ) dτ, i.e.,

q̇(t) = y(t) = Cx(t)    (integral action)
The augmented system is

d/dt [x(t); q(t)] = [A 0; C 0] [x(t); q(t)] + [B; 0] u(t)
y(t) = [C 0] [x(t); q(t)]
Design a stabilizing controller [K H] for the augmented system.
Implement

u(t) = Kx(t) + H ∫₀ᵗ (y(τ) - r(τ)) dτ
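A sketch of the continuous-time design step, assuming SciPy; the plant matrices and pole locations here are illustrative assumptions, not taken from the lecture:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative continuous-time plant (hypothetical, not from the lecture)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented system: d/dt [x; q] = [A 0; C 0] [x; q] + [B; 0] u
A_aug = np.block([[A, np.zeros((2, 1))], [C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# Stabilizing design: place the three poles in the open left half-plane
fsf = place_poles(A_aug, B_aug, [-1.0, -2.0, -3.0])
KH = -fsf.gain_matrix  # u(t) = Kx(t) + H * integral of (y - r)
eigs = np.linalg.eigvals(A_aug + B_aug @ KH)
print(eigs.real)  # all negative: the augmented closed loop is stable
```

In continuous time, stability requires all eigenvalues to have negative real part, rather than magnitude less than one as in the discrete-time case.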
English-Italian Vocabulary
reference tracking: inseguimento del riferimento
steady state: regime stazionario
set point: livello di riferimento
Translation is obvious otherwise.
