
EE 580 Linear Control Systems

VI. State Transition Matrix


Department of Electrical Engineering
Pennsylvania State University
Fall 2010
6.1 Introduction
Typical signal spaces are (infinite-dimensional) vector spaces. Consider the space $C(J, \mathbb{R}^n)$ of $\mathbb{R}^n$-valued continuous functions on an interval $J = (a, b) \subseteq \mathbb{R}$. For $f, g \in C(J, \mathbb{R}^n)$, define $f + g \in C(J, \mathbb{R}^n)$ to be the function satisfying
$$(f + g)(t) = f(t) + g(t) \quad \text{for all } t \in J.$$
Also, for $\alpha \in \mathbb{R}$ and $f \in C(J, \mathbb{R}^n)$, define $\alpha f \in C(J, \mathbb{R}^n)$ to be the function satisfying
$$(\alpha f)(t) = \alpha f(t) \quad \text{for all } t \in J.$$
Then it is straightforward to show that $C(J, \mathbb{R}^n)$ is a vector space over the real field, where the origin $0 \in C(J, \mathbb{R}^n)$ is the function which is identically zero. Under this definition, a function $f \in C(J, \mathbb{R}^n)$ is a linear combination of functions $g_1, \dots, g_m \in C(J, \mathbb{R}^n)$ if there are $\alpha_1, \dots, \alpha_m \in \mathbb{R}$ such that $f(t) = (\alpha_1 g_1 + \cdots + \alpha_m g_m)(t)$ for all $t \in J$. A nonempty finite subset $\{g_1, \dots, g_m\}$ of $C(J, \mathbb{R}^n)$ is then linearly independent if and only if $(\alpha_1 g_1 + \cdots + \alpha_m g_m)(t) = 0$ for all $t \in J$ implies $\alpha_1 = \cdots = \alpha_m = 0$.
For simplicity, we will focus on LTI systems of the form $\dot{x}(t) = Ax(t)$. However, all the theorems up to Section 6.3.2 inclusive carry over to LTV systems of the form $\dot{x}(t) = A(t)x(t)$ as long as $A(\cdot)$ is continuous; see [1, Sections 2.3 & 2.4] and [2, pp. 50-73]. These results hold true even if $A(\cdot)$ is piecewise continuous; a function is said to be piecewise continuous if, on each bounded interval in its domain, it is continuous except possibly for a finite number of discontinuities. If $A(\cdot)$ is piecewise continuous (or piecewise constant in particular), then one can solve $\dot{x}(t) = A(t)x(t)$ piece by piece and then put together the results to obtain an overall solution.
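As a minimal numerical sketch of this piece-by-piece approach for a piecewise-constant $A(\cdot)$: the two constant matrices and the switching time below are illustrative choices, not taken from the text, and `scipy.linalg.expm` supplies the matrix exponential developed later in this chapter.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical piecewise-constant A(t): A1 on [0, 1), A2 on [1, 2].
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 2.0], [0.0, 0.0]])
x0 = np.array([1.0, 1.0])

# Solve xdot = A1 x on [0, 1), then restart from x(1) with A2 on [1, 2].
x1 = expm(A1 * 1.0) @ x0          # state at the switching time t = 1
x2 = expm(A2 * (2.0 - 1.0)) @ x1  # state at t = 2

# Equivalently, compose the two pieces into a single transition matrix.
Phi_02 = expm(A2 * 1.0) @ expm(A1 * 1.0)
assert np.allclose(x2, Phi_02 @ x0)
```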
6.2 Fundamental Matrices [3, Section 3.2.1]
6.2.1 Solution Space
Consider the homogeneous linear time-invariant system
$$\dot{x}(t) = Ax(t), \quad t \in \mathbb{R}, \qquad (6.1)$$
where $A \in \mathbb{R}^{n \times n}$ is a constant matrix. We have seen that this system, subject to any initial condition $x(t_0) = x_0$, has a unique solution on $\mathbb{R}$; that is, there exists a unique $\phi \in C(\mathbb{R}, \mathbb{R}^n)$ such that $\dot{\phi}(t) = A\phi(t)$ for all $t \in \mathbb{R}$ and such that $\phi(t_0) = x_0$. It turns out that the set of solutions to (6.1) is a finite-dimensional subspace of $C(\mathbb{R}, \mathbb{R}^n)$, which is infinite-dimensional.
Theorem 6.1 The set of all solutions to (6.1) forms an $n$-dimensional linear subspace of $C(\mathbb{R}, \mathbb{R}^n)$.

Proof. Let $V$ be the space of all solutions of (6.1) on $\mathbb{R}$. It is easy to verify that $V$ is a linear subspace of $C(\mathbb{R}, \mathbb{R}^n)$. Choose a set of $n$ linearly independent vectors $\{x_1, \dots, x_n\}$ in $\mathbb{R}^n$. If $t_0 \in \mathbb{R}$, then there exist $n$ solutions $\phi_1, \dots, \phi_n$ of (6.1) such that $\phi_1(t_0) = x_1, \dots, \phi_n(t_0) = x_n$. The linear independence of $\{\phi_1, \dots, \phi_n\}$ follows from that of $\{x_1, \dots, x_n\}$. It remains to show that $\{\phi_1, \dots, \phi_n\}$ spans $V$. Let $\phi$ be any solution of (6.1) on $\mathbb{R}$ such that $\phi(t_0) = x_0$. There exist unique scalars $\alpha_1, \dots, \alpha_n \in \mathbb{R}$ such that $x_0 = \alpha_1 x_1 + \cdots + \alpha_n x_n$, and thus $\psi = \alpha_1 \phi_1 + \cdots + \alpha_n \phi_n$ is a solution of (6.1) on $\mathbb{R}$ such that $\psi(t_0) = x_0$. However, because of the uniqueness of a solution satisfying the given initial data, we have that $\phi = \alpha_1 \phi_1 + \cdots + \alpha_n \phi_n$. This shows that every solution of (6.1) is a linear combination of $\phi_1, \dots, \phi_n$, and hence that $\{\phi_1, \dots, \phi_n\}$ is a basis of the solution space. □
6.2.2 Matrix Differential Equations

With $A \in \mathbb{R}^{n \times n}$ as before, consider the matrix differential equation
$$\dot{X}(t) = AX(t). \qquad (6.2)$$
If the columns of $X(t) \in \mathbb{R}^{n \times n}$ are denoted $x_1(t), \dots, x_n(t) \in \mathbb{R}^n$, then (6.2) is equivalent to
$$\begin{bmatrix} \dot{x}_1(t) \\ \vdots \\ \dot{x}_n(t) \end{bmatrix} = \begin{bmatrix} A & & 0 \\ & \ddots & \\ 0 & & A \end{bmatrix} \begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix},$$
so a matrix differential equation of the form (6.2) is a system of $n^2$ differential equations.
Definition 6.2 A set $\{\phi_1, \dots, \phi_n\} \subset C(\mathbb{R}, \mathbb{R}^n)$ of $n$ linearly independent solutions of (6.1) is called a fundamental set of solutions of (6.1), and the $\mathbb{R}^{n \times n}$-valued function
$$\Phi = \begin{bmatrix} \phi_1 & \cdots & \phi_n \end{bmatrix}$$
is called a fundamental matrix of (6.1).
Theorem 6.3 A fundamental matrix $\Phi$ of (6.1) satisfies the matrix equation (6.2).

Proof. The result is immediate from
$$\dot{\Phi}(t) = \begin{bmatrix} \dot{\phi}_1(t) & \cdots & \dot{\phi}_n(t) \end{bmatrix} = \begin{bmatrix} A\phi_1(t) & \cdots & A\phi_n(t) \end{bmatrix} = A\begin{bmatrix} \phi_1(t) & \cdots & \phi_n(t) \end{bmatrix} = A\Phi(t). \quad \square$$

Therefore, a fundamental matrix is a matrix-valued function whose columns span the space of solutions to the system of $n$ differential equations (6.1).
6.2.3 Characterizations of Fundamental Matrices
It can be shown that, if $\Phi$ is a solution of the matrix differential equation (6.2), then
$$\det \Phi(t) = e^{(\operatorname{tr} A)(t - \tau)} \det \Phi(\tau) \qquad (6.3)$$
for all $t, \tau \in \mathbb{R}$. This result is called Abel's formula and is proved in [1, Theorem 2.3.3]. An immediate consequence is that, since $t$ and $\tau$ are arbitrary, either $\det \Phi(t) \neq 0$ for all $t \in \mathbb{R}$ or $\det \Phi(t) = 0$ for all $t \in \mathbb{R}$. In fact, we have the following characterizations of fundamental matrices, which we obtain without using Abel's formula.
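Abel's formula is easy to spot-check numerically. The following sketch uses an arbitrary random test matrix and the fact (from Section 6.3.1 below) that $\Phi(t) = e^{At}$ is one solution of (6.2):

```python
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(0).normal(size=(3, 3))  # arbitrary test matrix
t, tau = 1.3, 0.2

# Phi(t) = e^{At} solves (6.2); check Abel's formula (6.3).
lhs = np.linalg.det(expm(A * t))
rhs = np.exp(np.trace(A) * (t - tau)) * np.linalg.det(expm(A * tau))
assert np.isclose(lhs, rhs)
```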
Theorem 6.4 A solution $\Phi$ of the matrix differential equation (6.2) is a fundamental matrix of (6.1) if and only if $\Phi(t)$ is nonsingular for all $t \in \mathbb{R}$.

Proof. To show necessity by contradiction, suppose that $\Phi = [\phi_1 \ \cdots \ \phi_n]$ is a fundamental matrix for (6.1) and that $\det \Phi(t_0) = 0$ for some $t_0 \in \mathbb{R}$. Then, since $\Phi(t_0)$ is singular, the set $\{\phi_1(t_0), \dots, \phi_n(t_0)\} \subset \mathbb{R}^n$ is linearly dependent (over the real field), so that there are $\alpha_1, \dots, \alpha_n \in \mathbb{R}$, not all zero, such that $\sum_{i=1}^{n} \alpha_i \phi_i(t_0) = 0$. Every linear combination of the columns of a fundamental matrix is a solution of (6.1), and so $\sum_{i=1}^{n} \alpha_i \phi_i$ is a solution of (6.1). Due to the uniqueness of the solution, this, along with the initial condition $\sum_{i=1}^{n} \alpha_i \phi_i(t_0) = 0$, implies that $\sum_{i=1}^{n} \alpha_i \phi_i$ is identically zero, which contradicts the fact that $\phi_1, \dots, \phi_n$ are linearly independent. Thus, we conclude that $\det \Phi(t) \neq 0$ for all $t \in \mathbb{R}$. To show sufficiency, let $\Phi$ be a solution of (6.2) and suppose that $\det \Phi(t) \neq 0$ for all $t \in \mathbb{R}$. Then the columns of $\Phi$ form a linearly independent set of vectors for all $t \in \mathbb{R}$. Hence, $\Phi$ is a fundamental matrix of (6.1). □
For example, consider three $\mathbb{R}^{3 \times 3}$-valued functions
$$\Phi_1(t) = \begin{bmatrix} 1 & 2t & t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}, \quad \Phi_2(t) = \begin{bmatrix} 1 & t & t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}, \quad \Phi_3(t) = \begin{bmatrix} 1 & t & t^2 \\ 0 & 1 & t \\ 0 & 0 & t \end{bmatrix}, \quad t \in \mathbb{R}.$$
If we write $\Phi_1 = [\phi_1 \ \phi_2 \ \phi_3]$, where $\phi_i \in C(\mathbb{R}, \mathbb{R}^3)$ are given by $\phi_1(t) = [1 \ 0 \ 0]^T$, $\phi_2(t) = [2t \ 1 \ 0]^T$, and $\phi_3(t) = [t^2 \ t \ 1]^T$ for all $t \in \mathbb{R}$, then $\{\phi_1, \phi_2, \phi_3\}$ is a linearly independent set in $C(\mathbb{R}, \mathbb{R}^3)$ because $\sum_{i=1}^{3} \alpha_i \phi_i$ being identically zero implies all $\alpha_i$ equal $0$. Similarly, the columns of $\Phi_2$ are linearly independent in $C(\mathbb{R}, \mathbb{R}^3)$, and so are those of $\Phi_3$. Thus, $\Phi_1$, $\Phi_2$, and $\Phi_3$ are potentially fundamental matrices. We have $\det \Phi_1(t) = \det \Phi_2(t) = 1$ and $\det \Phi_3(t) = t$ for all $t \in \mathbb{R}$. Since $\det \Phi_3(0) = 0$, however, Theorem 6.4 tells us that $\Phi_3$ is not a fundamental matrix of (6.1) or even of its time-varying version $\dot{x}(t) = A(t)x(t)$ (as the proof of Theorem 6.4 carries over to the time-varying case). On the other hand, since $\Phi_1(t)$ and $\Phi_2(t)$ are nonsingular for all $t$, it remains that $\Phi_1$ and $\Phi_2$ are fundamental matrices provided that they solve the matrix differential equation (6.2). Solving $\dot{\Phi}_1(t) = A\Phi_1(t)$ for $A \in \mathbb{R}^{3 \times 3}$ yields
$$A = \dot{\Phi}_1(t)\Phi_1(t)^{-1} = \begin{bmatrix} 0 & 2 & 2t \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2t & t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},$$
so indeed $\Phi_1$ is a fundamental matrix of the LTI system (6.1). However, since
$$\dot{\Phi}_2(t)\Phi_2(t)^{-1} = \begin{bmatrix} 0 & 1 & 2t \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & -t & 0 \\ 0 & 1 & -t \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & t \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$$
is a function of $t$, the matrix-valued function $\Phi_2$ is not a fundamental matrix of (6.1). Nevertheless, $\Phi_2$ is a fundamental matrix of the LTV system $\dot{x}(t) = A(t)x(t)$, where $A(t) = \dot{\Phi}_2(t)\Phi_2(t)^{-1}$.
Theorem 6.5 Let $\Phi$ be a fundamental matrix of (6.1). Then $\Psi$ is a fundamental matrix of (6.1) if and only if there exists a nonsingular matrix $P \in \mathbb{R}^{n \times n}$ such that $\Psi(t) = \Phi(t)P$ for all $t \in \mathbb{R}$.

Proof. Suppose $P$ is invertible and $\Psi(t) = \Phi(t)P$ for all $t$. Then it is easy to check that $\Psi$ is a solution of (6.2). Moreover, since $\det \Phi(t) \neq 0$ for all $t$ and since $\det P \neq 0$, we have $\det \Psi(t) = \det \Phi(t) \det P \neq 0$ for all $t$. Thus, by Theorem 6.4, $\Psi$ is a fundamental matrix. Conversely, suppose that $\Psi$ is a fundamental matrix. As $\Phi(t)\Phi(t)^{-1} = I$ for all $t$, the product rule gives
$$\frac{d}{dt}\bigl(\Phi(t)\Phi(t)^{-1}\bigr) = 0 \ \Longrightarrow \ \frac{d}{dt}\Phi(t)^{-1} = -\Phi(t)^{-1}\Bigl(\frac{d}{dt}\Phi(t)\Bigr)\Phi(t)^{-1} = -\Phi(t)^{-1}A$$
for all $t \in \mathbb{R}$. Using this equality, we obtain
$$\frac{d}{dt}\bigl(\Phi(t)^{-1}\Psi(t)\bigr) = \Phi(t)^{-1}\frac{d}{dt}\Psi(t) + \Bigl(\frac{d}{dt}\Phi(t)^{-1}\Bigr)\Psi(t) = \Phi(t)^{-1}A\Psi(t) - \Phi(t)^{-1}A\Psi(t) = 0$$
for all $t$. Hence $\Phi(\cdot)^{-1}\Psi(\cdot)$ is constant, which implies the existence of a matrix $P$ such that $\Phi(t)^{-1}\Psi(t) = P$ for all $t$; since $\Phi(t)^{-1}$ and $\Psi(t)$ are both nonsingular, $P$ is nonsingular. □
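Theorem 6.5 is easy to confirm numerically. A sketch, where the matrices $A$ and $P$ are arbitrary random choices and $\Phi(t) = e^{At}$ serves as the reference fundamental matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
P = rng.normal(size=(2, 2))          # almost surely nonsingular

Phi = lambda t: expm(A * t)          # a fundamental matrix of (6.1)
Psi = lambda t: Phi(t) @ P           # candidate fundamental matrix

# Psi solves (6.2): d/dt Psi(t) = A Psi(t), checked by a central difference
t, h = 0.8, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)
assert np.allclose(dPsi, A @ Psi(t), atol=1e-6)
assert abs(np.linalg.det(Psi(t))) > 0   # nonsingular, as Theorem 6.4 requires
```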
6.3 State Transition Matrix [3, Sections 3.2.2 & 3.3.1]
6.3.1 Definitions

Solving the matrix differential equation (6.2) subject to the initial condition $X(t_0) = I \in \mathbb{R}^{n \times n}$ is equivalent to solving the initial-value problems
$$\dot{x}_i(t) = Ax_i(t), \quad t \in \mathbb{R}; \qquad x_i(t_0) = e_i \in \mathbb{R}^n, \qquad (6.4)$$
separately over all $i = 1, \dots, n$, where $e_i$ denotes the $i$th standard basis vector (i.e., the $i$th column of the identity matrix $I$). That is, if $\phi_1, \dots, \phi_n \in C(\mathbb{R}, \mathbb{R}^n)$ are the unique solutions to (6.4) such that $\phi_i(t_0) = e_i$ and $\dot{\phi}_i(t) = A\phi_i(t)$ for all $t \in \mathbb{R}$ and for each $i = 1, \dots, n$, then the fundamental matrix $\Phi(\cdot, t_0) = [\phi_1(\cdot) \ \cdots \ \phi_n(\cdot)]$ is the unique solution to the matrix differential equation
$$\frac{\partial}{\partial t}\Phi(t, t_0) = A\Phi(t, t_0)$$
subject to
$$\Phi(t_0, t_0) = I.$$
The $\mathbb{R}^{n \times n}$-valued function $\Phi(\cdot, t_0)$ is called the state transition matrix of (6.1). Every $x_0 \in \mathbb{R}^n$ is a linear combination of the standard basis vectors $e_1, \dots, e_n$ (i.e., $x_0 = Ix_0$), and so, as we already know, the system of differential equations (6.1) subject to the initial condition $x(t_0) = x_0$ has a unique solution, given by $\phi(\cdot) = \Phi(\cdot, t_0)x_0$, which is the same linear combination of the columns of the state transition matrix. Moreover, $\Phi(t, t_0)$ is determined by the Peano-Baker series, which in the case of LTI systems reduces to the matrix exponential
$$\Phi(t, t_0) = e^{A(t - t_0)} = \sum_{k=0}^{\infty} \frac{A^k (t - t_0)^k}{k!}$$
for all $t, t_0 \in \mathbb{R}$, with the convention that $A^0 = I$ and $0! = 1$.
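As a sanity check, the following sketch (with an arbitrary test matrix) verifies numerically that $\Phi(t, t_0) = e^{A(t - t_0)}$ satisfies the defining matrix differential equation and initial condition; `scipy.linalg.expm` computes the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])     # test matrix
t0, t, h = 0.5, 2.0, 1e-6

Phi = lambda t, t0: expm(A * (t - t0))

# Phi(t0, t0) = I
assert np.allclose(Phi(t0, t0), np.eye(3))

# d/dt Phi(t, t0) ~= A Phi(t, t0), via a central difference
dPhi = (Phi(t + h, t0) - Phi(t - h, t0)) / (2 * h)
assert np.allclose(dPhi, A @ Phi(t, t0), atol=1e-6)
```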
6.3.2 Properties of State Transition Matrix
Theorem 6.6 Let $\Phi(t, t_0)$ be the state transition matrix of (6.1). If $P$ is invertible, then the state variable change $P\tilde{x}(t) = x(t)$ leads to the equivalent state equation
$$\dot{\tilde{x}}(t) = \tilde{A}\tilde{x}(t), \quad t \in \mathbb{R}; \qquad \tilde{A} = P^{-1}AP, \qquad (6.5)$$
and the state transition matrix of (6.5) is
$$\tilde{\Phi}(t, t_0) = P^{-1}\Phi(t, t_0)P, \quad t, t_0 \in \mathbb{R}. \qquad (6.6)$$

Proof. Equation (6.5) follows from $\tilde{x}(t) = P^{-1}x(t)$ and $\frac{d}{dt}\bigl(P^{-1}x\bigr)(t) = (P^{-1}AP)P^{-1}x(t)$, and equation (6.6) from $\frac{\partial}{\partial t}\bigl(P^{-1}\Phi(t, t_0)P\bigr) = (P^{-1}AP)P^{-1}\Phi(t, t_0)P$ and $P^{-1}\Phi(t_0, t_0)P = I$. □
Theorem 6.7 Let $\Phi(\cdot, t_0)$ be the state transition matrix of (6.1). Then the following hold:

(a) If $\Psi$ is any fundamental matrix of (6.1), then $\Phi(t, t_0) = \Psi(t)\Psi(t_0)^{-1}$ for all $t, t_0 \in \mathbb{R}$;

(b) $\Phi(t, t_0)$ is nonsingular and $\Phi(t, t_0)^{-1} = \Phi(t_0, t)$ for all $t, t_0 \in \mathbb{R}$;

(c) $\Phi(t, t_0) = \Phi(t, s)\Phi(s, t_0)$ for all $t, s, t_0 \in \mathbb{R}$ (semigroup property).

Proof. Part (a) follows from $\frac{\partial}{\partial t}\bigl(\Psi(t)\Psi(t_0)^{-1}\bigr) = A\Psi(t)\Psi(t_0)^{-1}$ and $\Psi(t_0)\Psi(t_0)^{-1} = I$. Choose any fundamental matrix $\Psi$. Then $\det \Psi(t) \neq 0$ for all $t$, so (a) implies $\Phi(t, t_0)$ is nonsingular for all $t, t_0$ because $\det \Phi(t, t_0) = \det\bigl(\Psi(t)\Psi(t_0)^{-1}\bigr) = \det \Psi(t) \det \Psi(t_0)^{-1} \neq 0$. Moreover, $\Phi(t, t_0)^{-1} = \bigl(\Psi(t)\Psi(t_0)^{-1}\bigr)^{-1} = \Psi(t_0)\Psi(t)^{-1} = \Phi(t_0, t)$, so (b) holds. Finally, for any fundamental matrix $\Psi$, we have $\Phi(t, t_0) = \Psi(t)\Psi(t_0)^{-1} = \bigl(\Psi(t)\Psi(s)^{-1}\bigr)\bigl(\Psi(s)\Psi(t_0)^{-1}\bigr) = \Phi(t, s)\Phi(s, t_0)$, so (c) holds. □
As an example, consider the homogeneous LTI system $\dot{x}(t) = Ax(t)$ given by
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}.$$
It follows readily from $\dot{x}_3(t) = 0$, $\dot{x}_2(t) = x_3(t)$, and $\dot{x}_1(t) = 2x_2(t)$ that $\phi_1(t) = [1 \ 0 \ 0]^T$, $\phi_2(t) = [2t \ 1 \ 0]^T$, and $\phi_3(t) = [t^2 \ t \ 1]^T$ are solutions of the system, among infinitely many others. Moreover, $\{\phi_1, \phi_2, \phi_3\}$ is a linearly independent set in $C(\mathbb{R}, \mathbb{R}^3)$. Thus $\Phi = [\phi_1 \ \phi_2 \ \phi_3]$ is a fundamental matrix of the system. Then, by part (a) of Theorem 6.7, the state transition matrix of the system is
$$\Phi(t, t_0) = \Phi(t)\Phi(t_0)^{-1} = \begin{bmatrix} 1 & 2t & t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2t_0 & t_0^2 \\ 0 & 1 & t_0 \\ 0 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 2(t - t_0) & (t - t_0)^2 \\ 0 & 1 & t - t_0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Indeed, we have $\Phi(t, t_0) = e^{A(t - t_0)}$ for all $t, t_0 \in \mathbb{R}$. As another example, consider the homogeneous LTV system $\dot{x}(t) = A(t)x(t)$ given by
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & t \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}.$$
As $\phi_1(t) = [1 \ 0 \ 0]^T$, $\phi_2(t) = [t \ 1 \ 0]^T$, and $\phi_3(t) = [t^2 \ t \ 1]^T$ are linearly independent solutions to this LTV system, the matrix-valued function $\Phi = [\phi_1 \ \phi_2 \ \phi_3]$ is a fundamental matrix of the LTV system (but not of any LTI system). Therefore, the state transition matrix of the LTV system is
$$\Phi(t, t_0) = \Phi(t)\Phi(t_0)^{-1} = \begin{bmatrix} 1 & t & t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -t_0 & 0 \\ 0 & 1 & -t_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & t - t_0 & t(t - t_0) \\ 0 & 1 & t - t_0 \\ 0 & 0 & 1 \end{bmatrix}.$$
That is, $\frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0)$ for all $t \in \mathbb{R}$, with $\Phi(t_0, t_0) = I$.
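Both routes to the state transition matrix of the LTI example above can be cross-checked numerically; a sketch comparing $\Phi(t)\Phi(t_0)^{-1}$ from the fundamental matrix with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

# Fundamental matrix Phi(t) = [phi_1(t) phi_2(t) phi_3(t)] from the example
F = lambda t: np.array([[1.0, 2*t, t**2],
                        [0.0, 1.0, t   ],
                        [0.0, 0.0, 1.0 ]])

t, t0 = 1.7, 0.4
lhs = F(t) @ np.linalg.inv(F(t0))   # Phi(t, t0) via Theorem 6.7(a)
rhs = expm(A * (t - t0))            # Phi(t, t0) via the matrix exponential
assert np.allclose(lhs, rhs)
```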
6.3.3 Properties of Matrix Exponentials
Theorem 6.8 Let $A \in \mathbb{R}^{n \times n}$. Then the following hold:

(a) $e^{At_1}e^{At_2} = e^{A(t_1 + t_2)}$ for all $t_1, t_2 \in \mathbb{R}$;

(b) $Ae^{At} = e^{At}A$ for all $t \in \mathbb{R}$;

(c) $\bigl(e^{At}\bigr)^{-1} = e^{-At}$ for all $t \in \mathbb{R}$;

(d) $\det e^{At} = e^{(\operatorname{tr} A)t}$ for all $t \in \mathbb{R}$;

(e) If $AB = BA$, then $e^{At}e^{Bt} = e^{(A+B)t}$ for all $t \in \mathbb{R}$.
Proof. Part (a) holds because $e^{At_1}e^{At_2} = \Phi(t_1, 0)\Phi(0, -t_2) = \Phi(t_1, -t_2) = e^{A(t_1 + t_2)}$, where the second equality follows from Theorem 6.7(c). Part (b) follows from
$$A\Bigl(\lim_{m \to \infty} \sum_{k=0}^{m} \frac{A^k t^k}{k!}\Bigr) = \lim_{m \to \infty} \sum_{k=0}^{m} \frac{A^{k+1} t^k}{k!} = \Bigl(\lim_{m \to \infty} \sum_{k=0}^{m} \frac{A^k t^k}{k!}\Bigr)A.$$
Part (c) is immediate from Theorem 6.7(b). If $\Phi$ is a fundamental matrix of (6.1), then Theorem 6.7(a) gives $\det e^{At} = \det\bigl(\Phi(t)\Phi(0)^{-1}\bigr) = \det \Phi(t)/\det \Phi(0)$, where the last equality follows from $1 = \det I = \det\bigl(\Phi(0)\Phi(0)^{-1}\bigr) = \det \Phi(0) \det \Phi(0)^{-1}$. Thus Abel's formula (6.3) gives (d). Finally, if $AB = BA$, then $A^i B^j = B^j A^i$ and hence we have
$$\Bigl(\lim_{m \to \infty} \sum_{i=0}^{m} \frac{A^i t^i}{i!}\Bigr)\Bigl(\lim_{l \to \infty} \sum_{j=0}^{l} \frac{B^j t^j}{j!}\Bigr) = \Bigl(\lim_{l \to \infty} \sum_{j=0}^{l} \frac{B^j t^j}{j!}\Bigr)\Bigl(\lim_{m \to \infty} \sum_{i=0}^{m} \frac{A^i t^i}{i!}\Bigr);$$
in particular, $Be^{At} = e^{At}B$, so $\frac{d}{dt}\bigl(e^{At}e^{Bt}\bigr) = Ae^{At}e^{Bt} + e^{At}Be^{Bt} = (A+B)e^{At}e^{Bt}$, and the uniqueness of the solution of (6.2) with $A+B$ in place of $A$ and initial value $I$ implies (e). □
In general, matrices do not commute. For example, if
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix},$$
then $AB \neq BA$. In this case,
$$e^{At} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}, \quad e^{Bt} = \begin{bmatrix} 1 & 0 \\ t & 1 \end{bmatrix}, \quad e^{(A+B)t} = \begin{bmatrix} \tfrac{1}{2}(e^t + e^{-t}) & \tfrac{1}{2}(e^t - e^{-t}) \\ \tfrac{1}{2}(e^t - e^{-t}) & \tfrac{1}{2}(e^t + e^{-t}) \end{bmatrix},$$
so $e^{At}e^{Bt} \neq e^{(A+B)t}$.
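This example is easy to confirm numerically; a quick sketch with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
t = 1.0

assert not np.allclose(A @ B, B @ A)                          # AB != BA
assert not np.allclose(expm(A*t) @ expm(B*t), expm((A+B)*t))  # product rule fails

# But Theorem 6.8(e) does hold when the matrices commute:
assert np.allclose(expm(A*t) @ expm(2*A*t), expm(3*A*t))
```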
6.4 How to Determine Matrix Exponentials [3, Section 3.3.2]
We have already seen that the state transition matrix of a linear (time-invariant or time-varying) system can be obtained from a fundamental matrix, whose columns are linearly independent solutions to the given homogeneous system. Other methods to obtain the state transition matrix of an LTI system are summarized in this section.
6.4.1 Infinite Series Method
Given a matrix $A \in \mathbb{R}^{n \times n}$, evaluate the partial sum
$$S_m(t) = \sum_{k=0}^{m} \frac{t^k}{k!} A^k = I + tA + \frac{t^2}{2}A^2 + \cdots + \frac{t^m}{m!}A^m$$
for $m = 0, 1, \dots$. Then, since $S_m(t) \to e^{At}$ uniformly on any bounded interval $J$ in $\mathbb{R}$, it is guaranteed that $e^{At} \approx S_m(t)$ for all $t \in J$ and for all sufficiently large $m$. If $A$ is nilpotent, then the partial sum $S_m(t)$, with $m = n - 1$, gives a closed-form expression for $e^{At}$. For example,
$$A = \begin{bmatrix} 0 & \sigma \\ 0 & 0 \end{bmatrix} \ \Longrightarrow \ A^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \ \Longrightarrow \ e^{At} = I + tA = \begin{bmatrix} 1 & \sigma t \\ 0 & 1 \end{bmatrix}.$$
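A sketch of this truncated-series method in code; the tolerance, term cap, and the value $\sigma = 3$ are illustrative choices.

```python
import numpy as np

def expm_series(A, t, tol=1e-12, max_terms=200):
    """Approximate e^{At} by partial sums S_m(t) of the defining power series."""
    n = A.shape[0]
    S = np.eye(n)                      # k = 0 term
    term = np.eye(n)
    for k in range(1, max_terms):
        term = term @ A * (t / k)      # A^k t^k / k!, built incrementally
        S = S + term
        if np.abs(term).max() < tol:   # remaining terms are negligible
            break
    return S

# Nilpotent example from the text: the series terminates after the tA term.
sigma = 3.0
A = np.array([[0.0, sigma], [0.0, 0.0]])
print(expm_series(A, 1.0))             # [[1, sigma*t], [0, 1]] at t = 1
```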
6.4.2 Similarity Transformation Method
Diagonalizable Case. If $A \in \mathbb{R}^{n \times n}$ has $n$ linearly independent eigenvectors $v_1, \dots, v_n$ corresponding to its (not necessarily distinct) eigenvalues $\lambda_1, \dots, \lambda_n$, then let $P = [v_1 \ \cdots \ v_n]$. Then, since $J = P^{-1}AP = \operatorname{diag}\{\lambda_1, \dots, \lambda_n\}$, Theorem 6.6 gives
$$e^{At} = Pe^{Jt}P^{-1} = P \begin{bmatrix} e^{\lambda_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{\lambda_n t} \end{bmatrix} P^{-1}.$$
General Case. Generate $n$ linearly independent generalized eigenvectors $v_1, \dots, v_n$ of $A \in \mathbb{R}^{n \times n}$ such that $P = [v_1 \ \cdots \ v_n]$ takes $A$ into the Jordan canonical form $J = P^{-1}AP = \operatorname{diag}\{J_1, \dots, J_m\} \in \mathbb{C}^{n \times n}$, where $J_1, \dots, J_m$ are Jordan blocks of varying dimensions. Then
$$e^{At} = Pe^{Jt}P^{-1} = P \begin{bmatrix} e^{J_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{J_m t} \end{bmatrix} P^{-1}.$$
A Jordan block is the sum of a diagonal matrix and a nilpotent matrix; that is, $J_k = \Lambda_k + N_k$ with
$$\Lambda_k = \begin{bmatrix} \lambda_k & 0 & \cdots & 0 \\ 0 & \lambda_k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix} \quad \text{and} \quad N_k = \begin{bmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ 0 & & & 0 \end{bmatrix},$$
where $\lambda_k$ is some eigenvalue of $A$ for each $k$. Let $J_k \in \mathbb{C}^{n_k \times n_k}$ for each $k = 1, \dots, m$, so that $\sum_{k=1}^{m} n_k = n$. Then we have $N_k^{n_k} = 0$, and so the series defining $e^{N_k t}$ terminates for each $k$. Moreover, direct computation yields $\Lambda_k N_k = N_k \Lambda_k$ for each $k$. Therefore, Theorem 6.8(e) yields
$$e^{J_k t} = e^{\Lambda_k t} e^{N_k t} = e^{\lambda_k t} \begin{bmatrix} 1 & t & t^2/2 & \cdots & t^{n_k - 1}/(n_k - 1)! \\ 0 & 1 & t & \cdots & t^{n_k - 2}/(n_k - 2)! \\ 0 & 0 & 1 & \cdots & t^{n_k - 3}/(n_k - 3)! \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}, \quad k = 1, \dots, m. \qquad (6.7)$$
Note that, if $\lambda_k$ in (6.7) is complex, then Euler's formula gives
$$e^{\lambda_k t} = e^{(\sigma_k + i\omega_k)t} = e^{\sigma_k t}(\cos \omega_k t + i \sin \omega_k t).$$
For example, if $\sigma \neq 0$, then
$$A = \begin{bmatrix} 0 & 0 \\ \sigma & 0 \end{bmatrix} \ \Longrightarrow \ J = P^{-1}AP = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad P = \begin{bmatrix} 0 & 1 \\ \sigma & 0 \end{bmatrix} \ \Longrightarrow \ e^{At} = P \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} P^{-1} = \begin{bmatrix} 1 & 0 \\ \sigma t & 1 \end{bmatrix}.$$
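For the diagonalizable case, the similarity-transformation route is short in code. A sketch with an illustrative matrix having distinct real eigenvalues:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0,  1.0],
              [ 0.0, -2.0]])      # diagonalizable: distinct eigenvalues -1, -2
t = 0.7

lam, P = np.linalg.eig(A)         # columns of P are eigenvectors of A
eJt = np.diag(np.exp(lam * t))    # e^{Jt} for J = diag(lambda_1, ..., lambda_n)
assert np.allclose(P @ eJt @ np.linalg.inv(P), expm(A * t))
```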
6.4.3 Cayley-Hamilton Theorem Method
In view of the Cayley-Hamilton Theorem, there exist $\alpha_i(m; \cdot) \in C(\mathbb{R}, \mathbb{R})$ for $i = 0, \dots, n-1$ and $m = 1, 2, \dots$ such that, for $A \in \mathbb{R}^{n \times n}$,
$$e^{At} = \lim_{m \to \infty} \sum_{k=0}^{m} \frac{t^k}{k!} A^k = \lim_{m \to \infty} \sum_{i=0}^{n-1} \alpha_i(m; t) A^i = \sum_{i=0}^{n-1} \lim_{m \to \infty} \alpha_i(m; t) A^i.$$
Then, letting $\alpha_i(t) = \lim_{m \to \infty} \alpha_i(m; t)$ for $i = 0, \dots, n-1$ and $t \in \mathbb{R}$, we obtain
$$e^{At} = \sum_{i=0}^{n-1} \alpha_i(t) A^i, \quad t \in \mathbb{R}. \qquad (6.8)$$
Thus one can determine $e^{At}$ by obtaining $\alpha_i(t)$ for all $i$ and $t$. Let $f(s)$ and $g(s)$ be two analytic functions (i.e., functions locally given by a convergent power series; e.g., polynomials, exponential functions, trigonometric functions, logarithms, etc.). Let $p(s) = \prod_{j=1}^{p}(s - \lambda_j)^{m_j}$ be the characteristic polynomial of $A$. Then $f(A) = g(A)$ if
$$\frac{d^k f}{ds^k}(\lambda_j) = \frac{d^k g}{ds^k}(\lambda_j), \quad k = 0, \dots, m_j - 1, \quad j = 1, \dots, p, \qquad (6.9)$$
where $\sum_{j=1}^{p} m_j = n$. (Proof. Equations (6.9) imply that $f(s) - g(s)$ has $p(s)$ as a factor, so the result follows from the Cayley-Hamilton Theorem.) Thus the terms $\alpha_i(t)$ in (6.8) can be determined by letting
$$f(s) = e^{st} \quad \text{and} \quad g(s) = \alpha_0(t) + \alpha_1(t)s + \cdots + \alpha_{n-1}(t)s^{n-1}.$$
For example, if $A = \begin{bmatrix} 0 & 0 \\ \sigma & 0 \end{bmatrix}$, then the characteristic polynomial of $A$ is $p(s) = \det(sI - A) = s^2$. Let $f(s) = e^{st}$ and $g(s) = \alpha_0(t) + \alpha_1(t)s$. Then, as $f(0) = g(0)$ and $\frac{\partial f}{\partial s}(0) = \frac{\partial g}{\partial s}(0)$ imply $\alpha_0(t) = 1$ and $\alpha_1(t) = t$, we obtain that $e^{At} = g(A) = I + tA$.
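The interpolation conditions (6.9) can also be solved symbolically. The following sketch redoes the $2 \times 2$ example with `sympy`; the symbol names are ours, and $\sigma$ is left as a free parameter.

```python
import sympy as sp

t, s, sigma = sp.symbols('t s sigma')
A = sp.Matrix([[0, 0], [sigma, 0]])

# Characteristic polynomial p(s) = s^2: single eigenvalue 0 of multiplicity 2.
a0, a1 = sp.symbols('a0 a1')
f = sp.exp(s * t)
g = a0 + a1 * s

# Match f and f' with g and g' at the eigenvalue s = 0 (conditions (6.9)).
sol = sp.solve([sp.Eq(f.subs(s, 0), g.subs(s, 0)),
                sp.Eq(sp.diff(f, s).subs(s, 0), sp.diff(g, s).subs(s, 0))],
               [a0, a1])
print(sol)                                   # {a0: 1, a1: t}
eAt = sol[a0] * sp.eye(2) + sol[a1] * A      # e^{At} = I + tA
print(eAt)                                   # Matrix([[1, 0], [sigma*t, 1]])
```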
6.4.4 Laplace Transform Method
We know that $e^{At}$ is the inverse Laplace transform of $(sI - A)^{-1}$ for $A \in \mathbb{R}^{n \times n}$. The partial fraction expansion of each entry in $(sI - A)^{-1}$ gives
$$(sI - A)^{-1} = \sum_{j=1}^{p} \sum_{k=0}^{m_j - 1} \frac{k!}{(s - \lambda_j)^{k+1}} A_{jk},$$
where $\lambda_1, \dots, \lambda_p$ are the distinct eigenvalues of $A$, with corresponding algebraic multiplicities $m_1, \dots, m_p$ such that $\sum_{j=1}^{p} m_j = n$, and where each $A_{jk} \in \mathbb{C}^{n \times n}$ is a matrix of partial fraction expansion coefficients. Taking the inverse Laplace transform gives
$$e^{At} = \sum_{j=1}^{p} \sum_{k=0}^{m_j - 1} t^k e^{\lambda_j t} A_{jk}. \qquad (6.10)$$
If some eigenvalues are complex, conjugate terms on the right side of (6.10) can be combined using Euler's formula to give a real representation. If $A_{jk}$ is nonzero, then $t^k e^{\lambda_j t}$ is called a mode of the system $\dot{x}(t) = Ax(t)$, $t \in \mathbb{R}$. It is easily seen that $x(t) \to 0$ as $t \to \infty$ under arbitrary initial conditions if each mode of the system converges to zero (i.e., each $\lambda_j$ has negative real part). For example, if $A = \begin{bmatrix} 0 & 0 \\ \sigma & 0 \end{bmatrix}$, then $e^{At}$ is the inverse Laplace transform of
$$(sI - A)^{-1} = \begin{bmatrix} 1/s & 0 \\ \sigma/s^2 & 1/s \end{bmatrix}.$$
The modes of the system $\dot{x}(t) = Ax(t)$ in this example are $1$ and $t$.
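A sketch of the same computation with `sympy`'s inverse Laplace transform; declaring $t$ and $\sigma$ positive lets the Heaviside factors that sympy attaches evaluate to 1. This mirrors, rather than replaces, the hand computation above.

```python
import sympy as sp

t, s, sigma = sp.symbols('t s sigma', positive=True)
A = sp.Matrix([[0, 0], [sigma, 0]])

R = (s * sp.eye(2) - A).inv()      # the resolvent (sI - A)^{-1}
eAt = R.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
print(sp.simplify(eAt))            # Matrix([[1, 0], [sigma*t, 1]]): modes 1 and t
```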
6.5 Discrete-Time Case
6.5.1 State Transition Matrix
Given an $A \in \mathbb{R}^{n \times n}$, consider the discrete-time initial-value problem of solving
$$x(t+1) = Ax(t), \quad t = t_0, t_0 + 1, \dots; \qquad x(t_0) = x_0. \qquad (6.11)$$
The discrete-time state transition matrix is defined to be the unique solution $\Phi(\cdot, t_0) \in \mathbb{R}^{n \times n}$ of the matrix difference equation
$$\Phi(t+1, t_0) = A\Phi(t, t_0), \quad t = t_0, t_0 + 1, \dots,$$
subject to
$$\Phi(t_0, t_0) = I.$$
That is, $\Phi(t, t_0) = A^{t - t_0}$ for all $t, t_0 \in \mathbb{Z}$ with $t \geq t_0$, and, as we have seen earlier, the unique solution to (6.11) is given by $x(t) = \Phi(t, t_0)x_0$, $t \geq t_0$. Unlike the continuous-time case, the difference equation (6.11) cannot be solved backward in time unless $A$ is nonsingular. This is because, unless $A$ is invertible, the discrete-time state transition matrix $\Phi(t, t_0)$ is not invertible. An immediate consequence is that the semigroup property holds only forward in time; that is, $\Phi(t, t_0) = \Phi(t, s)\Phi(s, t_0)$ for $t \geq s \geq t_0$. Due to time-invariance, we have $\Phi(t, t_0) = \Phi(t - t_0, 0)$.
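A brief numerical sketch of these discrete-time facts; the test matrix is an illustrative singular choice, so $\Phi(t, t_0)$ is not invertible and only the forward semigroup property is checked.

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.0]])        # singular, so Phi(t, t0) is not invertible

Phi = lambda t, t0: np.linalg.matrix_power(A, t - t0)   # Phi(t, t0) = A^(t-t0)

t, s, t0 = 7, 4, 1
# Forward semigroup property: Phi(t, t0) = Phi(t, s) Phi(s, t0) for t >= s >= t0
assert np.allclose(Phi(t, t0), Phi(t, s) @ Phi(s, t0))
# Time invariance: Phi(t, t0) depends only on the difference t - t0
assert np.allclose(Phi(t, t0), Phi(t - t0, 0))
```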
6.5.2 How to Determine Powers of Matrices
Given an $A \in \mathbb{R}^{n \times n}$, if $J = P^{-1}AP$ is a Jordan canonical form of $A$, then we have
$$A^t = (PJP^{-1})^t = PJ^tP^{-1}.$$
In particular, if $J = \operatorname{diag}\{\lambda_1, \dots, \lambda_n\}$, then $\Phi(t, 0) = A^t = P\operatorname{diag}\{\lambda_1^t, \dots, \lambda_n^t\}P^{-1}$. Also, by the Cayley-Hamilton Theorem there exist $\alpha_i(\cdot)$, $i = 0, \dots, n-1$, such that
$$A^t = \sum_{i=0}^{n-1} \alpha_i(t)A^i, \quad t = 0, 1, \dots. \qquad (6.12)$$
Thus, as in the continuous-time case, one can determine $A^t$ by letting
$$f(s) = s^t \quad \text{and} \quad g(s) = \alpha_0(t) + \alpha_1(t)s + \cdots + \alpha_{n-1}(t)s^{n-1},$$
and then solving (6.9) for the terms $\alpha_i(t)$. Finally, one can use the fact that $\Phi(\cdot, 0)$ is the inverse $z$-transform of $z(zI - A)^{-1} = (I - z^{-1}A)^{-1}$ to determine $A^t$. If $\lambda_1, \dots, \lambda_p$ are the distinct eigenvalues of $A$, with corresponding algebraic multiplicities $m_1, \dots, m_p$ such that $\sum_{j=1}^{p} m_j = n$, we have
$$z(zI - A)^{-1} = z\sum_{j=1}^{p} \sum_{k=0}^{m_j - 1} \frac{k!}{(z - \lambda_j)^{k+1}} A_{jk}$$
and hence
$$A^t = \sum_{j=1}^{p} \sum_{k=0}^{\min\{m_j - 1,\, t\}} \frac{t!}{(t-k)!}\lambda_j^{t-k} A_{jk},$$
where $A_{jk} \in \mathbb{C}^{n \times n}$ are the matrices of partial fraction expansion coefficients. If $A_{jk}$ is nonzero, then $\frac{t!}{(t-k)!}\lambda_j^{t-k}$ is called a mode of the system $x(t+1) = Ax(t)$. We have $x(t) \to 0$ under arbitrary initial conditions if each mode converges to zero (i.e., each $\lambda_j$ has a magnitude less than $1$).
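Finally, a sketch of the diagonalizable-case power formula and the mode-decay criterion, with an illustrative matrix whose eigenvalues lie inside the unit circle:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.0, 0.3]])        # eigenvalues 0.5 and 0.3, inside the unit circle
lam, P = np.linalg.eig(A)

for t in (1, 5, 20):
    # A^t = P diag(lambda_i^t) P^{-1} for diagonalizable A
    At = P @ np.diag(lam ** t) @ np.linalg.inv(P)
    assert np.allclose(At, np.linalg.matrix_power(A, t))

# Every |lambda_i| < 1, so all modes, and hence x(t), decay to zero.
print(np.linalg.matrix_power(A, 50).max())   # tiny (~1e-15)
```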
References
[1] P. J. Antsaklis and A. N. Michel, Linear Systems, 2nd ed. Boston, MA: Birkhäuser, 2006.
[2] W. J. Rugh, Linear System Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[3] P. J. Antsaklis and A. N. Michel, A Linear Systems Primer. Boston, MA: Birkhäuser, 2007.
