Contents

1 Fundamental Theory
  1.1 ODEs and Dynamical Systems
  1.2 Existence of Solutions
  1.3 Uniqueness of Solutions
  1.4 Picard–Lindelöf Theorem
  1.5 Intervals of Existence
  1.6 Dependence on Parameters

2 Linear Systems
  2.1 Constant Coefficient Linear Equations
  2.2 Understanding the Matrix Exponential
  2.3 Generalized Eigenspace Decomposition
  2.4 Operators on Generalized Eigenspaces
  2.5 Real Canonical Form
  2.6 Solving Linear Systems
  2.7 Qualitative Behavior of Linear Systems
  2.8 Exponential Decay
  2.9 Nonautonomous Linear Systems
  2.10 Nearly Autonomous Linear Systems
  2.11 Periodic Linear Systems

3 Topological Dynamics
  3.1 Invariant Sets and Limit Sets
  3.2 Regular and Singular Points
  3.3 Definitions of Stability
  3.4 Principle of Linearized Stability
  3.5 Lyapunov's Direct Method
  3.6

4 Conjugacies
  4.1 Hartman–Grobman Theorem: Part 1
  4.2 Hartman–Grobman Theorem: Part 2
  4.3 Hartman–Grobman Theorem: Part 3
  4.4 Hartman–Grobman Theorem: Part 4
  4.5 Hartman–Grobman Theorem: Part 5
  4.6 Constructing Conjugacies
  4.7 Smooth Conjugacies

5 Invariant Manifolds
  5.1 Stable Manifold Theorem: Part 1
  5.2 Stable Manifold Theorem: Part 2
  5.3 Stable Manifold Theorem: Part 3
  5.4 Stable Manifold Theorem: Part 4
  5.5 Stable Manifold Theorem: Part 5
  5.6 Stable Manifold Theorem: Part 6
  5.7 Center Manifolds
  5.8 Computing and Using Center Manifolds

6 Periodic Orbits
  6.1 Poincaré–Bendixson Theorem
  6.2 Liénard's Equation
  6.3 Liénard's Theorem
1 Fundamental Theory

1.1 ODEs and Dynamical Systems

Ordinary Differential Equations

An ordinary differential equation (or ODE) is an equation involving derivatives of an unknown quantity with respect to a single variable. More precisely, suppose $j, n \in \mathbb{N}$, $E$ is a Euclidean space, and
$$F : \operatorname{dom}(F) \subseteq \mathbb{R} \times \underbrace{E \times \cdots \times E}_{n+1 \text{ copies}} \to \mathbb{R}^j. \tag{1.1}$$
Then an $n$th-order ODE is an equation of the form
$$F\left(t, x(t), \dot{x}(t), \ddot{x}(t), \ldots, x^{(n)}(t)\right) = 0. \tag{1.2}$$
First-order Equations

Every ODE can be transformed into an equivalent first-order equation. In particular, given $x : I \to E$, suppose we define
$$y_1 := x, \quad y_2 := \dot{x}, \quad y_3 := \ddot{x}, \quad \ldots, \quad y_n := x^{(n-1)},$$
and define $G = (G_1, \ldots, G_n)$ by
$$\begin{aligned}
G_1(t, u, p) &:= p_1 - u_2 \\
G_2(t, u, p) &:= p_2 - u_3 \\
G_3(t, u, p) &:= p_3 - u_4 \\
&\;\;\vdots \\
G_{n-1}(t, u, p) &:= p_{n-1} - u_n \\
G_n(t, u, p) &:= F(t, u_1, \ldots, u_n, p_n),
\end{aligned}$$
with
$$\operatorname{dom}(G_n) = \left\{ (t, u, p) \in \mathbb{R} \times E^n \times E^n : (t, u_1, \ldots, u_n, p_n) \in \operatorname{dom}(F) \right\}.$$
Then, setting $y := (y_1, \ldots, y_n)$, we see that $x$ satisfies (1.2) if and only if $y$ satisfies $G(t, y(t), \dot{y}(t)) = 0$.

For an implicit equation like (1.2), an initial-value problem (IVP) specifies initial derivatives as well; e.g., when $n = 1$,
$$\begin{cases} F(t, x, \dot{x}) = 0 \\ x(t_0) = x_0 \\ \dot{x}(t_0) = p_0. \end{cases} \tag{1.3}$$
Autonomous Equations

Let $f : \operatorname{dom}(f) \subseteq \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$. The ODE
$$\dot{x} = f(t, x) \tag{1.4}$$
is nonautonomous: its right-hand side depends explicitly on $t$. It can be converted to an autonomous equation by appending time as a state variable. Define $\tilde{f}(p) := (1, f(p_1, (p_2, \ldots, p_{n+1})))$, where
$$\operatorname{dom}(\tilde{f}) = \left\{ p \in \mathbb{R}^{n+1} : (p_1, (p_2, \ldots, p_{n+1})) \in \operatorname{dom}(f) \right\},$$
and set $y(t) := (t, x(t))$. Then $x$ satisfies (1.4) if and only if $y$ satisfies $\dot{y} = \tilde{f}(y)$.
Because of the discussion above, we will focus our study on first-order autonomous ODEs that are resolved with respect to the derivative. This decision
is not completely without loss of generality, because by converting other sorts
of ODEs into equivalent ones of this form, we may be neglecting some special
structure that might be useful for us to consider. This trade-off between abstractness and specificity is one that you will encounter (and have probably already
encountered) in other areas of mathematics. Sometimes, when transforming the
equation would involve too great a loss of information, we'll specifically study
higher-order and/or nonautonomous equations.
Dynamical Systems

As we shall see, by placing conditions on the function $f : \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$ and the point $x_0 \in \Omega$, we can guarantee that the autonomous IVP
$$\begin{cases} \dot{x} = f(x) \\ x(0) = x_0 \end{cases} \tag{1.5}$$
has a solution defined on some interval $I$ containing 0 in its interior, and this solution will be unique (up to restriction or extension). Furthermore, it is possible to splice together solutions of (1.5) in a natural way and, in fact, to get solutions to IVPs with different initial times. These considerations lead us to study a structure known as a dynamical system.

Given $\Omega \subseteq \mathbb{R}^n$, a continuous dynamical system (or a flow) on $\Omega$ is a function $\varphi : \mathbb{R} \times \Omega \to \Omega$ satisfying:

1. $\varphi(0, x) = x$ for every $x \in \Omega$;
2. $\varphi(s, \varphi(t, x)) = \varphi(s + t, x)$ for every $x \in \Omega$ and every $s, t \in \mathbb{R}$;
3. $\varphi$ is continuous.

If $f$ and $\Omega$ are sufficiently nice, we will be able to define a function $\varphi : \mathbb{R} \times \Omega \to \Omega$ by letting $\varphi(\cdot, x_0)$ be the unique solution of (1.5), and this definition will make $\varphi$ a dynamical system. Conversely, any continuous dynamical system $\varphi(t, x)$ that is differentiable with respect to $t$ is generated by an IVP. Indeed, suppose that $\partial \varphi(t, x) / \partial t$ exists for every $t \in \mathbb{R}$ and every $x \in \Omega$; that $x_0 \in \Omega$ is given; that $y : \mathbb{R} \to \Omega$ is defined by $y(t) := \varphi(t, x_0)$; and that $f : \Omega \to \mathbb{R}^n$ is defined by
$$f(p) := \left. \frac{\partial \varphi(s, p)}{\partial s} \right|_{s = 0}.$$
Then $y$ solves the IVP
$$\begin{cases} \dot{y} = f(y) \\ y(0) = x_0. \end{cases}$$

If, on the other hand, we iterate a map $F : \Omega \to \Omega$ and set
$$\varphi(n, x) = F(F(\cdots(F(x))\cdots)),$$
then $\varphi$ will almost meet the requirements to be a dynamical system, the only exception being that property 2, known as the group property, may fail because $\varphi(n, x)$ is not even defined for $n < 0$. We may still call this a dynamical system; if we're being careful, we may call it a semidynamical system.

In a dynamical system, the set $\Omega$ is called the phase space. Dynamical systems are used to describe the evolution of physical systems in which the state of the system at some future time depends only on the initial state of the system and on the elapsed time. As an example, Newtonian mechanics permits us
to view the earth-moon-sun system as a dynamical system, but the phase space
is not physical space R3 , but is instead an 18-dimensional Euclidean space in
which the coordinates of each point reflect the position and momentum of each
of the three objects. (Why isn't a 9-dimensional space, corresponding to the three
spatial coordinates of the three objects, sufficient?)
1.2 Existence of Solutions

Consider the IVP
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_0) = a, \end{cases} \tag{1.6}$$
which, for continuous $f$, is equivalent to the integral equation
$$x(t) = a + \int_{t_0}^{t} f(s, x(s)) \, ds. \tag{1.7}$$
How does one go about proving that (1.7) has a solution if, unlike the case
with so many IVPs studied in introductory courses, a formula for a solution cannot be found? One idea is to construct a sequence of approximate solutions,
with the approximations becoming better and better, in some sense, as we move
along the sequence. If we can show that this sequence, or a subsequence, converges to something, that limit might be an exact solution.
One way of constructing approximate solutions is Picard iteration. Here, we
plug an initial guess in for x on the right-hand side of (1.7), take the resulting
value of the left-hand side and plug that in for x again, etc. More precisely, we
can set $x_1(t) := a$ and recursively define $x_{k+1}$ in terms of $x_k$ for $k \geq 1$ by
$$x_{k+1}(t) := a + \int_{t_0}^{t} f(s, x_k(s)) \, ds.$$
Note that if, for some $k$, $x_k = x_{k+1}$, then we have found a solution.
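As a quick numerical illustration (the function, grid, and quadrature choices below are our own, not from the text), here is a sketch of Picard iteration for $f(t, x) = x$, $a = 1$, $t_0 = 0$. For this choice the exact solution is $e^t$ and the iterates are essentially the Taylor partial sums of $e^t$.

```python
import numpy as np

def picard_iterates(f, a, t, num_iters):
    """Approximate Picard iteration on a fixed time grid t (with t[0] = t0).

    Each pass replaces x by  a + integral_{t0}^t f(s, x(s)) ds,
    the integral being computed with the cumulative trapezoid rule.
    """
    x = np.full_like(t, float(a))  # x_1(t) := a
    for _ in range(num_iters):
        g = f(t, x)
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))
        )
        x = a + integral
    return x

# x' = x, x(0) = 1 on [0, 1]; the iterates converge to exp(t).
t = np.linspace(0.0, 1.0, 2001)
x = picard_iterates(lambda t, x: x, 1.0, t, 20)
max_err = float(np.max(np.abs(x - np.exp(t))))
```

After 20 iterations the grid values agree with $e^t$ up to the quadrature error of the trapezoid rule.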
Another approach is to construct a Tonelli sequence. For each $k \in \mathbb{N}$, let $x_k(t)$ be defined by
$$x_k(t) = \begin{cases} a, & \text{if } t_0 \leq t \leq t_0 + 1/k \\[4pt] a + \displaystyle\int_{t_0}^{t - 1/k} f(s, x_k(s)) \, ds, & \text{if } t_0 + 1/k \leq t \leq t_0 + b \end{cases} \tag{1.8}$$
for $t \geq t_0$, and define $x_k(t)$ similarly for $t \leq t_0$.

We will use the Tonelli sequence to show that (1.7) (and therefore (1.6)) has a solution, and will use Picard iterates to show that, under an additional hypothesis on $f$, the solution of (1.7) is unique.
Existence

For the first result, we will need the following definitions and theorems.

Definition. A sequence of functions $g_k : U \subseteq \mathbb{R} \to \mathbb{R}^n$ is uniformly bounded if there exists $M > 0$ such that $|g_k(t)| \leq M$ for every $t \in U$ and every $k \in \mathbb{N}$.

Definition. A sequence of functions $g_k : U \subseteq \mathbb{R} \to \mathbb{R}^n$ is uniformly equicontinuous if for every $\varepsilon > 0$ there exists a number $\delta > 0$ such that $|g_k(t_1) - g_k(t_2)| < \varepsilon$ for every $k \in \mathbb{N}$ and every $t_1, t_2 \in U$ satisfying $|t_1 - t_2| < \delta$.

Definition. A sequence of functions $g_k : U \subseteq \mathbb{R} \to \mathbb{R}^n$ converges uniformly to a function $g : U \subseteq \mathbb{R} \to \mathbb{R}^n$ if for every $\varepsilon > 0$ there exists a number $N \in \mathbb{N}$ such that if $k \geq N$ and $t \in U$ then $|g_k(t) - g(t)| < \varepsilon$.

Definition. If $a \in \mathbb{R}^n$ and $r > 0$, then the open ball of radius $r$ centered at $a$, denoted $B(a, r)$, is the set
$$\left\{ x \in \mathbb{R}^n : |x - a| < r \right\}.$$
Theorem. (Existence) Suppose $f$ is continuous on $[t_0 - b, t_0 + b] \times \overline{B}(a, \beta)$, a set contained in the interior of $\operatorname{dom}(f)$, and is bounded there by $M$, with $Mb \leq \beta$. Then (1.6) has a solution defined on $[t_0 - b, t_0 + b]$.

Proof. For simplicity, we will only consider $t \in [t_0, t_0 + b]$. For each $k \in \mathbb{N}$, let $x_k : [t_0, t_0 + b] \to \mathbb{R}^n$ be defined by (1.8). We will show that $(x_k)$ converges to a solution of (1.6).

Step 1: Each $x_k$ is well-defined.
Fix $k \in \mathbb{N}$. Note that the point $(t_0, a)$ is in the interior of a set on which $f$ is well-defined. Because of the formula for $x_k(t)$ and the fact that it is, in essence, recursively defined on intervals of width $1/k$ moving steadily to the right, if $x_k$ failed to be defined on $[t_0, t_0 + b]$ then there would be $t_1 \in [t_0 + 1/k, t_0 + b)$ for which $|x_k(t_1) - a| = \beta$. Pick the first such $t_1$. Using (1.8) and the bound on $f$, we see that
$$|x_k(t_1) - a| = \left| \int_{t_0}^{t_1 - 1/k} f(s, x_k(s)) \, ds \right| \leq \int_{t_0}^{t_1 - 1/k} |f(s, x_k(s))| \, ds \leq \int_{t_0}^{t_1 - 1/k} M \, ds = M(t_1 - t_0 - 1/k) < M(b - 1/k) < Mb \leq \beta,$$
which is a contradiction.
Step 2: $(x_k)$ is uniformly bounded.
Calculating as above, we find that (1.8) implies that, for $k \geq 1/b$,
$$|x_k(t)| \leq |a| + \int_{t_0}^{b + t_0 - 1/k} |f(s, x_k(s))| \, ds \leq |a| + Mb.$$

Step 3: $(x_k)$ is uniformly equicontinuous.
If $t_1, t_2 \in [t_0, t_0 + b]$, then
$$|x_k(t_2) - x_k(t_1)| \leq \left| \int_{t_1}^{t_2} |f(s, x_k(s))| \, ds \right| \leq M |t_2 - t_1|.$$
$|p - q| < \delta$. Since $(x_{k_\ell})$ converges uniformly to $x$, we can pick $N \in \mathbb{N}$ such that $|x_{k_\ell}(s) - x(s)| < \delta$ whenever $s \in [t_0, t_0 + b]$ and $\ell \geq N$. If $\ell \geq N$, then $|f(s, x_{k_\ell}(s)) - f(s, x(s))| < \varepsilon$.

Step 6: The function $x$ is a solution of (1.6).
Fix $t \in [t_0, t_0 + b]$. If $t = t_0$, then clearly (1.7) holds. If $t > t_0$, then for $\ell$ sufficiently large
$$x_{k_\ell}(t) = a + \int_{t_0}^{t} f(s, x_{k_\ell}(s)) \, ds - \int_{t - 1/k_\ell}^{t} f(s, x_{k_\ell}(s)) \, ds. \tag{1.9}$$
Obviously, the left-hand side of (1.9) converges to $x(t)$ as $\ell \uparrow \infty$. By the Uniform Convergence Theorem and the uniform convergence of $(f(\cdot, x_{k_\ell}(\cdot)))$, the first integral on the right-hand side of (1.9) converges to
$$\int_{t_0}^{t} f(s, x(s)) \, ds.$$
1.3 Uniqueness of Solutions

Under an additional hypothesis on $f$, the IVP (1.6) has a unique solution. In particular, Lipschitz continuity of $f(t, \cdot)$ is sufficient. (Recall that $g : \operatorname{dom}(g) \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous if there exists a constant $L > 0$ such that $|g(x_1) - g(x_2)| \leq L |x_1 - x_2|$ for every $x_1, x_2 \in \operatorname{dom}(g)$; $L$ is called a Lipschitz constant for $g$.)

One approach to uniqueness is developed in the following exercise, which uses what are known as Gronwall inequalities.
for every $t \in [t_0, t_0 + b]$.

(b) Pick $\varepsilon > 0$ and let
$$V(t) := \varepsilon + L \int_{t_0}^{t} U(s) \, ds.$$
Show that
$$V'(t) \leq L V(t). \tag{1.12}$$
First, we review some definitions and results pertaining to metric spaces.

Definition. A metric space is a set $X$ together with a function $d : X \times X \to \mathbb{R}$ satisfying:

1. $d(x, y) \geq 0$ for every $x, y \in X$, with equality if and only if $x = y$;
2. $d(x, y) = d(y, x)$ for every $x, y \in X$;
3. $d(x, y) + d(y, z) \geq d(x, z)$ for every $x, y, z \in X$.

Definition. A normed vector space is a vector space $X$ together with a function $\| \cdot \| : X \to \mathbb{R}$ satisfying:

1. $\|x\| \geq 0$ for every $x \in X$, with equality if and only if $x = 0$;
2. $\|\lambda x\| = |\lambda| \|x\|$ for every $x \in X$ and every scalar $\lambda$;
3. $\|x + y\| \leq \|x\| + \|y\|$ for every $x, y \in X$.

Every normed vector space is a metric space with metric $d(x, y) = \|x - y\|$.

Definition. A sequence $(x_n)$ in a metric space $X$ is said to be a Cauchy sequence if for every $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $d(x_m, x_n) < \varepsilon$ whenever $m, n \geq N$.
Definition. A metric space $X$ is said to be complete if every Cauchy sequence in $X$ converges (in $X$). A complete normed vector space is called a Banach space. A complete inner product space is called a Hilbert space.

Definition. A function $f : X \to Y$ from a metric space to a metric space is said to be Lipschitz continuous if there exists $L \in \mathbb{R}$ such that $d(f(u), f(v)) \leq L \, d(u, v)$ for every $u, v \in X$. We call $L$ a Lipschitz constant, and write $\operatorname{Lip}(f)$ for the smallest Lipschitz constant that works.

Definition. A contraction is a Lipschitz continuous function from a metric space to itself that has a Lipschitz constant less than 1.

Definition. A fixed point of a function $T : X \to X$ is a point $x \in X$ such that $T(x) = x$.

Theorem. (Contraction Mapping Principle) If $X$ is a nonempty, complete metric space and $T : X \to X$ is a contraction, then $T$ has a unique fixed point in $X$.
Proof. Pick $\lambda < 1$ such that $d(T(x), T(y)) \leq \lambda \, d(x, y)$ for every $x, y \in X$. Pick any point $x_0 \in X$. Define a sequence $(x_k)$ by the recursive formula
$$x_{k+1} = T(x_k). \tag{1.13}$$
If $k \geq \ell \geq N$, then
$$\begin{aligned}
d(x_k, x_\ell) &\leq d(x_k, x_{k-1}) + d(x_{k-1}, x_{k-2}) + \cdots + d(x_{\ell+1}, x_\ell) \\
&\leq \lambda^{k-1} d(x_1, x_0) + \lambda^{k-2} d(x_1, x_0) + \cdots + \lambda^{\ell} d(x_1, x_0) \\
&\leq \frac{\lambda^N}{1 - \lambda} \, d(x_1, x_0).
\end{aligned}$$
Hence, $(x_k)$ is a Cauchy sequence; since $X$ is complete, it converges to some $x \in X$, and continuity of $T$ together with (1.13) gives $T(x) = x$. Uniqueness follows from the contraction property: if $T(x) = x$ and $T(y) = y$, then $d(x, y) = d(T(x), T(y)) \leq \lambda \, d(x, y)$, so $d(x, y) = 0$.
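The iteration in the proof is directly usable as an algorithm. A small sketch with an example of our own choosing (not from the text): $T(x) = \cos x$ maps $\mathbb{R}$ into $[-1, 1]$ and satisfies $|T'| \leq \sin 1 < 1$ there, so the iterates converge to the unique fixed point.

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{k+1} = T(x_k) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# The unique fixed point of cos is the so-called Dottie number.
x_star = fixed_point(math.cos, 1.0)
```

The geometric convergence rate in the proof (ratio $\lambda = \sin 1 \approx 0.84$ here) is visible in how many iterations the loop takes.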
1.4 Picard–Lindelöf Theorem

Theorem. The space $\mathcal{C}([a, b])$ of continuous functions from $[a, b]$ to $\mathbb{R}^n$, equipped with the norm
$$\|f\|_\infty := \sup \left\{ |f(x)| : x \in [a, b] \right\},$$
is a Banach space.

Definition. Two different norms $\| \cdot \|_1$ and $\| \cdot \|_2$ on a vector space $X$ are equivalent if there exist constants $m, M > 0$ such that
$$m \|x\|_1 \leq \|x\|_2 \leq M \|x\|_1$$
for every $x \in X$.

Theorem. If $(X, \| \cdot \|_1)$ is a Banach space and $\| \cdot \|_2$ is equivalent to $\| \cdot \|_1$ on $X$, then $(X, \| \cdot \|_2)$ is a Banach space.

Theorem. A closed subspace of a complete metric space is a complete metric space.
We are now in a position to state and prove the Picard–Lindelöf Existence–Uniqueness Theorem. Recall that we are dealing with the IVP
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_0) = a. \end{cases} \tag{1.14}$$

Theorem. (Picard–Lindelöf) Suppose $f : [t_0 - \alpha, t_0 + \alpha] \times \overline{B}(a, \beta) \to \mathbb{R}^n$ is continuous and bounded by $M$. Suppose, furthermore, that $f(t, \cdot)$ is Lipschitz continuous with Lipschitz constant $L$ for every $t \in [t_0 - \alpha, t_0 + \alpha]$. Then (1.14) has a unique solution defined on $[t_0 - b, t_0 + b]$, where $b = \min\{\alpha, \beta / M\}$.

Proof. Let $X$ be the set of continuous functions from $[t_0 - b, t_0 + b]$ to $\overline{B}(a, \beta)$. The norm
$$\|g\|_w := \sup \left\{ e^{-2L|t - t_0|} |g(t)| : t \in [t_0 - b, t_0 + b] \right\}$$
is equivalent to the usual sup norm, so $X$ is a complete metric space with respect to the metric generated by $\| \cdot \|_w$.
Given $x \in X$, define $T(x)$ to be the function on $[t_0 - b, t_0 + b]$ given by the formula
$$T(x)(t) := a + \int_{t_0}^{t} f(s, x(s)) \, ds.$$
Since
$$|T(x)(t) - a| = \left| \int_{t_0}^{t} f(s, x(s)) \, ds \right| \leq \left| \int_{t_0}^{t} |f(s, x(s))| \, ds \right| \leq M b \leq \beta,$$
$T$ maps $X$ into itself. Also,
$$\|T(x) - T(y)\|_w = \sup \left\{ e^{-2L|t - t_0|} \left| \int_{t_0}^{t} \left[ f(s, x(s)) - f(s, y(s)) \right] ds \right| : t \in [t_0 - b, t_0 + b] \right\}.$$
For a fixed $t \in [t_0 - b, t_0 + b]$,
$$\begin{aligned}
e^{-2L|t - t_0|} \left| \int_{t_0}^{t} \left[ f(s, x(s)) - f(s, y(s)) \right] ds \right|
&\leq e^{-2L|t - t_0|} \left| \int_{t_0}^{t} |f(s, x(s)) - f(s, y(s))| \, ds \right| \\
&\leq e^{-2L|t - t_0|} \left| \int_{t_0}^{t} L |x(s) - y(s)| \, ds \right| \\
&\leq L e^{-2L|t - t_0|} \left| \int_{t_0}^{t} e^{2L|s - t_0|} \, ds \right| \|x - y\|_w \\
&\leq \frac{1}{2} e^{-2L|t - t_0|} \left( e^{2L|t - t_0|} - 1 \right) \|x - y\|_w \\
&\leq \frac{1}{2} \|x - y\|_w.
\end{aligned}$$
Hence, $T$ is a contraction on $X$, and its unique fixed point is the unique solution of (1.14) on $[t_0 - b, t_0 + b]$.
1.5 Intervals of Existence

Suppose $\mathcal{D} \subseteq \mathbb{R} \times \mathbb{R}^n$ is open, $f : \mathcal{D} \to \mathbb{R}^n$ is continuous, and $f(t, \cdot)$ is locally Lipschitz continuous. We know that the IVP
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_0) = a \end{cases} \tag{1.15}$$
has a solution existing on some time interval containing $t_0$ in its interior and that the solution is unique on that interval. Let's say that an interval of existence is an interval containing $t_0$ on which a solution of (1.15) exists. The following theorem indicates how large an interval of existence may be.

Theorem. (Maximal Interval of Existence) The IVP (1.15) has a maximal interval of existence, and it is of the form $(\omega_-, \omega_+)$, with $\omega_- \in [-\infty, \infty)$ and $\omega_+ \in (-\infty, \infty]$. There is a unique solution $x(t)$ of (1.15) on $(\omega_-, \omega_+)$, and $(t, x(t))$ leaves every compact subset $K$ of $\mathcal{D}$ as $t \downarrow \omega_-$ and as $t \uparrow \omega_+$.
Proof.

Step 1: If $I_1$ and $I_2$ are open intervals of existence with corresponding solutions $x_1$ and $x_2$, then $x_1$ and $x_2$ agree on $I_1 \cap I_2$.
Let $I = I_1 \cap I_2$, and let $I^*$ be the largest interval containing $t_0$ and contained in $I$ on which $x_1$ and $x_2$ agree. By the Picard–Lindelöf Theorem, $I^*$ is nonempty. If $I^* \subsetneq I$, then $I^*$ has an endpoint $t_1$ in $I$. By continuity, $x_1(t_1) = x_2(t_1) =: a_1$. The Picard–Lindelöf Theorem implies that the new IVP
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_1) = a_1 \end{cases} \tag{1.16}$$
has a local solution that is unique. But restrictions of $x_1$ and $x_2$ near $t_1$ each provide a solution to (1.16), so $x_1$ and $x_2$ must agree in a neighborhood of $t_1$. This contradicts the definition of $t_1$ and tells us that $I^* = I$.
Now, let $(\omega_-, \omega_+)$ be the union of all open intervals of existence.

Step 2: $(\omega_-, \omega_+)$ is an interval of existence.
Given $t \in (\omega_-, \omega_+)$, pick an open interval of existence $\tilde{I}$ that contains $t$, and let $x(t) = \tilde{x}(t)$, where $\tilde{x}$ is a solution to (1.15) on $\tilde{I}$. Because of Step 1, this determines a well-defined function $x : (\omega_-, \omega_+) \to \mathbb{R}^n$; clearly, it solves (1.15).

Step 3: $(\omega_-, \omega_+)$ is the maximal interval of existence.
An extension argument similar to the one in Step 1 shows that every interval of existence is contained in an open interval of existence. Every open interval of existence is, in turn, a subset of $(\omega_-, \omega_+)$.

Step 4: $x$ is the only solution of (1.15) on $(\omega_-, \omega_+)$.
This is a special case of Step 1.
Step 5: $(t, x(t))$ leaves every compact subset $K \subseteq \mathcal{D}$ as $t \downarrow \omega_-$ and as $t \uparrow \omega_+$.
We only treat what happens as $t \uparrow \omega_+$; the other case is similar. Furthermore, the case when $\omega_+ = \infty$ is immediate, so suppose $\omega_+$ is finite.

Let a compact subset $K$ of $\mathcal{D}$ be given. Since $\mathcal{D}$ is open, for each point $(t, a) \in K$ we can pick numbers $\alpha(t, a) > 0$ and $\beta(t, a) > 0$ such that
$$[t - 2\alpha(t, a), t + 2\alpha(t, a)] \times \overline{B}(a, 2\beta(t, a)) \subseteq \mathcal{D}.$$
The collection
$$\left\{ (t - \alpha(t, a), t + \alpha(t, a)) \times B(a, \beta(t, a)) : (t, a) \in K \right\}$$
is a cover of $K$. Since $K$ is compact, a finite subcollection, say
$$\left\{ (t_i - \alpha(t_i, a_i), t_i + \alpha(t_i, a_i)) \times B(a_i, \beta(t_i, a_i)) \right\}_{i=1}^{m},$$
covers $K$. Let
$$K' := \bigcup_{i=1}^{m} [t_i - 2\alpha(t_i, a_i), t_i + 2\alpha(t_i, a_i)] \times \overline{B}(a_i, 2\beta(t_i, a_i)),$$
let
$$\tilde{\alpha} := \min \left\{ \alpha(t_i, a_i) \right\}_{i=1}^{m}, \qquad \tilde{\beta} := \min \left\{ \beta(t_i, a_i) \right\}_{i=1}^{m},$$
and let $M$ be a bound for $|f|$ on the compact set $K'$. If $(\tau, x(\tau)) \in K$, then $[\tau - \tilde{\alpha}, \tau + \tilde{\alpha}] \times \overline{B}(x(\tau), \tilde{\beta}) \subseteq K'$, so the Picard–Lindelöf Theorem extends the solution through $(\tau, x(\tau))$ by a time step bounded below independently of $\tau$. If $(t, x(t))$ remained in $K$ for $t$ arbitrarily close to $\omega_+$, this would let us extend the solution beyond $\omega_+$, a contradiction.
Global Existence

For the solution set of the autonomous ODE $\dot{x} = f(x)$ to be representable by a dynamical system, it is necessary for solutions to exist for all time. As the discussion above illustrates, this is not always the case. When solutions do die out in finite time by hitting the boundary of the phase space or by going off to infinity, it may be possible to change the vector field $f$ to a vector field $\tilde{f}$ that points in the same direction as the original but has solutions that exist for all time. For example, if $\Omega = \mathbb{R}^n$, then we could consider the modified equation
$$\dot{x} = \frac{f(x)}{\sqrt{1 + |f(x)|^2}}.$$
Clearly, $|\dot{x}| < 1$, so it is impossible for $|x|$ to approach infinity in finite time. If, on the other hand, $\Omega \subsetneq \mathbb{R}^n$, then consider the modified equation
$$\dot{x} = \frac{f(x)}{\sqrt{1 + |f(x)|^2}} \cdot \frac{d(x, \mathbb{R}^n \setminus \Omega)}{\sqrt{1 + d(x, \mathbb{R}^n \setminus \Omega)^2}},$$
where $d(x, S)$ denotes the distance from $x$ to the set $S$.
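A quick numerical illustration of the rescaling idea (the example, step size, and integrator are our own): for $f(x) = x^2$ with $x(0) = 1$, solutions of $\dot{x} = f(x)$ blow up at $t = 1$, but the rescaled field has speed less than 1 everywhere, so a crude forward Euler scheme runs happily past $t = 1$, and the solution can move a distance of at most $t$ in time $t$.

```python
import math

def euler(g, x0, t_final, dt=1e-3):
    """Forward Euler for x' = g(x) on [0, t_final]."""
    x, t = x0, 0.0
    while t < t_final - 1e-12:
        x += dt * g(x)
        t += dt
    return x

f = lambda x: x * x
# Rescaled field: same direction as f, but |g| < 1 everywhere.
g = lambda x: f(x) / math.sqrt(1.0 + f(x) ** 2)

x_end = euler(g, 1.0, 5.0)
```

Since each Euler step moves by less than `dt`, the numerical trajectory (like the true one) stays within distance $t$ of its starting point.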
1.6 Dependence on Parameters

Consider the IVP
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_0) = a, \end{cases} \tag{1.17}$$
and the parametrized IVP
$$\begin{cases} \dot{x} = f(t, x, \mu) \\ x(t_0) = a. \end{cases} \tag{1.18}$$
Continuous Dependence

The following result can be proved by an approach like that outlined in Exercise 3.

Theorem. (Gronwall Inequality) Suppose that $X(t)$ is a nonnegative, continuous, real-valued function on $[t_0, T]$ and that there are constants $C, K > 0$ such that
$$X(t) \leq C + K \int_{t_0}^{t} X(s) \, ds$$
for every $t \in [t_0, T]$. Then
$$X(t) \leq C e^{K(t - t_0)}$$
for every $t \in [t_0, T]$.

Using the Gronwall Inequality, we can prove that the solution of (1.18) depends continuously on $\mu$.
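As a numerical sanity check (a construction of our own, not from the text): in the borderline case $X(t) = C e^{K(t - t_0)}$, the integral relation holds with equality, which shows that the Gronwall bound is sharp.

```python
import numpy as np

C, K, t0 = 2.0, 0.5, 0.0
t = np.linspace(t0, t0 + 4.0, 4001)
X = C * np.exp(K * (t - t0))  # borderline case: equals the Gronwall bound

# C + K * int_{t0}^t X(s) ds via the cumulative trapezoid rule
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (X[1:] + X[:-1]) * np.diff(t)))
)
lhs = C + K * integral

# The gap should vanish (up to quadrature error): X satisfies the
# integral relation with equality.
max_gap = float(np.max(np.abs(lhs - X)))
```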
Theorem. (Continuous Dependence) Suppose
$$f : [t_0 - \alpha, t_0 + \alpha] \times \Omega_1 \times \Omega_2 \subseteq \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n$$
is continuous, Lipschitz continuous with constant $L_1$ with respect to its second argument, and Lipschitz continuous with constant $L_2$ with respect to its third argument. If $x_1$ and $x_2$ solve (1.18) with $\mu = \mu_1$ and $\mu = \mu_2$, respectively, then
$$|x_1(t) - x_2(t)| \leq \frac{L_2}{L_1} |\mu_1 - \mu_2| \left( e^{L_1 |t - t_0|} - 1 \right) \tag{1.19}$$
for every $t \in [t_0 - \alpha, t_0 + \alpha]$.

Proof. For simplicity, consider $t \geq t_0$. Then
$$\begin{aligned}
|x_1(t) - x_2(t)| &\leq \int_{t_0}^{t} |f(s, x_1(s), \mu_1) - f(s, x_2(s), \mu_2)| \, ds \\
&\leq \int_{t_0}^{t} |f(s, x_1(s), \mu_1) - f(s, x_1(s), \mu_2)| \, ds + \int_{t_0}^{t} |f(s, x_1(s), \mu_2) - f(s, x_2(s), \mu_2)| \, ds \\
&\leq \int_{t_0}^{t} \left( L_2 |\mu_1 - \mu_2| + L_1 |x_1(s) - x_2(s)| \right) ds.
\end{aligned}$$
Let $X(t) = L_2 |\mu_1 - \mu_2| + L_1 |x_1(t) - x_2(t)|$. Then
$$X(t) \leq L_2 |\mu_1 - \mu_2| + L_1 \int_{t_0}^{t} X(s) \, ds,$$
so by the Gronwall Inequality, $X(t) \leq L_2 |\mu_1 - \mu_2| e^{L_1 (t - t_0)}$, from which (1.19) follows.
Exercise. Suppose $f, g : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ are continuous, suppose $x$ solves
$$\begin{cases} \dot{x} = f(t, x) \\ x(t_0) = a, \end{cases}$$
and suppose $y$ solves
$$\begin{cases} \dot{y} = g(t, y) \\ y(t_0) = b. \end{cases}$$

(a) If $f(t, p) < g(t, p)$ for every $(t, p) \in \mathbb{R} \times \mathbb{R}$ and $a < b$, show that $x(t) < y(t)$ for every $t \geq t_0$.

(b) If $f(t, p) \leq g(t, p)$ for every $(t, p) \in \mathbb{R} \times \mathbb{R}$ and $a \leq b$, show that $x(t) \leq y(t)$ for every $t \geq t_0$. (Hint: You may want to use the results of part (a) along with a limiting argument.)
Differentiable Dependence

Theorem. (Differentiable Dependence) Suppose $f : \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a continuous function and is continuously differentiable with respect to $x$ and $\mu$. Then the solution $x(\cdot, \mu)$ of
$$\begin{cases} \dot{x} = f(t, x, \mu) \\ x(t_0) = a \end{cases} \tag{1.20}$$
is differentiable with respect to $\mu$, and $y = x_\mu := \partial x / \partial \mu$ satisfies
$$\begin{cases} \dot{y} = f_x(t, x(t, \mu), \mu) y + f_\mu(t, x(t, \mu), \mu) \\ y(t_0) = 0. \end{cases} \tag{1.21}$$

That $x_\mu$, if it exists, should satisfy the IVP (1.21) is not terribly surprising; (1.21) can be derived (formally) by differentiating (1.20) with respect to $\mu$. The real difficulty is showing that $x_\mu$ exists. The key to the proof is to use the fact
that (1.21) has a solution $y$ and then to use the Gronwall Inequality to show that difference quotients for $x_\mu$ converge to $y$.
Proof. Given $\mu$, it is not hard to see that the right-hand side of the ODE in (1.21) is continuous in $t$ and $y$ and is locally Lipschitz continuous with respect to $y$, so by the Picard–Lindelöf Theorem we know that (1.21) has a unique solution $y(\cdot, \mu)$. Let
$$w(t, \mu, \Delta\mu) := \frac{x(t, \mu + \Delta\mu) - x(t, \mu)}{\Delta\mu}.$$
We want to show that $w(t, \mu, \Delta\mu) \to y(t, \mu)$ as $\Delta\mu \to 0$.

Let $z(t, \mu, \Delta\mu) := w(t, \mu, \Delta\mu) - y(t, \mu)$. Then
$$\frac{dz}{dt}(t, \mu, \Delta\mu) = \frac{dw}{dt}(t, \mu, \Delta\mu) - \frac{dy}{dt}(t, \mu),$$
and
$$\begin{aligned}
\frac{dw}{dt}(t, \mu, \Delta\mu) &= \frac{f(t, x(t, \mu + \Delta\mu), \mu + \Delta\mu) - f(t, x(t, \mu), \mu)}{\Delta\mu} \\
&= \frac{f(t, x(t, \mu + \Delta\mu), \mu + \Delta\mu) - f(t, x(t, \mu), \mu + \Delta\mu)}{\Delta\mu} + \frac{f(t, x(t, \mu), \mu + \Delta\mu) - f(t, x(t, \mu), \mu)}{\Delta\mu} \\
&= f_x(t, x(t, \mu) + \theta_1 \Delta x, \mu + \Delta\mu) \, w(t, \mu, \Delta\mu) + f_\mu(t, x(t, \mu), \mu + \theta_2 \Delta\mu),
\end{aligned}$$
for some $\theta_1, \theta_2 \in [0, 1]$ (by the Mean Value Theorem), where
$$\Delta x := x(t, \mu + \Delta\mu) - x(t, \mu).$$
Hence,
$$\frac{dz}{dt}(t, \mu, \Delta\mu) = f_x(t, x(t, \mu) + \theta_1 \Delta x, \mu + \Delta\mu) \, z(t, \mu, \Delta\mu) + p(t, \mu, \Delta\mu), \tag{1.22}$$
where $p(t, \mu, \Delta\mu)$ collects the remaining terms, which vanish as $\Delta\mu \to 0$.
If $K$ is a bound for $|f_x|$ along the relevant trajectories and $|\Delta\mu|$ is small enough that
$$|p(t, \mu, \Delta\mu)| < \varepsilon, \tag{1.23}$$
then (1.22) gives $|dz/dt| \leq K|z| + \varepsilon$. Let $X(t) := (K + \varepsilon)|z(t, \mu, \Delta\mu)| + \varepsilon$, so that
$$|z(t, \mu, \Delta\mu)| \leq \left| \int_{t_0}^{t} \frac{dz}{ds}(s, \mu, \Delta\mu) \, ds \right| \leq \int_{t_0}^{t} X(s) \, ds$$
and
$$X(t) \leq \varepsilon + (K + \varepsilon) \int_{t_0}^{t} X(s) \, ds.$$
By the Gronwall Inequality, $X(t) \leq \varepsilon e^{(K + \varepsilon)(t - t_0)}$, so
$$|z(t, \mu, \Delta\mu)| \leq \frac{\varepsilon \left( e^{(K + \varepsilon)(t - t_0)} - 1 \right)}{K + \varepsilon} \to 0$$
as $\varepsilon \downarrow 0$; i.e., $w(t, \mu, \Delta\mu) \to y(t, \mu)$ as $\Delta\mu \to 0$.
2 Linear Systems

2.1 Constant Coefficient Linear Equations

Linear Equations

Definition. Given
$$f : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n,$$
we say that the first-order ODE
$$\dot{x} = f(t, x) \tag{2.1}$$
is linear if $f(t, x) = A(t)x + g(t)$ for some matrix-valued function $A$ and vector-valued function $g$, and homogeneous if, in addition, $g \equiv 0$. The simplest case is the constant coefficient homogeneous IVP
$$\begin{cases} \dot{x} = Ax \\ x(0) = x_0, \end{cases} \tag{2.2}$$
where $A \in L(\mathbb{R}^n, \mathbb{R}^n)$, or equivalently $A$ is a (constant) $n \times n$ matrix.

If $n = 1$, then we're dealing with $\dot{x} = ax$. The solution is $x(t) = e^{ta} x_0$. When $n > 1$, we will show that we can similarly define $e^{tA}$ in a natural way, and the solution of (2.2) will be given by $x(t) = e^{tA} x_0$.
Given $B \in L(\mathbb{R}^n, \mathbb{R}^n)$, we define its matrix exponential
$$e^B := \sum_{k=0}^{\infty} \frac{B^k}{k!}.$$
We will show that this series converges, but first we specify a norm on $L(\mathbb{R}^n, \mathbb{R}^n)$.

Definition. The operator norm $\|B\|$ of an element $B \in L(\mathbb{R}^n, \mathbb{R}^n)$ is defined by
$$\|B\| = \sup_{x \neq 0} \frac{|Bx|}{|x|} = \sup_{|x| = 1} |Bx|.$$
$L(\mathbb{R}^n, \mathbb{R}^n)$ is a Banach space under the operator norm. Thus, to show that the series for $e^B$ converges, it suffices to show that
$$\left\| \sum_{k=\ell}^{m} \frac{B^k}{k!} \right\| \to 0 \quad \text{as } \ell, m \uparrow \infty.$$
Since the operator norm is submultiplicative ($\|B_1 B_2\| \leq \|B_1\| \|B_2\|$),
$$\left\| \sum_{k=\ell}^{m} \frac{B^k}{k!} \right\| \leq \sum_{k=\ell}^{m} \frac{\|B^k\|}{k!} \leq \sum_{k=\ell}^{m} \frac{\|B\|^k}{k!}.$$
Since the regular exponential series (for real arguments) has an infinite radius of convergence, we know that the last quantity in this estimate goes to zero as $\ell, m \uparrow \infty$.

Thus, $e^B$ makes sense, and, in particular, $e^{tA}$ makes sense for each fixed $t \in \mathbb{R}$ and each $A \in L(\mathbb{R}^n, \mathbb{R}^n)$. But does $x(t) := e^{tA} x_0$ solve (2.2)? To check that, we'll need the following important property.
Lemma. If $B_1, B_2 \in L(\mathbb{R}^n, \mathbb{R}^n)$ commute (i.e., $B_1 B_2 = B_2 B_1$), then $e^{B_1} e^{B_2} = e^{B_1 + B_2}$.

Proof. Using commutativity to apply the binomial theorem, we compute
$$\begin{aligned}
e^{B_1} e^{B_2} &= \left( \sum_{j=0}^{\infty} \frac{B_1^j}{j!} \right) \left( \sum_{k=0}^{\infty} \frac{B_2^k}{k!} \right) = \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \frac{B_1^j B_2^k}{j! \, k!} = \sum_{i=0}^{\infty} \sum_{j+k=i} \frac{B_1^j B_2^k}{j! \, k!} \\
&= \sum_{i=0}^{\infty} \sum_{j=0}^{i} \frac{B_1^j B_2^{i-j}}{j! \, (i-j)!} = \sum_{i=0}^{\infty} \frac{1}{i!} \sum_{j=0}^{i} \binom{i}{j} B_1^j B_2^{i-j} = \sum_{i=0}^{\infty} \frac{(B_1 + B_2)^i}{i!} = e^{B_1 + B_2}.
\end{aligned}$$
Now, if $x(t) := e^{tA} x_0$, then
$$\dot{x}(t) = \lim_{h \to 0} \frac{e^{(t+h)A} x_0 - e^{tA} x_0}{h} = \left( \lim_{h \to 0} \frac{e^{hA} - I}{h} \right) e^{tA} x_0 = \left( \lim_{h \to 0} \sum_{k=1}^{\infty} \frac{h^{k-1} A^k}{k!} \right) e^{tA} x_0 = A e^{tA} x_0 = A x(t),$$
where we have used the previous lemma to write $e^{(t+h)A} = e^{hA} e^{tA}$. Since $x(0) = x_0$, the function $x(t) = e^{tA} x_0$ solves (2.2).
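The series definition is directly computable. Below is a small sketch (the code and the particular matrix are our own choices): a truncated power series for $e^B$, checked against the rotation matrix that $e^{tA}$ is known to equal for $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$.

```python
import numpy as np

def expm_series(B, terms=30):
    """Truncated power series for the matrix exponential of B."""
    n = B.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ B / k          # term is now B^k / k!
        result = result + term
    return result

# For A = [[0, -1], [1, 0]], e^{tA} is rotation by angle t.
t = 0.7
A = np.array([[0.0, -1.0], [1.0, 0.0]])
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
err = float(np.max(np.abs(expm_series(t * A) - R)))
```

Since $\|tA\| < 1$ here, the scalar bound from the convergence estimate above shows that 30 terms leave a negligible tail.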
Suppose that $B, P \in L(\mathbb{R}^n, \mathbb{R}^n)$ are related by a similarity transformation; i.e., $B = Q P Q^{-1}$ for some invertible $Q$. Calculating, we find that
$$e^B = \sum_{k=0}^{\infty} \frac{B^k}{k!} = \sum_{k=0}^{\infty} \frac{(Q P Q^{-1})^k}{k!} = \sum_{k=0}^{\infty} \frac{Q P^k Q^{-1}}{k!} = Q \left( \sum_{k=0}^{\infty} \frac{P^k}{k!} \right) Q^{-1} = Q e^P Q^{-1}.$$
Eigensystems

Suppose $A$ has $n$ linearly independent eigenvectors $x_1, \ldots, x_n$ with corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$. More specifically,
$$A = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^{-1}.$$
If the eigenvalues of $A$ are real and distinct, then this means that
$$tA = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} t\lambda_1 & 0 & \cdots & 0 \\ 0 & t\lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & t\lambda_n \end{bmatrix} \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^{-1},$$
so
$$e^{tA} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} e^{t\lambda_1} & 0 & \cdots & 0 \\ 0 & e^{t\lambda_2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & e^{t\lambda_n} \end{bmatrix} \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^{-1}.$$
This formula should make clear how the projections of $e^{tA} x_0$ grow or decay as $t \to \infty$.
The same sort of analysis works when the eigenvalues are (nontrivially) complex, but the resulting formula is not as enlightening. In addition to the difficulty of a complex change of basis, the behavior of $e^{t\lambda_k}$ is less clear when $\lambda_k$ is not real.

One way around this is the following. Sort the eigenvalues (and eigenvectors) of $A$ so that complex conjugate eigenvalues $\{\lambda_1, \bar{\lambda}_1, \ldots, \lambda_m, \bar{\lambda}_m\}$ come first and are grouped together and so that real eigenvalues $\{\lambda_{m+1}, \ldots, \lambda_r\}$ come last. For $k \leq m$, set $a_k = \operatorname{Re} \lambda_k \in \mathbb{R}$, $b_k = \operatorname{Im} \lambda_k \in \mathbb{R}$, $y_k = \operatorname{Re} x_k \in \mathbb{R}^n$, and $z_k = \operatorname{Im} x_k \in \mathbb{R}^n$. Then
$$A y_k = A \operatorname{Re} x_k = \operatorname{Re} (A x_k) = \operatorname{Re} (\lambda_k x_k) = (\operatorname{Re} \lambda_k)(\operatorname{Re} x_k) - (\operatorname{Im} \lambda_k)(\operatorname{Im} x_k) = a_k y_k - b_k z_k,$$
and
$$A z_k = A \operatorname{Im} x_k = \operatorname{Im} (A x_k) = \operatorname{Im} (\lambda_k x_k) = (\operatorname{Im} \lambda_k)(\operatorname{Re} x_k) + (\operatorname{Re} \lambda_k)(\operatorname{Im} x_k) = b_k y_k + a_k z_k.$$
Thus, $A = Q M Q^{-1}$, where
$$Q = \begin{bmatrix} z_1 & y_1 & \cdots & z_m & y_m & x_{m+1} & \cdots & x_r \end{bmatrix}$$
and $M$ is block diagonal, with $2 \times 2$ blocks
$$A_k = \begin{bmatrix} a_k & -b_k \\ b_k & a_k \end{bmatrix}$$
for $k \leq m$ followed by the diagonal entries $\lambda_{m+1}, \ldots, \lambda_r$. Each $2 \times 2$ block corresponds to the planar IVP
$$\begin{cases} \dot{x} = a_k x - b_k y \\ \dot{y} = b_k x + a_k y \\ x(0) = c \\ y(0) = d, \end{cases} \tag{2.3}$$
whose solution is obtained by applying
$$e^{tA_k} = \begin{bmatrix} e^{a_k t} \cos b_k t & -e^{a_k t} \sin b_k t \\ e^{a_k t} \sin b_k t & e^{a_k t} \cos b_k t \end{bmatrix}$$
to the initial data $(c, d)$.
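We can verify the closed form above against a brute-force power series for one concrete choice of $a$, $b$, and $t$ (the numbers below are our own):

```python
import numpy as np

def expm_series(B, terms=40):
    """Truncated power series for the matrix exponential of B."""
    out = np.eye(B.shape[0])
    term = np.eye(B.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        out = out + term
    return out

a, b, t = -0.3, 2.0, 1.1
Ak = np.array([[a, -b],
               [b,  a]])
# Closed form: e^{t A_k} = e^{at} * (rotation by angle bt).
closed = np.exp(a * t) * np.array([[np.cos(b * t), -np.sin(b * t)],
                                   [np.sin(b * t),  np.cos(b * t)]])
err = float(np.max(np.abs(expm_series(t * Ak) - closed)))
```

This reflects the decomposition $A_k = aI + bJ$ with $J^2 = -I$: the $aI$ part contributes the scalar factor $e^{at}$ and the $bJ$ part the rotation.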
Definition. Given subspaces $V_1, \ldots, V_m$ of a vector space $V$, we say that $V$ is the direct sum of $V_1, \ldots, V_m$ and write
$$V = V_1 \oplus \cdots \oplus V_m$$
if for each $v \in V$ there is a unique $(v_1, \ldots, v_m) \in V_1 \times \cdots \times V_m$ such that $v = v_1 + \cdots + v_m$.

Theorem. (Primary Decomposition Theorem) Let $B$ be an operator on $E$, where $E$ is a complex vector space, or else $E$ is real and $B$ has real eigenvalues. Then $E$ is the direct sum of the generalized eigenspaces of $B$. The dimension of each generalized eigenspace is the algebraic multiplicity of the corresponding eigenvalue.
Before proving this theorem, we introduce some notation and state and prove two lemmas.

Given $T : V \to V$, let
$$N(T) = \left\{ x \in V : T^k x = 0 \text{ for some } k > 0 \right\},$$
and let
$$R(T) = \left\{ x \in V : T^k u = x \text{ has a solution } u \text{ for every } k > 0 \right\}.$$
Note that $N(T)$ is the union of the null spaces of the positive powers of $T$ and $R(T)$ is the intersection of the ranges of the positive powers of $T$. This union and intersection are each nested, and that implies that there is a number $m \in \mathbb{N}$ such that $R(T)$ is the range of $T^m$ and $N(T)$ is the nullspace of $T^m$.
Lemma. If $T : V \to V$, then $V = N(T) \oplus R(T)$.

Proof. Pick $m$ such that $R(T)$ is the range of $T^m$ and $N(T)$ is the nullspace of $T^m$. Note that $T|_{R(T)} : R(T) \to R(T)$ is invertible. Given $x \in V$, let
$$y = \left( T|_{R(T)} \right)^{-m} T^m x$$
and $z = x - y$. Clearly, $x = y + z$, $y \in R(T)$, and $T^m z = T^m x - T^m y = 0$, so $z \in N(T)$. If $x = \tilde{y} + \tilde{z}$ for some other $\tilde{y} \in R(T)$ and $\tilde{z} \in N(T)$, then $T^m \tilde{y} = T^m x - T^m \tilde{z} = T^m x$, so $\tilde{y} = y$ and $\tilde{z} = z$.
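A numerical illustration of the lemma (the matrices are our own example): for a block-diagonal $T$ built from a nilpotent block and an invertible block, the ranks of the powers $T^k$ stabilize at $\dim R(T)$, and $\dim N(T) + \dim R(T) = \dim V$.

```python
import numpy as np

Nil = np.array([[0.0, 0.0],
                [1.0, 0.0]])           # nilpotent: Nil^2 = 0
Inv = np.array([[2.0, 1.0],
                [0.0, 3.0]])           # invertible
T = np.block([[Nil, np.zeros((2, 2))],
              [np.zeros((2, 2)), Inv]])

ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(T, k)))
         for k in (1, 2, 3)]
# rank T = 3; rank T^2 = rank T^3 = 2.  The ranges stabilize at R(T),
# so dim R(T) = 2 and dim N(T) = 4 - 2 = 2.
```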
Lemma. If $\lambda_j, \lambda_k$ are distinct eigenvalues of $T : V \to V$, then
$$N(T - \lambda_j I) \subseteq R(T - \lambda_k I).$$

Proof. It suffices to show that
$$(T - \lambda_k I)|_{N(T - \lambda_j I)} : N(T - \lambda_j I) \to N(T - \lambda_j I)$$
is invertible. Pick $m_j$ such that $N(T - \lambda_j I)$ is the nullspace of $(T - \lambda_j I)^{m_j}$. Suppose $x \in N(T - \lambda_j I)$ satisfies $(T - \lambda_k I) x = 0$; i.e., $T x = \lambda_k x$. Then
$$(T - \lambda_j I) x = T x - \lambda_j x = (\lambda_k - \lambda_j) x, \qquad (T - \lambda_j I)^2 x = (\lambda_k - \lambda_j)^2 x, \qquad \ldots, \qquad (T - \lambda_j I)^{m_j} x = (\lambda_k - \lambda_j)^{m_j} x.$$
Since $(T - \lambda_j I)^{m_j} x = 0$ and $\lambda_k \neq \lambda_j$, we must have $x = 0$. Hence,
$$(T - \lambda_k I) N(T - \lambda_j I) = N(T - \lambda_j I),$$
so
$$(T - \lambda_k I)^m N(T - \lambda_j I) = N(T - \lambda_j I)$$
for every $m > 0$, and therefore
$$N(T - \lambda_j I) \subseteq R(T - \lambda_k I).$$

Applying the two lemmas to the distinct eigenvalues $\lambda_1, \ldots, \lambda_q$ of $B$, one peels off the generalized eigenspaces $N(B - \lambda_1 I), N(B - \lambda_2 I), \ldots, N(B - \lambda_{q-1} I)$ in succession, so
$$E = N(B - \lambda_1 I) \oplus N(B - \lambda_2 I) \oplus \cdots \oplus N(B - \lambda_{q-1} I) \oplus N(B - \lambda_q I).$$
Now, by the second lemma, we know that $B|_{N(B - \lambda_k I)}$ has $\lambda_k$ as its only eigenvalue, so $\dim N(B - \lambda_k I) \leq n_k$, where $n_k$ is the algebraic multiplicity of $\lambda_k$. Since
$$\sum_{k=1}^{q} n_k = \dim E = \sum_{k=1}^{q} \dim N(B - \lambda_k I),$$
we must have $\dim N(B - \lambda_k I) = n_k$.
Lemma. The vectors $x, Sx, \ldots, S^{\operatorname{nil}(x) - 1} x$ form a basis of the cyclic subspace $Z(x)$.

Proof. Obviously these vectors span $Z(x)$; the question is whether they are linearly independent. If they were not, we could write down a linear combination $\alpha_1 S^{p_1} x + \cdots + \alpha_k S^{p_k} x$, with $\alpha_j \neq 0$ and $0 \leq p_1 < p_2 < \cdots < p_k \leq \operatorname{nil}(x) - 1$, that added up to zero. Applying $S^{\operatorname{nil}(x) - p_1 - 1}$ to this linear combination would yield $\alpha_1 S^{\operatorname{nil}(x) - 1} x = 0$, contradicting the definition of $\operatorname{nil}(x)$.
Theorem. If $S : V \to V$ is nilpotent, then $V$ can be written as the direct sum of cyclic subspaces of $S$ on $V$. The dimensions of these subspaces are determined by the operator $S$.

Proof. The proof is inductive on the dimension of $V$. It is clearly true if $\dim V = 0$ or 1. Assume it is true for all operators on spaces of dimension less than $\dim V$.

Step 1: The dimension of $SV$ is less than the dimension of $V$.
If this weren't the case, then $S$ would be invertible and could not possibly be nilpotent.

Step 2: For some $k \in \mathbb{N}$ and for some nonzero $y_j \in SV$, $j = 1, \ldots, k$,
$$SV = Z(y_1) \oplus \cdots \oplus Z(y_k). \tag{2.4}$$
(2.4)
1S
nil.xj / 1
xj
(2.6)
2S
nil.xj / 2
yj 2 Z.yj /:
35
2. L INEAR S YSTEMS
If
1S
nil.xj / 1
xj
then by Step 5
0 D Szj D 0 yj C 1 Syj C C nil.xj /
2S
nil.xj / 2
yj :
Since nil.xj / 2 D nil.yj / 1, the vectors in this linear combination are linearly
independent; thus, i D 0 for i D 0; : : : ; nil.xj / 2. In particular, 0 D 0, so
zj D 1 yj C C nil.xj /
1S
nil.xj / 2
yj 2 Z.yj /:
sj D 0 yj C 1 Syj C C nil.yj /
1S
uj D 0 xj C 1 Sxj C C nil.yj /
1S
yj ;
let
nil.yj / 1
xj ;
u/ D Sx
Su D .s1 C C sk /
.s1 C C sk / D 0;
Each cyclic subspace $Z(x)$ is invariant under $S$, with the corresponding basis being $\{x, Sx, \ldots, S^{\operatorname{nil}(x) - 1} x\}$. Thus, if $\lambda$ is an eigenvalue of an operator $T$, then the restriction of $T$ to a cyclic subspace of $T - \lambda I$ on the generalized eigenspace $N(T - \lambda I)$ can be represented by a matrix of the form
$$\begin{bmatrix}
\lambda & 0 & \cdots & \cdots & 0 \\
1 & \lambda & \ddots & & \vdots \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & 1 & \lambda
\end{bmatrix}. \tag{2.7}$$
Similar considerations apply when $\lambda = a + bi$ is nontrivially complex, with $y_j$ and $z_j$ denoting real and imaginary parts of the cyclic basis vectors: $T y_j = a y_j - b z_j + y_{j+1}$ and $T z_j = b y_j + a z_j + z_{j+1}$ for $j = 0, \ldots, k - 1$, and $T y_k = a y_k - b z_k$ and $T z_k = b y_k + a z_k$. The $2k + 2$ real vectors $\{z_0, y_0, \ldots, z_k, y_k\}$ span $Z(x, T - \lambda I) \oplus Z(\bar{x}, T - \bar{\lambda} I)$ over $\mathbb{C}$ and also span a $(2k + 2)$-dimensional space over $\mathbb{R}$ that is invariant under $T$. On this real vector space, the action of $T$ can be represented by the matrix
$$\begin{bmatrix}
D & & & \\
I_2 & D & & \\
& \ddots & \ddots & \\
& & I_2 & D
\end{bmatrix}, \qquad \text{where } D = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} \text{ and } I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. \tag{2.8}$$

The restriction of $T$ to an entire generalized eigenspace is represented by a block diagonal matrix
$$\begin{bmatrix}
B_1 & & \\
& \ddots & \\
& & B_s
\end{bmatrix} \tag{2.9}$$
if the eigenvalue is real, with blocks $B_i$ of the form (2.7) running down the diagonal. If the eigenvalue is complex, then the matrix representation is similar to (2.9) but with blocks of the form (2.8) instead of the form (2.7) on the diagonal.

Finally, the matrix representation of the entire operator is block diagonal, with blocks of the form (2.9) (or its counterpart for complex eigenvalues). This is called the real canonical form. If we specify the order in which blocks should appear, then matrices are similar if and only if they have the same real canonical form.
Computing $e^{tA}$

Given an operator $A \in L(\mathbb{R}^n, \mathbb{R}^n)$, let $M$ be its real canonical form. Write $M = S + N$, where $S$ has $M$'s diagonal elements $\lambda_k$ and diagonal blocks
$$\begin{bmatrix} a_k & -b_k \\ b_k & a_k \end{bmatrix}$$
and 0's elsewhere, and $N$ has $M$'s off-diagonal 1's and $2 \times 2$ identity matrices. If you consider the restrictions of $S$ and $N$ to each of the cyclic subspaces of $A - \lambda I$ into which the generalized eigenspace $N(A - \lambda I)$ of $A$ is decomposed, you'll probably be able to see that these restrictions commute. As a consequence of this fact (and the way $\mathbb{R}^n$ can be represented in terms of these cyclic subspaces), $S$ and $N$ commute. Thus $e^{tM} = e^{tS} e^{tN}$.

Now, $e^{tS}$ has $e^{\lambda_k t}$ where $S$ has $\lambda_k$, and has
$$\begin{bmatrix} e^{a_k t} \cos b_k t & -e^{a_k t} \sin b_k t \\ e^{a_k t} \sin b_k t & e^{a_k t} \cos b_k t \end{bmatrix}$$
where $S$ has
$$\begin{bmatrix} a_k & -b_k \\ b_k & a_k \end{bmatrix}.$$
The series definition can be used to compute $e^{tN}$, since the fact that $N$ is nilpotent implies that the series is actually a finite sum. The entries of $e^{tN}$ will be polynomials in $t$. For example,
$$\begin{bmatrix}
0 & & & \\
1 & 0 & & \\
& \ddots & \ddots & \\
& & 1 & 0
\end{bmatrix}
\;\mapsto\;
\begin{bmatrix}
1 & & & \\
t & 1 & & \\
\vdots & \ddots & \ddots & \\
t^m / m! & \cdots & t & 1
\end{bmatrix}$$
and
$$\begin{bmatrix}
0 & & & \\
I_2 & 0 & & \\
& \ddots & \ddots & \\
& & I_2 & 0
\end{bmatrix}
\;\mapsto\;
\begin{bmatrix}
I_2 & & & \\
t I_2 & I_2 & & \\
\vdots & \ddots & \ddots & \\
(t^m / m!) I_2 & \cdots & t I_2 & I_2
\end{bmatrix},$$
where in the second formula the 0's and $I_2$'s denote $2 \times 2$ zero and identity blocks.

Identifying $A$ with its matrix representation with respect to the standard basis, we have $A = P M P^{-1}$ for some invertible matrix $P$. Consequently, $e^{tA} = P e^{tM} P^{-1}$. Thus, the entries of $e^{tA}$ will be linear combinations of polynomials times exponentials or polynomials times exponentials times trigonometric functions.
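A sketch of the finite-sum phenomenon (our own example): for a single $3 \times 3$ nilpotent block, the exponential series terminates after three terms, and its entries are exactly the polynomials $1$, $t$, $t^2/2$ displayed above.

```python
import numpy as np

def expm_nilpotent(B):
    """Matrix exponential of a nilpotent B; the series terminates."""
    out = np.eye(B.shape[0])
    term = np.eye(B.shape[0])
    k = 1
    while np.any(term != 0.0):
        term = term @ B / k      # term is B^k / k!; eventually exactly zero
        out = out + term
        k += 1
    return out

t = 2.5
N = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # single nilpotent block: N^3 = 0

etN = expm_nilpotent(t * N)
expected = np.array([[1.0,          0.0, 0.0],
                     [t,            1.0, 0.0],
                     [t ** 2 / 2.0, t,   1.0]])
```

The `while` loop exits as soon as the running term becomes the zero matrix, which happens at $k = \operatorname{nil}$ for any nilpotent input.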
Exercise 8. Compute $e^{tA}$ (and justify your computations) if

1. $A = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{bmatrix}$;

2. $A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \end{bmatrix}$;

3. $A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$, with $b \neq 0$.
The phase portraits of the planar system $\dot{u} = A u$, with $A$ in real canonical form, fall into the following cases:

1. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $\mu < 0 < \lambda$: saddle.
2. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $\mu < \lambda < 0$: stable node.
3. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $\lambda = \mu < 0$: stable node.
4. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $0 < \lambda < \mu$: unstable node.
5. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $0 < \lambda = \mu$: unstable node.
6. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $\mu < \lambda = 0$: degenerate.
7. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $0 = \lambda < \mu$: degenerate.
8. $A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}$, $0 = \lambda = \mu$: degenerate.
9. $A = \begin{bmatrix} \lambda & 0 \\ 1 & \lambda \end{bmatrix}$, $\lambda < 0$: stable node.
10. $A = \begin{bmatrix} \lambda & 0 \\ 1 & \lambda \end{bmatrix}$, $0 < \lambda$: unstable node.
11. $A = \begin{bmatrix} \lambda & 0 \\ 1 & \lambda \end{bmatrix}$, $\lambda = 0$: degenerate.
12. $A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$, $a < 0 < b$: stable spiral.
13. $A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$, $b < 0 < a$: unstable spiral.
14. $A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$, $a = 0$, $b > 0$: center.

[Figures: the corresponding phase portraits in the $(u_1, u_2)$-plane.]

If $A$ is not in real canonical form, then the phase portrait should look (topologically) similar but may be rotated, flipped, skewed, and/or stretched.
Figure: diagram of the parameter plane indicating the regions corresponding to saddles, stable and unstable nodes, stable and unstable spirals, centers, and the degenerate cases on the boundaries between them.
$$E^u = \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda > 0 \\ \operatorname{Im}\lambda \ge 0}} \bigl\{\operatorname{Re} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr) \oplus \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda > 0 \\ \operatorname{Im}\lambda \ge 0}} \bigl\{\operatorname{Im} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr),$$
$$E^s = \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda < 0 \\ \operatorname{Im}\lambda \ge 0}} \bigl\{\operatorname{Re} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr) \oplus \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda < 0 \\ \operatorname{Im}\lambda \ge 0}} \bigl\{\operatorname{Im} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr),$$
and
$$E^c = \mathcal N(A) \oplus \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda = 0 \\ \operatorname{Im}\lambda > 0}} \bigl\{\operatorname{Re} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr) \oplus \Bigl(\bigoplus_{\substack{\operatorname{Re}\lambda = 0 \\ \operatorname{Im}\lambda > 0}} \bigl\{\operatorname{Im} u \bigm| u \in \mathcal N(A - \lambda I)\bigr\}\Bigr).$$
From our previous study of the real canonical form, we know that
$$\mathbb R^n = E^u \oplus E^s \oplus E^c.$$
We call $E^u$ the unstable space of $A$, $E^s$ the stable space of $A$, and $E^c$ the center space of $A$.
Each of these subspaces of $\mathbb R^n$ is invariant under the differential equation
$$\dot x = Ax. \tag{2.10}$$
Proof. Since equivalence of norms is transitive, it suffices to prove that every norm $N : \mathbb R^n \to \mathbb R$ is equivalent to the standard Euclidean norm $|\cdot|$.
Given an arbitrary norm $N$, and letting $x_i$ be the projection of $x \in \mathbb R^n$ onto the $i$th standard basis vector $e_i$, note that
$$N(x) = N\Bigl(\sum_{i=1}^n x_i e_i\Bigr) \le \sum_{i=1}^n |x_i|\,N(e_i) \le \sum_{i=1}^n |x|\,N(e_i) = \Bigl(\sum_{i=1}^n N(e_i)\Bigr)|x|.$$
This shows half of equivalence; it also shows that $N$ is continuous, since, by the triangle inequality,
$$|N(x) - N(y)| \le N(x - y) \le \Bigl(\sum_{i=1}^n N(e_i)\Bigr)|x - y|.$$
The set $S := \{x \in \mathbb R^n \mid |x| = 1\}$ is clearly closed and bounded and, therefore, compact, so by the extreme value theorem, $N$ must achieve a minimum on $S$. Since $N$ is a norm (and is, therefore, positive definite), this minimum must be strictly positive; call it $k$. Then for any $x \ne 0$,
$$N(x) = N\Bigl(|x|\frac{x}{|x|}\Bigr) = |x|\,N\Bigl(\frac{x}{|x|}\Bigr) \ge k|x|,$$
which completes the proof.
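The constants in this proof are explicit, which makes it easy to check numerically. A sketch for the 1-norm on $\mathbb R^5$, where $C = \sum_i N(e_i) = n$ and $k = 1$ is the minimum of the 1-norm on the Euclidean unit sphere:

```python
import numpy as np

# For any norm N: k|x| <= N(x) <= (sum_i N(e_i)) |x|, with k the minimum
# of N on the Euclidean unit sphere.  Checked here for N = the 1-norm.
rng = np.random.default_rng(0)
n = 5
C = sum(np.linalg.norm(e, 1) for e in np.eye(n))  # sum_i N(e_i) = n
k = 1.0  # min of the 1-norm over the Euclidean sphere (attained at +-e_i)
for _ in range(1000):
    x = rng.standard_normal(n)
    assert k * np.linalg.norm(x) <= np.linalg.norm(x, 1) <= C * np.linalg.norm(x) + 1e-12
print("equivalence constants k = 1, C = n hold for the 1-norm")
```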
$$E^u = \bigcup_{c>0}\Bigl\{x \in \mathbb R^n \Bigm| \lim_{t\downarrow-\infty}|e^{-ct}e^{tA}x| = 0\Bigr\}, \tag{2.11}$$
$$E^s = \bigcup_{c>0}\Bigl\{x \in \mathbb R^n \Bigm| \lim_{t\uparrow\infty}|e^{ct}e^{tA}x| = 0\Bigr\}, \tag{2.12}$$
and
$$E^c = \bigcap_{c>0}\Bigl\{x \in \mathbb R^n \Bigm| \lim_{t\downarrow-\infty}|e^{ct}e^{tA}x| = \lim_{t\uparrow\infty}|e^{-ct}e^{tA}x| = 0\Bigr\}. \tag{2.13}$$
Set
$$c = -\frac{\max\{\operatorname{Re}\lambda \mid \lambda \in \sigma(A),\ \operatorname{Re}\lambda < 0\}}{2},$$
so
$$\limsup_{t\uparrow\infty}\|e^{ct}e^{tA}x\| = \infty.$$
If
$$x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}$$
Exponential Decay
Theorem.
(a) If $e^{tA}$ is an expansion then there is some norm $\|\cdot\|$ and some constant $b > 0$ such that
$$\|e^{tA}x\| \ge e^{tb}\|x\|$$
for every $t \ge 0$ and $x \in \mathbb R^n$.
(b) If $e^{tA}$ is a contraction then there is some norm $\|\cdot\|$ and some constant $b > 0$ such that
$$\|e^{tA}x\| \le e^{-tb}\|x\|$$
for every $t \ge 0$ and $x \in \mathbb R^n$.
Proof. The idea of the proof is to pick a basis with respect to which $A$ is represented by a matrix like the real canonical form but with some small constant $\varepsilon > 0$ in place of the off-diagonal 1s. (This can be done by rescaling.) If the Euclidean norm with respect to this basis is used, the desired estimates hold. The details of the proof may be found in Chapter 7, §1, of Hirsch and Smale.
Exercise 9
(a) Show that if $e^{tA}$ and $e^{tB}$ are both contractions on $\mathbb R^n$, and $BA = AB$, then $e^{t(A+B)}$ is a contraction.
(b) Give a concrete example that shows that (a) can fail if the assumption that $AB = BA$ is dropped.

(c) there exist constants $M, N > 0$ such that $M < |x(t)| < N$ for all $t \in \mathbb R$.
Is what they ask you to prove true? If so, prove it. If not, determine what
other possible alternatives exist, and prove that you have accounted for all
possibilities.
Solution Formulas
In the scalar, or one-dimensional, version of (2.16)
$$\dot x = a(t)x, \tag{2.17}$$
we have the formula
$$x(t) = e^{\int_{t_0}^{t} a(\tau)\,d\tau}x_0$$
for the solution of (2.17) that satisfies the initial condition $x(t_0) = x_0$.
It seems like the analogous formula for the solution of (2.16) with initial condition $x(t_0) = x_0$ should be
$$x(t) = e^{\int_{t_0}^{t} A(\tau)\,d\tau}x_0. \tag{2.18}$$
Certainly, the right-hand side of (2.18) makes sense (assuming that A is continuous). But does it give the correct answer?
Let's consider a specific example. Let
$$A(t) = \begin{bmatrix} 0 & 0 \\ 1 & t \end{bmatrix}$$
and $t_0 = 0$. Note that
$$\int_0^t A(\tau)\,d\tau = \begin{bmatrix} 0 & 0 \\ t & t^2/2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -2/t & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & t^2/2 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 2/t & 1 \end{bmatrix},$$
so
$$e^{\int_0^t A(\tau)\,d\tau} = \begin{bmatrix} 1 & 0 \\ -2/t & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & e^{t^2/2} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 2/t & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \frac{2}{t}\bigl(e^{t^2/2} - 1\bigr) & e^{t^2/2} \end{bmatrix}.$$
On the other hand, we can solve
$$\dot x_1 = 0, \qquad \dot x_2 = x_1 + tx_2$$
directly. Clearly $x_1(t) = \alpha$ for some constant $\alpha$. Plugging this into the equation for $x_2$, we have a first-order scalar equation which can be solved by finding an integrating factor. This yields
$$x_2(t) = \beta e^{t^2/2} + \alpha e^{t^2/2}\int_0^t e^{-s^2/2}\,ds$$
for some constant $\beta$. Since $x_1(0) = \alpha$ and $x_2(0) = \beta$, the solution of (2.16) is
$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ e^{t^2/2}\int_0^t e^{-s^2/2}\,ds & e^{t^2/2} \end{bmatrix}\begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix}.$$
Since
$$e^{t^2/2}\int_0^t e^{-s^2/2}\,ds \ne \frac{2}{t}\bigl(e^{t^2/2} - 1\bigr),$$
the formula (2.18) does not give the correct answer here.
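This example can be checked numerically by integrating $\dot x = A(t)x$ directly and comparing against both the integrating-factor solution and the guess (2.18); a sketch using a plain RK4 integrator:

```python
import numpy as np

def A(t):
    return np.array([[0.0, 0.0], [1.0, t]])

def rk4(f, x0, t1, n=20000):
    """Classical Runge-Kutta integration of x' = f(t, x) on [0, t1]."""
    h = t1 / n
    t, x = 0.0, np.array(x0, float)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2)
        k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

t1 = 1.0
x0 = np.array([1.0, 1.0])
x_num = rk4(lambda t, x: A(t) @ x, x0, t1)

# closed-form solution from the integrating factor (alpha = beta = 1)
s = np.linspace(0.0, t1, 20001)
g = np.exp(-s**2/2)
integral = float(((g[:-1] + g[1:]) * (s[1] - s[0]) / 2).sum())  # trapezoid rule
x_true = np.array([1.0, np.exp(t1**2/2) * (1.0 + integral)])

# the guess (2.18): x(t) = exp(int A) x0, with the exponential computed above
x_guess = np.array([[1.0, 0.0],
                    [2/t1 * (np.exp(t1**2/2) - 1.0), np.exp(t1**2/2)]]) @ x0

print(np.allclose(x_num, x_true, atol=1e-5))   # the direct solution is correct
print(np.allclose(x_num, x_guess, atol=1e-2))  # (2.18) gives a different answer
```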
$$\lim_{h\to 0}\frac{e^{\int_0^{t+h}A(\tau)\,d\tau} - e^{\int_0^{t}A(\tau)\,d\tau}}{h},$$
which we would like to rewrite using $e^{\int_0^{t+h}A(\tau)\,d\tau} = e^{\int_0^{t}A(\tau)\,d\tau}e^{\int_t^{t+h}A(\tau)\,d\tau}$; but this step is only valid when the two exponents commute.
Definition. If $x^{(1)}, x^{(2)}, \ldots, x^{(n)}$ are linearly independent solutions of (2.16) (i.e., no nontrivial linear combination gives the zero function) then the matrix
$$X(t) := \begin{bmatrix} x^{(1)}(t) & \cdots & x^{(n)}(t) \end{bmatrix}$$
is called a fundamental matrix for (2.16).
Suppose
$$\sum_{k=1}^n c_k x^{(k)} = 0$$
for some constants $c_1, \ldots, c_n$ with $\sum_{k=1}^n c_k^2 \ne 0$. Then $\sum_{k=1}^n c_k x^{(k)}(t)$ is 0 for every $t$, so the columns of the Wronskian $W(t)$ are linearly dependent for every $t$. This means $W \equiv 0$.
Conversely, suppose that the Wronskian $W$ of $n$ solutions $x^{(1)}, \ldots, x^{(n)}$ is identically zero. In particular, $W(0) = 0$, so $x^{(1)}(0), \ldots, x^{(n)}(0)$ are linearly dependent vectors. Pick constants $c_1, \ldots, c_n$, with $\sum_{k=1}^n c_k^2$ nonzero, such that $\sum_{k=1}^n c_k x^{(k)}(0) = 0$. The function $\sum_{k=1}^n c_k x^{(k)}$ is a solution of (2.16) that is 0 when $t = 0$, but so is the function that is identically zero. By uniqueness of solutions, $\sum_{k=1}^n c_k x^{(k)} \equiv 0$; i.e., $x^{(1)}, \ldots, x^{(n)}$ are linearly dependent.
Note that this proof also shows that if the Wronskian of n solutions of (2.16)
is zero for some t, then it is zero for all t.
What if we're dealing with $n$ arbitrary vector-valued functions (that are not necessarily solutions of (2.16))? If they are linearly dependent then their Wronskian is identically zero, but the converse is not true. For example,
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} t \\ 0 \end{bmatrix}$$
have a Wronskian that is identically zero, but they are not linearly dependent. Also, $n$ functions can have a Wronskian that is zero for some $t$ and nonzero for other $t$. Consider, for example,
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 0 \\ t \end{bmatrix}.$$
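Both counterexamples are easy to verify symbolically; a small sketch with sympy:

```python
import sympy as sp

t = sp.symbols('t')

# Linearly independent vector-valued functions whose Wronskian is
# identically zero -- possible because they are arbitrary functions,
# not solutions of a common linear system.
f = sp.Matrix([1, 0])
g = sp.Matrix([t, 0])
W = sp.Matrix.hstack(f, g).det()
print(sp.simplify(W))  # 0 for every t

# Functions whose Wronskian vanishes at t = 0 but not elsewhere.
h = sp.Matrix([0, t])
W2 = sp.Matrix.hstack(f, h).det()
print(W2)  # t
```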
Initial-Value Problems
Given a fundamental matrix $X(t)$ for (2.16), define $G(t, t_0)$ to be the quantity $X(t)X(t_0)^{-1}$. We claim that $x(t) := G(t, t_0)v$ solves the IVP
$$\begin{cases} \dot x = A(t)x \\ x(t_0) = v. \end{cases}$$
To verify this, note that
$$\frac{d}{dt}x = \frac{d}{dt}\bigl(X(t)X(t_0)^{-1}v\bigr) = A(t)X(t)X(t_0)^{-1}v = A(t)x,$$
and
$$x(t_0) = G(t_0, t_0)v = X(t_0)X(t_0)^{-1}v = v.$$
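This claim can be tested numerically: build $X(t)$ column by column from the solutions starting at the standard basis vectors, then compare $G(t, t_0)v$ with direct integration from $t_0$. A sketch (using the example matrix $A(t)$ from above, though any continuous $A(t)$ works):

```python
import numpy as np

def A(t):
    return np.array([[0.0, 0.0], [1.0, t]])

def flow(x0, t0, t1, n=4000):
    """Integrate x' = A(t) x from t0 to t1 with classical RK4."""
    h = (t1 - t0) / n
    t, x = t0, np.array(x0, float)
    for _ in range(n):
        k1 = A(t) @ x
        k2 = A(t + h/2) @ (x + h/2*k1)
        k3 = A(t + h/2) @ (x + h/2*k2)
        k4 = A(t + h) @ (x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# fundamental matrix X(t): columns are solutions starting from e1, e2 at t = 0
t0, t1 = 0.5, 1.5
X = lambda t: np.column_stack([flow(e, 0.0, t) for e in np.eye(2)])
G = X(t1) @ np.linalg.inv(X(t0))             # G(t1, t0) = X(t1) X(t0)^{-1}

v = np.array([0.3, -0.7])
print(np.allclose(G @ v, flow(v, t0, t1), atol=1e-6))  # True
```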
Inhomogeneous Equations
Consider the IVP
$$\begin{cases} \dot x = A(t)x + f(t) \\ x(t_0) = x_0. \end{cases} \tag{2.19}$$
In light of the results from the previous section when $f$ was identically zero, it's reasonable to look for a solution $x$ of (2.19) of the form $x(t) := G(t, t_0)y(t)$, where $G$ is as before, and $y$ is some vector-valued function.
Note that
$$\dot x(t) = A(t)X(t)X(t_0)^{-1}y(t) + X(t)X(t_0)^{-1}\dot y(t),$$
so $x$ solves (2.19) if
$$\dot y(t) = X(t_0)X(t)^{-1}f(t), \tag{2.20}$$
i.e., if
$$y(t) = x_0 + \int_{t_0}^{t} X(t_0)X(s)^{-1}f(s)\,ds.$$
This gives
$$x(t) = G(t, t_0)x_0 + \int_{t_0}^{t} G(t, s)f(s)\,ds,$$
since $G(t, t_0)G(t_0, s) = G(t, s)$. This is called the Variation of Constants formula or the Variation of Parameters formula.
$$\dot x = Ax. \tag{2.22}$$
Lemma. (Generalized Gronwall Inequality) Suppose that $X$ and $\Phi$ are continuous and nonnegative, $C \ge 0$, and $X(t) \le C + \int_{t_0}^{t}\Phi(s)X(s)\,ds$ for all $t \ge t_0$. Then
$$X(t) \le C e^{\int_{t_0}^{t}\Phi(s)\,ds}.$$
Proof. The proof is very similar to the proof of the standard Gronwall inequality. The details are left to the reader.
The first main result deals with the case when $A(t)$ converges to $A$ sufficiently quickly as $t \uparrow \infty$.
Theorem. Suppose that each solution of (2.22) remains bounded as $t \uparrow \infty$ and that, for some $t_0 \in \mathbb R$,
$$\int_{t_0}^{\infty}\|A(t) - A\|\,dt < \infty, \tag{2.23}$$
where $\|\cdot\|$ is the standard operator norm. Then each solution of (2.21) remains bounded as $t \uparrow \infty$.
Proof. Let $t_0$ be such that (2.23) holds. Given a solution $x$ of (2.21), let $f(t) = (A(t) - A)x(t)$, and note that $x$ satisfies the constant-coefficient inhomogeneous problem
$$\dot x = Ax + f(t). \tag{2.24}$$
Since the matrix exponential provides a fundamental matrix solution to constant-coefficient linear systems, applying the variation of constants formula to (2.24) yields
$$x(t) = e^{(t-t_0)A}x(t_0) + \int_{t_0}^{t} e^{(t-s)A}(A(s) - A)x(s)\,ds. \tag{2.25}$$
Since every solution of (2.22) remains bounded, there is a constant $M > 0$ such that $\|e^{tA}\| \le M$ for all $t \ge 0$; hence
$$|x(t)| \le M|x(t_0)| + \int_{t_0}^{t} M\|A(s) - A\|\,|x(s)|\,ds.$$
Setting $X(t) = |x(t)|$, $\Phi(t) = M\|A(t) - A\|$, and $C = M|x(t_0)|$, and applying the generalized Gronwall inequality, we find that
$$|x(t)| \le M|x(t_0)|e^{M\int_{t_0}^{t}\|A(s) - A\|\,ds},$$
which is bounded, by (2.23).
$$|x(t)| \le ke^{-b(t-t_0)}|x(t_0)| + \int_{t_0}^{t} ke^{-b(t-s)}\varepsilon|x(s)|\,ds$$
for all $t \ge t_0$. Setting $y(t) = e^{b(t-t_0)}|x(t)|$, this becomes
$$y(t) \le k|x(t_0)| + \int_{t_0}^{t} k\varepsilon\,y(s)\,ds$$
for all $t \ge t_0$. The standard Gronwall inequality applied to this estimate gives
$$y(t) \le k|x(t_0)|e^{k\varepsilon(t-t_0)},$$
i.e.,
$$|x(t)| \le k|x(t_0)|e^{(k\varepsilon - b)(t-t_0)}$$
for all $t \ge t_0$. Since $\varepsilon < b/k$, this inequality implies that $x(t) \to 0$ as $t \uparrow \infty$.
Thus, the origin remains a sink even when we perturb $A$ by a small time-dependent quantity. Can we perhaps just look at the (possibly, time-dependent) eigenvalues of $A(t)$ itself and conclude, for example, that if all of those eigenvalues have negative real part for all $t$ then all solutions of (2.21) converge to the origin as $t \uparrow \infty$? The following example of Markus and Yamabe shows that the answer is No.
If
$$A(t) = \begin{bmatrix} -1 + \frac{3}{2}\cos^2 t & 1 - \frac{3}{2}\sin t\cos t \\ -1 - \frac{3}{2}\sin t\cos t & -1 + \frac{3}{2}\sin^2 t \end{bmatrix},$$
then the eigenvalues of $A(t)$ both have negative real part for every $t \in \mathbb R$, but
$$x(t) := \begin{bmatrix} -\cos t \\ \sin t \end{bmatrix}e^{t/2},$$
which becomes unbounded as $t \to \infty$, is a solution to (2.21).
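Both claims in the Markus-Yamabe example can be verified numerically; a minimal sketch:

```python
import numpy as np

def A(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + 1.5*c*c,  1 - 1.5*s*c],
                     [-1 - 1.5*s*c, -1 + 1.5*s*s]])

# the eigenvalues of A(t) have negative real part for every t
for t in np.linspace(0.0, 10.0, 50):
    assert np.all(np.linalg.eigvals(A(t)).real < 0)

# yet x(t) = e^{t/2} (-cos t, sin t) solves x' = A(t) x and blows up;
# verify the ODE by comparing x'(t) with A(t) x(t) at sample times
x  = lambda t: np.exp(t/2) * np.array([-np.cos(t), np.sin(t)])
dx = lambda t: np.exp(t/2) * np.array([-np.cos(t)/2 + np.sin(t),
                                        np.sin(t)/2 + np.cos(t)])
for t in np.linspace(0.0, 10.0, 50):
    assert np.allclose(dx(t), A(t) @ x(t))
print("eigenvalues stay in the left half-plane, yet |x(t)| = e^{t/2}")
```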
Periodic Linear Systems
We now consider
$$\dot x = A(t)x \tag{2.26}$$
when $A$ is a continuous periodic $n \times n$ matrix function of $t$; i.e., when there is a constant $T > 0$ such that $A(t + T) = A(t)$ for every $t \in \mathbb R$. When that condition is satisfied, we say, more precisely, that $A$ is $T$-periodic. If $T$ is the smallest positive number for which this condition holds, we say that $T$ is the minimal period of $A$. (Every continuous, nonconstant periodic function has a minimal period.)
Let $A$ be $T$-periodic, and let $X(t)$ be a fundamental matrix for (2.26). Define $\tilde X : \mathbb R \to \mathcal L(\mathbb R^n, \mathbb R^n)$ by $\tilde X(t) = X(t + T)$. Clearly, the columns of $\tilde X$ are linearly independent functions of $t$. Also,
$$\frac{d}{dt}\tilde X(t) = \frac{d}{dt}X(t + T) = X'(t + T) = A(t + T)X(t + T) = A(t)\tilde X(t),$$
so $\tilde X$ solves the matrix equivalent of (2.26). Hence, $\tilde X$ is a fundamental matrix for (2.26).
Because the dimension of the solution space of (2.26) is $n$, this means that there is a nonsingular (constant) matrix $C$ such that $X(t + T) = X(t)C$ for every $t \in \mathbb R$. $C$ is called a monodromy matrix.
Lemma. There exists $B \in \mathcal L(\mathbb C^n, \mathbb C^n)$ such that $C = e^{TB}$.
Proof. Without loss of generality, we assume that $T = 1$, since if it isn't we can just rescale $B$ by a scalar constant. We also assume, without loss of generality, that $C$ is in Jordan canonical form. (If it isn't, then use the fact that $P^{-1}CP = e^B$ implies that $C = e^{PBP^{-1}}$.) Furthermore, because of the way the matrix exponential acts on a block diagonal matrix, it suffices to show that for each $p \times p$ Jordan block
$$\tilde C := \begin{bmatrix} \lambda & & & & \\ 1 & \lambda & & & \\ & 1 & \ddots & & \\ & & \ddots & \ddots & \\ & & & 1 & \lambda \end{bmatrix},$$
$\tilde C = e^{\tilde B}$ for some $\tilde B \in \mathcal L(\mathbb C^p, \mathbb C^p)$.
Now, an obvious candidate for $\tilde B$ is the natural logarithm of $\tilde C$, defined in some reasonable way. Since the matrix exponential was defined by a power series, it seems reasonable to use a similar definition for a matrix logarithm. Note that $\tilde C = \lambda I + N = \lambda(I + \lambda^{-1}N)$, where $N$ is nilpotent. (Since $C$ is invertible, we know that all of the eigenvalues $\lambda$ are nonzero.) We guess
$$\tilde B = (\log\lambda)I + \log(I + \lambda^{-1}N), \tag{2.27}$$
where
$$\log(I + M) := -\sum_{k=1}^{\infty}\frac{(-M)^k}{k},$$
in analogy to the Maclaurin series for $\log(1 + x)$. Since $N$ is nilpotent, this series terminates in our application of it to (2.27). Direct substitution shows that $e^{\tilde B} = \tilde C$, as desired.
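The terminating logarithm series is straightforward to implement. A sketch that recovers a $3\times 3$ Jordan block from its computed logarithm (the exponential is evaluated by a truncated power series):

```python
import numpy as np

def log_unipotent(lmbda, N):
    """log of C = lambda (I + N/lambda) for nilpotent N, via the
    terminating series log(I+M) = -sum_k (-M)^k / k."""
    n = N.shape[0]
    M = N / lmbda
    term = np.eye(n)
    L = np.zeros((n, n))
    for k in range(1, n):          # M^n = 0, so the series stops
        term = term @ (-M)
        L -= term / k
    return np.log(lmbda) * np.eye(n) + L

def expm_series(B, terms=60):
    """Matrix exponential by its defining power series."""
    n = B.shape[0]
    term = np.eye(n)
    E = np.eye(n)
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    return E

# 3x3 Jordan block with eigenvalue 2 and subdiagonal 1s
lmbda = 2.0
Nmat = np.diag([1.0, 1.0], k=-1)
C = lmbda * np.eye(3) + Nmat
B = log_unipotent(lmbda, Nmat)
print(np.allclose(expm_series(B), C))  # True: exp(log C) recovers C
```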
The eigenvalues $\lambda$ of $C$ are called the Floquet multipliers (or characteristic multipliers) of (2.26). The corresponding numbers $\mu$ satisfying $\lambda = e^{\mu T}$ are called the Floquet exponents (or characteristic exponents) of (2.26). Note that the Floquet exponents are only determined up to a multiple of $(2\pi i)/T$. Given $B$ for which $C = e^{TB}$, the exponents can be chosen to be the eigenvalues of $B$.
Theorem. There exists a $T$-periodic function $P : \mathbb R \to \mathcal L(\mathbb R^n, \mathbb R^n)$ such that
$$X(t) = P(t)e^{tB}.$$
Proof. Let $P(t) = X(t)e^{-tB}$. Then
$$P(t + T) = X(t + T)e^{-(t+T)B} = X(t)Ce^{-TB}e^{-tB} = X(t)e^{TB}e^{-TB}e^{-tB} = X(t)e^{-tB} = P(t).$$
The decomposition of X.t/ given in this theorem shows that the behavior of
solutions can be broken down into the composition of a part that is periodic in
time and a part that is exponential in time. Recall, however, that B may have
entries that are not real numbers, so P .t/ may be complex, also. If we want
to decompose X.t/ into a real periodic matrix times a matrix of the form e tB
where B is real, we observe that X.t C 2T / D X.t/C 2 , where C is the same
monodromy matrix as before. It can be shown that the square of a real matrix
can be written as the exponential of a real matrix. Write C 2 D e TB with B real,
and let P .t/ D X.t/e tB as before. Then, X.t/ D P .t/e tB where P is now
2T -periodic, and everything is real.
The Floquet multipliers and exponents do not depend on the particular fundamental matrix chosen, even though the monodromy matrix does. They depend only on $A(t)$. To see this, let $X(t)$ and $Y(t)$ be fundamental matrices with corresponding monodromy matrices $C$ and $D$. Because $X(t)$ and $Y(t)$ are fundamental matrices, there is a nonsingular constant matrix $S$ such that $Y(t) = X(t)S$ for all $t \in \mathbb R$. In particular, $Y(0) = X(0)S$ and $Y(T) = X(T)S$. Thus,
$$C = X(0)^{-1}X(T) = SY(0)^{-1}Y(T)S^{-1} = SY(0)^{-1}Y(0)DS^{-1} = SDS^{-1}.$$
This means that the monodromy matrices are similar and, therefore, have the
same eigenvalues.
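A numerical illustration: for a $2\pi$-periodic system (a Mathieu-type equation, chosen here purely as an example), the monodromy matrices computed from two different fundamental matrices $X$ and $Y = XS$ are similar and hence share their Floquet multipliers:

```python
import numpy as np

def A(t):  # an assumed 2*pi-periodic example coefficient matrix
    return np.array([[0.0, 1.0], [-(1.0 + 0.3*np.cos(t)), 0.0]])

def integrate_matrix(X0, t0, t1, n=8000):
    """RK4 for the matrix equation X' = A(t) X."""
    h = (t1 - t0) / n
    t, X = t0, np.array(X0, float)
    for _ in range(n):
        k1 = A(t) @ X
        k2 = A(t + h/2) @ (X + h/2*k1)
        k3 = A(t + h/2) @ (X + h/2*k2)
        k4 = A(t + h) @ (X + h*k3)
        X = X + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return X

T = 2*np.pi
S = np.array([[2.0, 1.0], [1.0, 1.0]])       # any nonsingular constant matrix

X0, XT = np.eye(2), integrate_matrix(np.eye(2), 0.0, T)
C = np.linalg.inv(X0) @ XT                   # monodromy matrix from X
D = np.linalg.inv(X0 @ S) @ (XT @ S)         # monodromy matrix from Y = XS

mults_C = np.sort_complex(np.linalg.eigvals(C))
mults_D = np.sort_complex(np.linalg.eigvals(D))
print(np.allclose(mults_C, mults_D))         # True: same Floquet multipliers
```

Since $\operatorname{trace}A(t) \equiv 0$ here, the product of the multipliers, $\det C$, should be $e^0 = 1$, which ties in with formula (2.28) below.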
If $x(t)$ solves (2.26) and we set $y(t) := P(t)^{-1}x(t)$, then
$$\frac{d}{dt}\bigl(P(t)y(t)\bigr) = \frac{d}{dt}x(t) = A(t)x(t) = A(t)P(t)y(t) = A(t)X(t)e^{-tB}y(t).$$
But
$$\frac{d}{dt}\bigl(P(t)y(t)\bigr) = P'(t)y(t) + P(t)y'(t) = X'(t)e^{-tB}y(t) - X(t)e^{-tB}By(t) + X(t)e^{-tB}y'(t) = A(t)X(t)e^{-tB}y(t) - X(t)e^{-tB}By(t) + X(t)e^{-tB}y'(t),$$
so
$$X(t)e^{-tB}y'(t) = X(t)e^{-tB}By(t),$$
which implies that $y'(t) = By(t)$; i.e., $y$ solves a constant coefficient linear equation. Since $P$ is periodic and, therefore, bounded, the growth and decay of solutions of (2.26) are determined by $e^{tB}$, i.e., by the Floquet exponents.
Theorem. If the Floquet multipliers of (2.26) are $\lambda_1, \ldots, \lambda_n$ and the Floquet exponents are $\mu_1, \ldots, \mu_n$, then
$$\lambda_1\cdots\lambda_n = \exp\Bigl(\int_0^T \operatorname{trace}A(t)\,dt\Bigr) \tag{2.28}$$
and
$$\mu_1 + \cdots + \mu_n = \frac{1}{T}\int_0^T \operatorname{trace}A(t)\,dt \quad\Bigl(\operatorname{mod}\ \frac{2\pi i}{T}\Bigr). \tag{2.29}$$
Proof. We focus on (2.28). The formula (2.29) will follow immediately from (2.28).
Let $W(t)$ be the determinant of the principal fundamental matrix $X(t)$. Let $S_n$ be the set of permutations of $\{1, 2, \ldots, n\}$ and let $\operatorname{sgn} : S_n \to \{-1, 1\}$ be the parity map. Then
$$W(t) = \sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n X_{i,\sigma(i)}.$$
Differentiating yields
$$\frac{dW(t)}{dt} = \sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\frac{d}{dt}\prod_{i=1}^n X_{i,\sigma(i)} = \sum_{j=1}^n\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\Bigl(\frac{d}{dt}X_{j,\sigma(j)}\Bigr)\prod_{i\ne j}X_{i,\sigma(i)}$$
$$= \sum_{j=1}^n\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\Bigl[\sum_{k=1}^n A_{j,k}(t)X_{k,\sigma(j)}\Bigr]\prod_{i\ne j}X_{i,\sigma(i)} = \sum_{j=1}^n\sum_{k=1}^n A_{j,k}(t)\Bigl[\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)X_{k,\sigma(j)}\prod_{i\ne j}X_{i,\sigma(i)}\Bigr].$$
The inner bracketed sum is the determinant of the matrix obtained from $X$ by replacing its $j$th row by its $k$th row; it equals $W(t)$ if $k = j$ and $0$ otherwise, so
$$\frac{dW(t)}{dt} = \operatorname{trace}A(t)\,W(t).$$
Thus,
$$W(t) = e^{\int_0^t \operatorname{trace}A(s)\,ds}\,W(0) = e^{\int_0^t \operatorname{trace}A(s)\,ds}.$$
In particular,
$$e^{\int_0^T \operatorname{trace}A(s)\,ds} = W(T) = \det X(T) = \det(X(0)C) = \det C = \lambda_1\cdots\lambda_n.$$
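Liouville's formula for $W(t)$ can be checked numerically on a concrete nonautonomous system (the coefficient matrix below is an arbitrary example):

```python
import numpy as np

def A(t):
    return np.array([[np.sin(t), 1.0], [0.0, np.cos(t)]])

def wronskian(t1, n=8000):
    """Integrate X' = A(t) X from X(0) = I with RK4 and return det X(t1)."""
    h = t1 / n
    t, X = 0.0, np.eye(2)
    for _ in range(n):
        k1 = A(t) @ X
        k2 = A(t + h/2) @ (X + h/2*k1)
        k3 = A(t + h/2) @ (X + h/2*k2)
        k4 = A(t + h) @ (X + h*k3)
        X = X + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return np.linalg.det(X)

t1 = 2.0
# trace A(t) = sin t + cos t, so the integral from 0 to t1 is (1 - cos t1) + sin t1
predicted = np.exp((1 - np.cos(t1)) + np.sin(t1))
print(abs(wronskian(t1) - predicted) < 1e-6)  # True: W(t) = exp(int trace A)
```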
Exercise Suppose that in (2.26) the matrix $A(t)$ has $b + \sin t$ among its entries, where $a$ and $b$ are constants. Show that there is a solution of (2.26) that becomes unbounded as $t \uparrow \infty$.
3
Topological Dynamics
3.1 Invariant Sets and Limit Sets
We will now begin a study of the continuously differentiable autonomous system
xP D f .x/
or, equivalently, of the corresponding dynamical system $\varphi(t, x)$. We will denote the phase space by $\Omega$ and assume that it is an open (not necessarily proper) subset of $\mathbb R^n$.
Orbits
Definition. Given $x \in \Omega$, the (complete) orbit through $x$ is the set
$$\gamma(x) := \{\varphi(t, x) \mid t \in \mathbb R\},$$
the positive semiorbit through $x$ is the set
$$\gamma^+(x) := \{\varphi(t, x) \mid t \ge 0\},$$
and the negative semiorbit through $x$ is the set
$$\gamma^-(x) := \{\varphi(t, x) \mid t \le 0\}.$$
Invariant Sets
Definition. A set $\mathcal M$ is negatively invariant under $\varphi$ if it contains the negative semiorbit of every point of $\mathcal M$. In other words, for every $x \in \mathcal M$ and every $t \le 0$, $\varphi(t, x) \in \mathcal M$.
Limit Sets
Definition. Given $x \in \Omega$, the $\omega$-limit set of $x$, denoted $\omega(x)$, is the set
$$\bigl\{y \in \Omega \bigm| \liminf_{t\uparrow\infty}|\varphi(t, x) - y| = 0\bigr\},$$
and the $\alpha$-limit set of $x$, denoted $\alpha(x)$, is the set
$$\bigl\{y \in \Omega \bigm| \liminf_{t\downarrow-\infty}|\varphi(t, x) - y| = 0\bigr\}.$$
These sets satisfy
$$\omega(x) = \bigcap_{\tau\in\mathbb R}\overline{\gamma^+(\varphi(\tau, x))} \tag{3.1}$$
and
$$\alpha(x) = \bigcap_{\tau\in\mathbb R}\overline{\gamma^-(\varphi(\tau, x))}. \tag{3.2}$$
Proof. It suffices to prove (3.1); (3.2) can then be established by time reversal.
Let $y \in \omega(x)$ be given. Pick a sequence $t_1, t_2, \ldots \to \infty$ such that $\varphi(t_k, x) \to y$ as $k \uparrow \infty$. Let $\tau \in \mathbb R$ be given. Pick $K \in \mathbb N$ such that $t_k \ge \tau$ for all $k \ge K$. Note that $\varphi(t_k, x) \in \gamma^+(\varphi(\tau, x))$ for all $k \ge K$, so
$$y \in \overline{\gamma^+(\varphi(\tau, x))}.$$
Since this holds for all $\tau \in \mathbb R$, we know that
$$y \in \bigcap_{\tau\in\mathbb R}\overline{\gamma^+(\varphi(\tau, x))}. \tag{3.3}$$
Conversely, let
$$y \in \bigcap_{\tau\in\mathbb R}\overline{\gamma^+(\varphi(\tau, x))} \tag{3.4}$$
be given. For each $k \in \mathbb N$,
$$y \in \overline{\gamma^+(\varphi(k, x))},$$
so we can pick $z_k \in \gamma^+(\varphi(k, x))$ such that $|z_k - y| < 1/k$. Since $z_k \in \gamma^+(\varphi(k, x))$, we can pick $s_k \ge 0$ such that $z_k = \varphi(s_k, \varphi(k, x))$. If we set $t_k = k + s_k$, we see that $t_k \ge k$, so the sequence $t_1, t_2, \ldots$ goes to infinity. Also, since
$$|\varphi(t_k, x) - y| = |\varphi(s_k + k, x) - y| = |z_k - y| < 1/k,$$
we know that $\varphi(t_k, x) \to y$ as $k \uparrow \infty$. Hence, $y \in \omega(x)$. Since this holds for every
$$y \in \bigcap_{\tau\in\mathbb R}\overline{\gamma^+(\varphi(\tau, x))},$$
we know that
$$\bigcap_{\tau\in\mathbb R}\overline{\gamma^+(\varphi(\tau, x))} \subseteq \omega(x).$$
Step 2: $\omega(x)$ is invariant.
Let $y \in \omega(x)$ and $t \in \mathbb R$ be given. Choose a sequence of times $(t_k)$ converging to infinity such that $\varphi(t_k, x) \to y$ as $k \uparrow \infty$. For each $k \in \mathbb N$, let $s_k = t_k + t$, and note that $(s_k)$ converges to infinity and
$$\varphi(s_k, x) = \varphi(t_k + t, x) = \varphi(t, \varphi(t_k, x)) \to \varphi(t, y)$$
as $k \uparrow \infty$ (by the continuity of $\varphi(t, \cdot)$). Hence, $\varphi(t, y) \in \omega(x)$. Since $t \in \mathbb R$ and $y \in \omega(x)$ were arbitrary, we know that $\omega(x)$ is invariant.
Now, suppose that $\gamma^+(x)$ is contained in a compact subset $K$ of $\Omega$.
Step 3: $\omega(x)$ is nonempty.
The sequence $\varphi(1, x), \varphi(2, x), \ldots$ is contained in $\gamma^+(x) \subseteq K$, so by the Bolzano-Weierstrass Theorem, some subsequence $\varphi(t_1, x), \varphi(t_2, x), \ldots$ converges to some $y \in K$. By definition, $y \in \omega(x)$.
Step 4: $\omega(x)$ is compact.
By the Heine-Borel Theorem, $K$ is closed (relative to $\mathbb R^n$), so, by the choice of $K$, $\omega(x) \subseteq K$. Since, by Step 1, $\omega(x)$ is closed relative to $\Omega$, it is also closed relative to $K$. Since $K$ is compact, this means $\omega(x)$ is closed (relative to $\mathbb R^n$). Also, by the Heine-Borel Theorem, $K$ is bounded so its subset $\omega(x)$ is bounded, too. Thus, $\omega(x)$ is closed (relative to $\mathbb R^n$) and bounded and, therefore, compact.
Step 5: $\omega(x)$ is connected.
Suppose $\omega(x)$ were disconnected. Then there would be disjoint open subsets $G$ and $H$ of $\Omega$ such that $G\cap\omega(x)$ and $H\cap\omega(x)$ are nonempty, and $\omega(x)$ is contained in $G\cup H$. Then there would have to be a sequence $s_1, s_2, \ldots \to \infty$ and a sequence $t_1, t_2, \ldots \to \infty$ such that $\varphi(s_k, x) \in G$, $\varphi(t_k, x) \in H$, and $s_k < t_k < s_{k+1}$ for each $k \in \mathbb N$. Because (for each fixed $k \in \mathbb N$)
$$\{\varphi(t, x) \mid t \in [s_k, t_k]\}$$
Examples of empty $\omega$-limit sets are easy to find. Consider, for example, the one-dimensional dynamical system $\varphi(t, x) := x + t$ (generated by the differential equation $\dot x = 1$).
$$\omega(x) = \{(-1, y) \mid y \in \mathbb R\} \cup \{(1, y) \mid y \in \mathbb R\}$$
$$\{(x, y) \in \mathbb R^2 \mid |x| < 1 \text{ and } x^2 + y^2 > 0\}.$$
Consider the differential equation $\dot x = f(x)$ and its associated dynamical system $\varphi(t, x)$ on a phase space $\Omega$.
Definition. We say that a point $x \in \Omega$ is an equilibrium point or a singular point or a critical point if $f(x) = 0$. For such a point, $\varphi(t, x) = x$ for all $t \in \mathbb R$.
Definition. A point $x \in \Omega$ that is not a singular point is called a regular point.
We shall show that all of the interesting local behavior of a continuous dynamical system takes place close to singular points. We shall do this by showing
that in the neighborhood of each regular point, the flow is very similar to unidirectional, constant-velocity flow.
One way of making the notion of similarity of flows precise is the following.
Definition. Two dynamical systems $\varphi : \mathbb R\times\Omega \to \Omega$ and $\psi : \mathbb R\times\Omega' \to \Omega'$ are topologically conjugate if there exists a homeomorphism (i.e., a continuous bijection with continuous inverse) $h : \Omega \to \Omega'$ such that
$$h(\varphi(t, x)) = \psi(t, h(x)) \tag{3.5}$$
for every $t \in \mathbb R$ and $x \in \Omega$. Equivalently, $\psi(t, \cdot) = h\circ\varphi(t, \cdot)\circ h^{-1}$, or, the diagram
$$\begin{array}{ccc} \Omega & \xrightarrow{\ \varphi(t,\cdot)\ } & \Omega \\ \downarrow h & & \downarrow h \\ \Omega' & \xrightarrow{\ \psi(t,\cdot)\ } & \Omega' \end{array}$$
commutes for each $t \in \mathbb R$. The function $h$ is called a topological conjugacy. If, in addition, $h$ and $h^{-1}$ are $r$-times continuously differentiable, we say that $\varphi$ and $\psi$ are $C^r$-conjugate.
A weaker type of similarity is the following.
Definition. Two dynamical systems $\varphi : \mathbb R\times\Omega \to \Omega$ and $\psi : \mathbb R\times\Omega' \to \Omega'$ are topologically equivalent if there exists a homeomorphism $h : \Omega \to \Omega'$ and a time reparametrization function $\tau : \mathbb R\times\Omega \to \mathbb R$ such that, for each $x \in \Omega$, $\tau(\cdot, x) : \mathbb R \to \mathbb R$ is an increasing surjection and
$$h(\varphi(\tau(t, x), x)) = \psi(t, h(x))$$
for every $t \in \mathbb R$ and $x \in \Omega$.
Consider, for example, the flows
$$\varphi(t, y) = \begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}y \quad\text{and}\quad \psi(t, y) = \begin{bmatrix} \cos 2t & -\sin 2t \\ \sin 2t & \cos 2t \end{bmatrix}y,$$
the latter generated by
$$\dot y = \begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix}y.$$
The functions $h(x) = x$ and $\tau(t, x) = 2t$ show that these two flows are topologically equivalent. But these two flows are not topologically conjugate, since, by setting $t = \pi$ we see that any function $h : \mathbb R^2 \to \mathbb R^2$ satisfying (3.5) would have to satisfy $h(x) = h(-x)$ for all $x$, which would mean that $h$ is not invertible.
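A quick numerical confirmation of both statements (equivalence holds via $\tau(t,x) = 2t$ with $h$ the identity; conjugacy with $h$ the identity fails at $t = \pi$):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

phi = lambda t, y: rot(t)   @ y   # flow of the speed-1 rotation
psi = lambda t, y: rot(2*t) @ y   # flow of the speed-2 rotation

h   = lambda y: y                 # candidate homeomorphism (identity)
tau = lambda t, y: 2*t            # time reparametrization

y0 = np.array([1.0, 0.5])
for t in np.linspace(-3.0, 3.0, 25):
    # topological equivalence: h(phi(tau(t,y), y)) == psi(t, h(y))
    assert np.allclose(h(phi(tau(t, y0), y0)), psi(t, h(y0)))

# the conjugacy relation h(phi(t,y)) == psi(t,h(y)) fails for h = id at t = pi:
t = np.pi
print(np.allclose(phi(t, y0), psi(t, y0)))  # False: phi(pi,y) = -y, psi(pi,y) = y
```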
Because of examples like this, topological equivalence seems to be the preferred concept when dealing with flows. The following theorem, however, shows
that in a neighborhood of a regular point, a smooth flow satisfies a local version
of C r -conjugacy with respect to a unidirectional, constant-velocity flow.
in $W$.
Step 4: $DG(0)$ is an invertible matrix.
Since
$$\frac{\partial G(y)}{\partial y_1}\bigg|_{y=0} = \frac{\partial}{\partial t}\varphi(t, 0)\bigg|_{t=0} = f(0) = e_1$$
and
$$\frac{\partial G(y)}{\partial y_k}\bigg|_{y=0} = \frac{\partial}{\partial p}\varphi(0, p)\bigg|_{p=0}e_k = \frac{\partial p}{\partial p}\bigg|_{p=0}e_k = e_k$$
for $k \ne 1$, we have
$$DG(0) = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix},$$
which is invertible. Also,
$$\frac{d}{dt}G(y(t)) = \frac{\partial}{\partial s}\varphi(s, (0, y_2, \ldots, y_n))\bigg|_{s=y_1}\dot y_1 + \frac{\partial}{\partial p}\varphi(y_1, p)\bigg|_{p=(0,y_2,\ldots,y_n)}\begin{bmatrix} 0 \\ \dot y_2 \\ \vdots \\ \dot y_n \end{bmatrix}.$$
Definitions of Stability
has to do with whether solutions that start near a given solution stay near it for all time and/or move closer to it as time elapses. This question, which is the subject of stability theory, is not just of interest when the given solution corresponds to an equilibrium solution, so we study it (initially, at least) in a fairly broad context.
Definitions
First, we define some types of stability for solutions of the (possibly) nonautonomous equation
$$\dot x = f(t, x). \tag{3.9}$$
Definition. A solution $\bar x(t)$ of (3.9) is (Lyapunov) stable if for each $\varepsilon > 0$ and $t_0 \in \mathbb R$ there exists $\delta = \delta(\varepsilon, t_0) > 0$ such that if $x(t)$ is a solution of (3.9) and $|x(t_0) - \bar x(t_0)| < \delta$ then $|x(t) - \bar x(t)| < \varepsilon$ for all $t \ge t_0$.
Definition. A solution $\bar x(t)$ of (3.9) is asymptotically stable if it is (Lyapunov) stable and if for every $t_0 \in \mathbb R$ there exists $\delta = \delta(t_0) > 0$ such that if $x(t)$ is a solution of (3.9) and $|x(t_0) - \bar x(t_0)| < \delta$ then $|x(t) - \bar x(t)| \to 0$ as $t \uparrow \infty$.
Definition. A solution $\bar x(t)$ of (3.9) is uniformly stable if for each $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) > 0$ such that if $x(t)$ is a solution of (3.9) and $|x(t_0) - \bar x(t_0)| < \delta$ for some $t_0 \in \mathbb R$ then $|x(t) - \bar x(t)| < \varepsilon$ for all $t \ge t_0$.
Some authors use a weaker definition of uniform stability that turns out to
be equivalent to Lyapunov stability for autonomous equations. Since our main
interest is in autonomous equations and this alternative definition is somewhat
more complicated than the definition given above, we will not use it here.
Definition. A solution $\bar x(t)$ of (3.9) is orbitally stable if for every $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) > 0$ such that if $x(t)$ is a solution of (3.9) and $|x(t_1) - \bar x(t_0)| < \delta$ for some $t_0, t_1 \in \mathbb R$ then
$$\bigcup_{t \ge t_1} x(t) \subseteq \bigcup_{t \ge t_0} B(\bar x(t), \varepsilon).$$
Note that the definition implies that stable sets are positively invariant.
Definition. The set A is asymptotically stable if it is stable and there is some
neighborhood V of A such that !.x/ A for every x 2 V. (If V can be chosen
to be the entire phase space, then A is globally asymptotically stable.)
Examples
We now consider a few examples that clarify some properties of these definitions.
1. Consider
$$\begin{cases} \dot x = -y/2 \\ \dot y = 2x. \end{cases}$$
Orbits are ellipses with major axis along the $y$-axis. The equilibrium solution at the origin is Lyapunov stable even though nearby orbits sometimes move away from it.
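Taking the system as $\dot x = -y/2$, $\dot y = 2x$, the quantity $E(x, y) = 4x^2 + y^2$ satisfies $\dot E = 8x\dot x + 2y\dot y = -4xy + 4xy = 0$, so each orbit stays on an ellipse $E = \text{const}$. A numerical sketch:

```python
import numpy as np

def step(x, y, h=1e-3):
    """One RK4 step for x' = -y/2, y' = 2x."""
    f = lambda x, y: (-y/2, 2*x)
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

E = lambda x, y: 4*x*x + y*y      # conserved quantity: orbits are ellipses
x, y = 0.1, 0.0
E0 = E(x, y)
for _ in range(20000):            # integrate for 20 time units
    x, y = step(x, y)
print(abs(E(x, y) - E0) < 1e-8)   # True: the orbit stays on its ellipse
```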
2. Consider
$$\begin{cases} \dot r = 0 \\ \dot\theta = r^2, \end{cases}$$
or, equivalently,
$$\begin{cases} \dot x = -(x^2 + y^2)y \\ \dot y = (x^2 + y^2)x. \end{cases}$$
The solution moving around the unit circle is not Lyapunov stable, since
nearby solutions move with different angular velocities. It is, however, orbitally
stable. Also, the set consisting of the unit circle is stable.
3. Consider
$$\begin{cases} \dot r = r(1 - r) \\ \dot\theta = \sin^2(\theta/2). \end{cases}$$
The constant solution $(x, y) = (1, 0)$ is not Lyapunov stable and the set $\{(1, 0)\}$ is not stable. However, every solution beginning near $(1, 0)$ converges to $(1, 0)$ as $t \uparrow \infty$. This shows that it is not redundant to require Lyapunov stability (or stability) in the definition of asymptotic stability of a solution (or a set).
When we specialize to the autonomous equation
$$\dot x = f(x) \tag{3.10}$$
on an open set $\Omega \subseteq \mathbb R^n$, all of the varieties of stability can be applied to essentially the same object. In particular, let $x$ be a function that solves (3.10), and let
$$\mathcal A(x) := \{x(t) \mid t \in \mathbb R\}$$
be the corresponding orbit. Then it makes sense to talk about the Lyapunov,
asymptotic, orbital, or uniform stability of x, and it makes sense to talk about the
stability or asymptotic stability of A.x/.
In this context, certain relationships between the various types of stability
follow from the definitions without too much difficulty.
Theorem. Let x be a function that solves (3.10), and let A.x/ be the corresponding orbit. Then:
1. If x is asymptotically stable then x is Lyapunov stable;
2. If x is uniformly stable then x is Lyapunov stable;
3. If x is uniformly stable then x is orbitally stable;
4. If A.x/ is asymptotically stable then A.x/ is stable;
5. If A.x/ contains only a single point, then Lyapunov stability of x, orbital
stability of x, uniform stability of x, and stability of A.x/ are equivalent.
We will not prove this theorem, but we will note that parts 1 and 2 are immediate results of the definitions (even if we were dealing with a nonautonomous
equation) and part 4 is also an immediate result of the definitions (even if A were
an arbitrary set).
x1 x2 g, D R2 , x.t/ WD .t; 0/
.e t /
.e t /; 0/
.e t /
x2 g, D R2 , x.t/ WD .sinh
.t/; 0/
.t/
x1 ; xP 2 D
x2 g, D R2 , x.t/ WD .e t ; 0/
x, D R, x.t/ WD 0
3.4 Principle of Linearized Stability
$$\dot x = f(x) \tag{3.11}$$
$$\dot x = Df(x_0)(x - x_0) \tag{3.12}$$
$$\dot u = Df(x_0)u \tag{3.13}$$
for $x$ near $x_0$. Equation (3.13) (or sometimes (3.14)) is called the linearization of (3.11) at $x_0$.
Now, we've defined (several types of) stability for equilibrium solutions of (3.11) (as well as for other types of solutions and sets), but we haven't really given any tools for determining stability. In this lecture we present one such tool, using the linearized equation(s) discussed above.
Definition. An equilibrium point x0 of (3.11) is hyperbolic if none of the eigenvalues of Df .x0 / have zero real part.
If x0 is hyperbolic, then either all the eigenvalues of A WD Df .x0/ have
negative real part or at least one has positive real part. In the former case, we
know that 0 is an asymptotically stable equilibrium solution of (3.13); in the latter
case, we know that 0 is an unstable solution of (3.13). The following theorem
says that similar things can be said about the nonlinear equation (3.11).
Theorem. (Principle of Linearized Stability) If x0 is a hyperbolic equilibrium
solution of (3.11), then x0 is either unstable or asymptotically stable, and its stability type (with respect to (3.11)) matches the stability type of 0 as an equilibrium
solution of (3.13) (where A WD Df .x0 /).
This theorem is an immediate consequence of the following two propositions.
Proposition. (Asymptotic Stability) If x0 is an equilibrium point of (3.11) and
all the eigenvalues of A WD Df .x0/ have negative real part, then x0 is asymptotically stable.
Proposition. (Instability) If x0 is an equilibrium point of (3.11) and some eigenvalue of A WD Df .x0 / has positive real part, then x0 is unstable.
Before we prove these propositions, we state and prove a lemma to which we
have referred before in passing.
Lemma. Let $\mathcal V$ be a finite-dimensional real vector space and let $L \in \mathcal L(\mathcal V, \mathcal V)$. If all the eigenvalues of $L$ have real part larger than $c$, then there is an inner product $\langle\cdot,\cdot\rangle$ and an induced norm $\|\cdot\|$ on $\mathcal V$ such that
$$\langle v, Lv\rangle \ge c\|v\|^2$$
for every $v \in \mathcal V$.
$$\langle v, Lv\rangle = \sum_{i=1}^n \ell_{ii}\xi_i^2 + \sum_{i=1}^n\sum_{j\ne i}\ell_{ij}\xi_i\xi_j \ge \sum_{i=1}^n \ell_{ii}\xi_i^2 - \frac{\varepsilon}{2}\sum_{i=1}^n\sum_{j\ne i}(\xi_i^2 + \xi_j^2) \ge \sum_{i=1}^n(\ell_{ii} - n\varepsilon)\xi_i^2 \ge \sum_{i=1}^n c\xi_i^2 = c\|v\|^2.$$
Note that applying this theorem to $-L$ also tells us that, for some inner product,
$$\langle v, Lv\rangle \le c\|v\|^2 \tag{3.15}$$
if all the eigenvalues of $L$ have real part less than $c$.
$$\le -c\|x(t)\|^2.$$
This means that $x(t) \in B_r$ for all $t \ge 0$, and $x(t)$ converges to 0 (exponentially quickly) as $t \uparrow \infty$.
The proof of the second proposition will be geometric and will contain ideas
that will be used to prove stronger results later in this text.
Proof of Proposition on Instability. We assume again that $x_0 = 0$. If $E^u$, $E^s$, and $E^c$ are, respectively, the unstable, stable, and center spaces corresponding to (3.13), set $E^- := E^s \oplus E^c$ and $E^+ := E^u$. Then $\mathbb R^n = E^+ \oplus E^-$, all of the eigenvalues of $A^+ := A|_{E^+}$ have positive real part, and all of the eigenvalues of $A^- := A|_{E^-}$ have nonpositive real part. Pick constants $a > b > 0$ such that all of the eigenvalues of $A^+$ have real part larger than $a$, and note that all of the eigenvalues of $A^-$ have real part less than $b$. Define an inner product $\langle\cdot,\cdot\rangle_+$ (and induced norm $\|\cdot\|_+$) on $E^+$ such that
$$\langle v, A^+v\rangle_+ \ge a\|v\|_+^2$$
for all $v \in E^+$, and define an inner product $\langle\cdot,\cdot\rangle_-$ (and induced norm $\|\cdot\|_-$) on $E^-$ such that
$$\langle w, A^-w\rangle_- \le b\|w\|_-^2$$
for all $w \in E^-$. Define $\langle\cdot,\cdot\rangle$ on $E^+ \oplus E^-$ to be the direct sum of $\langle\cdot,\cdot\rangle_+$ and $\langle\cdot,\cdot\rangle_-$; i.e., let
$$\langle v_1 + w_1, v_2 + w_2\rangle := \langle v_1, v_2\rangle_+ + \langle w_1, w_2\rangle_-$$
for all $(v_1, w_1), (v_2, w_2) \in E^+ \times E^-$. Let $\|\cdot\|$ be the induced norm, and note that
$$\|v + w\|^2 = \|v\|_+^2 + \|w\|_-^2 = \|v\|^2 + \|w\|^2$$
for all $(v, w) \in E^+ \times E^-$.
Now, take (3.11) and project it onto $E^+$ and $E^-$ to get a system for $(v, w) \in E^+ \times E^-$:
$$\begin{cases} \dot v = A^+v + R^+(v, w) \\ \dot w = A^-w + R^-(v, w), \end{cases} \tag{3.16}$$
for
$$v + w \in B_r := \{v + w \in E^+ \oplus E^- \mid \|v + w\| < r\}.$$
Let
$$K_r := \{v + w \in E^+ \oplus E^- \mid \|v\| > \|w\|\} \cap B_r.$$
The first estimate says that as long as the solution stays in $K_r$, $\|v\|$ grows exponentially, which means that the solution must eventually leave $K_r$. Combining the first and second estimates, we have
$$\frac{d}{dt}\bigl(\|v\|^2 - \|w\|^2\bigr) \ge 2\bigl(a - 2\sqrt{2}\,\varepsilon\bigr)\|v\|^2 > 0,$$
so $g(v + w) := \|v\|^2 - \|w\|^2$ increases as $t$ increases. But $g$ is 0 on the lateral surface of $K_r$ and is strictly positive in $K_r$, so the solution cannot leave $K_r$ through its lateral surface. Thus, the solution leaves $K_r$ by leaving $B_r$. Since this holds for all solutions starting in $K_r$, we know that $x_0$ must be an unstable equilibrium point for (3.11).
$$m := \min\{W(x) \mid |x| = \varepsilon\}.$$
Pick $\delta > 0$ such that
$$\max\{V(t_0, x) \mid |x| \le \delta\} < m.$$
If $x(t)$ is a solution with $|x(t_0)| \le \delta$, then
$$\frac{d}{dt}V(t, x(t)) = \dot V(t, x(t)) \le 0$$
for all $t$, so $V(t, x(t)) < m$ for every $t \ge t_0$. Thus, $W(x(t)) < m$ for every $t \ge t_0$, so, for every $t \ge t_0$, $|x(t)| \ne \varepsilon$. Since $|x(t_0)| < \varepsilon$, this tells us that $|x(t)| < \varepsilon$ for every $t \ge t_0$.
Theorem. (Asymptotic Stability) Suppose that there is a neighborhood $\mathcal D$ of 0 and a continuously differentiable positive definite function $V : \mathbb R\times\mathcal D \to \mathbb R$ whose orbital derivative $\dot V$ is negative definite, and suppose that there is a positive definite function $W : \mathcal D \to \mathbb R$ such that $V(t, x) \le W(x)$ for every $(t, x) \in \mathbb R\times\mathcal D$. Then 0 is an asymptotically stable solution of (3.17).
Proof. By the previous theorem, 0 is a Lyapunov stable solution of (3.17). Let
t0 2 R be given. Assume, without loss of generality, that D is compact. By
Lyapunov stability, we know that we can choose a neighborhood U of 0 such that
if x.t/ is a solution of (3.17) and x.t0 / 2 U, then x.t/ 2 D for every t t0 . We
claim that, in fact, if x.t/ is a solution of (3.17) and x.t0 / 2 U, then x.t/ ! 0 as
t " 1. Verifying this claim will prove the theorem.
Suppose that $V(t, x(t))$ does not converge to 0 as $t \uparrow \infty$. The negative definiteness of $\dot V$ implies that $V(\cdot, x(\cdot))$ is nonincreasing, so, since $V \ge 0$, there must be a number $c > 0$ such that $V(t, x(t)) \ge c$ for every $t \ge t_0$. Then $W(x(t)) \ge c > 0$ for every $t \ge t_0$. Since $W(0) = 0$ and $W$ is continuous,
$$\inf\{|x(t)| \mid t \ge t_0\} \ge \varepsilon \tag{3.18}$$
for some constant $\varepsilon > 0$. Pick a negative definite function $Y : \mathcal D \to \mathbb R$ such that $\dot V(t, x) \le Y(x)$ for every $(t, x) \in \mathbb R\times\mathcal D$. The compactness of $\overline{\mathcal D}\setminus B(0, \varepsilon)$, along with (3.18), implies that
$$\{Y(x(t)) \mid t \ge t_0\}$$
is bounded away from 0. This, in turn, implies that
$$\frac{d}{dt}V(t, x(t)) = \dot V(t, x(t)) \le -\mu \tag{3.19}$$
for some constant $\mu > 0$. Clearly, (3.19) contradicts the nonnegativity of $V$ for large $t$.
That contradiction implies that $V(t, x(t)) \to 0$ as $t \uparrow \infty$. Pick a positive definite function $\widetilde W : \mathcal D \to \mathbb R$ such that $V(t, x) \ge \widetilde W(x)$ for every $(t, x) \in \mathbb R\times\mathcal D$, and note that $\widetilde W(x(t)) \to 0$ as $t \uparrow \infty$.
Let $r > 0$ be given, and let
$$w_r = \min\{\widetilde W(p) \mid p \in \overline{\mathcal D}\setminus B(0, r)\},$$
which is defined and positive by the compactness of $\overline{\mathcal D}$ and the continuity and positive definiteness of $\widetilde W$. Since $\widetilde W(x(t)) \to 0$ as $t \uparrow \infty$, there exists $T$ such that $\widetilde W(x(t)) < w_r$ for every $t > T$. Thus, for $t > T$, it must be the case that $x(t) \in B(0, r)$. Hence, 0 is asymptotically stable.
The direct method requires you to find an appropriate Lyapunov function, which doesn't seem so straightforward. But, in fact, anytime linearization works, a simple Lyapunov function works, as well.
To be more precise, suppose $x_0 = 0$ and all the eigenvalues of $A := Df(0)$ have negative real part. Pick an inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$ such that, for some $c > 0$,
$$\langle x, Ax\rangle \le -c\|x\|^2$$
for all $x \in \mathbb R^n$. Pick $r > 0$ small enough that $\|f(x) - Ax\| \le (c/2)\|x\|$ whenever $\|x\| \le r$, let
$$\mathcal D = \{x \in \mathbb R^n \mid \|x\| \le r\},$$
and let $V(x) = \|x\|^2$. Then
$$\dot V(x) = 2\langle x, f(x)\rangle = 2\langle x, Ax\rangle + 2\langle x, f(x) - Ax\rangle \le -2c\|x\|^2 + 2\|x\|\,(c/2)\|x\| \le -c\|x\|^2,$$
so $\dot V$ is negative definite.
On the other hand, there are very simple examples to illustrate that the direct method works in some cases where linearization doesn't. For example, consider ẋ = −x³ on ℝ. The equilibrium point at the origin is not hyperbolic, so linearization fails to determine stability, but it is easy to check that V(x) = x² is positive definite and has a negative definite orbital derivative (V̇ = −2x⁴), thus ensuring the asymptotic stability of 0.
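The behavior described for ẋ = −x³ is easy to confirm numerically. The following sketch (an illustration, not part of the text) integrates the equation with a hand-rolled RK4 stepper and checks that V(x) = x² is nonincreasing along the computed solution and that the solution approaches 0; the function name `simulate` and the step parameters are illustrative choices.

```python
# Numerical check (illustration only): for x' = -x^3, the Lyapunov function
# V(x) = x^2 decreases monotonically along solutions, even though the
# linearization at 0 is identically zero.

def simulate(x0, dt=1e-3, steps=20000):
    """Integrate x' = -x^3 with the classical RK4 scheme."""
    f = lambda x: -x**3
    x = x0
    vals = [x * x]  # V = x^2 along the trajectory
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        vals.append(x * x)
    return vals

V = simulate(1.0)
assert all(b <= a for a, b in zip(V, V[1:]))  # V is nonincreasing
assert V[-1] < 0.05                           # the solution approaches 0
```

The exact solution here is x(t) = (1 + 2t)^{-1/2}, so the decay is only algebraic, which is consistent with the failure of exponential estimates at a nonhyperbolic equilibrium.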
As a more substantial example, consider the planar system

  ẋ = −y − x³
  ẏ = x⁵.  (3.21)

The origin is a nonhyperbolic equilibrium point, with 0 being the only eigenvalue, so the principle of linearized stability is of no use. A sketch of the phase portrait indicates that orbits circle the origin in the counterclockwise direction, but it is not obvious whether they spiral in, spiral out, or move on closed curves.
The simplest potential Lyapunov function that often turns out to be useful is the square of the standard Euclidean norm, which in this case is V := x² + y². Its orbital derivative is

  V̇ = 2xẋ + 2yẏ = −2xy − 2x⁴ + 2x⁵y.

For some points (x, y) near the origin (e.g., (δ, δ) for small δ > 0) V̇ < 0, while for other points near the origin (e.g., (δ, −δ)) V̇ > 0, so this function doesn't seem to be of much use.
Sometimes when the square of the standard Euclidean norm doesn't work, some other homogeneous quadratic function does. Suppose we try the function V := x² + αxy + βy², with α and β to be determined. Then

  V̇ = (2x + αy)ẋ + (αx + 2βy)ẏ = −2xy − 2x⁴ − αy² − αx³y + αx⁶ + 2βx⁵y.

Setting (x, y) = (δ, −δ²) for δ positive and small, we see that V̇ ≈ 2δ³ > 0, so V̇ is not going to be negative semidefinite, no matter what we pick α and β to be.
If these quadratic functions don't work, maybe something customized for the particular equation might. Note that the right-hand side of the first equation in (3.21) sort of suggests that x³ and y should be treated as quantities of the same order of magnitude. Let's try V := x⁶ + βy², for some β > 0 to be determined. Clearly, V is positive definite, and

  V̇ = 6x⁵ẋ + 2βyẏ = (2β − 6)x⁵y − 6x⁸.
Taking β = 3 kills the indefinite cross term and leaves V̇ = −6x⁸, which is only negative semidefinite, since it vanishes on the entire y-axis. To do better, we perturb this choice and try V := x⁶ + xy³ + 3y². Estimating the mixed terms that now appear requires a classical inequality.

Lemma. (Young's Inequality) If a, b ≥ 0, then

  ab ≤ aᵖ/p + b^q/q,  (3.22)

where p, q > 1 satisfy

  1/p + 1/q = 1.  (3.23)
Proof. Assume that (3.23) holds. Clearly (3.22) holds if b = 0, so assume that b > 0, and fix it. Define g: [0, ∞) → ℝ by the formula

  g(x) := xᵖ/p + b^q/q − xb.

Note that g is continuous, and g′(x) = x^{p−1} − b for every x ∈ (0, ∞). Since lim_{x↓0} g′(x) = −b < 0, lim_{x↑∞} g′(x) = ∞, and g′ is increasing on (0, ∞), we know that g has a unique minimizer at x₀ = b^{1/(p−1)}. Thus, for every x ∈ [0, ∞) we see, using (3.23), that

  g(x) ≥ g(b^{1/(p−1)}) = b^{p/(p−1)}/p + b^q/q − b^{p/(p−1)} = (1/p + 1/q − 1) b^q = 0.
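Young's inequality is also easy to spot-check numerically. The following sketch (illustrative, not from the text) samples random a, b ≥ 0 and conjugate exponents and verifies (3.22); the small additive slack guards against floating-point rounding near the equality case a^p = b^q.

```python
# A quick numerical sanity check of Young's inequality:
# for a, b >= 0 and 1/p + 1/q = 1 (p, q > 1), ab <= a**p / p + b**q / q.
import random

random.seed(0)
for _ in range(1000):
    p = random.uniform(1.01, 10.0)
    q = p / (p - 1.0)                 # conjugate exponent: 1/p + 1/q = 1
    a = random.uniform(0.0, 10.0)
    b = random.uniform(0.0, 10.0)
    bound = a**p / p + b**q / q
    assert a * b <= bound + 1e-9 * (1.0 + bound)
```

Equality holds exactly when a^p = b^q, which is why the proof's minimizer occurs at x₀ = b^{1/(p−1)}.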
Returning to V = x⁶ + xy³ + 3y², Young's inequality with p = 6 and q = 6/5 gives

  |xy³| = |x||y|³ ≤ (1/6)|x|⁶ + (5/6)|y|^{18/5} ≤ (1/6)x⁶ + (5/6)y²

if |y| ≤ 1. Hence,

  V ≥ (5/6)x⁶ + (13/6)y²

if |y| ≤ 1, so V is positive definite near the origin.
Also,

  V̇ = −6x⁸ + y³ẋ + 3xy²ẏ = −6x⁸ − y³(y + x³) + 3x⁶y² = −6x⁸ − y⁴ − x³y³ + 3x⁶y².

Applying Young's inequality to the two mixed terms in this orbital derivative, we have

  |x³y³| = |x|³|y|³ ≤ (3/8)|x|⁸ + (5/8)|y|^{24/5} ≤ (3/8)x⁸ + (5/8)y⁴

if |y| ≤ 1, and

  3x⁶y² = 3|x|⁶|y|² ≤ 3[(3/4)|x|⁸ + (1/4)|y|⁸] = (9/4)x⁸ + (3/4)y⁸ ≤ (9/4)x⁸ + (3/64)y⁴

if |y| ≤ 1/2. Hence, if |y| ≤ 1/2,

  V̇ ≤ −(27/8)x⁸ − (21/64)y⁴,  (3.24)

so V̇ is negative definite near the origin, and 0 is asymptotically stable.
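The sign claims above can be checked by sampling. The following sketch assumes the system and Lyapunov function as reconstructed above (ẋ = −y − x³, ẏ = x⁵, V = x⁶ + xy³ + 3y²) and verifies at random points near the origin that V > 0 and V̇ < 0.

```python
# Sampling check (illustration, assuming the system x' = -y - x^3, y' = x^5
# as written above): V(x, y) = x^6 + x*y^3 + 3*y^2 is positive and its
# orbital derivative is negative at sampled points near (but not at) the origin.
import random

def Vdot(x, y):
    xdot, ydot = -y - x**3, x**5
    return (6 * x**5 + y**3) * xdot + (3 * x * y**2 + 6 * y) * ydot

random.seed(1)
for _ in range(1000):
    x = random.uniform(-0.4, 0.4)
    y = random.uniform(-0.4, 0.4)
    if (x, y) == (0.0, 0.0):
        continue
    V = x**6 + x * y**3 + 3 * y**2
    assert V > 0
    assert Vdot(x, y) < 0
```

Expanding `Vdot` symbolically reproduces the expression −6x⁸ − y⁴ − x³y³ + 3x⁶y² used in the estimate (3.24): the ±6x⁵y terms cancel exactly.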
  I := { x ∈ D : V̇(x) = 0 }.

Then there is a neighborhood U of 0 such that for every x₀ ∈ U, ω(x₀) ⊆ I.
definiteness of V, we know that V(φ(t, x₀)) remains nonnegative, so it must approach some constant c ≥ 0 as t ↑ ∞. By continuity of V, V(z) = c for every z ∈ ω(x₀). Let y ∈ ω(x₀). Since ω(x₀) is invariant, V(φ(t, y)) = c for every t ∈ ℝ. The definition of orbital derivative then implies that V̇(φ(t, y)) = 0 for every t ∈ ℝ. Hence, y ∈ I.
Exercise 15 Show that (x(t), y(t)) = (0, 0) is an asymptotically stable solution of

  ẋ = −x³ + 2y³
  ẏ = −2xy².
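A starting observation for the exercise can be checked mechanically. The sketch below assumes the signs of the system as reconstructed above and verifies the algebraic identity that makes V(x, y) = x² + y² useful here: its orbital derivative collapses to −2x⁴ because the cross terms cancel.

```python
# A starting point for the exercise (assuming the signs of the system as
# reconstructed above): along x' = -x^3 + 2y^3, y' = -2xy^2, the orbital
# derivative of V(x, y) = x^2 + y^2 collapses to -2x^4.
import random

random.seed(2)
for _ in range(1000):
    x = random.uniform(-2, 2)
    y = random.uniform(-2, 2)
    xdot = -x**3 + 2 * y**3
    ydot = -2 * x * y**2
    vdot = 2 * x * xdot + 2 * y * ydot   # = -2x^4 + 4xy^3 - 4xy^3
    assert abs(vdot - (-2 * x**4)) < 1e-9
```

Since −2x⁴ is only negative semidefinite (it vanishes on the y-axis), finishing the exercise requires an invariance argument like the one in the preceding theorem.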
4 Conjugacies
4.1 Hartman-Grobman Theorem: Part 1
The Principle of Linearized Stability indicates one way in which the flow near
a singular point of an autonomous ODE resembles the flow of its linearization.
The Hartman-Grobman Theorem gives further insight into the extent of the resemblance; namely, there is a local topological conjugacy between the two. We
will spend the next five sections talking about the various forms of this theorem and their proofs. This amount of attention is justified not only by the significance of the theorem but also by the general applicability of the techniques used to prove it.
Let Ω ⊆ ℝⁿ be open and let f: Ω → ℝⁿ be continuously differentiable. Suppose that x₀ ∈ Ω is a hyperbolic equilibrium point of the autonomous equation

  ẋ = f(x).  (4.1)

Let B = Df(x₀), and let φ be the (local) flow generated by (4.1). The version of the Hartman-Grobman Theorem we're primarily interested in is the following.

Theorem. (Local Hartman-Grobman Theorem for Flows) Let Ω, f, x₀, B, and φ be as described above. Then there are neighborhoods U and V of x₀ and a homeomorphism h: U → V such that

  φ(t, h(x)) = h(x₀ + e^{tB}(x − x₀))

whenever x ∈ U and x₀ + e^{tB}(x − x₀) ∈ U.
  C⁰_b(ℝⁿ) = { w ∈ C(ℝⁿ, ℝⁿ) : sup_{x∈ℝⁿ} |w(x)| < ∞ }.

When equipped with the norm

  ‖w‖₀ := sup_{x∈ℝⁿ} ‖w(x)‖,

where ‖·‖ is some norm on ℝⁿ, C⁰_b(ℝⁿ) is a Banach space. (We shall pick a particular norm ‖·‖ later.)
Let

  C¹_b(ℝⁿ) = { w ∈ C¹(ℝⁿ, ℝⁿ) ∩ C⁰_b(ℝⁿ) : sup_{x∈ℝⁿ} ‖Dw(x)‖ < ∞ };

in particular, the functional Lip (defined below) is finite on C¹_b(ℝⁿ). We will not define a norm on C¹_b(ℝⁿ), but will often use Lip, which is not a norm, to describe the size of elements of C¹_b(ℝⁿ).
Definition. If A ∈ L(ℝⁿ, ℝⁿ) and none of the eigenvalues of A lie on the unit circle, then A is hyperbolic.

Note that if x₀ is a hyperbolic equilibrium point of (4.1) and A = e^{Df(x₀)}, then A is hyperbolic.
Theorem. (Global Hartman-Grobman Theorem for Maps) Suppose that the map A ∈ L(ℝⁿ, ℝⁿ) is hyperbolic and invertible. Then there exists a number ε > 0 such that for every g ∈ C¹_b(ℝⁿ) satisfying Lip(g) < ε there exists a unique function v ∈ C⁰_b(ℝⁿ) such that

  F(h(x)) = h(Ax)

for every x ∈ ℝⁿ, where F = A + g and h = I + v. Furthermore, h: ℝⁿ → ℝⁿ is a homeomorphism.
Since A is hyperbolic, we can pick a ∈ (0, 1) such that every eigenvalue λ of A satisfies either |λ| < a or |λ| > a^{-1}; i.e., no eigenvalue of A lies in the annulus B(0, a^{-1}) \ B(0, a). Using the notation developed when we were deriving the real canonical form, let

  E^- = (⊕_{λ ∈ (-a,a)} N((A − λI)^n)) ⊕ (⊕_{|λ|<a, Im λ ≠ 0} { Re u : u ∈ N((A − λI)^n) } ⊕ { Im u : u ∈ N((A − λI)^n) }),

and let

  E^+ = (⊕_{λ ∈ (-∞,-a^{-1}) ∪ (a^{-1},∞)} N((A − λI)^n)) ⊕ (⊕_{|λ|>a^{-1}, Im λ ≠ 0} { Re u : u ∈ N((A − λI)^n) } ⊕ { Im u : u ∈ N((A − λI)^n) }).

Then ℝⁿ = E^- ⊕ E^+, both subspaces are invariant under A, and we write P^- and P^+ for the corresponding projections and A^-, A^+ for the corresponding restrictions of A. Because every eigenvalue of A^- has magnitude less than a and every eigenvalue of A^+ has magnitude greater than a^{-1}, we can choose a norm ‖·‖_- on E^- and a norm ‖·‖_+ on E^+ such that

  ‖A^- x‖_- ≤ a‖x‖_-

for every x ∈ E^-, and

  ‖A^+ x‖_+ ≥ a^{-1}‖x‖_+

for every x ∈ E^+. For x ∈ ℝⁿ, define

  ‖x‖ := max{ ‖P^- x‖_-, ‖P^+ x‖_+ }.  (4.2)
This is the norm on Rn that we will use throughout our proof of the (global)
Hartman-Grobman Theorem (for maps). Note that ‖x‖ = ‖x‖_- if x ∈ E^-, and ‖x‖ = ‖x‖_+ if x ∈ E^+.
Recall that we equipped C⁰_b(ℝⁿ) with the norm ‖·‖₀ defined by the formula

  ‖w‖₀ := sup_{x∈ℝⁿ} ‖w(x)‖.

The norm on ℝⁿ on the right-hand side of this formula is that given in (4.2). Recall also that we will use the functional Lip defined by the formula

  Lip(w) := sup_{x₁,x₂∈ℝⁿ, x₁≠x₂} ‖w(x₁) − w(x₂)‖ / ‖x₁ − x₂‖.

The norm on ℝⁿ on the right-hand side of this formula is also that given in (4.2). Let

  C⁰_b(E^-) = { w ∈ C(ℝⁿ, E^-) : sup_{x∈ℝⁿ} ‖w(x)‖_- < ∞ }

and

  C⁰_b(E^+) = { w ∈ C(ℝⁿ, E^+) : sup_{x∈ℝⁿ} ‖w(x)‖_+ < ∞ },

and note that each w ∈ C⁰_b(ℝⁿ) can be written uniquely as w = P^-w + P^+w with P^-w ∈ C⁰_b(E^-) and P^+w ∈ C⁰_b(E^+). We equip C⁰_b(E^-) and C⁰_b(E^+) with the same norm ‖·‖₀ that we used on C⁰_b(ℝⁿ), thereby making each of these two spaces a Banach space. It is not hard to see that

  ‖w‖₀ = max{ ‖P^-w‖₀, ‖P^+w‖₀ }.

Now pick ε > 0 small enough that

  ε < min{ 1 − a, inf_{x≠0} ‖Ax‖/‖x‖ }.
Choose, and fix, a function g ∈ C¹_b(ℝⁿ) for which Lip(g) < ε. The (global) Hartman-Grobman Theorem (for maps) will be proved by constructing a map from C⁰_b(ℝⁿ) to C⁰_b(ℝⁿ) whose fixed points would be precisely those objects v which, when added to the identity I, would yield solutions h to the conjugacy equation

  (A + g) ∘ h = h ∘ A,  (4.3)

and then showing that this map is a contraction (and that h is a homeomorphism).
Plugging h = I + v into (4.3) and manipulating the result, we can see that that equation is equivalent to the equation

  Lv = Γ(v),  (4.4)

where Γ(v) := g ∘ (I + v) ∘ A^{-1} and

  Lv := v − A ∘ v ∘ A^{-1} =: (id − 𝒜)v.

Since the composition of continuous functions is continuous, and the composition of functions is bounded if the outer function in the composition is bounded, it is clear that Γ is a (nonlinear) map from C⁰_b(ℝⁿ) to C⁰_b(ℝⁿ). Similarly, 𝒜 and, therefore, L are linear maps from C⁰_b(ℝⁿ) to C⁰_b(ℝⁿ). We will show that L can be inverted and then apply L^{-1} to both sides of (4.4) to get

  v = L^{-1}(Γ(v)) =: Θ(v).  (4.5)
Inverting L
Since A behaves significantly differently on E^- than it does on E^+, 𝒜 and, therefore, L behave significantly differently on C⁰_b(E^-) than they do on C⁰_b(E^+). For this reason, we will analyze L by looking at its restrictions to C⁰_b(E^-) and to C⁰_b(E^+). Note that C⁰_b(E^-) and C⁰_b(E^+) are invariant under 𝒜 and, therefore, under L. Therefore, it makes sense to let 𝒜^- ∈ L(C⁰_b(E^-), C⁰_b(E^-)) and 𝒜^+ ∈ L(C⁰_b(E^+), C⁰_b(E^+)) be the restrictions of 𝒜 to C⁰_b(E^-) and C⁰_b(E^+), respectively, and let L^- ∈ L(C⁰_b(E^-), C⁰_b(E^-)) and L^+ ∈ L(C⁰_b(E^+), C⁰_b(E^+)) be the corresponding restrictions of L. Then L will be invertible if and only if L^- and L^+ are each invertible. To invert L^- and L^+ we use the following general result about the invertibility of operators on Banach spaces.
Lemma. Let X be a Banach space with norm ‖·‖_X and corresponding operator norm ‖·‖_{L(X,X)}. Let G be a linear map from X to X, and let c < 1 be a constant. Then:

(a) If ‖G‖_{L(X,X)} ≤ c, then id − G is invertible and

  ‖(id − G)^{-1}‖_{L(X,X)} ≤ 1/(1 − c).

(b) If G is invertible and ‖G^{-1}‖_{L(X,X)} ≤ c, then id − G is invertible and

  ‖(id − G)^{-1}‖_{L(X,X)} ≤ c/(1 − c).

Proof. The space of bounded linear maps from X to X is a Banach space using the operator norm. In case (a), the bound on ‖G‖_{L(X,X)}, along with the Cauchy convergence criterion, shows that the Neumann series

  ∑_{k=0}^∞ G^k

converges to a bounded operator; multiplying by id − G and telescoping shows that its sum is (id − G)^{-1}, and its norm is at most ∑_{k=0}^∞ c^k = 1/(1 − c). In case (b), write id − G = −G(id − G^{-1}); by (a) applied to G^{-1}, id − G^{-1} is invertible, so id − G is invertible and

  ‖(id − G)^{-1}‖_{L(X,X)} ≤ ‖G^{-1}‖_{L(X,X)} ‖(id − G^{-1})^{-1}‖_{L(X,X)} ≤ c/(1 − c).
Consider, first, 𝒜^-. If w ∈ C⁰_b(E^-), then

  ‖𝒜^- w‖₀ = ‖A ∘ w ∘ A^{-1}‖₀ = sup_{x∈ℝⁿ} ‖Aw(A^{-1}x)‖ ≤ a sup_{x∈ℝⁿ} ‖w(A^{-1}x)‖ ≤ a‖w‖₀,

so the operator norm of 𝒜^- is bounded by a. Applying the first half of the lemma with X = C⁰_b(E^-), G = 𝒜^-, and c = a, we find that L^- is invertible, and its inverse has operator norm bounded by (1 − a)^{-1}.
Next, consider 𝒜^+. It is not too hard to see that 𝒜^+ is invertible, and (𝒜^+)^{-1}w = A^{-1} ∘ w ∘ A. If w ∈ C⁰_b(E^+), then (since the eigenvalues of the restriction of A^{-1} to E^+ all have magnitude less than a)

  ‖(𝒜^+)^{-1}w‖₀ = ‖A^{-1} ∘ w ∘ A‖₀ = sup_{x∈ℝⁿ} ‖A^{-1}w(Ax)‖ = sup_{y∈ℝⁿ} ‖A^{-1}w(y)‖ ≤ a‖w‖₀,

so the operator norm of (𝒜^+)^{-1} is bounded by a. Applying the second half of the lemma with X = C⁰_b(E^+), G = 𝒜^+, and c = a, we find that L^+ is invertible, and its inverse has operator norm bounded by a(1 − a)^{-1}.
Putting these two facts together, we see that L is invertible, and, in fact,

  L^{-1} = (L^-)^{-1}P^- + (L^+)^{-1}P^+.

For each w ∈ C⁰_b(ℝⁿ),

  ‖L^{-1}w‖₀ = sup_{x∈ℝⁿ} max{ ‖P^-L^{-1}w(x)‖_-, ‖P^+L^{-1}w(x)‖_+ } ≤ max{ (1 − a)^{-1}, a(1 − a)^{-1} } ‖w‖₀,

so the operator norm of L^{-1} is bounded by (1 − a)^{-1}.
To see that Θ is a contraction, estimate, for v₁, v₂ ∈ C⁰_b(ℝⁿ),

  ‖Θ(v₁) − Θ(v₂)‖₀ = ‖L^{-1}(Γ(v₁) − Γ(v₂))‖₀ ≤ (1 − a)^{-1} ‖g ∘ (I + v₁) ∘ A^{-1} − g ∘ (I + v₂) ∘ A^{-1}‖₀
   = (1 − a)^{-1} sup_{x∈ℝⁿ} ‖g(A^{-1}x + v₁(A^{-1}x)) − g(A^{-1}x + v₂(A^{-1}x))‖
   ≤ (1 − a)^{-1} Lip(g) sup_{x∈ℝⁿ} ‖v₁(A^{-1}x) − v₂(A^{-1}x)‖
   ≤ ε(1 − a)^{-1} ‖v₁ − v₂‖₀.

This shows that Θ is a contraction, since ε was chosen to be less than 1 − a.
By the contraction mapping theorem, we know that Θ has a unique fixed point v ∈ C⁰_b(ℝⁿ); the function h := I + v satisfies F ∘ h = h ∘ A, where F := A + g. It remains to show that h is a homeomorphism.
Injectivity
Before we show that h itself is injective, we show that F is injective. Suppose it weren't. Then we could choose x₁, x₂ ∈ ℝⁿ such that x₁ ≠ x₂ but F(x₁) = F(x₂). This would mean that Ax₁ + g(x₁) = Ax₂ + g(x₂), so

  ‖A(x₁ − x₂)‖/‖x₁ − x₂‖ = ‖Ax₁ − Ax₂‖/‖x₁ − x₂‖ = ‖g(x₁) − g(x₂)‖/‖x₁ − x₂‖ ≤ Lip(g) < ε ≤ inf_{x≠0} ‖Ax‖/‖x‖,

which is a contradiction. Hence, F is injective.
Now suppose that h(x₁) = h(x₂). Applying the conjugacy equation F ∘ h = h ∘ A repeatedly gives h(Aⁿx₁) = h(Aⁿx₂) for every n ∈ N. Also,

  F(h(A^{-1}x₁)) = h(AA^{-1}x₁) = h(x₁) = h(x₂) = F(h(A^{-1}x₂)),

so the injectivity of F implies that h(A^{-1}x₁) = h(A^{-1}x₂); by induction, we have h(A^{-n}x₁) = h(A^{-n}x₂) for every n ∈ N. Set z = x₁ − x₂. Since I = h − v, we know that for any n ∈ ℤ

  ‖Aⁿz‖ = ‖Aⁿx₁ − Aⁿx₂‖ = ‖(h(Aⁿx₁) − v(Aⁿx₁)) − (h(Aⁿx₂) − v(Aⁿx₂))‖ = ‖v(Aⁿx₂) − v(Aⁿx₁)‖ ≤ 2‖v‖₀.

Because of the way the norm was chosen, we then know that for n ≥ 0

  ‖P^+z‖ ≤ aⁿ‖AⁿP^+z‖ ≤ aⁿ‖Aⁿz‖ ≤ 2aⁿ‖v‖₀ → 0

as n ↑ ∞, and we know that for n ≤ 0

  ‖P^-z‖ ≤ a^{|n|}‖AⁿP^-z‖ ≤ a^{|n|}‖Aⁿz‖ ≤ 2a^{|n|}‖v‖₀ → 0

as n ↓ −∞. Hence, z = P^-z + P^+z = 0, so x₁ = x₂.
Surjectivity
It may seem intuitive that a map like h that is a bounded perturbation of the
identity is surjective. Unfortunately, there does not appear to be a way of proving
this that is simultaneously elementary, short, and complete. We will therefore
rely on the following topological theorem without proving it.
Theorem. (Invariance of Domain) Every continuous injective map from Rn to
Rn maps open sets to open sets.
In particular, this theorem implies that h.Rn / is open. If we can show that
h.Rn / is closed, then (since h.Rn / is clearly nonempty) this will mean that
h.Rn / D Rn , i.e., h is surjective.
So, suppose we have a sequence (h(x_k)) of points in h(ℝⁿ) that converges to a point y ∈ ℝⁿ. Without loss of generality, assume that

  ‖h(x_k) − y‖ ≤ 1

for every k. This implies that ‖h(x_k)‖ ≤ ‖y‖ + 1, which in turn implies that ‖x_k‖ ≤ ‖y‖ + ‖v‖₀ + 1. Thus, the sequence (x_k) is bounded and therefore
has a subsequence .xk` / converging to some point x0 2 Rn . By continuity of
h, .h.xk` // converges to h.x0 /, which means that h.x0 / D y. Hence, h.Rn / is
closed.
We now return to the local problem: the autonomous equation

  ẋ = f(x),  (4.6)

with an equilibrium point that, without loss of generality, is located at the origin. For x near 0, f(x) ≈ Bx, where B = Df(0). Our goal is to come up with a modification f̃ of f such that f̃(x) = f(x) for x near 0 and f̃(x) ≈ Bx for all x. If we accomplish this goal, whatever information we obtain about the relationship between the equations

  ẋ = f̃(x)  (4.7)

and

  ẋ = Bx  (4.8)

will also hold between (4.6) and (4.8) for x small.
Pick ψ: [0, ∞) → [0, 1] to be a C¹ function satisfying

  ψ(s) = 1 if s ≤ 1,  ψ(s) = 0 if s ≥ 2,

and let C = sup_{s∈[0,∞)} |ψ′(s)|. Given ε > 0, pick r > 0 so small that

  ‖Df(x) − B‖ < ε/(2C + 1)

whenever ‖x‖ ≤ 2r. (We can do this since Df(0) = B and Df is continuous.) Define f̃ by the formula

  f̃(x) = Bx + ψ(‖x‖/r)(f(x) − Bx).

Note that f̃ is continuously differentiable, agrees with f for ‖x‖ ≤ r, and agrees with B for ‖x‖ ≥ 2r. We claim that f̃ − B has Lipschitz constant less than ε.
Assuming, without loss of generality, that ‖x‖ and ‖y‖ are less than or equal to 2r, we have (using the Mean Value Theorem)

  ‖(f̃(x) − Bx) − (f̃(y) − By)‖
   = ‖ψ(‖x‖/r)(f(x) − Bx) − ψ(‖y‖/r)(f(y) − By)‖
   ≤ ψ(‖x‖/r)‖(f(x) − Bx) − (f(y) − By)‖ + |ψ(‖x‖/r) − ψ(‖y‖/r)| ‖f(y) − By‖
   ≤ (ε/(2C + 1))‖x − y‖ + (C/r)|‖x‖ − ‖y‖| · (ε/(2C + 1)) · 2r
   ≤ (ε/(2C + 1))‖x − y‖ + 2C(ε/(2C + 1))‖x − y‖
   = ε‖x − y‖.
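The cutoff construction is concrete enough to run. The sketch below (an illustration, with f = sin as an assumed example, B = f′(0) = 1, and a cubic "smoothstep" standing in for the C¹ bump ψ) builds f̃ and checks that it agrees with f near 0, agrees with Bx far away, and deviates from the linearization only slightly.

```python
# A sketch of the cutoff construction in one dimension (illustrative choices):
# psi is a C^1 "smoothstep" equal to 1 on [0,1] and 0 on [2,oo), and f_tilde
# agrees with f near 0 while agreeing with the linearization Bx far away.
import math

def psi(s):
    if s <= 1.0:
        return 1.0
    if s >= 2.0:
        return 0.0
    u = s - 1.0
    return 1.0 - 3.0 * u**2 + 2.0 * u**3   # C^1 join at s = 1 and s = 2

B, r = 1.0, 0.5                  # B = f'(0) for f = sin
f = math.sin

def f_tilde(x):
    return B * x + psi(abs(x) / r) * (f(x) - B * x)

assert abs(f_tilde(0.3) - f(0.3)) < 1e-12    # |x| <= r: unchanged
assert f_tilde(2.0) == B * 2.0               # |x| >= 2r: purely linear
# the modification f_tilde - Bx is small everywhere because Df(0) = B:
assert max(abs(f_tilde(x) - B * x)
           for x in [i / 100 for i in range(-300, 301)]) < 0.2
```

Shrinking r shrinks the sup-norm and Lipschitz constant of f̃ − B, exactly as the estimate above requires.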
Now, consider the difference between e^B and φ(1, ·), where φ is the flow generated by f̃. Let g(x) = φ(1, x) − e^B x. Then, since f̃(x) = Bx for all large x, g(x) = 0 for all large x. Also, g is continuously differentiable, so g ∈ C¹_b(ℝⁿ). If we apply the variation of constants formula to (4.7) rewritten as

  ẋ = Bx + (f̃(x) − Bx),

we find that

  g(x) = ∫₀¹ e^{(1−s)B} (f̃(φ(s, x)) − Bφ(s, x)) ds,

so

  ‖g(x) − g(y)‖ ≤ ∫₀¹ ‖e^{(1−s)B}‖ ‖(f̃(φ(s, x)) − Bφ(s, x)) − (f̃(φ(s, y)) − Bφ(s, y))‖ ds
   ≤ ε ∫₀¹ ‖e^{(1−s)B}‖ ‖φ(s, x) − φ(s, y)‖ ds
   ≤ ε‖x − y‖ ∫₀¹ ‖e^{(1−s)B}‖ e^{(‖B‖+ε)s} ds,

where in the last step we used Gronwall's inequality to estimate ‖φ(s, x) − φ(s, y)‖. Consequently, by choosing ε sufficiently small we can make Lip(g) as small as we need.
Conjugacy for t = 1
If 0 is a hyperbolic equilibrium point of (4.6) (and therefore of (4.7)), then none of the eigenvalues of B are purely imaginary. Setting A = e^B, it is not hard to show that the eigenvalues of A are the exponentials of the eigenvalues of B, so none of the eigenvalues of A have modulus 1; i.e., A is hyperbolic. Also, A is invertible (since A^{-1} = e^{-B}), so we can apply the global Hartman-Grobman Theorem for maps and conclude that there is a homeomorphism h: ℝⁿ → ℝⁿ such that

  φ(1, h(x)) = h(e^B x)  (4.9)

for every x ∈ ℝⁿ.
Conjugacy for t ≠ 1
For the Hartman-Grobman Theorem for flows, we need

  φ(t, h(x)) = h(e^{tB} x)

for every x ∈ ℝⁿ and every t ∈ ℝ. Fix t ∈ ℝ, and consider the function h̃ defined by the formula

  h̃(x) = φ(t, h(e^{-tB} x)).  (4.10)

As the composition of homeomorphisms, h̃ is a homeomorphism. Furthermore, the fact that h satisfies (4.9) implies that

  φ(1, h̃(x)) = φ(1, φ(t, h(e^{-tB} x))) = φ(t, φ(1, h(e^{-tB} x))) = φ(t, h(e^B e^{-tB} x)) = h̃(e^B x),

so (4.9) holds if h is replaced by h̃.
Now,

  h̃ − I = φ(t, ·) ∘ h ∘ e^{-tB} − I = (φ(t, ·) − e^{tB}) ∘ h ∘ e^{-tB} + e^{tB} ∘ (h − I) ∘ e^{-tB} =: v₁ + v₂.

The fact that φ(t, x) and e^{tB}x agree for large x implies that φ(t, ·) − e^{tB} is bounded, so v₁ is bounded, as well. The fact that h − I is bounded implies that v₂ is bounded. Hence, h̃ − I is bounded.
The uniqueness part of the global Hartman-Grobman Theorem for maps now implies that h and h̃ must be the same function. Using this fact and substituting y = e^{-tB}x in (4.10) yields

  h(e^{tB} y) = φ(t, h(y))

for every y ∈ ℝⁿ and every t ∈ ℝ. This means that the flows generated by (4.8) and (4.7) are globally topologically conjugate, and the flows generated by (4.8) and (4.6) are locally topologically conjugate.
4.6 Constructing Conjugacies

Consider a pair of autonomous differential equations

  ẋ = f(x)  (4.11)

and

  ẋ = g(x),  (4.12)

generating, respectively, the flows φ and ψ. Recall that the conjugacy equation for φ and ψ is

  φ(t, h(x)) = h(ψ(t, x))  (4.13)
for every x and t. Not only is (4.13) somewhat complicated, it appears to require
you to solve (4.11) and (4.12) before you can look for a conjugacy h. Suppose,
however, that h is a differentiable conjugacy. Then, we can differentiate both
sides of (4.13) with respect to t to get

  f(φ(t, h(x))) = Dh(ψ(t, x)) g(ψ(t, x)).  (4.14)

Substituting (4.13) into the left-hand side of (4.14) and then replacing ψ(t, x) by x, we get the equivalent equation

  f(h(x)) = Dh(x) g(x).  (4.15)
Note that (4.15) involves the functions appearing in the differential equations,
rather than the formulas for the solutions of those equations. Note, also, that
(4.15) is the same equation you would get if you took a solution x of (4.12) and
required the function h x to satisfy (4.11).
As a first example, consider the one-dimensional linear equations

  ẋ = ax  (4.16)

and

  ẋ = bx  (4.17)

for real constants a and b. For these equations, (4.15) takes the form

  a h(x) = b x h′(x),  (4.18)

whose solutions on (0, ∞) (when b ≠ 0) are

  h(x) = C|x|^{a/b}  (4.19)

for some constant C. Clearly, (4.19) does not define a topological conjugacy for a single constant C, because it fails to be injective on ℝ; however, the formula

  h(x) = x|x|^{a/b − 1} if x ≠ 0,  h(0) = 0,  (4.20)

which is obtained from (4.19) by taking C = 1 for positive x and C = −1 for negative x, defines a homeomorphism if ab > 0. Even though the function defined in (4.20) may fail to be differentiable at 0, substitution of it into

  e^{ta} h(x) = h(e^{tb} x),  (4.21)

which is (4.13) for this example, shows that it does, in fact, define a topological conjugacy when ab > 0. (Note that in no case is this a C¹-conjugacy, since either h′(0) or (h^{-1})′(0) does not exist.)
Now, suppose that ab ≤ 0. Does a topological (possibly nondifferentiable) conjugacy exist? If ab = 0, then (4.21) implies that h is constant, which violates injectivity, so suppose that ab < 0. In this case, substituting x = 0 and t = 1 into (4.21) implies that h(0) = 0. Fixing x ≠ 0 and letting t·sgn b ↓ −∞ in (4.21), we see that the continuity of h implies that h(x) = 0, also, which again violates injectivity.
Summarizing, for a ≠ b there is a topological conjugacy of (4.16) and (4.17) if and only if ab > 0, and these conjugacies are never C¹.
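The conjugacy equation (4.21) for the homeomorphism (4.20) can be verified numerically. The sketch below (illustrative parameter values a = 3, b = 1.5, same sign so ab > 0) checks the identity e^{ta} h(x) = h(e^{tb} x) at random x and t.

```python
# Numerical check of (4.21) for the conjugacy (4.20): with a and b of the same
# sign, h(x) = x*|x|**(a/b - 1) intertwines the flows e^{ta} x and e^{tb} x.
import math, random

def h(x, a, b):
    return 0.0 if x == 0.0 else x * abs(x) ** (a / b - 1.0)

random.seed(3)
a, b = 3.0, 1.5                      # ab > 0: conjugacy exists
for _ in range(200):
    x = random.uniform(-2.0, 2.0)
    t = random.uniform(-2.0, 2.0)
    lhs = math.exp(t * a) * h(x, a, b)
    rhs = h(math.exp(t * b) * x, a, b)
    assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(lhs))
```

The identity holds exactly because h(e^{tb}x) = e^{tb}·(e^{tb})^{a/b−1}·x|x|^{a/b−1} = e^{ta}h(x); the code merely confirms the algebra in floating point.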
Now consider the analogous question for linear maps: when is F(x) := ax topologically conjugate to G(x) := bx?  (4.22)

The conjugacy equation h ∘ G = F ∘ h here reads

  h(bx) = a h(x)  (4.23)

for every x ∈ ℝ, and iterating it gives

  h(bⁿx) = aⁿ h(x)  (4.24)

for every n ∈ N. If |a| < 1 < |b| or |b| < 1 < |a|, then fixing x ≠ 0 and letting n ↑ ∞ in (4.24) (or in the corresponding identity for negative n) shows, using the continuity of h and h(0) = 0, that h(x) = 0, violating injectivity. Moreover:
If b > 0 and a < 0, then (4.23) implies that h(1) and h(b) have opposite signs even though 1 and b have the same sign; consequently, the Intermediate Value Theorem yields a contradiction to injectivity.
If b < 0 and a > 0, then (4.23) gives a similar contradiction.
Thus, the only cases where we could possibly have conjugacy are those where a and b are both in the same component of

  (−∞, −1) ∪ (−1, 0) ∪ (0, 1) ∪ (1, ∞).
When this condition is met, experimentation (or experience) suggests trying h of the form h(x) = x|x|^{p−1} for some constant p > 0 (with h(0) = 0). This is a homeomorphism from ℝ to ℝ, and plugging it into (4.23) shows that it provides a conjugacy if a = b|b|^{p−1} or, in other words, if

  p = log|a| / log|b|.
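Again the claim is easy to confirm. The sketch below (illustrative pairs a, b in the same component) computes p = log|a|/log|b| and checks that h(x) = x|x|^{p−1} satisfies the conjugacy equation (4.23).

```python
# Numerical check for the map case: with a and b in the same component of
# (-oo,-1) u (-1,0) u (0,1) u (1,oo) and p = log|a|/log|b|, the homeomorphism
# h(x) = x*|x|**(p-1) satisfies h(bx) = a*h(x).
import math, random

def check(a, b):
    p = math.log(abs(a)) / math.log(abs(b))
    h = lambda x: 0.0 if x == 0.0 else x * abs(x) ** (p - 1.0)
    random.seed(4)
    for _ in range(200):
        x = random.uniform(-3.0, 3.0)
        assert abs(h(b * x) - a * h(x)) <= 1e-9 * max(1.0, abs(a * h(x)))

check(4.0, 2.0)      # both in (1, oo): p = 2, h(x) = x|x|
check(-0.5, -0.25)   # both in (-1, 0): p = 1/2
```

The algebra behind the check: h(bx) = sgn(b)|b|^p h(x), and |b|^p = |a| by the choice of p, so the equation holds precisely when sgn(a) = sgn(b).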
4.7 Smooth Conjugacies

Since a ≠ b, either h or h^{-1} fails to be differentiable at 0. Is there some other formula that provides a C¹-conjugacy? No, because if there were, we could differentiate both sides of (4.23) with respect to x and evaluate at x = 0 to get h′(0) = 0, which would mean that (h^{-1})′(0) is undefined.
Exercise 16 Define F: ℝ² → ℝ² by the formula

  F(x, y) = (x/2, 2y + x²),

and let A = DF(0).
(a) Show that the maps F and A are topologically conjugate.
(b) Show that the flows generated by the differential equations
zP D F .z/
and
zP D Az
are topologically conjugate.
(Hint: Try quadratic conjugacy functions.)
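Following the hint, one candidate (a sketch, and not the only possible choice) is h(x, y) = (x, y + cx²). The constant c is not given in the text; matching F(h(x, y)) = h(A(x, y)) term by term forces 2c + 1 = c/4, i.e. c = −4/7, and the code below confirms the resulting identity.

```python
# One candidate consistent with the hint (illustrative, not the only choice):
# for F(x,y) = (x/2, 2y + x^2) and A = DF(0) = diag(1/2, 2), the quadratic
# map h(x,y) = (x, y + c*x^2) with c = -4/7 satisfies F o h = h o A exactly.
import random

c = -4.0 / 7.0
F = lambda x, y: (x / 2.0, 2.0 * y + x * x)
A = lambda x, y: (x / 2.0, 2.0 * y)
h = lambda x, y: (x, y + c * x * x)

random.seed(5)
for _ in range(200):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    lhs = F(*h(x, y))
    rhs = h(*A(x, y))
    assert abs(lhs[0] - rhs[0]) < 1e-12
    assert abs(lhs[1] - rhs[1]) < 1e-9
```

Since h is an invertible polynomial change of variables, it is a (global, smooth) conjugacy of F with its linearization, which settles part (a).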
Hartman's Example
Consider the system

  ẋ = αx
  ẏ = (α − γ)y + εxz
  ż = −γz,

where α > γ > 0 and ε ≠ 0. We will not cut off this vector field but will instead confine our attention to x, y, z small. A calculation shows that the time-1 map F = φ(1, ·) of this system is given by

  F(x, y, z) = (ax, ac(y + εxz), cz),

where a = e^α and c = e^{−γ}. Note that a > ac > 1 > c > 0. The time-1 map B of the linearization of the differential equation is given by

  B(x, y, z) = (ax, acy, cz).
A local conjugacy H = (f, g, h) of B with F must satisfy F(H(x, y, z)) = H(B(x, y, z)) for every x, y, z near 0. Writing k(x, z) for k(x, 0, z), where k ∈ {f, g, h}, we have, on the invariant plane y = 0,

  a f(x, z) = f(ax, cz),  (4.25)
  ac(g(x, z) + ε f(x, z) h(x, z)) = g(ax, cz),  (4.26)
  c h(x, z) = h(ax, cz).  (4.27)
We will repeatedly use the following observation: suppose j is a continuous function defined near 0 with j(0) = 0, and suppose that

  μ j(u) = j(λu)  (4.28)

for every u near 0, where |λ| < 1 < |μ| or |μ| < 1 < |λ|. Iterating (4.28) gives

  μⁿ j(u) = j(λⁿ u)  (4.29)

for every n ∈ N. If |λ| < 1 < |μ|, letting n ↑ ∞ in (4.29), we see that j(u) must be zero. If |μ| < 1 < |λ|, substitute v = λu into (4.28) to get

  μ^{-1} j(v) = j(λ^{-1} v)  (4.30)

for every v near zero, and iterate as before to get

  μ^{-n} j(v) = j(λ^{-n} v)  (4.31)

for every n ∈ N; letting n ↑ ∞ again shows that j must vanish near 0. We will refer to this observation as the Lemma.
Setting x = 0 in (4.25) and applying the Lemma (with μ = a, λ = c) gives

  f(0, z) = 0  (4.32)

for every z near zero. Setting z = 0 in (4.27) and applying the Lemma gives

  h(x, 0) = 0  (4.33)

for every x near zero. Setting x = 0 in (4.26), using (4.32), and applying the Lemma gives

  g(0, z) = 0  (4.34)

for every z near zero. If we set z = 0 in (4.26), use (4.33), and then differentiate both sides with respect to x, we get c gₓ(x, 0) = gₓ(ax, 0); applying the Lemma yields

  gₓ(x, 0) = 0  (4.35)

for every x near zero. Combining (4.35) with g(0, 0) = 0, we get

  g(x, 0) = 0  (4.36)

for every x near zero.
Iterating (4.27) and (4.25) gives

  h(x, z) = c^{-n} h(aⁿx, cⁿz)  (4.37)

and

  f(x, z) = a^{-n} f(aⁿx, cⁿz)  (4.38)

for every n ∈ N, and iterating (4.26) and using (4.37) and (4.38) gives

  c^{-n} g(x, cⁿz) = aⁿ g(a^{-n}x, z) + nε f(x, cⁿz) h(a^{-n}x, z)  (4.39)

for every n ∈ N.
The existence of gₓ(0, z) and g_z(x, 0), along with equations (4.34) and (4.36), implies that aⁿ g(a^{-n}x, z) and c^{-n} g(x, cⁿz) stay bounded as n ↑ ∞. Using this fact, dividing (4.39) by n, and letting n ↑ ∞, we get

  f(x, 0) h(0, z) = 0,

so f(x, 0) = 0 or h(0, z) = 0. If f(x, 0) = 0, then, in combination with (4.33) and (4.36), this tells us that H is not injective in a neighborhood of the origin. Similarly, if h(0, z) = 0 then, in combination with (4.32) and (4.34), this implies a violation of injectivity, as well.
A resonance among complex numbers (λ₁, λ₂, …, λ_n) is a relation of the form

  λ_s = ∑_{k=1}^n m_k λ_k,  (4.41)

where the m_k are nonnegative integers with ∑_{k=1}^n m_k ≥ 2.

Definition. We say that (λ₁, λ₂, …, λ_n) ∈ ℂⁿ satisfy a Siegel condition if there are constants C > 0 and ν > 1 such that

  |λ_s − ∑_{k=1}^n m_k λ_k| ≥ C / (∑_{k=1}^n m_k)^ν

for every s ∈ {1, …, n} and all nonnegative integers m₁, …, m_n with ∑_{k=1}^n m_k ≥ 2.
5 Invariant Manifolds
5.1 Stable Manifold Theorem: Part 1
The Hartman-Grobman Theorem states that the flow generated by a smooth vector field in a neighborhood of a hyperbolic equilibrium point is topologically conjugate with the flow generated by its linearization. Hartman's counterexample shows that, in general, the conjugacy cannot be taken to be C¹. However, the
Stable Manifold Theorem will tell us that there are important structures for the
two flows that can be matched up by smooth changes of variable. In this section,
we will discuss the Stable Manifold Theorem on an informal level and discuss
two different approaches to proving it.
Let Ω ⊆ ℝⁿ be open, let f: Ω → ℝⁿ be C¹, and let φ be the (local) flow generated by the differential equation

  ẋ = f(x).  (5.1)

Suppose that x₀ is a hyperbolic equilibrium point of (5.1).

Definition. The (global) stable manifold of x₀ is the set

  W^s(x₀) := { x ∈ Ω : lim_{t↑∞} φ(t, x) = x₀ },

and the (global) unstable manifold of x₀ is the set

  W^u(x₀) := { x ∈ Ω : lim_{t↓−∞} φ(t, x) = x₀ }.
Definition. Given a neighborhood U of x₀, the local stable manifold of x₀ (relative to U) is the set

  W^s_loc(x₀) := { x ∈ U : γ⁺(x) ⊆ U and lim_{t↑∞} φ(t, x) = x₀ }.

Definition. Given a neighborhood U of x₀, the local unstable manifold of x₀ (relative to U) is the set

  W^u_loc(x₀) := { x ∈ U : γ⁻(x) ⊆ U and lim_{t↓−∞} φ(t, x) = x₀ }.

Note that:
W^s_loc(x₀) ⊆ W^s(x₀), and W^u_loc(x₀) ⊆ W^u(x₀).
W^s_loc(x₀) and W^u_loc(x₀) are both nonempty, since they each contain x₀.
In the form we will make precise below, the local manifolds of a hyperbolic equilibrium at the origin are graphs over neighborhoods U^s and U^u of 0 in the stable and unstable subspaces:

  W^s_loc(0) = { x + h_s(x) : x ∈ U^s }

and

  W^u_loc(0) = { x + h_u(x) : x ∈ U^u }.
Liapunov-Perron Approach
This approach to proving the Stable Manifold Theorem rewrites (5.1) as

  ẋ = Ax + g(x),  (5.2)

where A = Df(0) and g(x) := f(x) − Ax is higher order near 0. The variation of constants formula gives

  x(t₂) = e^{(t₂−t₁)A} x(t₁) + ∫_{t₁}^{t₂} e^{(t₂−s)A} g(x(s)) ds,  (5.3)

where the subscript s attached to a quantity will denote the projection of that quantity onto E^s (and, similarly, the subscript u the projection onto E^u), with A_s and A_u the corresponding restrictions of A. If we assume that the solution x(t) lies on W^s(0), set t₂ = t, let t₁ ↑ ∞, and project (5.3) onto E^u, we get

  x_u(t) = −∫_t^∞ e^{(t−s)A_u} g_u(x(s)) ds.

Now, fix a_s ∈ E^s, and define a functional T by

  (Tx)(t) = e^{tA_s} a_s + ∫_0^t e^{(t−s)A_s} g_s(x(s)) ds − ∫_t^∞ e^{(t−s)A_u} g_u(x(s)) ds.

A fixed point x of this functional will solve (5.2), will have a range contained in the stable manifold, and will satisfy x_s(0) = a_s. If we set h_s(a_s) = x_u(0) and define h_s similarly for other inputs, the graph of h_s will be the stable manifold.
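The graph picture is easy to see on a worked example. The sketch below uses the assumed model system ẋ = −x, ẏ = y + x² (not from the text): its stable subspace is the x-axis, its stable manifold is the graph y = h_s(x) = −x²/3, and a trajectory started on the graph stays on it and decays to the origin.

```python
# An illustration of the stable-manifold-as-graph idea (assumed example):
# for x' = -x, y' = y + x^2 the stable manifold is y = -x^2/3.  Starting on
# the graph, the RK4 solution stays on it and tends to the origin.

def step(v, dt):                      # one RK4 step for v = (x, y)
    f = lambda x, y: (-x, y + x * x)
    add = lambda u, k, s: (u[0] + s * k[0], u[1] + s * k[1])
    k1 = f(*v); k2 = f(*add(v, k1, dt / 2))
    k3 = f(*add(v, k2, dt / 2)); k4 = f(*add(v, k3, dt))
    return (v[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x0 = 0.5
v = (x0, -x0 * x0 / 3.0)             # a point on the graph of h_s
for _ in range(5000):
    v = step(v, 1e-3)                # integrate out to t = 5
    assert abs(v[1] + v[0] ** 2 / 3.0) < 1e-6   # stays on the manifold

assert abs(v[0]) < 1e-2 and abs(v[1]) < 1e-2    # tends to the origin
```

Here h_s can be found by exactly the fixed-point scheme above: substituting x(t) = a_s e^{−t} into the E^u integral gives x_u(0) = −∫₀^∞ e^{−s}(a_s e^{−s})² ds = −a_s²/3.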
Hadamard Approach
The Hadamard approach uses what is known as a graph transform. Here we define a functional not by an integral but by letting the graph of the input function move with the flow φ and selecting the output function to be the function whose graph is the image of the original graph after, say, 1 unit of time has elapsed.
More precisely, suppose h is a function from E^s to E^u. Define its graph transform F h to be the function whose graph is the set

  { φ(1, ξ + h(ξ)) : ξ ∈ E^s }.  (5.4)
  ẋ = f(x).  (5.5)

Then, for sufficiently small r > 0, there is a C¹ function h: E^s(r) → E^u with h(0) = 0 and Dh(0) = 0 whose graph is a local stable manifold:

  W^s_loc(0) = { v_s + h(v_s) : v_s ∈ E^s(r) }.

Moreover, there is a constant c > 0 such that

  W^s_loc(0) = { v ∈ B(r) : γ⁺(v) ⊆ B(r) and lim_{t↑∞} e^{ct} φ(t, v) = 0 }.
Two immediate and obvious corollaries, which we will not state explicitly,
describe the stable manifolds of other equilibrium points (via translation) and
describe unstable manifolds (by time reversal).
We will actually prove this theorem by first proving an analogous theorem for maps (much as we did with the Hartman-Grobman Theorem). Given a neighborhood U of a fixed point p of a map F, we can define the local stable manifold of p (relative to U) as

  W^s_loc(p) := { x ∈ U : F^j(x) ∈ U for every j ∈ N and lim_{j↑∞} F^j(x) = p }.

The analogous theorem asserts that, for r sufficiently small, the local stable manifold of a hyperbolic fixed point of F at 0 (relative to B(r)) is again the graph of a C¹ function:

  W^s_loc(0) = { v_s + h(v_s) : v_s ∈ E^s(r) }.
Preliminaries
The proof of the Stable Manifold Theorem for Maps will be broken up into a series of lemmas. Before stating and proving those lemmas, we need to lay a foundation by introducing some terminology and notation and by choosing some constants.
We know that F(0) = 0 and DF(0) is hyperbolic. Then ℝⁿ = E^s ⊕ E^u, π_s and π_u are the corresponding projection operators, E^s and E^u are invariant under DF(0), and there are constants λ < 1 and μ > 1 such that all of the eigenvalues of DF(0)|E^s have magnitude less than λ and all of the eigenvalues of DF(0)|E^u have magnitude greater than μ.
When we deal with a matrix representation of DF(q), it will be with respect to a basis that consists of a basis for E^s followed by a basis for E^u. Thus,

  DF(q) = [ A_ss(q)  A_su(q) ]
          [ A_us(q)  A_uu(q) ],

where, for example, A_su(q) is a matrix representation of π_s DF(q)|E^u in terms of the basis for E^u and the basis for E^s. Note that, by invariance, A_su(0) = A_us(0) = 0. Furthermore, we can pick our basis vectors so that, with ‖·‖ being the corresponding Euclidean norm of a vector in E^s or in E^u,

  sup_{v_s ≠ 0} ‖A_ss(0)v_s‖/‖v_s‖ < λ  and  inf_{v_u ≠ 0} ‖A_uu(0)v_u‖/‖v_u‖ > μ.
(The functional m(·) defined implicitly in the last formula, m(L) := inf_{v≠0} ‖Lv‖/‖v‖, is sometimes called the minimum norm even though it is not a norm.) For a vector v ∈ ℝⁿ, let ‖v‖ = max{ ‖π_s v‖, ‖π_u v‖ }. This will be the norm on ℝⁿ that will be used throughout the proof. Note that B(r) := E^s(r) ⊕ E^u(r) is the closed ball of radius r in ℝⁿ with respect to this norm.
Next, we choose r. Fix α > 0. Pick ε > 0 small enough that

  λ + ε + εα < 1 < μ − ε/α − 2ε,

and then pick r > 0 small enough that ‖DF(q) − DF(0)‖ < ε whenever q ∈ B(r).
The stable cone (of slope α) is

  C^s(α) := { v ∈ ℝⁿ : ‖π_u v‖ ≤ α‖π_s v‖ },

and the unstable cone (of slope α) is

  C^u(α) := { v ∈ ℝⁿ : ‖π_u v‖ ≥ α‖π_s v‖ }.
Lemma. (Linear Invariance of the Unstable Cone) If p ∈ B(r) and v = v_s + v_u ∈ C^u(α), then DF(p)v ∈ C^u(α). Indeed, since ‖A_us(p)‖ ≤ ε and m(A_uu(p)) ≥ μ − ε, we have

  ‖π_u DF(p)v‖ ≥ m(A_uu(p))‖v_u‖ − ‖A_us(p)v_s‖ ≥ (μ − ε)‖v_u‖ − ε‖v_s‖ ≥ (μ − ε − ε/α)‖v_u‖,

and

  ‖π_s DF(p)v‖ = ‖A_ss(p)v_s + A_su(p)v_u‖ ≤ ‖A_ss(p)‖‖v_s‖ + ‖A_su(p)‖‖v_u‖ ≤ λ‖v_s‖ + ε‖v_u‖ ≤ (λ/α + ε)‖v_u‖.

Since α(λ/α + ε) = λ + εα < 1 < μ − ε − ε/α, it follows that ‖π_u DF(p)v‖ ≥ α‖π_s DF(p)v‖, so DF(p)v ∈ C^u(α).
  X + Y := { x + y : x ∈ X and y ∈ Y }.
Lemma. (Moving Invariant Cones) Suppose p, q ∈ B(r) and q − p ∈ C^u(α). Then:

(a) ‖π_u(F(q) − F(p))‖ ≥ (μ − ε − ε/α)‖π_u(q − p)‖;

(b) ‖π_s(F(q) − F(p))‖ ≤ λ‖π_s(q − p)‖ + ε‖π_u(q − p)‖;

(c) F(q) − F(p) ∈ C^u(α).

Proof. By the Mean Value Theorem,

  π_s(F(q) − F(p)) = ∫₀¹ π_s DF(tq + (1−t)p)(q − p) dt
   = ∫₀¹ [ A_ss(tq + (1−t)p) π_s(q − p) + A_su(tq + (1−t)p) π_u(q − p) ] dt,

so

  ‖π_s(F(q) − F(p))‖ ≤ ∫₀¹ ‖A_ss(tq + (1−t)p)‖ ‖π_s(q − p)‖ dt + ∫₀¹ ‖A_su(tq + (1−t)p)‖ ‖π_u(q − p)‖ dt
   ≤ λ‖π_s(q − p)‖ + ε‖π_u(q − p)‖,

which is (b). A similar computation, together with the estimates on A_us and m(A_uu), gives

  ‖π_u(F(q) − F(p))‖ ≥ (μ − ε)‖π_u(q − p)‖ − ε‖π_s(q − p)‖ ≥ (μ − ε − ε/α)‖π_u(q − p)‖,

which is (a), since ‖π_s(q − p)‖ ≤ ‖π_u(q − p)‖/α. From (a), (b), and the choice of ε, we have

  ‖π_u(F(q) − F(p))‖ ≥ (μ − ε − ε/α)‖π_u(q − p)‖ ≥ (λ + εα)‖π_u(q − p)‖ ≥ α‖π_s(F(q) − F(p))‖,

so F(q) − F(p) ∈ C^u(α), which is (c).
  D_j := F(D_{j−1}) ∩ B(r)  (5.6)

for each j ∈ N.

Proof. Because of induction, we only need to handle the case j = 1. The estimate on the diameter of the u projection of the preimage of D₁ under F is a consequence of part (b) of the lemma on moving invariant cones. That D₁ is the graph of an α^{-1}-Lipschitz function γ₁ from a subset of E^u(r) to E^s(r) is a consequence of part (c) of that same lemma. Thus, all we need to show is that dom(γ₁) = E^u(r) and that γ₁ is C¹.
Let γ₀: E^u(r) → E^s(r) be the given C¹ function (with Lipschitz constant less than or equal to α^{-1}) such that

  D₀ = { v_u + γ₀(v_u) : v_u ∈ E^u(r) }.
Define g: E^u(r) → E^u by the formula g(v_u) = π_u F(v_u + γ₀(v_u)). If we can show that for each y ∈ E^u(r) there exists x ∈ E^u(r) such that

  g(x) = y,  (5.7)

then dom(γ₁) = E^u(r). Note that

  Dg(x) = A_uu(x + γ₀(x)) + A_us(x + γ₀(x)) Dγ₀(x).

Given y ∈ E^u(r), define G(x) := x + A_uu(0)^{-1}(y − g(x)); a fixed point of G solves (5.7). Since

  DG(x) = A_uu(0)^{-1}(A_uu(0) − Dg(x)),

with ‖A_uu(0) − Dg(x)‖ ≤ ε + ε/α and ‖A_uu(0)^{-1}‖ ≤ 1/μ, G is a contraction, and

  ‖G(x)‖ ≤ ‖G(0)‖ + ‖G(x) − G(0)‖ ≤ (1/μ)(‖g(0)‖ + ‖y‖) + ((ε + ε/α)/μ)‖x‖.

Let θ: E^s(r) → E^u be defined by the formula θ(v_s) = π_u F(v_s). Since θ(0) = 0 and, for any v_s ∈ E^s(r), ‖Dθ(v_s)‖ = ‖A_us(v_s)‖ ≤ ε, the Mean Value Theorem tells us that

  ‖g(0)‖ = ‖π_u F(γ₀(0))‖ = ‖θ(γ₀(0))‖ ≤ ε‖γ₀(0)‖ ≤ εr.  (5.8)

Hence, for x ∈ E^u(r),

  ‖G(x)‖ ≤ (1/μ)(εr + r + (ε + ε/α)r) = ((1 + ε/α + 2ε)/μ) r < r,

so G(x) ∈ E^u(r).
That completes the verification that (5.7) has a solution for each y ∈ E^u(r) and, therefore, that dom(γ₁) = E^u(r). To finish the proof, we need to show that γ₁ is C¹. Let g̃ be the restriction of g to g^{-1}(E^u(r)), and observe that

  γ₁ ∘ g̃ = π_s ∘ F ∘ (I + γ₀).  (5.9)

We have shown that g̃ is a bijection of g^{-1}(E^u(r)) with E^u(r) and, by the Inverse Function Theorem, g̃^{-1} is C¹. Thus, if we rewrite (5.9) as

  γ₁ = π_s ∘ F ∘ (I + γ₀) ∘ g̃^{-1},  (5.10)

we see that γ₁ is C¹.

Theorem. W_r^s is the graph of a function h: E^s(r) → E^u(r); i.e.,

  W^s_loc(0) = { v_s + h(v_s) : v_s ∈ E^s(r) },

and h is C¹, with Dh(0) = 0. In particular, W_r^s = W^s_loc(0).
Proof. Let q ∈ W_r^s be given. We will first come up with a candidate for a plane that is tangent to W_r^s at q, and then we will show that it really works.
For each j ∈ N and each p ∈ W_r^s, define

  C^{s,j}(p) := D(F^j)(p)^{-1} C^s(α),

and let

  C^{s,0}(p) := C^s(α).

By definition (and by the invertibility of DF(v) for all v ∈ B(r)), C^{s,j}(p) is the image of the stable cone under an invertible linear transformation. Note that
  C^{s,1}(p) = DF(p)^{-1} C^s(α) ⊆ C^s(α) = C^{s,0}(p)

by the (proof of the) lemma on linear invariance of the unstable cone. Similarly,
  C^{s,2}(p) = D(F²)(p)^{-1} C^s(α) = DF(p)^{-1} DF(F(p))^{-1} C^s(α) = DF(p)^{-1} C^{s,1}(F(p)) ⊆ DF(p)^{-1} C^s(α) = C^{s,1}(p),

and, in general,

  C^{s,j+1}(p) = D(F^{j+1})(p)^{-1} C^s(α) = D(F^j)(p)^{-1} DF(F^j(p))^{-1} C^s(α) = D(F^j)(p)^{-1} C^{s,1}(F^j(p)) ⊆ D(F^j)(p)^{-1} C^s(α) = C^{s,j}(p).
Let

  C^{s,∞}(q) := ⋂_{j=0}^∞ C^{s,j}(q).

If x ∈ C^{s,j}(q), then each iterate D(F^i)(q)x, i ≤ j, lies in the stable cone, so repeated application of the estimates in the lemma on linear invariance of the unstable cone gives

  ‖π_s D(F^j)(q)x‖ ≤ (λ + εα)^j ‖π_s x‖.

On the other hand, if y is also in C^{s,j}(q) and π_s x = π_s y, then repeated applications of the estimates in the lemma on linear invariance of the unstable cone yield

  ‖π_u D(F^j)(q)x − π_u D(F^j)(q)y‖ ≥ (μ − ε − ε/α)^j ‖π_u x − π_u y‖.

Since D(F^j)(q) C^{s,j}(q) = C^s(α), it must, therefore, be the case that

  ‖π_u x − π_u y‖ ≤ 2α ((λ + εα)/(μ − ε − ε/α))^j ‖π_s x‖.  (5.11)
Letting j ↑ ∞ in (5.11), we see that for each v_s ∈ E^s there can be no more than one point x in C^{s,∞}(q) satisfying π_s x = v_s. On the other hand, each C^{s,j}(q) contains a plane of dimension dim(E^s) (namely, the preimage of E^s under D(F^j)(q)), so (since the set of planes of that dimension passing through the origin is a compact set in the natural topology), C^{s,∞}(q) contains a plane, as well. This means that C^{s,∞}(q) is a plane P_q that is the graph of a linear function L_q: E^s → E^u.
Before we show that Lq D Dh.q/, we make a few remarks.
(a) Because E^s ⊆ C^{s,j}(0) for every j ∈ N, P₀ = E^s and L₀ = 0.
(b) The estimate (5.11) shows that the size of the largest angle between two vectors in C^{s,j}(q) having the same projection onto E^s goes to zero as j ↑ ∞.
(c) Also, the estimates in the proof of the lemma on linear invariance of the unstable cone show that the size of the minimal angle between a vector in C^{s,1}(F^j(q)) and a vector outside of C^{s,0}(F^j(q)) is bounded away from zero. Since
  C^{s,j}(q) = D(F^j)(q)^{-1} C^s(α) = D(F^j)(q)^{-1} C^{s,0}(F^j(q))

and

  C^{s,j+1}(q) = D(F^{j+1})(q)^{-1} C^s(α) = D(F^j)(q)^{-1} DF(F^j(q))^{-1} C^s(α) = D(F^j)(q)^{-1} C^{s,1}(F^j(q)),

this also means that the size of the minimal angle between a vector in C^{s,j+1}(q) and a vector outside of C^{s,j}(q) is bounded away from zero (for fixed j).
(d) Thus, since $C^{s,j+1}(q)$ depends continuously on $q$,
$$P_{q'} \subseteq C^{s,j+1}(q') \subseteq C^{s,j}(q)$$
for a given $j$ if $q'$ is sufficiently close to $q$. This means that $P_q$ depends continuously on $q$.
5. Invariant Manifolds
Now, we show that $Dh(q) = L_q$. Let $\varepsilon > 0$ be given. By remark (b) above, we can choose $j \in \mathbb{N}$ such that
$$\|\pi_u v - L_q \pi_s v\| \le \varepsilon\|\pi_s v\| \tag{5.12}$$
whenever $v \in C^{s,j}(q)$. By remark (c) above, we know that we can choose $\varepsilon' > 0$ such that if $w \in C^{s,j+1}(q)$ and $\|r\| \le \varepsilon'\|w\|$, then $w + r \in C^{s,j}(q)$. Because of the differentiability of $F^{-j-1}$, we can choose $\delta' > 0$ such that
$$\|F^{-j-1}(F^{j+1}(q) + v) - q - D(F^{-j-1})(F^{j+1}(q))v\| \le \frac{\varepsilon'}{\|D(F^{j+1})(q)\|}\|v\| \tag{5.13}$$
whenever
$$\|v\| \le \delta'. \tag{5.14}$$
If $v \in C^s(\delta)$ satisfies $\|v\| \le \delta'$, then (5.13) says that
$$F^{-j-1}(F^{j+1}(q) + v) = q + D(F^{-j-1})(F^{j+1}(q))v + r = q + D(F^{j+1})(q)^{-1}v + r, \tag{5.15}$$
where
$$\|r\| \le \frac{\varepsilon'}{\|D(F^{j+1})(q)\|}\|v\|.$$
Setting $w := D(F^{j+1})(q)^{-1}v \in C^{s,j+1}(q)$, we have $\|v\| \le \|D(F^{j+1})(q)\|\,\|w\|$, so $\|r\| \le \varepsilon'\|w\|$. Thus, by the choice of $\varepsilon'$, $w + r \in C^{s,j}(q)$. Consequently, (5.15) implies that
$$F^{-j-1}(F^{j+1}(q) + v) \in q + C^{s,j}(q) \tag{5.16}$$
whenever $v \in C^s(\delta)$ and $\|v\| \le \delta'$. Combining (5.16) with (5.12), we find that if $v_s \in E^s$ is sufficiently close to $q_s := \pi_s q$, then
$$\|h(v_s) - h(q_s) - L_q(v_s - q_s)\| \le \varepsilon\|v_s - q_s\|.$$
Since $\varepsilon > 0$ was arbitrary, this means that $Dh(q) = L_q$,
as claimed. To address the smoothness of $h$, define $H : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{R}^n$ by
$$H\begin{pmatrix} p \\ v \end{pmatrix} := \begin{pmatrix} F(p) \\ DF(p)v \end{pmatrix},$$
and note that, in general,
$$H^j\begin{pmatrix} p \\ v \end{pmatrix} = \begin{pmatrix} F^j(p) \\ D(F^j)(p)v \end{pmatrix}.$$
Also,
$$DH\begin{pmatrix} p \\ v \end{pmatrix} = \begin{bmatrix} DF(p) & 0 \\ D^2F(p)v & DF(p) \end{bmatrix},$$
so
$$DH\begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{bmatrix} DF(0) & 0 \\ 0 & DF(0) \end{bmatrix},$$
which is hyperbolic and invertible, since $DF(0)$ is. Applying the induction hypothesis, we can conclude that the fixed point of $H$ at the origin (in $\mathbb{R}^n \times \mathbb{R}^n$) has a local stable manifold $W$ that is $C^{k-1}$.
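The identity $H^j(p, v) = (F^j(p), D(F^j)(p)v)$ behind this doubling trick can be checked numerically: iterating $H$ accumulates the chain rule in the second slot. A minimal sketch, using a hypothetical scalar map $F$ (an assumption, not a map from the text):

```python
# Iterating H(p, v) = (F(p), F'(p) v) should carry (p0, 1) to
# (F^j(p0), (F^j)'(p0)); we compare the second slot against a centered
# finite difference of F^j.  The map F below is an arbitrary sample choice.

def F(p):
    return 0.5 * p + p ** 2

def dF(p):
    return 0.5 + 2.0 * p

def H(p, v):
    return F(p), dF(p) * v

def iterate_H(p, v, j):
    for _ in range(j):
        p, v = H(p, v)
    return p, v

def Fj(p, j):
    for _ in range(j):
        p = F(p)
    return p

p0, j, h = 0.1, 5, 1e-6
_, deriv_via_H = iterate_H(p0, 1.0, j)                 # chain rule via H
deriv_fd = (Fj(p0 + h, j) - Fj(p0 - h, j)) / (2 * h)   # finite difference
print(deriv_via_H, deriv_fd)
```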
Fix $q \in W^s_r$, and note that $F^j(q) \to 0$ as $j \uparrow \infty$ and
$$P_q = \left\{ v \in \mathbb{R}^n \;\middle|\; \begin{pmatrix} q \\ v \end{pmatrix} \in W \right\}.$$
Since $W$ is a $C^{k-1}$ manifold, $q \mapsto P_q$ (and, therefore, $q \mapsto Dh(q)$) is $C^{k-1}$; i.e., $h$ is $C^k$.
Flows

Now we discuss how the Stable Manifold Theorem for maps implies the Stable Manifold Theorem for flows. Given $f : \Omega \to \mathbb{R}^n$ satisfying $f(0) = 0$, let $F = \varphi(1, \cdot)$, where $\varphi$ is the flow generated by the differential equation
$$\dot{x} = f(x). \tag{5.17}$$
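The time-one map $F = \varphi(1, \cdot)$ can be computed numerically from the flow. A sketch under the assumed linear example $f(x) = -x$, for which $\varphi(1, x_0) = e^{-1}x_0$:

```python
import math

# Time-1 map F = phi(1, .) of x' = f(x), computed with a fixed-step RK4
# integrator.  With f(x) = -x (an illustrative choice, not from the text),
# F should be multiplication by e^{-1}.

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def time_one_map(f, x0, steps=1000):
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x = rk4_step(f, x, dt)
    return x

print(time_one_map(lambda x: -x, 2.0), 2.0 * math.exp(-1.0))
```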
The linearization of (5.17) at the origin is
$$\frac{d}{dt}g(t) = Df(0)g(t). \tag{5.18}$$
Setting $A = Df(0)$, the invariant subspaces of (5.18) can be characterized as
$$E^u = \bigcup_{c>0}\left\{ x \in \mathbb{R}^n \;\middle|\; \lim_{t \downarrow -\infty} |e^{-ct}e^{tA}x| = 0 \right\},$$
$$E^s = \bigcup_{c>0}\left\{ x \in \mathbb{R}^n \;\middle|\; \lim_{t \uparrow \infty} |e^{ct}e^{tA}x| = 0 \right\},$$
and
$$E^c = \bigcap_{c>0}\left\{ x \in \mathbb{R}^n \;\middle|\; \lim_{t \downarrow -\infty} |e^{ct}e^{tA}x| = \lim_{t \uparrow \infty} |e^{-ct}e^{tA}x| = 0 \right\}.$$
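These limits are easy to sample numerically for a diagonal matrix (an assumed example, $A = \mathrm{diag}(-1, 2, 0)$):

```python
import math

# For A = diag(-1, 2, 0), e^{tA}(x1, x2, x3) = (e^{-t}x1, e^{2t}x2, x3).
# With weight c = 1/2: e1 satisfies the damped forward limit (so e1 lies
# in E^s), e2 the damped backward limit (E^u), and e3 both (E^c).

c = 0.5

def etA(t, x):
    return (math.exp(-t) * x[0], math.exp(2.0 * t) * x[1], x[2])

def norm(v):
    return max(abs(vi) for vi in v)

t = 50.0
s_val = math.exp(c * t) * norm(etA(t, (1.0, 0.0, 0.0)))    # e^{ct}|e^{tA}e1|
u_val = math.exp(c * t) * norm(etA(-t, (0.0, 1.0, 0.0)))   # e^{-c(-t)}|e^{-tA}e2|
c_val = math.exp(-c * t) * norm(etA(t, (0.0, 0.0, 1.0)))   # e^{-ct}|e^{tA}e3|
print(s_val, u_val, c_val)   # all tiny
```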
The Stable Manifold Theorem tells us that for the nonlinear differential equation
$$\dot{x} = f(x), \tag{5.19}$$
with $f(0) = 0$, the stable manifold $W^s(0)$ and the unstable manifold $W^u(0)$ have characterizations similar to $E^s$ and $E^u$, respectively:
$$W^s(0) = \bigcup_{c>0}\left\{ x \in \mathbb{R}^n \;\middle|\; \lim_{t \uparrow \infty} |e^{ct}\varphi(t,x)| = 0 \right\}$$
and
$$W^u(0) = \bigcup_{c>0}\left\{ x \in \mathbb{R}^n \;\middle|\; \lim_{t \downarrow -\infty} |e^{-ct}\varphi(t,x)| = 0 \right\},$$
where $\varphi$ is the flow generated by (5.19). (This was only verified when the equilibrium point at the origin was hyperbolic, but a similar result holds in general.)
Is there a useful way to modify the characterization of $E^c$ similarly to get a characterization of a center manifold $W^c(0)$? Not really. The main problem is that the characterizations of $E^s$ and $E^u$ depend only on the local behavior of solutions when they are near the origin, but the characterization of $E^c$ depends on the behavior of solutions that are, possibly, far from 0.

Still, the idea of a center manifold as some sort of nonlinear analogue of $E^c$ is useful. Here's one widely-used definition:
Definition. Let $A = Df(0)$. A center manifold $W^c(0)$ of the equilibrium point 0 of (5.19) is an invariant manifold whose dimension equals the dimension of the invariant subspace $E^c$ of (5.18) and which is tangent to $E^c$ at the origin.
Nonuniqueness
While the fact that stable and unstable manifolds are really manifolds is a theorem (namely, the Stable Manifold Theorem), a center manifold is a manifold by definition. Also, note that we refer to the stable manifold and the unstable manifold, but we refer to a center manifold. This is because center manifolds are not necessarily unique. An extremely simple example of nonuniqueness (commonly credited to Kelley) is the planar system
$$\begin{cases} \dot{x} = x^2 \\ \dot{y} = -y. \end{cases}$$
Clearly, $E^c$ is the $x$-axis, and solving the system explicitly reveals that for any constant $c \in \mathbb{R}$ the curve
$$\left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; x < 0 \text{ and } y = ce^{1/x} \right\} \cup \left\{ (x, 0) \in \mathbb{R}^2 \;\middle|\; x \ge 0 \right\}$$
is a center manifold.
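Equivalently, the quantity $c = y e^{-1/x}$ is constant along solutions with $x < 0$, which is easy to verify numerically; a minimal sketch:

```python
import math

# Along solutions of x' = x^2, y' = -y with x < 0, the quantity
# c = y * exp(-1/x) is conserved, so each curve y = c e^{1/x} (x < 0)
# is invariant.  RK4 check starting on the curve with c = 2.

def f(s):
    x, y = s
    return (x * x, -y)

def rk4(f, s, t_final, steps=4000):
    dt = t_final / steps
    for _ in range(steps):
        k1 = f(s)
        k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
        k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
        k4 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k3)))
        s = tuple(a + dt * (p + 2 * q + 2 * r + w) / 6.0
                  for a, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s

s0 = (-1.0, 2.0 * math.exp(-1.0))    # on y = 2 e^{1/x} at x = -1
s1 = rk4(f, s0, 3.0)                 # x(3) = -1/4, still negative
c_end = s1[1] * math.exp(-1.0 / s1[0])
print(c_end)   # close to 2
```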
Existence

There is a Center Manifold Theorem just as there was a Stable Manifold Theorem. However, the goal of the Center Manifold Theorem is not to characterize a center manifold; that is done by the definition. The Center Manifold Theorem asserts the existence of a center manifold.

We will not state this theorem precisely nor prove it, but we can give some indication how the proof of existence of a center manifold might go. Suppose that none of the eigenvalues of $Df(0)$ have real part equal to $\gamma$, where $\gamma$ is a given real number. Then we can split the eigenvalues into two sets: those with real part less than $\gamma$ and those with real part greater than $\gamma$. Let $E^-$ be the vector space spanned by the generalized eigenvectors corresponding to the first set of eigenvalues, and let $E^+$ be the vector space spanned by the generalized eigenvectors corresponding to the second set of eigenvalues. If we cut off $f$ so that it stays nearly linear throughout $\mathbb{R}^n$, then an analysis very much like that in the proof of the Stable Manifold Theorem can be done to conclude that there are invariant manifolds called the pseudo-stable manifold and the pseudo-unstable manifold that are tangent, respectively, to $E^-$ and $E^+$ at the origin. Solutions $x(t)$ in the first manifold satisfy $e^{-\gamma t}x(t) \to 0$ as $t \uparrow \infty$, and solutions in the second manifold satisfy $e^{-\gamma t}x(t) \to 0$ as $t \downarrow -\infty$.
Now, suppose that $\gamma$ is chosen to be negative but larger than the real part of each eigenvalue with negative real part. The corresponding pseudo-unstable manifold is called a center-unstable manifold and is written $W^{cu}(0)$. If, on the other hand, we choose $\gamma$ to be between zero and all the positive real parts of eigenvalues, then the resulting pseudo-stable manifold is called a center-stable manifold and is written $W^{cs}(0)$. It turns out that
$$W^c(0) := W^{cs}(0) \cap W^{cu}(0)$$
is a center manifold.
In terms of the operator $M$ defined by
$$(M\phi)(x) := \phi'(x)[Ax + F(x, \phi(x))] - B\phi(x) - G(x, \phi(x)),$$
the function $h$ whose graph is the center manifold is a solution of the equation $Mh = 0$. Moreover, if $\phi(0) = \phi'(0) = 0$ and $(M\phi)(x) = O(|x|^q)$ as $x \to 0$ for some $q > 1$, then
$$|h(x) - \phi(x)| = O(|x|^q)$$
as $x \to 0$.
Stability

If we put $y = h(x)$ in the first equation in (5.20), we get the reduced equation
$$\dot{x} = Ax + F(x, h(x)), \tag{5.22}$$
whose behavior near the origin reproduces that of (5.20). Consider, for example, the planar system
$$\begin{cases} \dot{x} = xy + ax^3 + bxy^2 \\ \dot{y} = -y + cx^2 + dx^2 y, \end{cases}$$
whose reduced equation is
$$\dot{x} = xh(x) + ax^3 + bx(h(x))^2. \tag{5.23}$$
To determine the stability of the origin in (5.23) (and, therefore, in the original system) we need to approximate $h$. Therefore, we consider the operator $M$ defined by
$$(M\phi)(x) = \phi'(x)\left[x\phi(x) + ax^3 + bx(\phi(x))^2\right] + \phi(x) - cx^2 - dx^2\phi(x),$$
and seek polynomials $\phi$ (satisfying $\phi(0) = \phi'(0) = 0$) for which the quantity $(M\phi)(x)$ is of high order in $x$. By inspection, if $\phi(x) = cx^2$ then $(M\phi)(x) = O(x^4)$, so $h(x) = cx^2 + O(x^4)$, and (5.23) becomes
$$\dot{x} = (a + c)x^3 + O(x^5).$$
Hence, the origin is asymptotically stable if $a + c < 0$ and is unstable if $a + c > 0$.

What about the borderline case when $a + c = 0$? Suppose that $a + c = 0$, and let's go back and try a different $\phi$, namely, one of the form $\phi(x) = cx^2 + kx^4$. Plugging this in, we find that $(M\phi)(x) = (k - cd)x^4 + O(x^6)$, so if we choose $k = cd$ then $(M\phi)(x) = O(x^6)$; thus, $h(x) = cx^2 + cdx^4 + O(x^6)$. Inserting this in (5.23), we get
$$\dot{x} = (cd + bc^2)x^5 + O(x^7),$$
so the origin is asymptotically stable if $cd + bc^2 < 0$ (and $a + c = 0$) and is unstable if $cd + bc^2 > 0$ (and $a + c = 0$).

What if $a + c = 0$ and $cd + bc^2 = 0$? Suppose that these two conditions hold, and consider $\phi$ of the form $\phi(x) = cx^2 + cdx^4 + kx^6$ for some $k \in \mathbb{R}$ yet to be determined. Calculating, we discover that $(M\phi)(x) = (k - b^2c^3)x^6 + O(x^8)$, so by choosing $k = b^2c^3$, we see that $h(x) = cx^2 + cdx^4 + b^2c^3x^6 + O(x^8)$. Inserting this in (5.23), we see that (if $a + c = 0$ and $cd + bc^2 = 0$)
$$\dot{x} = -b^2c^3x^7 + O(x^9),$$
so the origin is asymptotically stable if $b^2c^3 > 0$ and is unstable if $b^2c^3 < 0$.
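The coefficient computations above are easy to spot-check numerically. The sketch below evaluates $(M\phi)(x)/x^4$ at a small $x$ for $\phi(x) = cx^2 + kx^4$ in the borderline case $a + c = 0$; the parameter values are arbitrary assumptions:

```python
# Spot-check of (M phi)(x) = (k - c d) x^4 + O(x^6) for
# phi(x) = c x^2 + k x^4 when a + c = 0, using
#   (M phi)(x) = phi'(x)[x phi + a x^3 + b x phi^2] + phi - c x^2 - d x^2 phi.
# Parameter values below are arbitrary sample choices.

c_c, d_c, b_c, k_c = 2.0, 3.0, 5.0, 7.0
a_c = -c_c                      # borderline case a + c = 0

def phi(x):
    return c_c * x ** 2 + k_c * x ** 4

def dphi(x):
    return 2 * c_c * x + 4 * k_c * x ** 3

def M(x):
    p = phi(x)
    return (dphi(x) * (x * p + a_c * x ** 3 + b_c * x * p ** 2)
            + p - c_c * x ** 2 - d_c * x ** 2 * p)

x = 1e-3
print(M(x) / x ** 4, k_c - c_c * d_c)   # both close to 1
```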
Bifurcation Theory

Bifurcation theory studies fundamental changes in the structure of the solutions of a differential equation or a dynamical system in response to a change in a parameter. Consider the parametrized equation
$$\dot{x} = F(x, \varepsilon). \tag{5.24}$$
If we append the trivial equation for the parameter and write $x$ in terms of coordinates $u$ and $v$ corresponding, respectively, to the center and hyperbolic directions, we get a system of the form
$$\begin{cases} \dot{u} = Au + f(u, v, \varepsilon) \\ \dot{v} = Bv + g(u, v, \varepsilon) \\ \dot{\varepsilon} = 0. \end{cases} \tag{5.25}$$
The Center Manifold Theorem asserts the existence of a center manifold for the origin that is locally given by points $(u, v, \varepsilon)$ satisfying an equation of the form
$$v = h(u, \varepsilon).$$
Furthermore, a theorem of Carr says that every solution $(u(t), v(t), \varepsilon)$ of (5.25) for which $(u(0), v(0), \varepsilon)$ is sufficiently close to zero converges exponentially quickly to a solution on the center manifold as $t \uparrow \infty$. In particular, no persistent structure near the origin lies off the center manifold of this expanded system. Hence, it suffices to consider persistent structures for the lower-dimensional equation
$$\dot{u} = Au + f(u, h(u, \varepsilon), \varepsilon).$$
6 Periodic Orbits

6.1 Poincaré-Bendixson Theorem
Definition. A periodic orbit of a continuous dynamical system $\varphi$ is a set of the form
$$\left\{ \varphi(t, p) \;\middle|\; t \in [0, T] \right\}$$
for some time $T$ and point $p$ satisfying $\varphi(T, p) = p$. If this set is a singleton, we say that the periodic orbit is degenerate.
Lemma. If a sequence of points lying on the intersection of a trajectory $\mathcal{T}$ with a transversal $S$ is monotone in time, then it is monotone on $S$.
Proof. Let $p$ be a point on $\mathcal{T}$. Since $S$ is closed and $f$ is nowhere tangent to $S$, the times $t$ at which $\varphi(t, p) \in S$ form an increasing sequence (possibly biinfinite). Consequently, if the lemma fails then there are times $t_1 < t_2 < t_3$ and distinct points $p_i = \varphi(t_i, p) \in S$, $i \in \{1, 2, 3\}$, such that
$$\{p_1, p_2, p_3\} = \varphi([t_1, t_3], p) \cap S$$
and $p_3$ is between $p_1$ and $p_2$. Note that the union of the line segment $\overline{p_1 p_2}$ from $p_1$ to $p_2$ with the curve $\varphi([t_1, t_2], p)$ is a simple closed curve in the plane, so by the Jordan Curve Theorem it has an inside $\mathcal{I}$ and an outside $\mathcal{O}$. Assuming, without loss of generality, that $f$ points into $\mathcal{I}$ all along the interior of $\overline{p_1 p_2}$, we get a picture something like:

[Figure: the simple closed curve formed by $\overline{p_1 p_2}$ and $\varphi([t_1, t_2], p)$, with inside $\mathcal{I}$ and outside $\mathcal{O}$.]
Note that
$$\mathcal{I} \cup \overline{p_1 p_2} \cup \varphi([t_1, t_2], p)$$
is a positively invariant set, so, in particular, it contains $\varphi([t_2, t_3], p)$. But the fact that $p_3$ is between $p_1$ and $p_2$ implies that $f(p_3)$ points into $\mathcal{I}$, so $\varphi(t_3 - \varepsilon, p) \in \mathcal{O}$ for $\varepsilon$ small and positive. This contradiction implies that the lemma holds.
The proof of the next lemma uses something called a flow box. A flow box is a (topological) box such that $f$ points into the box along one side, points out of the box along the opposite side, and is tangent to the other two sides, and the restriction of $\varphi$ to the box is conjugate to unidirectional, constant-velocity flow. The existence of a flow box around any regular point of $\varphi$ is a consequence of the $C^r$-Rectification Theorem.
Lemma. No $\omega$-limit set intersects a transversal in more than one point.

Proof. Suppose that for some point $x$ and some transversal $S$, $\omega(x)$ intersects $S$ at two distinct points $p_1$ and $p_2$. Since $p_1$ and $p_2$ are on a transversal, they are regular points, so we can choose disjoint subintervals $S_1$ and $S_2$ of $S$ containing, respectively, $p_1$ and $p_2$, and, for some $\varepsilon > 0$, define flow boxes $B_1$ and $B_2$ by
$$B_i := \left\{ \varphi(t, x) \;\middle|\; t \in [-\varepsilon, \varepsilon],\ x \in S_i \right\}.$$
Now, the fact that $p_1, p_2 \in \omega(x)$ means that we can pick an increasing sequence of times $t_1, t_2, \ldots$ such that $\varphi(t_j, x) \in B_1$ if $j$ is odd and $\varphi(t_j, x) \in B_2$ if $j$ is even. In fact, because of the nature of the flow in $B_1$ and $B_2$, we can assume that $\varphi(t_j, x) \in S$ for each $j$. Although the sequence $\varphi(t_1, x), \varphi(t_2, x), \ldots$ is monotone in time on the trajectory $\mathcal{T}$ through $x$, it is not monotone on $S$, contradicting the previous lemma.
Definition. An $\omega$-limit point of a point $p$ is an element of $\omega(p)$.

Lemma. Every $\omega$-limit point of an $\omega$-limit point lies on a periodic orbit.

Proof. Suppose that $p \in \omega(q)$ and $q \in \omega(r)$. If $p$ is a singular point, then it obviously lies on a (degenerate) periodic orbit, so suppose that $p$ is a regular point. Pick $S$ to be a transversal containing $p$ in its interior. By putting a suitable flow box around $p$, we see that, since $p \in \omega(q)$, the solution beginning at $q$ must repeatedly cross $S$. But $q \in \omega(r)$ and $\omega$-limit sets are invariant, so the solution beginning at $q$ remains confined within $\omega(r)$. Since $\omega(r) \cap S$ contains at most one point, the solution beginning at $q$ must repeatedly cross $S$ at the same point; i.e., $q$ lies on a periodic orbit. Since $p \in \omega(q)$, $p$ must lie on this same periodic orbit.
Lemma. If an $\omega$-limit set $\omega(x)$ contains a nondegenerate periodic orbit $P$, then $\omega(x) = P$.

Proof. Fix $q \in P$. Pick $T > 0$ such that $\varphi(T, q) = q$. Let $\varepsilon > 0$ be given. By continuous dependence, we can pick $\delta > 0$ such that $|\varphi(t, y) - \varphi(t, q)| < \varepsilon$ whenever $t \in [0, 3T/2]$ and $|y - q| < \delta$. Pick a transversal $S$ of length less than $\delta$ with $q$ in its interior, and create a flow box
$$B := \left\{ \varphi(t, u) \;\middle|\; u \in S,\ t \in [-\tau, \tau] \right\}$$
for some $\tau \in (0, T/2]$. By continuity of $\varphi(T, \cdot)$, we know that we can pick a subinterval $S'$ of $S$ that contains $q$ and that satisfies $\varphi(T, S') \subseteq B$. Let $t_j$ be the $j$th smallest element of
$$\left\{ t \ge 0 \;\middle|\; \varphi(t, x) \in S' \right\};$$
then
$$t_{j+1} \le t_j + T + \tau \le t_j + 3T/2. \tag{6.1}$$
If $t \in [t_j, t_{j+1}]$, then, since $|\varphi(t_j, x) - q| < \delta$,
$$|\varphi(t, x) - \varphi(t - t_j, q)| = |\varphi(t - t_j, \varphi(t_j, x)) - \varphi(t - t_j, q)| < \varepsilon.$$
Since $\varepsilon$ was arbitrary, $\varphi(t, x)$ approaches $P$ as $t \uparrow \infty$; i.e., $\omega(x) = P$.
Liénard's Equation
[Figure: a circuit with a capacitor $C$, an inductor $L$, and a nonlinear resistor $R$ connected in parallel, carrying currents $i_C$, $i_L$, and $i_R$.]

Kirchhoff's current law tells us that the currents satisfy
$$i_L = i_R = -i_C, \tag{6.2}$$
and Kirchhoff's voltage law tells us that the corresponding voltage drops satisfy
$$V_C = V_L + V_R. \tag{6.3}$$
The capacitor and the inductor obey
$$C\frac{dV_C}{dt} = i_C \tag{6.4}$$
and
$$L\frac{di_L}{dt} = V_L, \tag{6.5}$$
and we suppose that the (possibly nonlinear) resistor satisfies the generalized Ohm's law
$$V_R = F(i_R). \tag{6.6}$$
Let $x = i_L$ and $f(u) := F'(u)$. By (6.5),
$$\dot{x} = \frac{1}{L}V_L,$$
so, by (6.3), (6.4), (6.6), and (6.2),
$$\ddot{x} = \frac{1}{L}\frac{d}{dt}(V_C - V_R) = \frac{1}{L}\left[\frac{1}{C}i_C - \frac{d}{dt}F(i_R)\right] = \frac{1}{L}\left[-\frac{1}{C}x - f(x)\dot{x}\right].$$
Hence,
$$\ddot{x} + \frac{1}{L}f(x)\dot{x} + \frac{1}{LC}x = 0.$$
By rescaling $f$ and $t$ (or, equivalently, by choosing units judiciously), we get Liénard's Equation:
$$\ddot{x} + f(x)\dot{x} + x = 0.$$
Setting $y = \dot{x} + F(x)$ converts Liénard's Equation into the equivalent first-order system
$$\begin{cases} \dot{x} = y - F(x) \\ \dot{y} = -x. \end{cases} \tag{6.7}$$
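A concrete instance (an assumption for illustration: the classical van der Pol oscillator) takes $F(x) = x^3/3 - x$, so that $f(x) = x^2 - 1$; numerically, trajectories of the phase-plane system $\dot{x} = y - F(x)$, $\dot{y} = -x$ settle onto a limit cycle whose $x$-amplitude is about 2:

```python
# Lienard-plane system x' = y - F(x), y' = -x with the van der Pol
# choice F(x) = x^3/3 - x (so f(x) = x^2 - 1).  Trajectories spiral onto
# a limit cycle; we record the max |x| after transients die out.

def F(x):
    return x ** 3 / 3.0 - x

def vf(s):
    x, y = s
    return (y - F(x), -x)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
    k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
    k4 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k3)))
    return tuple(a + dt * (p + 2 * q + 2 * r + w) / 6.0
                 for a, p, q, r, w in zip(s, k1, k2, k3, k4))

s, dt, amp = (0.1, 0.0), 0.001, 0.0
for i in range(80000):             # integrate over t in [0, 80]
    s = rk4_step(vf, s, dt)
    if i >= 40000:                 # after transients, record max |x|
        amp = max(amp, abs(s[0]))
print(amp)   # about 2
```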
Definition. A limit cycle for a flow is a nondegenerate periodic orbit $P$ that is the $\omega$-limit set or the $\alpha$-limit set of some point $q \notin P$.
Theorem. (Liénard's Theorem) The flow generated by (6.7) has at least one limit cycle. If $\alpha = \beta$, then this limit cycle is the only nondegenerate periodic orbit, and it is the $\omega$-limit set of all points other than the origin.
The significance of Liénard's Theorem can be seen by comparing Liénard's Equation with the linear equation that would have resulted if we had assumed a linear resistor. Such linear RLC circuits can have oscillations of arbitrary amplitude. Liénard's Theorem says that, under suitable hypotheses, a nonlinear resistor selects oscillations of one particular amplitude.
We will prove the first half of Liénard's Theorem by finding a compact, positively invariant region that does not contain an equilibrium point and then using the Poincaré-Bendixson Theorem. Note that the origin is the only equilibrium point of (6.7). Since
$$\frac{d}{dt}(x^2 + y^2) = 2(x\dot{x} + y\dot{y}) = -2xF(x),$$
assumption (vi) implies that for $\varepsilon$ small, $\mathbb{R}^2 \setminus B(0, \varepsilon)$ is positively invariant.
The nullclines $x = 0$ and $y = F(x)$ of (6.7) (i.e., the curves along which the flow is either vertical or horizontal) separate the plane into four regions $A$, $B$, $C$, and $D$, and the general direction of flow in those regions is as shown below. Note that away from the origin, the speed of trajectories is bounded below, so every solution of (6.7) except $(x, y) = (0, 0)$ passes through $A$, $B$, $C$, and $D$ in succession an infinite number of times as it circles around the origin in a clockwise direction.
We claim that if a solution starts at a point $(0, y_0)$ that is high enough up on the positive $y$-axis, then the first point $(0, \tilde{y}_0)$ it hits on the negative $y$-axis is closer to the origin than $(0, y_0)$ was. Assume, for the moment, that this claim is true. Let $S_1$ be the orbit segment connecting $(0, y_0)$ to $(0, \tilde{y}_0)$. Because of the symmetry in (6.7) implied by Assumption (iii), the set
$$S_2 := \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; (-x, -y) \in S_1 \right\}$$
is also an orbit segment. Let
$$S_3 := \left\{ (0, y) \in \mathbb{R}^2 \;\middle|\; -\tilde{y}_0 < y < y_0 \right\},$$
$$S_4 := \left\{ (0, y) \in \mathbb{R}^2 \;\middle|\; -y_0 < y < \tilde{y}_0 \right\},$$
and
$$S_5 := \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; x^2 + y^2 = \varepsilon^2 \right\}$$
for some small $\varepsilon$. Then it is not hard to see that $\bigcup_{i=1}^5 S_i$ is the boundary of a compact, positively invariant region that does not contain an equilibrium point.
[Figure: the orbit segments $S_1$ and $S_2$, together with $S_3$, $S_4$, and $S_5$, bounding a compact, positively invariant annular region.]
To verify the claim, we will use the function $R(x, y) := (x^2 + y^2)/2$ and show that if $y_0$ is large enough (and $\tilde{y}_0$ is as defined above) then $R(0, \tilde{y}_0) - R(0, y_0) < 0$. Along orbits of (6.7),
$$\frac{dR}{dx} = \frac{-xF(x)}{y - F(x)} \tag{6.8}$$
and
$$\frac{dR}{dy} = F(x). \tag{6.9}$$
We will chop the orbit segment connecting $(0, y_0)$ to $(0, \tilde{y}_0)$ up into pieces and use (6.8) and (6.9) to estimate the change $\Delta R$ in $R$ along each piece and, therefore, along the whole orbit segment.

Let $\sigma = \beta + 1$, and let
$$B = \sup_{0 \le x \le \sigma} |F(x)|.$$
In the region
$$\mathcal{R} := \left\{ (x, y) \in \mathbb{R}^2 \;\middle|\; x \in [0, \sigma],\ y \in [B + \sigma, \infty) \right\},$$
we have
$$\frac{dy}{dx} = \frac{-x}{y - F(x)} \ge -1;$$
hence, if $y_0 > B + 2\sigma$, then the corresponding trajectory must exit $\mathcal{R}$ through its right boundary, say, at the point $(\sigma, y_\sigma)$. Similarly, if $\tilde{y}_0 < -(B + 2\sigma)$, then the trajectory it lies on must have last previously hit the line $x = \sigma$ at a point $(\sigma, \tilde{y}_\sigma)$. Now, assume that $\tilde{y}_0 \to -\infty$ as $y_0 \to \infty$. (If not, then the claim clearly holds.) Based on this assumption we know that we can pick a value for $y_0$ and a corresponding value for $\tilde{y}_0$ that are both larger than $B + 2\sigma$ in absolute value, and conclude that the orbit segment connecting them looks qualitatively like:
[Figure: the orbit segment running from $(0, y_0)$ to $(\sigma, y_\sigma)$, around to the right of the line $x = \sigma$ to $(\sigma, \tilde{y}_\sigma)$, and back to $(0, \tilde{y}_0)$.]
We will estimate $\Delta R$ on the entire orbit segment from $(0, y_0)$ to $(0, \tilde{y}_0)$ by considering separately the orbit segment from $(0, y_0)$ to $(\sigma, y_\sigma)$, the segment from $(\sigma, y_\sigma)$ to $(\sigma, \tilde{y}_\sigma)$, and the segment from $(\sigma, \tilde{y}_\sigma)$ to $(0, \tilde{y}_0)$.

First, consider the first segment. On this segment, the $y$-coordinate is a function $y(x)$ of the $x$-coordinate. Thus,
$$|R(\sigma, y_\sigma) - R(0, y_0)| = \left| \int_0^\sigma \frac{-xF(x)}{y(x) - F(x)}\,dx \right| \le \int_0^\sigma \frac{\sigma B}{y_0 - \sigma - B}\,dx = \frac{\sigma^2 B}{y_0 - \sigma - B} \to 0$$
as $y_0 \to \infty$. A similar estimate shows that $|R(0, \tilde{y}_0) - R(\sigma, \tilde{y}_\sigma)| \to 0$ as $y_0 \to \infty$.
On the middle segment, we know that the $x$-coordinate is a function $x(y)$ of the $y$-coordinate $y$. Hence,
$$R(\sigma, \tilde{y}_\sigma) - R(\sigma, y_\sigma) = \int_{y_\sigma}^{\tilde{y}_\sigma} F(x(y))\,dy \le -|y_\sigma - \tilde{y}_\sigma|F(\sigma) \to -\infty$$
as $y_0 \to \infty$.

Putting these three estimates together, we see that
$$R(0, \tilde{y}_0) - R(0, y_0) \to -\infty$$
as $y_0 \to \infty$, so $|\tilde{y}_0| < |y_0|$ if $y_0$ is sufficiently large. This shows that the orbit segment connecting these two points forms part of the boundary of a compact, positively invariant set that surrounds (but omits) the origin. By the Poincaré-Bendixson Theorem, there must be a limit cycle in this set.
Now for the second half of Liénard's Theorem. We need to show that if $\alpha = \beta$ (i.e., if $F$ has a unique positive zero) then the limit cycle whose existence we've deduced is the only nondegenerate periodic orbit and it attracts all points other than the origin. If we can show the uniqueness of the limit cycle, then the fact that we can make our compact, positively invariant set as large as we want and make the hole cut out of its center as small as we want will imply that it attracts all points other than the origin. Note, also, that our observations on the general direction of the flow imply that any nondegenerate periodic orbit must circle the origin in the clockwise direction.

So, suppose that $\alpha = \beta$ and consider, as before, orbit segments that start on the positive $y$-axis at a point $(0, y_0)$ and end on the negative $y$-axis at a point $(0, \tilde{y}_0)$. Such orbit segments are nested and fill up the open right half-plane. We need to show that only one of them satisfies $\tilde{y}_0 = -y_0$. In other words, we claim that there is only one segment that gives
$$R(0, \tilde{y}_0) - R(0, y_0) = 0.$$
Now, if such a segment hits the $x$-axis on $[0, \beta]$, then $x \le \beta$ all along that segment, and $F(x) \le 0$ with equality only if $(x, y) = (\beta, 0)$. Let $x(y)$ be the $x$-coordinate as a function of $y$ and observe that
$$R(0, \tilde{y}_0) - R(0, y_0) = \int_{y_0}^{\tilde{y}_0} F(x(y))\,dy > 0. \tag{6.10}$$
We claim that for values of $y_0$ generating orbits intersecting the $x$-axis in $(\beta, \infty)$, $R(0, \tilde{y}_0) - R(0, y_0)$ is a strictly decreasing function of $y_0$. In combination with (6.10) (and the fact that $R(0, \tilde{y}_0) - R(0, y_0) < 0$ if $y_0$ is sufficiently large), this will finish the proof.

Consider two orbits (whose coordinates we denote $(x, y)$ and $(X, Y)$) that intersect the $x$-axis in $(\beta, \infty)$ and contain selected points as shown in the following diagram.
[Figure: an inner orbit through $(0, y_0)$, $(\beta, y_\beta)$, $(\beta, \tilde{y}_\beta)$, and $(0, \tilde{y}_0)$, and an outer orbit through $(0, Y_0)$, $(\beta, Y_\beta)$, $(\beta, \tilde{Y}_\beta)$, and $(0, \tilde{Y}_0)$.]
Note that
$$R(0, \tilde{Y}_0) - R(0, Y_0) =: \Delta_1 + \Delta_2 + \Delta_3 + \Delta_4 + \Delta_5, \tag{6.11}$$
where the $\Delta_i$ are the changes in $R$ along the five pieces into which the outer orbit segment is cut by the line $x = \beta$ and the horizontal lines $y = y_\beta$ and $y = \tilde{y}_\beta$. Let $X(Y)$ and $x(y)$ give, respectively, the first coordinate of a point on the outer and inner orbit segments as a function of the second coordinate. Similarly, let $Y(X)$ and $y(x)$ give the second coordinates as functions of the first coordinates (on the segments where that's possible). Estimating, we find that
$$\Delta_1 = \int_\beta^0 \frac{-XF(X)}{Y(X) - F(X)}\,dX < \int_\beta^0 \frac{-xF(x)}{y(x) - F(x)}\,dx = R(0, \tilde{y}_0) - R(\beta, \tilde{y}_\beta), \tag{6.12}$$
$$\Delta_2 = \int_{\tilde{y}_\beta}^{\tilde{Y}_\beta} F(X(Y))\,dY < 0, \tag{6.13}$$
$$\Delta_3 = \int_{y_\beta}^{\tilde{y}_\beta} F(X(Y))\,dY < \int_{y_\beta}^{\tilde{y}_\beta} F(x(y))\,dy = R(\beta, \tilde{y}_\beta) - R(\beta, y_\beta), \tag{6.14}$$
$$\Delta_4 = \int_{Y_\beta}^{y_\beta} F(X(Y))\,dY < 0, \tag{6.15}$$
and
$$\Delta_5 = \int_0^\beta \frac{-XF(X)}{Y(X) - F(X)}\,dX < \int_0^\beta \frac{-xF(x)}{y(x) - F(x)}\,dx = R(\beta, y_\beta) - R(0, y_0). \tag{6.16}$$
By plugging (6.12), (6.13), (6.14), (6.15), and (6.16) into (6.11), we see that
$$R(0, \tilde{Y}_0) - R(0, Y_0) < [R(0, \tilde{y}_0) - R(\beta, \tilde{y}_\beta)] + 0 + [R(\beta, \tilde{y}_\beta) - R(\beta, y_\beta)] + 0 + [R(\beta, y_\beta) - R(0, y_0)] = R(0, \tilde{y}_0) - R(0, y_0).$$