
Guanrong Chen

Nonlinear Dynamical and Control


Systems

CRC PRESS, INC


Boca Raton, New York, London, Tokyo
Contents

1 Nonlinear Dynamical Systems: Preliminaries 2


1.1 A Typical Nonlinear Dynamical System Model . . . . . . . 2
1.2 Autonomous Systems and Iteration of Maps . . . . . . . . . 4
1.3 Dynamical Analysis on the Phase Plane . . . . . . . . . . . 8
1.3.1 The Phase Plane of a Planar System . . . . . . . . . 8
1.3.2 Phase Plane Analysis . . . . . . . . . . . . . . . . . 13
1.4 Qualitative Behaviors of Dynamical Systems . . . . . . . . . 18
1.4.1 Qualitative Analysis of Linear Dynamics . . . . . . . 19
1.4.2 Qualitative Analysis of Nonlinear Dynamics . . . . . 26
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

2 Stabilities of Nonlinear Systems (I) 37


2.1 Lyapunov Stabilities . . . . . . . . . . . . . . . . . . . . . . 38
2.2 Lyapunov Stability Theorems . . . . . . . . . . . . . . . . . 42
2.3 LaSalle Invariance Principle . . . . . . . . . . . . . . . . . . 58
2.4 Some Instability Theorems . . . . . . . . . . . . . . . . . . 61
2.5 Construction of Lyapunov Functions . . . . . . . . . . . . . 65
2.6 Stability Regions: Basins of Attraction . . . . . . . . . . . . 68
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

3 Stabilities of Nonlinear Systems (II) 78


3.1 Linear Stability of Nonlinear Systems . . . . . . . . . . . . 78
3.2 Linear Stability of Nonlinear Systems with Periodic Linearity 84
3.3 Comparison Principle and Vector Lyapunov Functions . . . 88
3.4 Orbital Stability . . . . . . . . . . . . . . . . . . . . . . . . 91
3.5 Structural Stability . . . . . . . . . . . . . . . . . . . . . . . 92
3.6 Total Stability: Stability Under Persistent Disturbances . . 94
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96


4 Stabilities of Nonlinear Systems (III) 98


4.1 Lure Systems Formulated in the Frequency Domain . . . . 98
4.2 Absolute Stability and Frequency Domain Criteria . . . . . 101
4.2.1 Background and Motivation . . . . . . . . . . . . . . 101
4.2.2 SISO Lure Systems . . . . . . . . . . . . . . . . . . 102
4.2.3 MIMO Lure Systems . . . . . . . . . . . . . . . . . 108
4.3 Harmonic Balance Approximation and Describing Function 109
4.4 BIBO Stability . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.4.1 Small Gain Theorem . . . . . . . . . . . . . . . . . . 117
4.4.2 Relationship between BIBO and Lyapunov Stabilities 119
4.4.3 Contraction Mapping Theorem . . . . . . . . . . . . 121
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5 Nonlinear Dynamics: Bifurcations and Chaos 125


5.1 Some Typical Bifurcations . . . . . . . . . . . . . . . . . . . 125
5.2 Period-Doubling Bifurcations . . . . . . . . . . . . . . . . . 129
5.3 Hopf Bifurcations in Two-Dimensional Systems . . . . . . . 133
5.3.1 The Hyperbolic Case and the Normal Form Theorem 133
5.3.2 Decoupled Planar Systems . . . . . . . . . . . . . . . 137
5.3.3 Hopf Bifurcation of Two-Dimensional Systems . . . 140
5.4 Poincare Maps . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.5 Strange Attractors and Chaos . . . . . . . . . . . . . . . . . 145
5.5.1 The Lorenz Chaotic System . . . . . . . . . . . . . . 146
5.5.2 Some Characterizations of Chaos . . . . . . . . . . . 149
5.6 Chaos in Discrete-Time Systems . . . . . . . . . . . . . . . 159
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

6 Lyapunov Design of Nonlinear Feedback Controllers 166


6.1 Feedback Control of Nonlinear Systems . . . . . . . . . . . 166
6.2 Feedback Controllers Design for Nonlinear Systems . . . . . 170
6.2.1 Linear Feedback Controllers for Nonlinear Systems . 170
6.2.2 Nonlinear Feedback Controllers for Nonlinear Systems 173
6.2.3 Some General Criteria for Controllers Design . . . . 176
6.3 Lyapunov Design of Nonlinear Controllers . . . . . . . . . . 178
6.3.1 An Illustrative Design Example . . . . . . . . . . . . 178
6.3.2 Adaptive Control via Lyapunov Design . . . . . . . 180
6.3.3 Lyapunov Redesign of Nonlinear Controllers . . . . . 182
1
Nonlinear Dynamical Systems:
Preliminaries

A nonlinear system refers to a set of nonlinear equations, which can be algebraic, difference, differential, integral, functional, or abstract operator equations, or a combination of some of these. A nonlinear system is used to describe a physical device or process that cannot otherwise be well described by a set of linear equations of any kind, although a linear system is considered a special case of a nonlinear system. The term dynamical system, on the other hand, is used as a synonym for a mathematical or physical system whose output behavior evolves with time (and, sometimes, with other varying system parameters as well). The responses, or behaviors, of a dynamical system are referred to as the system dynamics.

1.1 A Typical Nonlinear Dynamical System Model


A historically important model of a typical nonlinear dynamical system is the pendulum. The study of the pendulum can be traced back at least as early as Christian Huygens, who investigated in 1665 the perfect synchrony of two identical pendulum clocks that he had invented.
A simple and idealized pendulum consists of a volumeless ball and a rigid, massless rod connected to a pivot, as shown in Fig. 1.1. In this figure, ℓ is the length of the rod, m the mass of the ball, g the constant of gravitational acceleration, θ = θ(t) the angle of the rod with respect to the vertical axis, and f = f(t) the resistive force applied to the ball. The straight-down position finds the ball at rest; but if it is displaced by an angle from the reference axis and then let go, it will swing back and forth in a circular arc within a vertical plane to which it is confined.
FIGURE 1.1
Schematic diagram of an idealized pendulum model.

For the general purpose of mathematical analysis of the pendulum, a basic assumption is that the resistive force f is proportional to the velocity along the arc of the motion trajectory of the volumeless ball, i.e., f = −κ ṡ, where κ ≥ 0 is a constant and s = s(t) = ℓ θ(t) is the ball-traveled arc length measured from the reference axis.
It follows from Newton's second law of motion that

    m s̈ = −mg sin(θ) − κ ṡ ,

or

    θ̈ + (κ/m) θ̇ + (g/ℓ) sin(θ) = 0 .        (1.1.1)

This is the classical, idealized, damped pendulum equation, which is nonlinear due to the involvement of the sine function. If κ = 0 (i.e., f = 0), then it becomes the undamped pendulum equation

    θ̈ + (g/ℓ) sin(θ) = 0 .        (1.1.2)
To this end, by introducing two new variables,

    x1 = θ  and  x2 = θ̇ ,

the damped pendulum equation can be rewritten in the following state-space form:

    ẋ1 = x2 ,
    ẋ2 = −(κ/m) x2 − (g/ℓ) sin(x1) .        (1.1.3)

Here, x1 = x1(t) and x2 = x2(t) are called the system state variables, or simply system states, for they describe the physical states (the angular position and angular velocity, respectively) of the pendulum.
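The state-space model (1.1.3) is convenient for numerical experiments. The sketch below integrates it with a hand-rolled fourth-order Runge-Kutta step; the parameter values (m = 1, ℓ = 1, κ = 0.5) and the step size are illustrative choices, not taken from the text.

```python
import math

def pendulum(x, kappa=0.5, m=1.0, g=9.81, ell=1.0):
    """Vector field of the damped pendulum in state-space form (1.1.3)."""
    x1, x2 = x
    return (x2, -(kappa / m) * x2 - (g / ell) * math.sin(x1))

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + (h / 6) * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# Release from a 60-degree displacement at rest; with positive damping the
# state decays toward the straight-down equilibrium (0, 0).
x = (math.pi / 3, 0.0)
for _ in range(20000):        # integrate over t in [0, 20]
    x = rk4_step(pendulum, x, 1e-3)
print(x)                      # both components have decayed close to 0
```

With positive damping the state spirals into the rest position, in agreement with the phase portrait of the damped pendulum shown later in Fig. 1.3.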
It is known, from both physics and mathematics, that the pendulum state x(t) is periodic. In general, even in a higher-dimensional case, a state, or a solution, x(t) of a dynamical system is a periodic solution if it is a solution of the system and moreover satisfies x(t + tp) = x(t) for some constant tp > 0. The least value of such tp is called the (fundamental) period of the periodic solution, while the solution is said to be tp-periodic.
Although conceptually straightforward and formally simple, this pendulum model has many important and interesting properties. This representative model of nonlinear systems will be referred to frequently, within this chapter and throughout the rest of the textbook.

1.2 Autonomous Systems and Iteration of Maps


The pendulum model (1.1.3) is called an autonomous system, in which there is no independent (or separate) time variable t (other than as the time variable of the system states) anywhere in the model formulation. On the contrary, the following forced pendulum is said to be nonautonomous:

    ẋ1 = x2 ,
    ẋ2 = −(κ/m) x2 − (g/ℓ) sin(x1) + h(t) ,        (1.2.4)

since t exists as a variable independent of the system states in h(t), which, in this example, is an external force applied to the pendulum. One case of such a force input is h(t) = a cos(ωt), which will change the angular acceleration of the pendulum, where a and ω are some constants.
The forced pendulum (1.2.4) has a time variable, t, within the external force term h(t), which does not appear as the time variable of the system states x1 and x2. However, if h(t) = a cos(θ(t)), then the forced pendulum is considered to be autonomous, because the time variable in the force term is merely the time variable of the system state x1. In the latter case, the external input should be denoted h(x1) instead of h(t).
In general, an n-dimensional autonomous system has the form

    ẋ = f(x; p) ,    x0 ∈ Rⁿ ,    t ∈ [t0, ∞) ,        (1.2.5)

and a nonautonomous system takes on the form

    ẋ = f(x, t; p) ,    x0 ∈ Rⁿ ,    t ∈ [t0, ∞) ,        (1.2.6)

where x = x(t) = [x1(t) ⋯ xn(t)]ᵀ is the state vector, x0 a given initial state at the initial time t0 ≥ 0, p a vector of system parameters, which can be variable but are independent of time, and

    f = [f1, ⋯, fn]ᵀ ,    with  fi = fi(x1, ⋯, xn) ,  i = 1, ⋯, n ,

is called the system function or vector field.


In a well-formulated mathematical model, the system function should satisfy some defining conditions such that the model (1.2.5) or (1.2.6) has a unique solution for each initial state x0 in a region of interest, Ω ⊆ Rⁿ, and for each allowable set of parameters p. According to the elementary theory of ordinary differential equations, this is ensured if the function f satisfies the following Lipschitz condition:

    ||f(x) − f(y)|| ≤ L ||x − y||

for all x and y in Ω that satisfy the system equation, where L ≥ 0 is a constant. Here and throughout the textbook, || · || denotes the standard Euclidean norm (the length) of a vector. This general setup (i.e., the fulfillment of some necessary defining conditions) for a given mathematical model will not always be repeated for other systems later, for simplicity of notation and presentation.
Sometimes, an n-dimensional continuous-time dynamical system is described by a time-varying map,

    Fc(t) : x ↦ g(x, t; p) ,    x0 ∈ Rⁿ ,    t ∈ [t0, ∞) ,        (1.2.7)

or by a time-invariant map,

    Fc : x ↦ g(x; p) ,    x0 ∈ Rⁿ ,    t ∈ [t0, ∞) .        (1.2.8)

These two maps, in the continuous-time case, take a function to a function; so by nature they are operators. Such operators will not be further studied in this textbook.
For the discrete-time setting, with similar notation, a nonlinear dynamical system is either described by a time-varying difference equation,

    xk+1 = fk(xk; p) ,    x0 ∈ Rⁿ ,    k = 0, 1, ⋯ ,        (1.2.9)

where the subscript of fk signifies the dependence of f on the discrete time variable k, or described by a time-invariant difference equation,

    xk+1 = f(xk; p) ,    x0 ∈ Rⁿ ,    k = 0, 1, ⋯ .        (1.2.10)

Also, it may be described either by a time-varying map,

    Fd(k) : xk ↦ gk(xk; p) ,    x0 ∈ Rⁿ ,    k = 0, 1, ⋯ ,        (1.2.11)

or by a time-invariant map,

    Fd : xk ↦ g(xk; p) ,    x0 ∈ Rⁿ ,    k = 0, 1, ⋯ .        (1.2.12)

These discrete-time maps, particularly the time-invariant ones, are very important and convenient for the study of system dynamics; they will be used frequently later on.

For a system given by a time-invariant difference equation, repeatedly iterating the system function f leads to

    xk = (f ∘ f ∘ ⋯ ∘ f)(x0) = f^k(x0) ,    (k times)        (1.2.13)

where, as usual, ∘ denotes the composition operation of two functions or maps. Similarly, for a discrete map, Fd, repeatedly iterating it backwards yields

    xk = Fd(xk−1) = Fd(Fd(xk−2)) = ⋯ = Fd^k(x0) ,        (1.2.14)
    k = 0, 1, 2, ⋯ .

Example 1.1
Let

    f(x) = p x(1 − x) ,    where p ∈ R is a constant.

Then

    f^2(x) = f(p x(1 − x)) = p [p x(1 − x)] (1 − [p x(1 − x)]) ,

where the last equality is obtained by substituting each x in the previous step with [p x(1 − x)]. For higher iterates, of course, the result quickly becomes very messy and, in fact, it is almost impossible to write out the final explicit formula for f^n(x).
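Although the explicit formula is unwieldy, the composition itself is trivial to evaluate numerically. The quick check below confirms that f applied twice agrees with the expanded f^2 of Example 1.1; the value p = 2.5 and the sample points are arbitrary illustrative choices.

```python
def f(x, p=2.5):
    # the quadratic map of Example 1.1
    return p * x * (1 - x)

def f2_explicit(x, p=2.5):
    # f^2(x) written out by substituting p*x*(1-x) for each x in f
    y = p * x * (1 - x)
    return p * y * (1 - y)

for x in (0.1, 0.37, 0.9):
    assert abs(f(f(x)) - f2_explicit(x)) < 1e-12
```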

If a function (or map), f, is invertible, and f^{-1} denotes the inverse of f, then f^{-2}(x) = (f^{-1})^2(x), and f^{-n}(x) = (f^{-1})^n(x), and so on. With the convention that f^0 denotes the identity map, f^0(x) := x, a general composition formula for an invertible f is obtained:

    f^n(x) = f(f^{n-1}(x)) = f^{n-1}(f(x)) ,    n = 0, 1, 2, ⋯ .        (1.2.15)

The derivative of a composed map can also be obtained via the chain rule. For instance, for the one-dimensional case,

    (f^n)'(x0) = f'(x_{n-1}) ⋯ f'(x0) ,    where xk = f^k(x0) .        (1.2.16)

This formula is convenient to use, because one does not need to explicitly compute f^n(x) or (f^n)'(x). Moreover, using the chain rule, it can be verified that

    (f^{-1})'(x) = 1 / f'(x_{-1}) ,    x_{-1} := f^{-1}(x) .        (1.2.17)

Example 1.2
For f(x) = x(1 − x) with x0 = 1/2 and n = 3, one has

    f'(x) = 1 − 2x ,    x1 = f(x0) = 1/4 ,  and  x2 = f(x1) = 3/16 ,

so that

    (f^3)'(1/2) = f'(3/16) f'(1/4) f'(1/2)
                = [1 − 2(3/16)] [1 − 2(1/4)] [1 − 2(1/2)]
                = 0 .
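The chain-rule formula (1.2.16) is equally easy to implement: one accumulates the product of f' along the orbit instead of differentiating f^n symbolically. The finite-difference cross-check at the extra point x0 = 0.4 is an illustrative addition of mine, not part of the example.

```python
def f(x):
    return x * (1 - x)

def df(x):
    return 1 - 2 * x

def fn(x, n):
    # n-th iterate f^n(x)
    for _ in range(n):
        x = f(x)
    return x

def iterate_derivative(x0, n):
    # (f^n)'(x0) via (1.2.16): product of f' evaluated along the orbit
    prod, x = 1.0, x0
    for _ in range(n):
        prod *= df(x)
        x = f(x)
    return prod

# Example 1.2: f'(1/2) = 0 annihilates the whole product
assert iterate_derivative(0.5, 3) == 0.0

# cross-check against a central finite difference at another point
h, x0 = 1e-6, 0.4
numeric = (fn(x0 + h, 3) - fn(x0 - h, 3)) / (2 * h)
assert abs(numeric - iterate_derivative(x0, 3)) < 1e-6
```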

Finally in this section, consider a function (or map), f , given by either


(1.2.13) or (1.2.14).

DEFINITION 1.3 A point x* is called a periodic point of period n (a positive integer), or an n-periodic point, of f if it satisfies

    f^n(x*) = x*  but  f^k(x*) ≠ x*  for 0 < k < n .        (1.2.18)

If x* is of period one (n = 1), then it is also called a fixed point, or an equilibrium point, which satisfies

    f(x*) = x* .        (1.2.19)

Moreover, a point x* is said to be eventually periodic of period n if there is an integer m > 0 such that

    f^m(x*) is a periodic point  and  f^{m+n}(x*) = f^m(x*) .        (1.2.20)

It is a consequence of the definition that if x* is eventually periodic of period n, with a certain integer m > 0 satisfying (1.2.20), then

    f^{n+q}(x*) = f^q(x*)  for all q ≥ m .

Example 1.4
The map f(x) = x^3 − x has three fixed points: x* = 0 and x* = ±√2, which are solutions of the equation f(x*) = x*. It has two eventually fixed points (eventually periodic of period one): x* = ±1, since their first iterates go to the fixed point 0.
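The fixed points and eventually fixed points claimed in Example 1.4 can be confirmed directly:

```python
def f(x):
    return x**3 - x

# fixed points solve x^3 - x = x, i.e., x^3 = 2x: x = 0 and x = ±√2
for x_star in (0.0, 2 ** 0.5, -(2 ** 0.5)):
    assert abs(f(x_star) - x_star) < 1e-12

# x = ±1 are eventually fixed: a single iterate lands on the fixed point 0
assert f(1.0) == 0.0
assert f(-1.0) == 0.0
```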

DEFINITION 1.5 For a continuous map (or function), f, the forward orbit of a periodic point, x*, is

    O+(x*) = { f^k(x*) : k ≥ 0 } .

If f is invertible, then the backward orbit of x* is

    O−(x*) = { f^{-k}(x*) : k ≥ 0 } .

The whole orbit of x* then is

    O(x*) = O+(x*) ∪ O−(x*) = { f^k(x*) : k = 0, ±1, ±2, ⋯ } .

Note that if f is not invertible, then one may still construct a backward orbit of x* by taking x_{-1} from the preimage set f^{-1}(x0) = { x : f(x) = x0 }, then x_{-2} from f^{-1}(x_{-1}), and so on.

DEFINITION 1.6 A set S ⊆ Rⁿ is said to be invariant under f if f^k(x) ∈ S for all x ∈ S and for all k = 0, 1, 2, ⋯ .

1.3 Dynamical Analysis on the Phase Plane


In this section, a general two-dimensional nonlinear autonomous system is discussed, which has the form

    ẋ = f(x, y) ,
    ẏ = g(x, y) .        (1.3.21)

As mentioned above, the two functions f and g together describe the vector field of the system. In the following, for lower (two- or three-) dimensional systems, the state variables will be denoted x, y, and z, instead of x1, x2, and x3, for notational convenience.

1.3.1 The Phase Plane of a Planar System


The path traveled by a solution of the planar system (1.3.21), starting from the initial state (x0, y0), is a solution trajectory, or an orbit, of the system, and is sometimes denoted φt(x0, y0). For autonomous systems, the (x, ẋ) coordinate plane is called the phase plane of the system. In general, even if y ≠ ẋ, the xy coordinate plane is called the (generalized) phase plane. In the higher-dimensional case, it is called the phase space of the underlying dynamical system. Moreover, the orbit family of an autonomous system, corresponding to all possible initial conditions, is called a solution flow in the phase space. The graphical layout of the solution flow provides a phase portrait of the system dynamics in the phase space, as depicted in Fig. 1.2.

(a) phase plane

(b) phase portrait

FIGURE 1.2
Phase plane and phase portrait of a dynamical system.

The phase portraits of the damped and undamped pendulum systems, (1.1.1) and (1.1.2), are shown in Fig. 1.3.
Examining the phase portraits shown in Fig. 1.3, a natural question arises: how can one determine the direction of motion of the orbit flow in the phase plane as time evolves? Of course, a computer graphic demonstration or some technical mathematical analysis can provide a fairly complete answer to this question. However, a quick sketch showing the qualitative behavior of the system dynamics is still quite possible. This is illustrated by the following two examples.

(a) damped pendulum

(b) undamped pendulum

FIGURE 1.3
Phase portraits of the damped and undamped pendula.

Example 1.7
Consider the simple linear harmonic oscillator

    θ̈ + θ = 0 .

By defining

    x = θ  and  y = θ̇ ,

this harmonic equation becomes

    ẋ = y ,
    ẏ = −x .

With initial conditions x(0) = 1 and y(0) = 0, this harmonic oscillator has solution

    x(t) = cos(t)  and  y(t) = −sin(t) .

This solution trajectory in the xyt space and the corresponding orbit in the xy phase plane (together with some other solutions starting from different initial conditions) are sketched in Fig. 1.4, which clearly shows the direction of motion of the phase portrait.
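This solution can be verified directly: differentiating x(t) = cos t gives y(t) = ẋ(t) = −sin t, which reproduces the vector field, and the orbit stays on the unit circle, which is why the phase portrait consists of circles. The sample times below are arbitrary.

```python
import math

# analytic solution of x' = y, y' = -x with x(0) = 1, y(0) = 0
x = lambda t: math.cos(t)
y = lambda t: -math.sin(t)

h = 1e-6
for t in (0.0, 0.7, 2.5, 5.0):
    dx = (x(t + h) - x(t - h)) / (2 * h)   # numerical derivative of x
    dy = (y(t + h) - y(t - h)) / (2 * h)   # numerical derivative of y
    assert abs(dx - y(t)) < 1e-8           # checks x' = y
    assert abs(dy + x(t)) < 1e-8           # checks y' = -x
    assert abs(x(t)**2 + y(t)**2 - 1.0) < 1e-12   # orbit: unit circle
```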

(a) phase portrait in the xyt space

(b) phase portrait in the xy phase plane

FIGURE 1.4
Phase portraits of the simple harmonic equation.

Example 1.8
Consider the normalized undamped pendulum equation

    θ̈ + sin(θ) = 0 .

By defining

    x = θ  and  y = θ̇ ,

this pendulum equation becomes

    ẋ = y ,
    ẏ = −sin(x) .

With initial conditions x(0) = 0 and y(0) = 2, it has the solution

    x(t) = θ(t) = 2 sin^{-1}(tanh(t))  and  y(t) = θ̇(t) .

The phase portrait of this undamped pendulum, along with some other solutions starting from different initial conditions, is sketched in the xy phase plane, as shown in Fig. 1.5. This sketch also clearly indicates the direction of motion of the solution flow. The shape of a solution trajectory of this undamped pendulum in the xyt space can also be sketched, which may be quite complex, depending on the initial conditions.
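The closed-form solution quoted in Example 1.8 can be checked numerically: a second central difference approximates θ̈, which should cancel sin θ. The test times are arbitrary.

```python
import math

# the particular solution θ(t) = 2 sin⁻¹(tanh t) of θ'' + sin θ = 0
theta = lambda t: 2 * math.asin(math.tanh(t))

h = 1e-4
for t in (0.0, 0.5, 1.5, 3.0):
    # second central difference approximating θ''(t)
    dd = (theta(t + h) - 2 * theta(t) + theta(t - h)) / h**2
    assert abs(dd + math.sin(theta(t))) < 1e-6
```

This particular solution is the separatrix orbit: θ(t) → ±π as t → ±∞, so the pendulum approaches, but never reaches, the upright position.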

FIGURE 1.5
Phase portrait of the undamped pendulum equation.

Example 1.9
Another way to understand the phase portrait of the general undamped pendulum

    ẋ = y ,
    ẏ = −(g/ℓ) sin(x)

is to examine its total (kinetic and potential) energy

    E = y²/2 + (g/ℓ) ∫₀ˣ sin(σ) dσ = y²/2 + (g/ℓ) [1 − cos(x)] .

Figure 1.6 shows the potential energy plot, P(x), versus x = θ, and the corresponding phase portrait in the xy phase plane of the undamped pendulum. It is clear that the lowest level of total energy is E = 0, which corresponds to the angular positions x = θ = 2nπ, n = 0, ±1, ⋯. As the total energy increases, the pendulum swings up or down with an increasing or decreasing angular speed, |y| = |θ̇|, provided that E is within the limit indicated by E2. Within each period of oscillation, the total energy E = constant, according to the law of conservation of energy, for this idealized undamped pendulum.

FIGURE 1.6
Phase portrait of the undamped pendulum versus its total energy.
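The constancy of E along orbits of the undamped pendulum, used in Example 1.9, can be confirmed numerically. The sketch below normalizes g/ℓ = 1 (an illustrative choice) and integrates with a basic Runge-Kutta step.

```python
import math

g_over_l = 1.0  # normalized g/ℓ (illustrative)

def field(x, y):
    # undamped pendulum: x' = y, y' = -(g/ℓ) sin x
    return y, -g_over_l * math.sin(x)

def energy(x, y):
    # total energy E = y²/2 + (g/ℓ)(1 - cos x)
    return 0.5 * y**2 + g_over_l * (1 - math.cos(x))

def rk4(x, y, h):
    k1 = field(x, y)
    k2 = field(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = field(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = field(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

x, y = 1.0, 0.0
E0 = energy(x, y)
for _ in range(10000):           # integrate over t in [0, 10]
    x, y = rk4(x, y, 1e-3)
assert abs(energy(x, y) - E0) < 1e-8   # E is conserved along the orbit
```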

1.3.2 Phase Plane Analysis


This subsection addresses the following important question: why is it im-
portant to study autonomous systems and their phase portraits?
The answer to this question is provided by the following several theo-
rems, which together summarize a few important and useful properties of
autonomous systems in the study of nonlinear dynamics. Although these
theorems are stated and proven for planar systems in this subsection, they
hold for general higher-dimensional autonomous systems as well.

THEOREM 1.1
A nonautonomous system can be reformulated as an autonomous one.

PROOF Consider an arbitrary nonautonomous system,

    ẋ = f(x, t) ,    x0 ∈ Rⁿ .

Let the independent time variable t be a new variable; i.e., define xn+1(t) = t for this separate variable t of the system. Then ẋn+1 = 1. Consequently, the original system can be augmented as

    [   ẋ   ]   [ f(x, xn+1) ]
    [ ẋn+1  ] = [     1      ] ,

which is an autonomous system.

Obviously, the price to pay for this conversion (from a nonautonomous system to an autonomous one) is the increase of dimension. In dynamical analysis this is usually acceptable, although a higher-dimensional system usually has more complex dynamics, since the two systems are equivalent through such a variable transform.
However, it is important to note that in a nonlinear control system of the form

    ẋ = f(x, u(t)) ,

which will be studied in detail later throughout the textbook, the controller u(t) is a time function that is yet to be designed (it is not a system variable). In this case, one should not (indeed, cannot) convert the control system to an autonomous one using this technique; otherwise, the controller loses its physical meaning and can never be designed for control purposes. This issue will be revisited later within the context of control systems design.
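The augmentation trick in the proof of Theorem 1.1 is directly usable in computation. The scalar example below, ẋ = cos(t) x (my own illustration, not from the text), is made autonomous by carrying a clock variable s, with ṡ = 1, as an extra state; its exact solution x(t) = e^{sin t} provides the check.

```python
import math

def augmented(state):
    # (x, s) with s playing the role of time: x' = cos(s)·x, s' = 1
    x, s = state
    return (math.cos(s) * x, 1.0)

def rk4(state, h):
    k1 = augmented(state)
    k2 = augmented((state[0] + 0.5 * h * k1[0], state[1] + 0.5 * h * k1[1]))
    k3 = augmented((state[0] + 0.5 * h * k2[0], state[1] + 0.5 * h * k2[1]))
    k4 = augmented((state[0] + h * k3[0], state[1] + h * k3[1]))
    return (state[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            state[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

state = (1.0, 0.0)               # x(0) = 1, clock s(0) = 0
for _ in range(2000):            # integrate up to t = 2
    state = rk4(state, 1e-3)

assert abs(state[0] - math.exp(math.sin(2.0))) < 1e-6
assert abs(state[1] - 2.0) < 1e-9
```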

THEOREM 1.2
If x(t) is a solution of the autonomous system ẋ = f(x), then so is x(t + a) for any real constant a. Moreover, these two solutions are the same, except that they may pass the same point in the phase plane at two different instants.

The last statement of the theorem describes the inherent time-invariance property of autonomous systems.

PROOF Because (d/dt) x(t) = f(x(t)), for any real constant τ one has

    (d/dt) x(t + a)|_{t=τ} = (d/ds) x(s)|_{s=τ+a} = f(x(s))|_{s=τ+a} = f(x(τ + a)) .

Since this holds for all real τ, it implies that x(t + a) is a solution of the equation ẋ = f(x). Moreover, the value assumed by x(t) at the instant t = t̄ is the same as that assumed by x(t + a) at the instant t = t̄ − a. Hence, these two solutions are identical, in the sense that they have the same trajectory if they are both plotted in the phase plane of the system.

Example 1.10
The autonomous system ẋ(t) = x(t) has a solution x(t) = e^t. It is easy to verify that e^{t+a} is also a solution of this system for any real constant a. These two solutions are the same, in the sense that they have the same trajectory if they are plotted in the (x, ẋ) phase plane, except that they pass the same point at two different instants (e.g., the first one passes the point 1 at t = 0 but the second at t = −a).

However, a nonautonomous system may not have such a property.

Example 1.11
The nonautonomous system ẋ(t) = e^t has a solution x(t) = e^t. But e^{t+a} is not a solution if a ≠ 0.

Note that if one applies Theorem 1.1 to Example 1.11 and lets y(t) = t, then

    ẋ(t) = e^{y(t)} ,
    ẏ(t) = 1 ,        (a)

which has solution

    x(t) = e^t ,
    y(t) = t .        (b)

Theorem 1.1 states that (b) is a solution of (a); it does not mean that (b) is a solution of the original equation ẋ(t) = e^t. Here, only the first part of (b), i.e., x(t) = e^t, is a solution of the given equation; the second part of (b) is used to convert the given nonautonomous system to an autonomous one.

THEOREM 1.3
Suppose that the autonomous system ẋ(t) = f(x(t)) has a unique solution starting from a given initial state x(t0) = x0. Then there is no other (different) orbit of the same system that also passes through the point x0 in the phase plane.

Before giving a proof of this result, two remarks are in order. First, this theorem implies that the solution flow of an autonomous system has a simple geometry, as depicted in Fig. 1.7, where different orbits starting from different initial states do not cross each other. Second, in the phase portrait of the (damped or undamped) pendulum (see Fig. 1.3), it may seem that there is more than one orbit passing through the points (−π, 0) and (π, 0), etc. However, those orbits are periodic orbits, so the principal solution of the pendulum corresponds to those curves located between the two vertical lines passing through the two end points x = −π and x = π, respectively. Thus, within each 2π period, no self-crossing actually exists. It will be seen later that all such seeming self-crossings occur only at those special points called a stable node (sink), an unstable node (source), or a saddle node (see Fig. 1.8), where orbits spiral into a sink, spiral out from a source, and spiral in and out from a saddle node in different directions.

(a) flow has no self-crossing (b) crossing is impossible

FIGURE 1.7
Simple phase portrait of an autonomous system.

PROOF Let x1 and x2 be two solutions of ẋ = f(x), satisfying

    x1(t1) = x0  and  x2(t2) = x0 .

By Theorem 1.2,

    x̃2(t) := x2(t − (t1 − t2))

is the same solution of the given autonomous system; namely,

    x2(t) = x̃2(t) .        (a)

This solution satisfies

    x̃2(t1) := x2(t1 − (t1 − t2)) = x2(t2) = x0 .

Therefore, it follows from the uniqueness of the solution that x1 and x̃2 are the same:

    x1(t) = x̃2(t) ,        (b)

since they both equal x0 at the same initial time t1. Thus, (a) and (b) together imply that x1 and x2 are identical.

(a) source (b) sink (c) saddle node

FIGURE 1.8
Simple phase portrait of an autonomous system.

Note that a nonautonomous system may not have such a property.

Example 1.12
Consider the nonautonomous system

    ẋ(t) = cos(t) .

This system has the following solutions, among others:

    x1(t) = sin(t)  and  x2(t) = 1 + sin(t) .

These two solutions are different, for if they are plotted in the phase plane, they show two different trajectories:

    ẋ1(t) = cos(t) = √(1 − sin²(t)) = √(1 − x1²) ,
    ẋ2(t) = cos(t) = √(1 − sin²(t)) = √(1 − [1 + sin(t) − 1]²)
          = √(1 − (x2 − 1)²) .

These two trajectories cross over at the point x1 = x2 = 1/2, as can be seen from Fig. 1.9.

FIGURE 1.9
Two crossing trajectories of a nonautonomous system.

THEOREM 1.4
A closed orbit of ẋ = f(x) in the phase plane corresponds to a periodic solution of the system.

PROOF For a τ-periodic solution, x(t), one has x(t0 + τ) = x(t0) for any t0 ∈ R, which means that the trajectory of x(t) is closed.
Conversely, suppose that the orbit of x(t) is closed. Let x0 be a point on the closed orbit. Then x0 = x(t0) for some t0, and the trajectory of x(t) will return to x0 after some time, say τ > 0; that is, x(t0 + τ) = x0 = x(t0). Since x0 is arbitrary, and so is t0, this implies that x(t + τ) = x(t) for all t, meaning that x(t) is periodic (with period τ).

Yet, a nonautonomous system may not have such a property.

Example 1.13
The nonautonomous system

    ẋ = 2t y ,
    ẏ = −2t x

has solution

    x(t) = α cos(t²) + β sin(t²) ,
    y(t) = −α sin(t²) + β cos(t²) ,

for some constants α and β determined by the initial conditions. This solution is not periodic, but its orbit is a circle (a closed orbit) in the xy phase plane.
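One particular solution of the system in Example 1.13 is x(t) = cos(t²), y(t) = −sin(t²), with the two free constants in the general solution chosen as 1 and 0. The check below confirms that the orbit lies on the unit circle even though the solution itself is not periodic: the angle t² advances faster and faster, so shifting t by 2π does not reproduce x(t).

```python
import math

x = lambda t: math.cos(t**2)
y = lambda t: -math.sin(t**2)

# the orbit is the unit circle, a closed curve in the xy phase plane
for t in (0.0, 0.9, 2.0, 3.3):
    assert abs(x(t)**2 + y(t)**2 - 1.0) < 1e-12

# ...but the solution is not 2π-periodic in t
assert abs(x(1.0) - x(1.0 + 2 * math.pi)) > 0.1
```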

As mentioned at the beginning of this subsection, the above four theorems hold for general higher-dimensional autonomous systems. Since these properties are simple, elegant, and easy to work with, and a nonautonomous system may not have them, it is very natural to focus the investigation of complex dynamics on autonomous systems of various forms in different dimensions. This motivates the following studies.

1.4 Qualitative Behaviors of Dynamical Systems


In this section, the following general two-dimensional autonomous system is considered:

    ẋ = f(x, y) ,
    ẏ = g(x, y) .        (1.4.22)

Let Γ be a periodic solution of the system, which has a closed orbit in the xy phase plane.

DEFINITION 1.14 Γ is said to be an inner (outer) limit cycle of system (1.4.22) if in an arbitrarily small neighborhood of the inner (outer) region of Γ there is always (part of) a nonperiodic solution orbit of the system. Γ is called a limit cycle if it is both an inner and an outer limit cycle.

Simply put, a limit cycle is a periodic orbit of the system that corresponds
to a closed orbit in the phase plane and possesses certain (attracting or
repelling) limiting properties. Figure 1.10 shows some typical limit cycles
for the two-dimensional system (1.4.22), where the attracting limit cycle is
said to be stable, while the repelling one, unstable.

Example 1.15
The simple harmonic oscillator discussed in Example 1.7 has no limit cycles. The solution flow of the oscillator constitutes a ring of periodic orbits, called a periodic ring, as shown in Fig. 1.4. Similarly, the undamped pendulum has no limit cycles (see Fig. 1.3).

This example shows that although a limit cycle is a periodic orbit, not
all periodic orbits are limit cycles (not even inner or outer limit cycles).

(a) inner limit cycle (b) outer limit cycle (c) attracting limit cycle

(d) repelling limit cycle (e) saddle limit cycle (f) saddle limit cycle

FIGURE 1.10
Periodic orbits and limit cycles.

Example 1.16
A typical example of a stable limit cycle is the periodic solution of the Rayleigh oscillator, given by

    ẍ + x = p (ẋ − ẋ³) ,    p > 0 ,        (1.4.23)

which was formulated in the 1920s to describe oscillations in some electrical and mechanical systems. This limit cycle is shown in Fig. 5.22 for some different values of p. These phase portraits are usually obtained either numerically or experimentally, because they do not have a simple analytic formula.

Example 1.17
Another typical example of a stable limit cycle is the periodic solution of the van der Pol oscillator, given by

    ẍ + x = p (1 − x²) ẋ ,    p > 0 ,        (1.4.24)

which was formulated around 1920 to describe oscillations in a triode circuit. This limit cycle is shown in Fig. 5.22; it is usually obtained either numerically or experimentally, because it does not have a simple analytic formula.
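The attracting character of the van der Pol limit cycle is easy to observe numerically: trajectories started both well inside and well outside the cycle settle onto an oscillation of amplitude roughly 2. The parameter p = 1, the step size, and the run length below are illustrative choices.

```python
def vdp(x, y, p=1.0):
    # van der Pol (1.4.24) in state-space form: x' = y, y' = -x + p(1 - x²)y
    return y, -x + p * (1 - x**2) * y

def rk4(x, y, h):
    k1 = vdp(x, y)
    k2 = vdp(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = vdp(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = vdp(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

amps = []
for x0 in (0.01, 4.0):           # start inside and outside the cycle
    x, y, amp = x0, 0.0, 0.0
    for step in range(60000):    # t in [0, 60] with h = 1e-3
        x, y = rk4(x, y, 1e-3)
        if step > 40000:         # measure max |x| after the transient
            amp = max(amp, abs(x))
    amps.append(amp)

assert all(1.8 < a < 2.2 for a in amps)  # both settle near amplitude ≈ 2
```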

1.4.1 Qualitative Analysis of Linear Dynamics


For illustration, consider a second-order linear autonomous (i.e., time-invariant) system,

    ẋ(t) = A x(t) ,    x(0) = x0 ,        (1.4.25)


FIGURE 1.11
Phase portrait of the Rayleigh oscillator
(a) p = 0.01; (b) p = 0.1; (c) p = 1.0; (d) p = 10.0.

FIGURE 1.12
Phase portrait of the van der Pol oscillator.

where A is a given 2 × 2 constant matrix and, for simplicity, the initial time is set to t0 = 0. Obviously, this system has a unique fixed point, x* = 0.
The solution of system (1.4.25) is

    x(t) = e^{tA} x0 = M e^{tJ} M^{-1} x0 ,        (1.4.26)

where M = [v1 v2], with v1 and v2 being two linearly independent real eigenvectors associated with the two eigenvalues of A, and J is in the Jordan canonical form, which is one of the following:

    [ λ1   0 ]      [ λ  δ ]      [ α  −β ]
    [ 0   λ2 ] ,    [ 0  λ ] ,    [ β   α ] ,

where λ1, λ2, λ, α, and β are real constants, and δ = 0 or 1.
There are three cases to study, according to the three different canonical forms of the Jordan matrix J shown above.
Case (i) The two constants λ1 and λ2 are different, but both real and nonzero.
In this case, since λ1 and λ2 are actually the eigenvalues of the matrix A, the two eigenvectors v1 and v2 are associated with them, respectively. Let

    z = M^{-1} x .

Then the given system is transformed to

    ż = [ λ1   0 ]
        [ 0   λ2 ] z ,    with  z0 = M^{-1} x0 := [ z10, z20 ]ᵀ .

Its solution is

    z1(t) = z10 e^{λ1 t}  and  z2(t) = z20 e^{λ2 t} ,

which are related by

    z2(t) = c [z1(t)]^{λ2/λ1} ,    with  c = z20 z10^{−λ2/λ1} .

To show the phase portraits of the solution flow, there are three situations to consider: (a) λ2 < λ1 < 0; (b) λ2 < 0 < λ1; (c) 0 < λ2 < λ1.
Only situation (a) is discussed in detail here. In this case, the two eigen-
values are both negative, so that et1 0 and et2 0 as t , but the
latter tends to zero faster. The corresponding phase portrait is shown in
Fig. 1.13, where the fixed point (the origin) is called a stable node.
Now, return to the original state, x = M z. The original phase portrait is
somewhat twisted, as shown in Fig. 1.14. Qualitatively, Figs. 1.13 and 1.14
are considered to be the same, or topologically equivalent. A more precise
meaning of topological equivalence is given below in (1.4.29). Roughly

FIGURE 1.13
Phase portrait of case (a): $\lambda_2 < \lambda_1 < 0$.

FIGURE 1.14
Phase portrait of case (a) in the original coordinates: $\lambda_2 < \lambda_1 < 0$.

at this point, it means that their dynamical behaviors are qualitatively


similar.
The other two situations, (b) and (c), can be analyzed in the same way,
where case (b) shows a saddle node and case (c), an unstable node. This is
left as an exercise.
Case (ii) The two constants $\lambda_1$ and $\lambda_2$ are nonzero complex conjugates: $\lambda_{1,2} = \alpha \pm j\beta$, where $j = \sqrt{-1}$. Let
\[
z = M^{-1} x ,
\]
and transform the given system to
\[
\dot z = \begin{bmatrix} \alpha & -\beta \\ \beta & \alpha \end{bmatrix} z , \qquad \text{with} \quad z_0 = \begin{bmatrix} z_{10} \\ z_{20} \end{bmatrix} .
\]
In polar coordinates,
\[
r = \sqrt{z_1^2 + z_2^2} \quad \text{and} \quad \theta = \tan^{-1}\frac{z_2}{z_1} ,
\]
this system has solution
\[
r(t) = r_0\, e^{\alpha t} \quad \text{and} \quad \theta(t) = \theta_0 + \beta t ,
\]
where $r_0 = \bigl( z_{10}^2 + z_{20}^2 \bigr)^{1/2}$ and $\theta_0 = \tan^{-1}\bigl( z_{20}/z_{10} \bigr)$. This solution trajectory is visualized by Fig. 1.15, where the fixed point (the origin) in case (a), with $\alpha < 0$, is called a stable focus; in case (b), with $\alpha > 0$, an unstable focus; and in case (c), with $\alpha = 0$, a center.
In the original x-y phase plane, the phase portrait has a twisted shape, as shown in Fig. 1.16.

FIGURE 1.15
Phase portrait of case (ii): $\lambda_{1,2} = \alpha \pm j\beta$. (a) 0 is a stable focus; (b) 0 is an unstable focus; (c) 0 is a center.

Case (iii) The two constants $\lambda_1$ and $\lambda_2$ are equal, nonzero and real: $\lambda_1 = \lambda_2 := \lambda$. Let
\[
z = M^{-1} x ,
\]

FIGURE 1.16
Phase portrait of case (ii) in the original coordinates: $\lambda_{1,2} = \alpha \pm j\beta$. (a) 0 is a stable focus; (b) 0 is an unstable focus; (c) 0 is a center.

and transform the given system to
\[
\dot z = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} z , \qquad \text{with} \quad z_0 = \begin{bmatrix} z_{10} \\ z_{20} \end{bmatrix} .
\]
Its solution is
\[
z_1(t) = e^{\lambda t}\bigl( z_{10} + z_{20}\, t \bigr) \quad \text{and} \quad z_2(t) = z_{20}\, e^{\lambda t} ,
\]
which are related by
\[
z_1(t) = z_2(t) \left[ \frac{z_{10}}{z_{20}} + \frac{1}{\lambda} \ln \frac{z_2(t)}{z_{20}} \right] .
\]
Its phase portrait is shown in Fig. 1.17, while its corresponding phase portrait in the x-y phase plane is similar (in particular, the linear coordinate transform M does not change the shape of a straight line between the two phase planes).
Case (iv) One, or both, of $\lambda_{1,2}$ are zero (the degenerate case).
In this case, the matrix A in $\dot x = A x$ has a nontrivial null space (of dimension 1 or 2, respectively), so that any vector in the null space of A is a fixed point. As a result, the system has a fixed, or equilibrium, subspace. More precisely, these two situations are as follows:
(a) $\lambda_1 = 0$ but $\lambda_2 \ne 0$
In this case, the system can be transformed to
\[
\dot z = \begin{bmatrix} 0 & 0 \\ 0 & \lambda_2 \end{bmatrix} z , \qquad \text{with} \quad z_0 = \begin{bmatrix} z_{10} \\ z_{20} \end{bmatrix} ,
\]
which has solution
\[
z_1(t) = z_{10} \quad \text{and} \quad z_2(t) = z_{20}\, e^{\lambda_2 t} .
\]



FIGURE 1.17
Phase portrait of case (iii): $\lambda_1 = \lambda_2 \ne 0$. (a) 0 is a stable focus; (b) 0 is an unstable focus; (c) 0 is a stable node; (d) 0 is an unstable node.

FIGURE 1.18
Phase portrait of case (iv): $\lambda_1 = 0$ but $\lambda_2 \ne 0$. (a) a stable equilibrium subspace; (b) an unstable equilibrium subspace.

The phase portrait of $x = M z$ is shown in Fig. 1.18, where (a) shows a stable equilibrium subspace and (b), an unstable subspace.
(b) $\lambda_1 = \lambda_2 = 0$
In this case, the system is transformed to
\[
\dot z = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} z , \qquad \text{with} \quad z_0 = \begin{bmatrix} z_{10} \\ z_{20} \end{bmatrix} ,
\]
which has solution
\[
z_1(t) = z_{10} + z_{20}\, t \quad \text{and} \quad z_2(t) = z_{20} .
\]

The phase portrait of $x = M z$ is shown in Fig. 1.19, which is a saddle equilibrium subspace.

FIGURE 1.19
Phase portrait of case (iv): $\lambda_1 = \lambda_2 = 0$.
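For a planar linear system $\dot x = A x$, the case analysis above reduces to checking the trace and determinant of A. The following sketch (an illustration added here, not part of the text) implements that classification:

```python
import cmath

def classify_planar(a, b, c, d, tol=1e-12):
    """Classify the fixed point at the origin of the planar linear
    system x' = A x, A = [[a, b], [c, d]], following Cases (i)-(iv)."""
    tr = a + d                      # sum of the eigenvalues
    det = a * d - b * c             # product of the eigenvalues
    disc = tr * tr - 4.0 * det      # discriminant of the characteristic polynomial
    lam1 = (tr + cmath.sqrt(disc)) / 2.0
    lam2 = (tr - cmath.sqrt(disc)) / 2.0
    if abs(lam1) < tol or abs(lam2) < tol:
        return "degenerate (equilibrium subspace)"     # Case (iv)
    if disc < -tol:                                    # complex pair, Case (ii)
        if tr < -tol:
            return "stable focus"
        if tr > tol:
            return "unstable focus"
        return "center"
    if det < -tol:                                     # real eigenvalues of opposite sign
        return "saddle"                                # Case (i), situation (b)
    return "stable node" if tr < 0 else "unstable node"

print(classify_planar(-2, 0, 0, -1))   # Case (i)(a): stable node
print(classify_planar(0, -3, 3, 0))    # Case (ii)(c): center
```

Note that Case (iii) (a repeated nonzero real eigenvalue) falls under the node branches here, since its discriminant is zero.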

1.4.2 Qualitative Analysis of Nonlinear Dynamics


This subsection is devoted to some qualitative analysis of dynamical be-
haviors of a general nonlinear autonomous system in a neighborhood of a
fixed point (equilibrium point) of the system. It should be mentioned at
this point that all results derived here are local.
Consider a general nonlinear autonomous system,
\[
\dot x = f(x) , \qquad x_0 \in \mathbb{R}^n , \tag{1.4.27}
\]
where it is assumed that $f \in C^1$ (i.e., it is continuously differentiable with respect to its arguments). Assume that this system has a fixed point, $x^*$. Taylor-expanding $f(x)$ at $x^*$ yields
\[
\dot x = f(x^*) + \frac{\partial f}{\partial x}\bigg|_{x=x^*} \bigl( x - x^* \bigr) + e(x) = J \bigl( x - x^* \bigr) + e(x) ,
\]

where
\[
J = \frac{\partial f}{\partial x}\bigg|_{x=x^*} = \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 \end{bmatrix}_{x=x^*}
\]
is the Jacobian (displayed here for a planar system), and $e(x) = O(||x - x^*||^2)$ represents the residual of all higher-order terms, which satisfies
\[
\lim_{||x - x^*|| \to 0} \frac{||e(x)||}{||x - x^*||} = 0 .
\]
Letting $y = x - x^*$ leads to
\[
\dot y = J\, y + e(y) ,
\]
where $e(y) = O(||y||^2)$. In a small neighborhood of $x^*$, $||x - x^*||$ is small, so $O(||y||^2) \approx 0$. Thus, the nonlinear autonomous system (1.4.27) and its linearized system $\dot x = J (x - x^*)$ have the same local dynamical behaviors, where the latter in a small neighborhood of $x^*$ is the same as $\dot y = J y$ in a small neighborhood of 0. In other words, for
\[
\dot x = f(x) \quad \text{versus} \quad \dot y = J y , \tag{1.4.28}
\]
one has the correspondence
\[
\begin{array}{lcl}
x^* \text{ is a stable node} & \Longleftrightarrow & 0 \text{ is a stable node} \\
x^* \text{ is an unstable node} & \Longleftrightarrow & 0 \text{ is an unstable node} \\
x^* \text{ is a stable focus} & \Longleftrightarrow & 0 \text{ is a stable focus} \\
x^* \text{ is an unstable focus} & \Longleftrightarrow & 0 \text{ is an unstable focus} \\
x^* \text{ is a saddle node} & \Longleftrightarrow & 0 \text{ is a saddle node}
\end{array} \tag{1.4.29}
\]

In this sense, the local dynamical behaviors of the two systems in (1.4.28)
are said to be qualitatively the same, or topologically equivalent. A precise
mathematical definition is given as follows.

DEFINITION 1.18 Two time-invariant system functions, $f : X \to Y$ and $g : \tilde X \to \tilde Y$, where $X$, $Y$, $\tilde X$, and $\tilde Y$ are (open sets of) metric spaces, are said to be topologically equivalent, if there is a homeomorphism, $h : \tilde X \to X$, such that $h^{-1} : Y \to \tilde Y$ and
\[
g(x) = h^{-1} \circ f \circ h\,(x) , \qquad x \in \tilde X .
\]

This definition is illustrated by Fig. 1.20. Here, a homeomorphism is an invertible continuous function whose inverse is also continuous. For instance, for $X = Y = \mathbb{R}$, the two functions $f(x) = 2x^3$ and $g(x) = 8x^3$ are topologically equivalent. This is because one can find a homeomorphism, $h(x) = x^{1/3}$, which yields
\[
h^{-1} \circ f \circ h\,(x) = \bigl( 2\, (x^{1/3})^3 \bigr)^3 = 8 x^3 = g(x) .
\]
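This computation is easy to confirm numerically. The following sketch (an added illustration) implements $h$, its inverse, and checks the identity $h^{-1} \circ f \circ h = g$ at sample points, using the odd real cube root for $h$:

```python
def f(x):
    return 2.0 * x ** 3

def g(x):
    return 8.0 * x ** 3

def h(x):
    """Odd real cube root: h(x) = x^(1/3), defined for negative x as well."""
    return abs(x) ** (1.0 / 3.0) * (1.0 if x >= 0 else -1.0)

def h_inv(x):
    """Inverse of h: h^{-1}(x) = x^3."""
    return x ** 3

# Check g = h^{-1} o f o h on a few sample points
for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(h_inv(f(h(x))) - g(x)) < 1e-8
print("topological equivalence verified at sample points")
```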

             f
     X -----------> Y
     ^              |
   h |              | h^{-1}
     |              v
     X~ ----------> Y~
             g

FIGURE 1.20
Two topologically equivalent functions, or topologically conjugate maps.

A homeomorphism preserves the system dynamics, as seen from the example of the one-to-one correspondence (1.4.29). When both X and Y are Euclidean spaces, the homeomorphism h may be viewed as a coordinate transform.
For discrete-time systems (maps), such topological equivalence is referred to as topological conjugacy, and the two maps are said to be topologically conjugate if they satisfy the relationships shown in Fig. 1.20, where h is a homeomorphism.

THEOREM 1.5
If f and g are topologically conjugate, then
(i) the orbits of f map to the orbits of g under h;
(ii) if $x^*$ is a fixed point of f, then the eigenvalues of $f'(x^*)$ map to the eigenvalues of $g'(h(x^*))$.

PROOF First, note that the orbit of x under iterates of map f is
\[
\gamma(x) = \bigl\{ \ldots,\; f^{-k}(x),\; \ldots,\; f^{-1}(x),\; x,\; f(x),\; \ldots,\; f^{k}(x),\; \ldots \bigr\} .
\]
Since $f = h^{-1} \circ g \circ h$, for any given $k > 0$, one has
\[
f^k(x) = \bigl( h^{-1} \circ g \circ h \bigr) \circ \cdots \circ \bigl( h^{-1} \circ g \circ h \bigr)(x) = h^{-1} \circ g^k \circ h\,(x) .
\]
On the other hand, since $f^{-1} = h^{-1} \circ g^{-1} \circ h$, for any given $k > 0$ one has
\[
h \circ f^{-k}(x) = g^{-k} \circ h\,(x) .
\]
A comparison of these two equalities shows that the orbit of x under iterates of f is mapped by h to the orbit of h(x) under iterates of map g. This proves Part (i).

The conclusion of Part (ii) follows from a direct calculation:
\[
\frac{df}{dx}\bigg|_{x=x^*} = \frac{dh^{-1}}{dx}\bigg|_{x=h(x^*)} \cdot \frac{dg}{dx}\bigg|_{x=h(x^*)} \cdot \frac{dh}{dx}\bigg|_{x=x^*} ,
\]
recalling that similar matrices have the same eigenvalues.

Example 1.19
The damped pendulum system (1.1.1), namely,
\[
\dot x = y , \qquad \dot y = -\frac{\kappa}{m}\, y - \frac{g}{\ell} \sin(x) ,
\]
has two fixed points,
\[
(x^*, y^*) = (0, 0) \quad \text{and} \quad (x^*, y^*) = (\pi, 0) .
\]
It is known from the pendulum physics (see Fig. 1.21) that the first fixed point is stable while the second, unstable.

FIGURE 1.21
Two fixed points of a damped pendulum: (a) the stable fixed point; (b) the unstable fixed point.

According to the above analysis, the Jacobian of the damped pendulum system is
\[
J = \begin{bmatrix} 0 & 1 \\ -g\,\ell^{-1} \cos(x) & -\kappa/m \end{bmatrix} .
\]
There are two cases to consider at the fixed points:
(a) $x^* = \theta^* = 0$
In this case, the two eigenvalues of J are
\[
\lambda_{1,2} = -\frac{\kappa}{2m} \pm \frac{1}{2} \sqrt{ \kappa^2/m^2 - 4\,g/\ell } ,
\]
which implies that the fixed point is stable since $\Re\{\lambda_{1,2}\} < 0$.

(b) $x^* = \theta^* = \pi$
In this case, the two eigenvalues of J are
\[
\lambda_{1,2} = -\frac{\kappa}{2m} \pm \frac{1}{2} \sqrt{ \kappa^2/m^2 + 4\,g/\ell } ,
\]
where $\Re\{\lambda_1\} > 0$ and $\Re\{\lambda_2\} < 0$, which implies that the fixed point is a saddle node and, hence, is unstable in one direction (on the plane shown in Fig. 1.21, along which the pendulum can swing back and forth).
Clearly, the mathematical analysis given here is consistent with the physics of the damped pendulum discussed before.
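The two cases can also be checked numerically. In the sketch below (an added illustration), the parameter values m = 1, ℓ = 1, κ = 0.5, g = 9.8 are assumptions chosen only for demonstration:

```python
import cmath
import math

# Illustrative parameters (assumed, not from the text)
m, l, kappa, g = 1.0, 1.0, 0.5, 9.8

def pendulum_eigs(x_star):
    """Eigenvalues of the damped-pendulum Jacobian
    J = [[0, 1], [-(g/l)*cos(x*), -kappa/m]] at a fixed point x*."""
    tr = -kappa / m                      # trace of J
    det = (g / l) * math.cos(x_star)     # determinant of J
    disc = tr * tr - 4.0 * det
    return ((tr + cmath.sqrt(disc)) / 2.0,
            (tr - cmath.sqrt(disc)) / 2.0)

l1, l2 = pendulum_eigs(0.0)              # at (0, 0): both real parts negative
print(l1.real < 0 and l2.real < 0)       # True
l1, l2 = pendulum_eigs(math.pi)          # at (pi, 0): one positive, one negative
print(l1.real > 0 > l2.real)             # True
```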

Example 1.20
Consider a simplified coupled-neuron model,
\[
\dot x = -\alpha x + h(\beta y) , \qquad \dot y = -\alpha y + h(\beta x) ,
\]
where $\alpha > 0$ and $\beta > 0$ are constants, and $h(u)$ is a continuous function satisfying $h(-u) = -h(u)$, with $h'(u)$ being two-sided monotonically decreasing as $u \to \pm\infty$. One typical example is the sigmoidal function
\[
h(u) = \frac{2}{1 + e^{-a u}} - 1 \qquad \text{(with constant } a > 0\text{)} .
\]
In the above coupled-neuron model, with a general function h as described, one can show that
(i) there is a fixed point at $x^* = y^* := \xi$;
(ii) if
\[
\beta h'(\beta\xi) = \frac{d\, h(\beta y)}{dy}\bigg|_{y=\xi} < \alpha ,
\]
then this fixed point is unique and is a stable node;
(iii) if $\beta h'(\beta\xi) > \alpha$, then there are two other fixed points, at
\[
\bigl( \tilde\xi,\; \alpha^{-1} h(\beta\tilde\xi) \bigr) \quad \text{and} \quad \bigl( \alpha^{-1} h(\beta\tilde\xi),\; \tilde\xi \bigr) ,
\]
respectively, for the same value of $\tilde\xi$; they are stable nodes; but the one at $(\xi, \xi)$ becomes a saddle point in this case.
To proceed, one may first observe that showing there is a fixed point at $x^* = y^* = \xi$ is equivalent to showing that $-\alpha\xi + h(\beta\xi) = 0$, or that the straight line $z = \alpha x$ and the curve $z = h(\beta x)$ have a crossing point in the x-z plane. This is obvious from the geometry depicted in Fig. 1.22, since h is continuous.

FIGURE 1.22
Existence of a fixed point in the coupled-neuron model.

Then, notice that showing $\xi$ is the unique root of the equation
\[
f(\xi) := -\alpha\xi + h(\beta\xi) = 0
\]
is equivalent to showing that the function $f(\xi)$ is strictly monotonic, so that $f(\xi) = 0$ has only one root. To verify this, observe that
\[
f'(\xi) = -\alpha + \beta h'(\beta\xi) < 0 ,
\]
where $\beta h'(\beta\xi) = d\, h(\beta\xi)/d\xi < \alpha$ by assumption. This implies that $f(\xi)$ is decreasing. Moreover,
\[
h(-u) = -h(u) \;\Longrightarrow\; -h'(-u) = -h'(u) \;\Longrightarrow\; h'(-u) = h'(u) .
\]
Since $h'(u)$ is two-sided monotonically decreasing as $u \to \pm\infty$, so are $h'(\beta\xi)$ and
\[
f'(\xi) = -\alpha + \beta h'(\beta\xi) .
\]
Therefore, $f(\xi)$ is strictly monotonic. Consequently, $f(\xi) = 0$ has only one root. To determine the stability of this root, by examining the Jacobian
\[
J\big|_{x=y=\xi} = \begin{bmatrix} -\alpha & \beta h'(\beta\xi) \\ \beta h'(\beta\xi) & -\alpha \end{bmatrix} ,
\]
one finds that its eigenvalues
\[
s_1 = -\alpha - \beta h'(\beta\xi) \quad \text{and} \quad s_2 = -\alpha + \beta h'(\beta\xi)
\]
satisfy $s_1 < s_2 < 0$, since $\beta h'(\beta\xi) < \alpha$. Hence, $x^* = y^* = \xi$ is a stable node.


Finally, consider the equation
\[
f(x) = -\alpha x + h(\beta x) = 0
\]

FIGURE 1.23
Three crossing points between the curve and the straight line.

on the x-z plane. If $\beta h'(\beta\xi) > \alpha$, then it can be verified that the curve $z = h(\beta x)$ and the straight line $z = \alpha x$ have three crossing points, as shown by Fig. 1.23.
The above has shown that at least one crossing point is at $x = \xi$, where $\beta h'(\beta\xi) > \alpha$. One can show that there must be two more crossing points, one at $\xi_r > \xi$ and the other at $\xi_\ell < \xi$, as depicted in Fig. 1.23.
Indeed, just to the right of $\xi$ one has
\[
f'(x) = -\alpha + \beta h'(\beta x) \approx -\alpha + \beta h'(\beta\xi) > 0 ,
\]
so that $f(x) > 0$ for $x > \xi$ near $\xi$. Since $h'(\beta x)$ is two-sided monotonically decreasing, with $h'(\beta x) \to 0$ as $x \to \infty$, there must be a point beyond which $f$ is strictly decreasing, and $f(x) \to -\infty$ as $x \to \infty$. This implies that the two curves $h(\beta x)$ and $\alpha x$ have a crossing point, $\xi_r > \xi$. The existence of another crossing point, $\xi_\ell < \xi$, can be similarly verified.
Now, to find the two new fixed points of the system, one can set
\[
-\alpha x + h(\beta y) = 0
\]
to obtain
\[
x_1 = \alpha^{-1} h(\beta\eta) , \qquad y_1 = \eta ,
\]
and set
\[
-\alpha y + h(\beta x) = 0
\]
to obtain
\[
x_2 = \eta , \qquad y_2 = \alpha^{-1} h(\beta\eta) ,
\]
where $\eta$ is a real value. These solutions have the same Jacobian, and the eigenvalues of the Jacobian are
\[
s_1 = -\alpha - \beta\sqrt{ h'(\beta x)\, h'(\beta y) } \quad \text{and} \quad s_2 = -\alpha + \beta\sqrt{ h'(\beta x)\, h'(\beta y) } ,
\]
which satisfy $s_1 < s_2 < 0$ at the above two crossing points, and satisfy $s_1 < 0 < s_2$ at $x = y = \xi$, where the latter is a saddle node.
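The analysis in this example can be reproduced numerically. In the sketch below (an added illustration), the sigmoid h(u) = 2/(1 + e^{-au}) − 1 is used with the assumed values α = 1, β = 2, a = 2, chosen so that βh′(βξ) > α at the diagonal fixed point ξ = 0:

```python
import math

# Assumed illustrative parameters: alpha = 1, beta = 2, a = 2
alpha, beta, a = 1.0, 2.0, 2.0

def h(u):
    return 2.0 / (1.0 + math.exp(-a * u)) - 1.0     # sigmoid, h(-u) = -h(u)

def dh(u):
    return (a / 2.0) * (1.0 - h(u) ** 2)            # h'(u): even, positive, decaying

def f(x):
    return -alpha * x + h(beta * x)                 # diagonal fixed-point equation

def bisect(lo, hi, n=80):
    """Bisection for a root of f; f changes sign on [lo, hi]."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# xi = 0 is a fixed point since h(0) = 0, and beta*h'(0) = 2 > alpha = 1,
# so two more symmetric roots exist; locate the positive one.
xi_r = bisect(0.1, 5.0)
s2_at_0 = -alpha + beta * dh(0.0)                   # > 0: (0, 0) is a saddle
s2_at_r = -alpha + beta * dh(beta * xi_r)           # < 0: (xi_r, xi_r) is a stable node
print(s2_at_0 > 0, s2_at_r < 0)                     # True True
```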

Next, return to the general nonlinear autonomous system (1.4.27).

DEFINITION 1.21 The fixed point $x^*$ of the autonomous system (1.4.27) is said to be hyperbolic, if all the eigenvalues of the system Jacobian J at this fixed point have nonzero real parts.

The importance of hyperbolic fixed points of a nonlinear autonomous


system can be appreciated by the following fundamental result about the
local dynamics of the autonomous system.

THEOREM 1.6 (Grobman-Hartman Theorem)
If $x^*$ is a hyperbolic fixed point of the nonlinear autonomous system (1.4.27), then the dynamical behavior of this system is qualitatively the same as that of its linearized system, in a (small) neighborhood of $x^*$.

Here, the equivalence of the dynamics of the two systems is local, and this theorem is not applicable to a nonautonomous system in general.
PROOF [See R. W. Easton, p. 52, or C. Robinson, p. 158.]

Exercises
1.1 Sketch, by hand, the phase portraits and classify the fixed points of the following two linear systems:
\[
\dot x = 3x + 4y , \qquad \dot y = 2x + 3y
\]
and
\[
\dot x = 4x - 3y , \qquad \dot y = 8x - 6y .
\]
1.2 Consider the Duffing oscillator equation
\[
\ddot x(t) + a\, \dot x(t) + b\, x(t) + c\, x^3(t) = \gamma \cos(\omega t) , \tag{1.4.30}
\]
where a, b, c are constants and $\gamma \cos(\omega t)$ is an external force input. By introducing $y(t) = \dot x(t)$, rewrite this equation in a state-space formulation. Use a computer to plot its phase portraits for the following cases: $a = 0.4$, $b = -1.1$, $c = 1.0$, $\omega = 1.8$, and (1) $\gamma = 0.620$, (2) $\gamma = 1.498$, (3) $\gamma = 1.800$, (4) $\gamma = 2.100$, (5) $\gamma = 2.300$, (6) $\gamma = 7.000$. Indicate the directions of the orbit flow.
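A sketch of the requested state-space form with a plain RK4 integrator is given below (an added illustration; the integrator, step size, and the sign convention b = −1.1 are assumptions — only the orbit data are produced, which a plotting library would then display):

```python
import math

# State-space form of (1.4.30) with y = dx/dt:
#   x' = y,   y' = -a*y - b*x - c*x^3 + gamma*cos(omega*t)
a, b, c, omega = 0.4, -1.1, 1.0, 1.8

def rhs(t, s, gamma):
    x, y = s
    return (y, -a * y - b * x - c * x ** 3 + gamma * math.cos(omega * t))

def rk4_orbit(gamma, s0=(0.0, 0.0), dt=0.01, steps=5000):
    """Classical 4th-order Runge-Kutta; returns the orbit as (x, y) pairs."""
    t, s, orbit = 0.0, s0, []
    for _ in range(steps):
        k1 = rhs(t, s, gamma)
        k2 = rhs(t + dt / 2, (s[0] + dt / 2 * k1[0], s[1] + dt / 2 * k1[1]), gamma)
        k3 = rhs(t + dt / 2, (s[0] + dt / 2 * k2[0], s[1] + dt / 2 * k2[1]), gamma)
        k4 = rhs(t + dt, (s[0] + dt * k3[0], s[1] + dt * k3[1]), gamma)
        s = (s[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             s[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
        orbit.append(s)
    return orbit

orbit = rk4_orbit(gamma=0.620)   # case (1); plot x against y for the portrait
print(len(orbit))                # 5000
```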
1.3 Consider Chua's circuit, shown in Fig. 1.24, which consists of one inductor (L), two capacitors (C1, C2), one linear resistor (R), and one piecewise-linear resistor (g).

FIGURE 1.24
Chua's circuit.

The dynamical equation of the circuit is described by
\[
\begin{aligned}
C_1\, \dot v_{C_1} &= R^{-1}\bigl( v_{C_2} - v_{C_1} \bigr) - g\bigl( v_{C_1} \bigr) \\
C_2\, \dot v_{C_2} &= R^{-1}\bigl( v_{C_1} - v_{C_2} \bigr) + i_L \\
L\, \dot i_L &= -v_{C_2} ,
\end{aligned} \tag{1.4.31}
\]
where $i_L$ is the current through the inductor L, $v_{C_1}$ and $v_{C_2}$ are the voltages across $C_1$ and $C_2$, respectively, and
\[
g\bigl( v_{C_1} \bigr) = m_0\, v_{C_1} + \frac{1}{2}\bigl( m_1 - m_0 \bigr)\Bigl( \bigl| v_{C_1} + 1 \bigr| - \bigl| v_{C_1} - 1 \bigr| \Bigr) ,
\]
with $m_0 < 0$ and $m_1 < 0$ being some appropriately chosen constants. This piecewise-linear function is shown in Fig. 1.25 for clarity.

FIGURE 1.25
The piecewise-linear resistance in Chua's circuit (inner slope $m_1$ for $|v_{C_1}| < 1$, outer slope $m_0$, with breakpoints at $\pm 1$).

Verify that, by defining $p = C_2/C_1 > 0$ and $q = C_2 R^2/L > 0$, and by the change of variables $x(\tau) = v_{C_1}(t)$, $y(\tau) = v_{C_2}(t)$, $z(\tau) = R\, i_L(t)$, with $\tau = t/(R C_2)$, the above circuit equations can be reformulated in the state-space form
\[
\begin{aligned}
\dot x &= p\bigl( -x + y - f(x) \bigr) \\
\dot y &= x - y + z \\
\dot z &= -q\, y ,
\end{aligned} \tag{1.4.32}
\]
where $f(x) = R\, g\bigl( v_{C_1} \bigr)$.
For $p = 10.0$, $q = 14.87$, $m_0 = -0.68$, $m_1 = -1.27$, and initial conditions $(0.1, 0.1, 0.1)$, use a computer to plot the circuit orbit portrait in the x-y-z space (or show the portrait projections on the three principal planes: (a) the x-y plane, (b) the y-z plane, and (c) the z-x plane).
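The piecewise-linear characteristic can be checked directly: on |v| < 1 the slope is m1, and outside it is m0. A small sketch (an added illustration, using the exercise's slope values):

```python
def g_pwl(v, m0=-0.68, m1=-1.27):
    """Chua piecewise-linear resistor characteristic (dimensionless slopes)."""
    return m0 * v + 0.5 * (m1 - m0) * (abs(v + 1.0) - abs(v - 1.0))

eps = 1e-6
inner_slope = (g_pwl(0.5 + eps) - g_pwl(0.5)) / eps   # inside |v| < 1
outer_slope = (g_pwl(2.0 + eps) - g_pwl(2.0)) / eps   # outside |v| > 1
print(round(inner_slope, 3), round(outer_slope, 3))   # -1.27 -0.68
```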
1.4 Consider the nonlinear system
\[
\dot x = -y + \mu x \bigl( x^2 + y^2 \bigr) , \qquad \dot y = x + \mu y \bigl( x^2 + y^2 \bigr) .
\]
Show that (0, 0) is the only fixed point, and find under what condition on the constant $\mu$ this fixed point is a stable or unstable focus. [Hint: Polar coordinates may be convenient to use.]
1.5 Determine the types and stabilities of the fixed points for the following systems:
\[
\ddot y + \dot y + y^3 = 0
\]
and
\[
\dot x = x + xy , \qquad \dot y = y - xy .
\]
1.6 For each of the following systems, find the fixed points and determine their types and stabilities:
(a)
\[
\dot x = y \cos(x) , \qquad \dot y = \sin(x) ,
\]

(b)
\[
\dot x = (x - y)\bigl( x^2 + y^2 - 1 \bigr) , \qquad \dot y = (x + y)\bigl( x^2 + y^2 - 1 \bigr) ,
\]
(c)
x = 1 x y 1

y = x y 1 1 x y 1 ,
(d)
\[
\dot x = y , \qquad \dot y = x - \frac{1}{3} x^3 - y .
\]
1.7 Let f and g be topologically equivalent maps of metric spaces X and Y, and h be the homeomorphism satisfying $g = h^{-1} \circ f \circ h$. Verify that
\[
h \circ g^k(x) = f^k \circ h(x)
\]
for any integer $k \ge 0$.
1.8 Verify that the following two maps are not topologically equivalent in any neighborhood of the origin: $f(x) = x$ and $g(x) = x^2$.
2
Stabilities of Nonlinear Systems (I)

Stability theory plays a central role in systems engineering, especially in


the field of control systems and automation, with regard to both dynamics
and control.
Stability of a dynamical system, with or without control and disturbance
inputs, is a fundamental requirement for its practical value, particularly
in real-world applications. Roughly speaking, stability means the system
outputs and its internal signals are bounded within some allowable lim-
its (the so-called bounded-input bounded-output stability) or, sometimes
more strictly, the system outputs tend to an equilibrium state of interest
(the so-called asymptotic stability). Conceptually, there are different kinds
of stabilities, among which three basic notions are the main concerns in
nonlinear dynamics and control systems: the stability of a system with
respect to its equilibria, the orbital stability of a system output trajectory,
and the structural stability of a system itself.
The basic concept of stability emerged from the study of an equilibrium state of a mechanical system, dating back to as early as 1644, when E. Torricelli studied the equilibrium of a rigid body under the natural force of gravity.
1788, is perhaps the most well known result about stability of conservative
mechanical systems, which states that if the potential energy of a conserva-
tive system, currently at the position of an isolated equilibrium and perhaps
subject to some simple constraints, has a minimum, then this equilibrium
position of the system is stable. The evolution of the fundamental concepts
of system and trajectory stabilities then went through a long history, with
many fruitful advances and developments, until the celebrated Ph.D. thesis of A. M. Lyapunov, "The General Problem of Motion Stability," finished in 1892. This monograph is so fundamental that its ideas and techniques virtually lead all basic research and applications regarding the stabilities of dynamical systems today. In fact, not only dynamical behavior analysis in modern physics but also controller design in engineering systems


depend upon the principles of Lyapunov's stability theories. This chapter


is devoted to a brief introduction of these basic stability theories, criteria,
and methodologies of Lyapunov, as well as a few related important stability
concepts, for nonlinear dynamical systems.

2.1 Lyapunov Stabilities


Roughly speaking, the Lyapunov stability of a system with respect to its
equilibrium of interest is about the behavior of the system outputs toward
the equilibrium state wandering nearby and around the equilibrium
(stability in the sense of Lyapunov) or gradually approaches it (asymptotic
stability); the orbital stability of a system output is the resistance of the
trajectory to small perturbations; the structural stability of a system is
the resistance of the system structure against small perturbations. These
three basic types of stabilities are introduced in this section, for dynamical
systems without explicitly involving control inputs.
Consider a general nonautonomous system,
\[
\dot x = f(x, t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{2.1.1}
\]
where, without loss of generality, it is assumed that the origin $x^* = 0$ is a


system fixed point. Lyapunov stability theory concerns various stabilities of
the system orbits with respect to this fixed point. When another fixed point
is discussed, the new one is first shifted to zero by a change of variables,
and then the transformed system is studied in the same way.

DEFINITION 2.1 System (2.1.1) is said to be stable in the sense of Lyapunov about the fixed point $x^* = 0$, if for any $\varepsilon > 0$ and any initial time $t_0 \ge 0$, there exists a constant, $\delta = \delta(\varepsilon, t_0) > 0$, such that
\[
||x(t_0)|| < \delta \;\Longrightarrow\; ||x(t)|| < \varepsilon \quad \text{for all } t \ge t_0 . \tag{2.1.2}
\]
This stability is said to be uniform with respect to the initial time, if the constant $\delta = \delta(\varepsilon)$ is independent of $t_0$ over the entire time interval $[0, \infty)$.

This stability is illustrated by Fig. 2.1.


It should be emphasized that the constant $\delta$ generally depends on both $\varepsilon$ and $t_0$. Only for autonomous systems is it always independent of $t_0$ (thus, this stability for autonomous systems is always uniform with respect to the initial time). It is particularly important to point out that, unlike for autonomous systems, one cannot simply fix the initial time $t_0 = 0$ for a nonautonomous system in a general discussion of its stability.

FIGURE 2.1
Geometric meaning of stability in the sense of Lyapunov.

Example 2.2
Consider the following linear time-varying system with a discontinuous coefficient:
\[
\dot x(t) = \frac{1}{1 - t}\, x(t) , \qquad x(t_0) = x_0 .
\]
It has an explicit solution
\[
x(t) = \frac{1 - t_0}{1 - t}\, x_0 , \qquad 0 \le t_0 \le t < \infty ,
\]
which is well defined over the entire interval $[t_0, \infty)$ whenever $t_0 \ge 1$ (see Fig. 2.2). Clearly, this solution is stable in the sense of Lyapunov about the equilibrium $x^* = 0$ over the entire time domain $[t_0, \infty)$ if and only if $t_0 \ge 1$. This shows that the initial time, $t_0$, does play an important role in the stability of a nonautonomous system.

FIGURE 2.2
Solution curve with a singularity at $t = 1$.

At this point, it is worth noting that it is often not desirable to shift the initial time $t_0$ to zero by a change of the time variable, $t \to t - t_0$, for control systems. For instance, a system initially starts operation at time

zero, and then is subject to control at time t0 > 0, where both controlled
and uncontrolled system behaviors are important for analysis. Another
example is the case where a system has multiple singularities or equilibria
of interest, which cannot be eliminated by a single change of variables.
It should also be noted that if a system is stable in the sense of Lya-
punov about its fixed point, then starting from any bounded initial states
its corresponding solution trajectories are bounded, which follows directly
from the definition of the stability. Although its converse need not be true
for a general nonlinear system, it is true for linear systems. To show this,
first recall that for a linear time-varying system,


\[
\dot x(t) = A(t)\, x(t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{2.1.3}
\]
the fundamental matrix is defined by
\[
\Phi(t, t_0) = \exp\left( \int_{t_0}^{t} A(\tau)\, d\tau \right) , \tag{2.1.4}
\]
which satisfies $\Phi(s, s) = I$ for any $t_0 \le s \le t < \infty$. In particular, if $A(t) = A$ is a constant matrix, then
\[
\Phi(t, t_0) = e^{(t - t_0) A} := \Phi(t - t_0) .
\]
Using matrix (2.1.4), any solution of system (2.1.3) can be expressed as
\[
x(t) = \Phi(t, t_0)\, x_0 . \tag{2.1.5}
\]

PROPOSITION 2.1
System (2.1.3) is stable in the sense of Lyapunov about its zero fixed point
if and only if all its solution trajectories are bounded.

PROOF If the system is stable about its zero fixed point, then for any $\varepsilon > 0$, there exists a $\delta > 0$ such that $||\tilde x_0|| < \delta$ implies that, starting from this $\tilde x_0$ at time $t_0$, the system solution trajectory $x(t; t_0, \tilde x_0)$ satisfies
\[
\bigl\| x\bigl( t; t_0, \tilde x_0 \bigr) \bigr\| = \bigl\| \Phi(t, t_0)\, \tilde x_0 \bigr\| < \varepsilon .
\]
In particular, letting $\tilde x_0 = [\,0 \cdots 0 \;\; \delta/2 \;\; 0 \cdots 0\,]^\top$, with $\delta/2$ being at the ith component, yields
\[
\bigl\| \Phi(t, t_0)\, \tilde x_0 \bigr\| = \bigl\| \phi_i(t, t_0) \bigr\|\, \delta/2 < \varepsilon ,
\]
where $\phi_i$ is the ith column of $\Phi$. Therefore, $||\Phi(t, t_0)|| < 2n\varepsilon\,\delta^{-1}$, so that
\[
\bigl\| x\bigl( t; t_0, x_0 \bigr) \bigr\| = \bigl\| \Phi(t, t_0)\, x_0 \bigr\| \le 2n\varepsilon\,\delta^{-1}\, ||x_0|| ,
\]
which means that the solution trajectory x(t) is bounded if $x_0$ is so.



Conversely, if all solution trajectories of system (2.1.3) are bounded, then there is a constant, c, such that $||\Phi(t, t_0)|| < c$. Thus, for any given $\varepsilon > 0$, $||x_0|| < \delta := \varepsilon/c$ implies that
\[
\bigl\| x(t; t_0) \bigr\| = \bigl\| \Phi(t, t_0)\, x_0 \bigr\| \le c\, ||x_0|| < \varepsilon ,
\]
meaning that the system is stable in the sense of Lyapunov about its zero fixed point.

Next, a more important type of stability is introduced.

DEFINITION 2.3 System (2.1.1) is said to be asymptotically stable about its fixed point $x^* = 0$, if it is stable in the sense of Lyapunov and, furthermore, there exists a constant, $\delta = \delta(t_0) > 0$, such that
\[
||x(t_0)|| < \delta \;\Longrightarrow\; ||x(t)|| \to 0 \quad \text{as } t \to \infty . \tag{2.1.6}
\]
The asymptotic stability is said to be uniform, if the constant $\delta$ is independent of $t_0$ over $[0, \infty)$, and is said to be global, if the convergence $||x|| \to 0$ is independent of the initial state $x(t_0)$ over the entire spatial domain on which the system is defined (e.g., when $\delta = \infty$). If, furthermore,
\[
||x(t_0)|| < \delta \;\Longrightarrow\; ||x(t)|| \le c\, e^{-\sigma t} \tag{2.1.7}
\]
for two positive constants c and $\sigma$, then the system is said to be exponentially stable about its fixed point $x^*$.

The reason for the first requirement, being stable in the sense of Lyapunov, is to exclude unstable transient situations like the one shown in Example 2.2. The asymptotic stability is visualized by Fig. 2.3, and the exponential stability, by Fig. 2.4.

FIGURE 2.3
Geometric meaning of the asymptotic stability.

FIGURE 2.4
Geometric meaning of the exponential stability.

Clearly, exponential stability implies asymptotic stability, and asymp-


totic stability implies the stability in the sense of Lyapunov, but the reverse
need not be true.

Example 2.4
For illustration, if a system has output trajectories of the form $x_1(t) = x_0 \sin(t)$, then it is stable in the sense of Lyapunov, but is not asymptotically stable, about 0; a system with output trajectories of the form $x_2(t) = x_0 (1 + t - t_0)^{-1/2}$ is asymptotically stable (so it is also stable in the sense of Lyapunov) but is not exponentially stable about 0; however, a system with outputs $x_3(t) = x_0\, e^{-t}$ is exponentially stable (hence, is both asymptotically stable and stable in the sense of Lyapunov).

2.2 Lyapunov Stability Theorems


Most stability theorems derived in this section apply to the general nonautonomous system (2.1.1), namely,
\[
\dot x = f(x, t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{2.2.8}
\]
where $f : D \times [0, \infty) \to \mathbb{R}^n$ is continuously differentiable in a neighborhood of the origin, $D \subseteq \mathbb{R}^n$, with a given initial state, $x_0 \in D$. Again, without loss of generality, assume that $x^* = 0$ is the system fixed point of interest.
First, for the general autonomous system
\[
\dot x = f(x) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{2.2.9}
\]
an important special case of (2.2.8), with a continuously differentiable $f : D \to \mathbb{R}^n$, the following criterion of stability, called the first (or indirect) method of Lyapunov, is very convenient to use.

THEOREM 2.2 (First method of Lyapunov)
[for continuous-time autonomous systems]
In system (2.2.9), let $J = \partial f/\partial x \big|_{x = x^* = 0}$ be the system Jacobian evaluated at the zero fixed point. If all the eigenvalues of J have negative real parts, then the system is asymptotically stable about $x^* = 0$.

PROOF This theorem is a direct consequence of the Grobman-Hartman


Theorem, i.e., Theorem 1.6, introduced in Chapter 1.

Note that this and the following Lyapunov theorems apply to linear systems as well, for linear systems are merely a special case of nonlinear systems. When $f(x) = A x$, the linear time-invariant system $\dot x = A x$ has the only fixed point $x^* = 0$. If A has all eigenvalues with negative real parts, Theorem 2.2 implies that the system is asymptotically stable about its fixed point, because the system Jacobian is simply $J = A$. This is consistent with the familiar linear stability results.
Note also that the region of asymptotic stability given by Theorem 2.2
is local, which can be quite large for some nonlinear systems but may
be very small for some others. However, there is no general criterion for
determining the boundaries of such local stability regions when this and
the following Lyapunov methods are applied. Of course, when it is applied
to linear systems, it is easy to see that the stability result is always global.
Before introducing the next theorem of Lyapunov, two examples are first
discussed.

Example 2.5
Consider the damped pendulum (1.1.1), namely,
\[
\dot x = y , \qquad \dot y = -\frac{g}{\ell} \sin(x) - \frac{\kappa}{m}\, y .
\]
Its Jacobian, evaluated at the origin, is
\[
J = \begin{bmatrix} 0 & 1 \\ -g\,\ell^{-1} \cos(x) & -\kappa/m \end{bmatrix}_{(x^*, y^*) = (0,0)} = \begin{bmatrix} 0 & 1 \\ -g\,\ell^{-1} & -\kappa/m \end{bmatrix} ,
\]
which has eigenvalues
\[
\lambda_{1,2} = -\frac{\kappa}{2m} \pm \frac{1}{2} \sqrt{ (\kappa/m)^2 - 4 g/\ell } ,
\]
both with negative real parts. Hence, the damped pendulum system is asymptotically stable about its fixed point $(x^*, y^*) = (0, 0)$.

It is very important to note that Theorem 2.2 cannot be applied to a general nonautonomous system, since for general nonautonomous systems this theorem is neither necessary nor sufficient, as shown by the following example.

Example 2.6
Consider the following linear time-varying system:
\[
\dot x(t) = \begin{bmatrix} -1 + 1.5\cos^2(t) & 1 - 1.5\sin(t)\cos(t) \\ -1 - 1.5\sin(t)\cos(t) & -1 + 1.5\sin^2(t) \end{bmatrix} x(t) .
\]
This system matrix has eigenvalues $\lambda_{1,2} = -0.25 \pm j\, 0.25\sqrt{7}$, both having negative real parts and being independent of the time variable t. If Theorem 2.2 were used to judge this system, the conclusion would be that the system is asymptotically stable about its fixed point 0. However, the solution of this system is precisely
\[
x(t) = \begin{bmatrix} e^{0.5t}\cos(t) & e^{-t}\sin(t) \\ -e^{0.5t}\sin(t) & e^{-t}\cos(t) \end{bmatrix} \begin{bmatrix} x_1(t_0) \\ x_2(t_0) \end{bmatrix} ,
\]
which is unstable for any initial conditions with a bounded and nonzero value of $x_1(t_0)$, no matter how small this initial value is. This example shows that by using the Lyapunov first method alone to determine the stability of a general time-varying system, the conclusion can be wrong.

Many other counterexamples of this kind can be constructed. On the one


hand, this demonstrates the necessity of other general criteria for asymp-
totic stability of nonautonomous systems. On the other hand, however, a
word of caution is that this type of counterexample does not completely
rule out the possibility of applying the first method of Lyapunov to some
special nonautonomous systems in case studies. The reason is that there
is no theorem saying that the Lyapunov first method cannot be applied
to all nonautonomous systems. Due to the complexity of nonlinear dy-
namical systems, oftentimes they have to be studied class by class, or even
case by case. It has been widely experienced that the first method of Lya-
punov does work for some, perhaps not too many, specific nonautonomous
systems in case studies. The point is that one has to be very careful when
this method is applied to a particular nonautonomous system; the stability
conclusion must be verified by some other means at the same time.
Here, it is emphasized that a rigorous approach for asymptotic stability analysis of general nonautonomous systems is provided by the second method of Lyapunov, for which the set of class-$\mathcal{K}$ functions is useful:
\[
\mathcal{K} = \bigl\{\, g(t) \;:\; g(t_0) = 0 ;\; g(t) > 0 \text{ if } t > t_0 ;\; g(t) \text{ is continuous and nondecreasing on } [t_0, \infty) \,\bigr\} .
\]

THEOREM 2.3 (Second method of Lyapunov)
[for continuous-time nonautonomous systems]
System (2.2.8) is globally (over the entire domain D), uniformly (with respect to the initial time over the entire time interval $[t_0, \infty)$), and asymptotically stable about its zero fixed point, if there exist a scalar-valued function, $V(x, t)$, defined on $D \times [t_0, \infty)$, and three functions $\alpha(\cdot), \beta(\cdot), \gamma(\cdot) \in \mathcal{K}$, such that
(i) $V(0, t) = 0$ for all $t \ge t_0$;
(ii) $V(x, t) > 0$ for all $x \ne 0$ in D and all $t \ge t_0$;
(iii) $\alpha(||x||) \le V(x, t) \le \beta(||x||)$ for all $t \ge t_0$;
(iv) $\dot V(x, t) \le -\gamma(||x||) < 0$ for all $t \ge t_0$.

In Theorem 2.3, the function V is called a Lyapunov function. The method of constructing a Lyapunov function for stability determination is called the second (or direct) method of Lyapunov.
The role of the Lyapunov function in the theorem is illustrated by Fig. 2.5, where for simplicity only the autonomous case is visualized. In the figure, it is assumed that a Lyapunov function, $V(x)$, has been found, which has a bowl shape as shown, based on conditions (i) and (ii). Then, condition (iv) reads
\[
\dot V(x) = \frac{\partial V}{\partial x}\, \dot x < 0 , \tag{2.2.10}
\]
where $\partial V/\partial x$ is the gradient of V, evaluated along the trajectory x(t). It is known from calculus that if the inner product of this gradient and the tangent vector $\dot x$ is constantly negative, as guaranteed by condition (2.2.10), then the angle between these two vectors is larger than 90°. This means that the value of V(x) is monotonically decreasing to zero along the trajectory (as seen in Fig. 2.5). Consequently, the system trajectory x, the projection on the domain as shown in the figure, converges to zero as time evolves.
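The sign test (2.2.10) is easy to run for a concrete case. In the sketch below (an added illustration), the linear system $\dot x_1 = -x_1 + x_2$, $\dot x_2 = -x_1 - x_2$ and the candidate $V(x) = \frac{1}{2}(x_1^2 + x_2^2)$ are assumptions; for them $\dot V = -(x_1^2 + x_2^2) < 0$ away from the origin:

```python
import random

def f_sys(x1, x2):
    """Assumed example system: x1' = -x1 + x2, x2' = -x1 - x2."""
    return (-x1 + x2, -x1 - x2)

def grad_V(x1, x2):
    """Gradient of the candidate Lyapunov function V = (x1^2 + x2^2)/2."""
    return (x1, x2)

random.seed(1)
ok = True
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    if x1 == 0.0 and x2 == 0.0:
        continue
    g1, g2 = grad_V(x1, x2)
    v1, v2 = f_sys(x1, x2)
    Vdot = g1 * v1 + g2 * v2     # equals -(x1^2 + x2^2)
    ok = ok and (Vdot < 0.0)
print(ok)                        # True: V decreases at every sampled state
```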

PROOF First, the stability in the sense of Lyapunov is established. What needs to be shown is that for any $\varepsilon > 0$, there is a $\delta = \delta(\varepsilon, t_0) > 0$ such that
\[
||x(t_0)|| < \delta \;\Longrightarrow\; ||x(t)|| < \varepsilon \quad \text{for all } t \ge t_0 .
\]
The given conditions (iii) and (iv) together imply that, for $x \ne 0$,
\[
0 < \alpha(||x||) \le V(x, t) \le V\bigl( x(t_0), t_0 \bigr) \quad \text{for all } t \ge t_0 .
\]
Since $V(x, t)$ is continuous and satisfies $V(0, t_0) = 0$, there is a $\delta = \delta(\varepsilon, t_0) > 0$ such that
\[
||x(t_0)|| < \delta \;\Longrightarrow\; V\bigl( x(t_0), t_0 \bigr) < \alpha(\varepsilon) ,
\]
Lyapunov Stability Theorems 47

FIGURE 2.5
Geometric meaning of the Lyapunov function.

because α(||x||) > 0 for x ≠ 0. Therefore, if ||x(t0)|| < δ, then

    α(||x(t)||) ≤ V(x(t), t) ≤ V(x(t0), t0) < α(ε)  for all t ≥ t0.

Since α(·) ∈ K is strictly increasing, this implies that ||x(t)|| < ε for all
t ≥ t0.


Next, the uniform stability property is verified. The given condition (iii),
namely,

    0 < α(||x||) ≤ V(x, t) ≤ β(||x||)  for all x ≠ 0 and t ≥ t0,

implies that for any ε > 0, there is a δ = δ(ε) > 0, independent of t0, such
that β(δ) < α(ε), as illustrated by Fig. 2.6. Therefore, for any initial state
x(t0) satisfying ||x(t0)|| < δ,

    α(||x(t)||) ≤ V(x(t), t) ≤ V(x(t0), t0) ≤ β(||x(t0)||) ≤ β(δ) < α(ε)

for all t ≥ t0, which implies that

    ||x(t)|| < ε  for all t ≥ t0.

Since δ = δ(ε) is independent of t0, this stability is uniform with respect
to the initial time.
48 Stabilities of Nonlinear Systems (I)

FIGURE 2.6
Geometric meaning of the comparison β(δ) < α(ε).

Next, the uniform asymptotic stability is shown. It follows from the
uniform stability, just verified above, that for any ε > 0, there is a δ =
δ(ε) > 0, independent of t0, such that

    ||x(t0)|| < δ  ⇒  ||x(t)|| < ε  for all t ≥ t0.

What needs to be shown is that there is a t* > 0, independent of t0, such
that

    ||x(t0)|| < δ  ⇒  ||x(t)|| < ε  for all t ≥ t0 + t*.

To do so, let

    t* = t*(ε, δ) = β(δ)/γ(δ).

Then, it can be shown that there is a t1 ∈ [t0, t0 + t*] such that ||x(t1)|| < δ.
If not, namely, if ||x(t)|| ≥ δ for all t ∈ [t0, t0 + t*], then condition (iv)
implies that

    V̇(x, t) ≤ −γ(||x||) ≤ −γ(δ) < 0,
so that, by condition (iii),

    V(x(t), t) = V(x(t0), t0) + ∫_{t0}^{t} V̇(x(τ), τ) dτ
               ≤ V(x(t0), t0) − ∫_{t0}^{t} γ(δ) dτ
               = V(x(t0), t0) − γ(δ) (t − t0)
               ≤ β(δ) − γ(δ) (t − t0).

It then follows that, for all x(t) satisfying ||x(t)|| ≥ δ,

    V(x, t)|_{t = t0 + t*} ≤ β(δ) − γ(δ) t* = β(δ) − β(δ) = 0,
contradicting the fact that V(x, t) > 0 for all x ≠ 0. Therefore, as claimed
above, there is a t1 ∈ [t0, t0 + t*] such that ||x(t1)|| < δ. Thus,

    ||x(t)|| < ε  for all t ≥ t0 + t* ≥ t1.

Since t* = t*(ε, δ) is independent of t0, this asymptotic stability is uniform
with respect to the initial time.
Finally, the global uniform asymptotic stability is proven. Since α(||x||)
→ ∞ as ||x|| → ∞, one has

    α(||x||) ≤ V(x(t), t) ≤ β(||x||) → ∞  as  ||x|| → ∞.

Hence, starting from any initial state x(t0) ∈ Rⁿ, there is always a (large
enough) δ > 0 such that ||x(t0)|| < δ. For this δ, since α(||x||) → ∞ as
||x|| → ∞, there is always a (large enough) r > 0 such that β(δ) < α(r)
(see also Fig. 2.6). Thus,

    α(||x(t)||) ≤ V(x(t), t) ≤ V(x(t0), t0) ≤ β(||x(t0)||) ≤ β(δ) < α(r),

which implies that

    ||x(t)|| < r  for all t ≥ t0.

This establishes the global stability in the sense of Lyapunov. Under all
the given conditions, the uniform asymptotic stability can also be verified
in the same manner by repeating the above arguments.

Regarding the uniform negative definiteness condition (iv) of Theorem 2.3,
it is very important to note that it cannot be weakened to the simple
condition V̇(x, t) < 0 for all t ≥ t0. The reason is that this weaker condition
is not sufficient for nonautonomous systems in general, as can be seen from
the following two examples.

Example 2.7
Consider a very simple one-dimensional linear system,

    ẋ = 0,   t ≥ 0.

Suppose that the Lyapunov function

    V(x, t) = ((t + 2)/(t + 1)) x²

is chosen, which satisfies

    V̇(x, t) = − x²/(t + 1)² < 0  for all x ≠ 0 and t ≥ 0.

Then, one tends to conclude that the given system is asymptotically stable
about its zero fixed point. However, the solution of this system is x =
constant (depending on the initial condition), which is stable in the sense
of Lyapunov but not asymptotically. The difficulty is that V̇(x, t) → 0 as
t → ∞ for each fixed x, so V̇ cannot be bounded above by −γ(||x||) for
any γ ∈ K, and condition (iv) of Theorem 2.3 fails.
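This failure mode is easy to see numerically. The following pure-Python
sketch (the time grid and the initial value x0 = 1 are arbitrary choices, not
from the text) confirms that V decreases strictly in t while the solution
x(t) = x0 never decays:

```python
# Numerical check of Example 2.7: V(x,t) = (t+2)/(t+1) * x^2 decreases
# along solutions of xdot = 0, yet x(t) itself never decays.

def V(x, t):
    return (t + 2.0) / (t + 1.0) * x * x

x0 = 1.0                              # arbitrary initial state; x(t) = x0
ts = [0.1 * k for k in range(1, 1001)]
vals = [V(x0, t) for t in ts]

# V is strictly decreasing along the (constant) solution ...
strictly_decreasing = all(a > b for a, b in zip(vals, vals[1:]))

# ... but V never drops below x0^2, so x(t) = x0 cannot converge to 0.
print(strictly_decreasing, vals[-1] > x0 * x0)
```

Both printed values are True: V is monotonically decreasing, yet it is
bounded below by x0² > 0, exactly as the example asserts.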

Example 2.8
Consider a one-dimensional, linear nonautonomous system,

    ẋ(t) = (ġ(t)/g(t)) x(t),

where g(t) is a continuously differentiable function which coincides with
the function e^{−t/2} except around some peaks, where it reaches the value
1, as shown in Fig. 2.7. In this figure, the width of the peak at t = n is
assumed to be strictly less than 2^{−n}.

FIGURE 2.7
Graph of the function g(t) versus the function e^{−t/2}.

It is clear from the figure that

    ∫_{0}^{∞} g²(t) dt < ∫_{0}^{∞} e^{−τ} dτ + Σ_{n=1}^{∞} 2^{−n} = 2.

Consider a Lyapunov function of the form

    V(x, t) = (x²(t)/g²(t)) [ 3 − ∫_{0}^{t} g²(τ) dτ ].

It is easy to verify that

    V(0, t) = 0,   V(x, t) > x²(t) > 0  for all x(t) ≠ 0,

and

    V̇(x, t) = −x²(t) < 0  for all x(t) ≠ 0.

This satisfies the weak negative definiteness condition mentioned above.
However, the general solution of this system is given by

    x(t) = (x(t0)/g(t0)) g(t),

which does not tend to zero as t → ∞, due to the peaks of the function
g(t) (see Fig. 2.7). Therefore, the system is not asymptotically stable about
its zero fixed point.
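The construction can be checked numerically. In the sketch below, the peak
at t = n is taken to be triangular with height 1 and base width 2⁻ⁿ — one
concrete shape consistent with the assumptions of the example (the exact
peak shape is not specified in the text):

```python
import math

# Numerical illustration of Example 2.8 with an assumed triangular peak
# of height 1 and base width 2^{-n} centered at each integer t = n >= 1,
# superimposed on exp(-t/2).

def g(t):
    base = math.exp(-t / 2.0)
    n = round(t)
    peak = 0.0
    if n >= 1:
        w = 2.0 ** (-n) / 2.0        # half-width of the peak at t = n
        if abs(t - n) < w:
            peak = 1.0 - abs(t - n) / w
    return max(base, peak)

# Riemann-sum approximation of the integral of g^2 over [0, 30]
h = 1e-4
N = int(30.0 / h)
integral = sum(g(i * h) ** 2 for i in range(N)) * h

# The integral stays below 2, yet g(n) = 1 at every peak center, so the
# solution x(t) = (x(0)/g(0)) g(t) keeps returning to its initial size.
print(round(integral, 3), g(5.0))
```

The integral of g² remains bounded (below 2), while g(5.0) = 1.0 shows that
the solution does not decay, matching the conclusion of the example.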

Now, return to Theorem 2.3. Since autonomous systems are special
cases of nonautonomous systems, this theorem applies to them as well. The
following three examples show how the theorem can be applied, at the tech-
nical level, to the stability analysis of both autonomous and nonautonomous
systems.

Example 2.9
Consider a general linear time-varying (i.e., nonautonomous) system,

    ẋ(t) = A(t) x(t),

where the system matrix A(t) is assumed to satisfy the following condition:
there are constants b > a > 0 and a uniformly positive definite and symmetric
matrix Q(t) ≥ λmin(Q) I > 0 for all t ≥ t0, where λmin(Q) is the minimum
eigenvalue of Q(t) over t ≥ t0, such that the time-varying Lyapunov equation

    Ṗ(t) + P(t)A(t) + Aᵀ(t)P(t) + Q(t) = 0

has a positive definite and symmetric matrix solution P(t) satisfying

    0 < a I ≤ P(t) ≤ b I  for all t ≥ t0.

It can be shown that this system is globally, uniformly, and asymptoti-
cally stable about its zero fixed point. Indeed, using the Lyapunov function

    V(x, t) = xᵀ(t)P(t) x(t),

and choosing the following three class-K functions:

    α(τ) = a τ²,   β(τ) = b τ²,   γ(τ) = λmin(Q) τ²,

one has

    α(||x||) = a ||x||² ≤ V(x, t) ≤ b ||x||² = β(||x||),

and

    V̇(x, t) = ẋᵀP x + xᵀṖ x + xᵀP ẋ
            = xᵀ[ Ṗ + P A + AᵀP ] x
            = −xᵀQ x
            ≤ −λmin(Q) ||x||²
            = −γ(||x||).

Moreover,

    α(||x||) → ∞  as  ||x|| → ∞.

Therefore, all conditions stated in Theorem 2.3 are satisfied, so the system
is globally, uniformly, and asymptotically stable about its zero fixed point.
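The machinery of Example 2.9 can be sketched in the constant-coefficient
special case (where Ṗ = 0). The matrices A, Q, and the hand-solved P
below are illustrative choices, not from the text; the code verifies the
Lyapunov equation and the decrease of V = xᵀPx along a simulated orbit:

```python
# Constant-coefficient sketch of Example 2.9 (pure Python, 2x2 matrices).

A = [[0.0, 1.0], [-2.0, -3.0]]        # Hurwitz matrix (eigenvalues -1, -2)
Q = [[1.0, 0.0], [0.0, 1.0]]
P = [[1.25, 0.25], [0.25, 0.25]]      # hand-solved: A^T P + P A + Q = 0

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# Residual of the Lyapunov equation A^T P + P A + Q (should vanish)
AtP = mat_mul(transpose(A), P)
PA = mat_mul(P, A)
residual = max(abs(AtP[i][j] + PA[i][j] + Q[i][j])
               for i in range(2) for j in range(2))

def V(x):
    # V(x) = x^T P x
    return sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2))

# Euler simulation of xdot = A x; V should decay toward zero
x = [1.0, -0.5]
h = 1e-3
vs = [V(x)]
for _ in range(5000):                 # integrate to t = 5
    x = [x[0] + h * (A[0][0] * x[0] + A[0][1] * x[1]),
         x[1] + h * (A[1][0] * x[0] + A[1][1] * x[1])]
    vs.append(V(x))

print(residual, vs[0] > vs[-1])
```

For larger problems the same P can be obtained with a Lyapunov-equation
solver (e.g., SciPy's `solve_continuous_lyapunov`), but the pure-Python
residual check above keeps the sketch self-contained.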

Example 2.10
The damped pendulum (1.1.1), namely,

    ẋ = y
    ẏ = −(g/ℓ) sin(x) − (κ/m) y,

is asymptotically stable about its fixed point (x*, y*) = (θ*, θ̇*) = (0, 0).
This can be verified by using the Lyapunov function

    V(x, y) = ½ [x y] [ κ²/(2m²)  κ/(2m) ; κ/(2m)  1 ] [x ; y]
              + (g/ℓ)[1 − cos(x)]
            = (κ²/(4m²)) x² + (κ/(2m)) x y + ½ y² + (g/ℓ)[1 − cos(x)].

Indeed, V(x, y) > 0 for all −π/2 < x < π/2 and all y ∈ R with (x, y) ≠ (0, 0).
Moreover,

    V̇(x, y) = −(κ g/(2mℓ)) x sin(x) − (κ/(2m)) y² < 0

for all −π/2 < x < π/2 and y ∈ R with (x, y) ≠ (0, 0). By Theorem 2.3,
this damped pendulum system is asymptotically stable about its fixed
point (0, 0).
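A numerical check of this example is immediate. The parameter values
g/ℓ = 1 and κ/m = 1 below are illustrative assumptions (not fixed by the
text); with them the Lyapunov function becomes V = x²/4 + xy/2 + y²/2 +
(1 − cos x):

```python
import math

# RK4 simulation of the damped pendulum of Example 2.10 with the assumed
# parameter values g/l = 1 and kappa/m = 1.

def f(x, y):
    return y, -math.sin(x) - y        # xdot = y, ydot = -sin(x) - y

def V(x, y):
    # V = x^2/4 + x*y/2 + y^2/2 + (1 - cos x) for kappa = m = g/l = 1
    return 0.25 * x * x + 0.5 * x * y + 0.5 * y * y + (1.0 - math.cos(x))

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, y = 1.0, 0.0                       # initial angle of 1 rad, at rest
h = 0.01
vs = [V(x, y)]
for _ in range(3000):                 # integrate to t = 30
    x, y = rk4_step(x, y, h)
    vs.append(V(x, y))

print(vs[-1] < vs[0], x * x + y * y < 1e-4)
```

V decreases along the orbit and the state converges to the origin, as the
example predicts.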

Example 2.11
Consider a nonautonomous system in the following special form:

    ẋ = A x + g(x, t),

where A is a stable constant matrix and g is a nonlinear function satisfying
g(0, t) = 0 and ||g(x, t)|| ≤ c ||x||, for a constant c > 0, for all t ∈ [t0, ∞).
Since A is stable, the Lyapunov equation

    P A + AᵀP + I = 0

has a unique positive definite and symmetric matrix solution, P, as is known
from elementary linear systems theory. Using the Lyapunov function
V(x, t) = xᵀP x, one has

    V̇(x, t) = ẋᵀP x + xᵀP ẋ
            = xᵀ[ P A + AᵀP ] x + 2 xᵀP g(x, t)
            ≤ −xᵀx + 2 λmax(P) c ||x||²,

where λmax(P) is the largest eigenvalue of P. Therefore, if the constant
c < 1/(2 λmax(P)) and if the class-K functions

    α(τ) = λmin(P) τ²,   β(τ) = λmax(P) τ²,   γ(τ) = [1 − 2c λmax(P)] τ²

are used, then conditions (iii) and (iv) of Theorem 2.3 are satisfied. As a
result, the given system is globally, uniformly, and asymptotically stable
about its zero fixed point. This example shows that the linear part of a
weakly nonlinear nonautonomous system can indeed dominate the stability.
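A minimal sketch of Example 2.11: with the illustrative choice A = −I
(so P = I/2 solves PA + AᵀP + I = 0 and λmax(P) = 1/2), the bound requires
c < 1/(2 λmax(P)) = 1. The perturbation g below, with c = 0.5, is an
arbitrary choice satisfying ||g(x, t)|| ≤ c||x||; none of these values come
from the text:

```python
import math

# Euler simulation of xdot = A x + g(x,t) with A = -I and a perturbation
# bounded by ||g(x,t)|| <= c ||x||, c = 0.5 < 1.

c = 0.5

def g(x, t):
    s = c * math.sin(t)               # |s| <= c, so ||g(x,t)|| <= c ||x||
    return [s * x[1], s * x[0]]

def f(x, t):
    gx = g(x, t)
    return [-x[0] + gx[0], -x[1] + gx[1]]

x = [2.0, -1.0]
t, h = 0.0, 1e-3
norms = [math.hypot(x[0], x[1])]
for _ in range(20000):                # integrate to t = 20
    dx = f(x, t)
    x = [x[0] + h * dx[0], x[1] + h * dx[1]]
    t += h
    norms.append(math.hypot(x[0], x[1]))

print(norms[-1] < norms[0], norms[-1] < 1e-2)
```

The state norm decays despite the time-varying perturbation, in line with
the conclusion that the stable linear part dominates.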

This example motivates the following investigation. Consider a nominal
nonautonomous system,

    ẋ = f(x, t),   x(t0) = x0 ∈ Rⁿ,

where f is continuously differentiable, with a zero fixed point: f(0, t) = 0
for all t ≥ t0. Suppose that a small perturbation, e(x, t), is input to it,
yielding a perturbed system of the form

    ẋ = f(x, t) + e(x, t).

The question is: if the nominal system is asymptotically stable about
its zero fixed point, then under a small perturbation does the perturbed
system remain stable, at least in the sense of Lyapunov, about the zero
fixed point? If so, the nominal system is said to be totally stable about its
zero fixed point. This issue will be further addressed later in the text, where
the Malkin theorem provides an answer: if the nominal system is uniformly
asymptotically stable about its zero fixed point, then it is totally stable.
Now, it should be noted that in Theorem 2.3, the uniform stability is
guaranteed by the class-K functions α, β, and γ stated in conditions (iii)
and (iv), which is necessary since the solution of a nonautonomous system
may sensitively depend on the initial time, as seen from Example 2.2. For
autonomous systems, these class-K functions (hence, condition (iii)) are not
needed. In this case, Theorem 2.3 reduces to the following simple form.

Autonomous Systems              Nonautonomous Systems

V(0) = 0                        V(0, t) = 0 for all t ≥ t0
V(x) > 0 for all x ≠ 0          V(x, t) > 0 for all x ≠ 0 and t ≥ t0
(not needed)                    0 < α(||x||) ≤ V(x, t) ≤ β(||x||) for some α, β ∈ K
V̇(x) < 0 for all x ≠ 0         V̇(x, t) ≤ −γ(||x||) < 0 for some γ ∈ K

TABLE 2.1
Comparison of Lyapunov function conditions for autonomous and
nonautonomous systems.

THEOREM 2.4 (Second method of Lyapunov)
[for continuous-time autonomous systems]
The autonomous system (2.2.9) is globally (over the entire domain D)
and asymptotically stable about its zero fixed point, if there exists a scalar-
valued function, V(x), defined on D, such that
(i) V(0) = 0;
(ii) V(x) > 0 for all x ≠ 0 in D;
(iii) (not needed)
(iv) V̇(x) < 0 for all x ≠ 0 in D.

Note that if condition (iv) in Theorem 2.4 is replaced by

(iv′) V̇(x) ≤ 0 for all x ∈ D,

then the resulting stability is only in the sense of Lyapunov but may not
be asymptotic.
At this point, it is helpful to compare the stability conditions for au-
tonomous and nonautonomous systems stated in Theorems 2.3 and 2.4,
which are summarized in Table 2.1.

Example 2.12
Consider the linear time-invariant system

    ẋ = A x,

for which it is assumed that the constant matrix A is negative definite.
If Theorem 2.4 is applied, one can use the Lyapunov function V(x) =
½ xᵀx, which satisfies V(0) = 0, V(x) > 0 for all x ≠ 0, and

    V̇(x) = xᵀẋ = xᵀA x < 0  for all x ≠ 0.

Therefore, the system is asymptotically stable about its zero fixed point,
consistent with the well-known linear systems theory.

Example 2.13
Consider the undamped pendulum (1.1.2), namely,

    ẋ = y
    ẏ = −(g/ℓ) sin(x),

where x = θ is the angular variable defined on −π < θ < π, with the
vertical axis as its reference, and g is the gravity constant. Since the system
Jacobian at the zero equilibrium has a pair of purely imaginary eigenvalues,
λ1,2 = ± j √(g/ℓ), Theorem 2.2 is not applicable. However, if one uses the
Lyapunov function

    V(x, y) = (g/ℓ)[1 − cos(x)] + ½ y²,

then

    V̇(x, y) = (g/ℓ) sin(x) ẋ + y ẏ
             = (g/ℓ) y sin(x) − (g/ℓ) y sin(x) = 0

for all (x, y) over the entire domain. Thus, the conclusion is that this
undamped pendulum is stable in the sense of Lyapunov but not asymptot-
ically, consistent with the physics of the undamped pendulum.
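Conservation of this V along trajectories can be confirmed numerically.
The value g/ℓ = 1 below is an assumed illustrative parameter (not from
the text):

```python
import math

# RK4 simulation of the undamped pendulum of Example 2.13 with g/l = 1:
# V = (1 - cos x) + y^2/2 stays constant along the orbit, so the origin
# is stable but not attracting.

def f(x, y):
    return y, -math.sin(x)

def V(x, y):
    return (1.0 - math.cos(x)) + 0.5 * y * y

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, y = 0.5, 0.0
h = 0.001
v0 = V(x, y)
drift = 0.0
for _ in range(20000):                # integrate to t = 20
    x, y = rk4_step(x, y, h)
    drift = max(drift, abs(V(x, y) - v0))

print(round(v0, 6), drift < 1e-8)
```

The maximum drift of V over the whole run is at the level of the integrator's
round-off, i.e., V is conserved and the orbit neither decays nor escapes.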

THEOREM 2.5 (Krasovskii Theorem)
[for continuous-time autonomous systems]
For the autonomous system (2.2.9), let J(x) = ∂f/∂x be its Jacobian
evaluated at x(t). A sufficient condition for the system to be asymptotically
stable about its zero fixed point is that there exist two real positive definite
and symmetric constant matrices, P and Q, such that the matrix

    Jᵀ(x) P + P J(x) + Q

is negative semi-definite for all x ≠ 0 in a neighborhood D of the origin,
in which 0 is the only fixed point of the system. In this case, a Lyapunov
function is given by

    V(x) = fᵀ(x) P f(x).

Furthermore, if D = Rⁿ and V(x) → ∞ as ||x|| → ∞, then this asymptotic
stability is global.

PROOF First, since x* = 0 is a fixed point, f(0) = 0, so that

    V(0) = fᵀ(0) P f(0) = 0.

Also, for any x ≠ 0 in the neighborhood D, f(x) ≠ 0, since 0 is the only
fixed point of f in D. Therefore, V(x) = fᵀ(x)P f(x) > 0, since P is
positive definite. Moreover, since ḟ(x) = J(x) ẋ = J(x) f(x) along system
trajectories,

    V̇(x) = ḟᵀ(x) P f(x) + fᵀ(x) P ḟ(x)
          = fᵀ(x) [ Jᵀ(x) P + P J(x) ] f(x)
          = fᵀ(x) [ Jᵀ(x)P + P J(x) + Q ] f(x) − fᵀ(x) Q f(x)
          ≤ −fᵀ(x) Q f(x)
          < 0  for all x ≠ 0.

The global stability assertion follows from Theorem 2.4.

Note that the matrix Q is used to guarantee that the matrix [Jᵀ(x)P + PJ(x)]
is always negative definite. The reason is that for some x in the neighborhood
of the zero fixed point, the Jacobian J(x) may be zero, so that JᵀP + PJ =
0, which, in turn, would imply V̇(x) = 0 (see the proof above). But if Q > 0,
then it is guaranteed that V̇(x) < 0 for all x ≠ 0. Usually, a convenient
choice is Q = I, the identity matrix.
Similar stability criteria can be established for discrete-time systems.
Two main results are summarized as follows.

THEOREM 2.6 (First method of Lyapunov)
[for discrete-time autonomous systems]
Let x* = 0 be a fixed point of the discrete-time autonomous system

    x_{k+1} = f(x_k),                                           (2.2.11)

where f : D → Rⁿ is continuously differentiable in a neighborhood of the
origin, D ⊆ Rⁿ, and let J = ∂f/∂x_k |_{x_k = x* = 0} be the Jacobian of
the system evaluated at this fixed point. If all the eigenvalues of J are
strictly less than one in absolute value, then the system is asymptotically
stable about its zero fixed point.

THEOREM 2.7 (Second method of Lyapunov)
[for discrete-time nonautonomous systems]
Let x* = 0 be a fixed point of the nonautonomous system

    x_{k+1} = f_k(x_k),                                         (2.2.12)

where f_k : D → Rⁿ is continuously differentiable in a neighborhood of
the origin, D ⊆ Rⁿ. Then the system (2.2.12) is globally (over the entire
domain D) and asymptotically stable about its zero fixed point, if there
exists a scalar-valued function, V(x_k, k), defined on D and continuous in
x_k, such that
(i) V(0, k) = 0 for all k ≥ k0;
(ii) V(x_k, k) > 0 for all x_k ≠ 0 in D and for all k ≥ k0;
(iii) ΔV(x_k, k) := V(x_k, k) − V(x_{k−1}, k − 1) < 0 for all x_k ≠ 0 in D
and all k ≥ k0 + 1;
(iv) 0 < W(||x_k||) < V(x_k, k) for all x_k ≠ 0 and all k ≥ k0 + 1, where
W(·) is a positive continuous function defined on D, satisfying W(0) = 0
and W(τ) → ∞ monotonically as τ → ∞.

As a special case, for discrete-time autonomous systems, Theorem 2.7
reduces to the following simple form.

THEOREM 2.8 (Second method of Lyapunov)
[for discrete-time autonomous systems]
Let x* = 0 be a fixed point of the autonomous system (2.2.11). Then
the system is globally (over the entire domain D) and asymptotically stable
about this zero equilibrium if there exists a scalar-valued function, V(x_k),
defined on D and continuous in x_k, such that
(i) V(0) = 0;
(ii) V(x_k) > 0 for all x_k ≠ 0 in D;
(iii) ΔV(x_k) := V(x_k) − V(x_{k−1}) < 0 for all x_k ≠ 0 in D;
(iv) V(x_k) → ∞ as ||x_k|| → ∞.
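As a quick numerical illustration of Theorem 2.8, consider the map below —
an arbitrary example chosen here, not from the text. Since |f(x)| ≤ |x|/2,
the function V(x) = x² satisfies all four conditions globally:

```python
# Illustration of Theorem 2.8 with the (illustrative) scalar map
# f(x) = x / (2 + x^2), for which V(x) = x^2 strictly decreases along orbits.

def f(x):
    return x / (2.0 + x * x)

def V(x):
    return x * x

x = 3.0
vs = [V(x)]
for _ in range(40):
    x = f(x)
    vs.append(V(x))

decreasing = all(b < a for a, b in zip(vs, vs[1:]))
print(decreasing, abs(x) < 1e-10)
```

V(x_k) decreases strictly at every step and the orbit converges to the zero
fixed point, as the theorem guarantees.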

At this point, it is important to emphasize that all the Lyapunov theorems
stated above offer only sufficient conditions for asymptotic stability. On the
other hand, usually more than one Lyapunov function may be constructed
for the same system. For a given system, one choice of a Lyapunov function
may yield a less conservative result (e.g., with a larger stability region) than
other choices. Here, a stability region refers to the set of system initial states
starting from which the system trajectory will converge to the zero fixed
point. A stability region is also called a basin of attraction (of the zero
fixed point). However, no conclusion regarding stability may be drawn if,
for technical reasons, a satisfactory Lyapunov function cannot be found.
Nevertheless, there is a necessary condition in theory about the existence
of a Lyapunov function.

THEOREM 2.9 (Massera Inverse Theorem)
Suppose that the autonomous system (2.2.9) is asymptotically stable about
its fixed point x* and that f is continuously differentiable with respect to x
for all t ∈ [t0, ∞). Then a Lyapunov function exists for this system.

A combination of this theorem and Theorem 2.3 provides a necessary and
sufficient condition for the asymptotic stability of a general autonomous
system. For a general nonautonomous system, a necessary and sufficient
condition of this type is possible but much more complicated.

PROOF Omitted; see the original work of J. L. Massera.

2.3 LaSalle Invariance Principle


First, the concepts of limit sets and invariant sets are introduced.

DEFINITION 2.14 For a given dynamical system, a point z in the state
space is said to be an ω-limit point of an orbit x(t) of the system if, for
every open neighborhood U_z of z, the trajectory x(t) enters U_z at arbi-
trarily large values of t. The set of all ω-limit points of x(t) is called the
ω-limit set of x(t), which can be formulated as

    Λ⁺(x) = { z ∈ Rⁿ | there exist {t_k} with t_k → ∞
                        such that x(t_k) → z as k → ∞ }.

In the definition, ω is the last letter of the Greek alphabet, which is used
to indicate the limiting process t → +∞. Similarly, an α-limit point and the
α-limit set are defined for the opposite limiting process, t → −∞, where α
is the first letter of the Greek alphabet. Equivalently,

    Λ⁻(x) = { z ∈ Rⁿ | there exist {t_k} with t_k → −∞
                        such that x(t_k) → z as k → ∞ }.

Simple examples of ω-limit points and ω-limit sets are fixed points and
periodic orbits (e.g., limit cycles), respectively, which however need not be
stable.

DEFINITION 2.15 An ω-limit set, Λ⁺(x), is attracting if there exists an
open neighborhood U₊ of Λ⁺(x) such that, if the trajectory of a system
state enters U₊ at some instant t1 ≥ t0, then this trajectory will approach
Λ⁺(x) arbitrarily closely as t → ∞. The basin of attraction of an attracting
set is the union of all such open neighborhoods. An ω-limit set is repelling
if the system trajectory always moves away from it.

Simple examples of attracting sets include stable fixed points and stable
limit cycles; simple examples of repelling sets include unstable ones.

DEFINITION 2.16 A set S ⊆ Rⁿ is said to be forward (backward) invariant
under a function (or map) f if, whenever x(t) ∈ S, then f(x(t)) ∈ S for all
t ≥ t0 (respectively, t ≤ t0).

As a note at this point, which will be further discussed in detail later, an
attractor is defined to be an attracting set that is invariant under the
system function. In other words, an attractor is an ω-limit set, Λ⁺(x),
satisfying the property that all system orbits near Λ⁺(x) have Λ⁺(x) as
their ω-limit sets.
As another note, for the discrete-time setting, an ω-limit set is similarly
defined for a given map M. Moreover, an ω-limit set is invariant under the
map if M(Λ⁺(x_k)) = Λ⁺(x_k) for all k; namely, starting from any point
x0 of the set, the orbit x_k = M^k(x0) will eventually return to the same
set (but need not return to the same point). Thus, an (invariant) ω-limit
set embraces fixed points and periodic orbits.

THEOREM 2.10
The ω-limit set, Λ⁺(x), of a bounded orbit x(t) of an autonomous system,
ẋ = f(x), is always invariant under f.

PROOF It amounts to showing that for any z ∈ Λ⁺(x), the system tra-
jectory satisfies φ_t(z) ∈ Λ⁺(x) for all t. By the definition of Λ⁺(x), there
exists a sequence {t_k} with t_k → ∞ as k → ∞, such that φ_{t_k}(x) → z
as k → ∞. Now, fix t and choose t_k sufficiently large so that t + t_k > 0.
Then

    φ_{t + t_k}(x) = φ_t(φ_{t_k}(x)) → φ_t(z)  as k → ∞,

due to the time-invariance property of solutions of an autonomous system
and the continuity of φ_t. Therefore, φ_t(z) ∈ Λ⁺(x), implying that Λ⁺(x)
is invariant under f.

Now, consider once again the autonomous system (2.2.9), with a fixed
point x* = 0. Let V(x) be a Lyapunov function defined in a neighborhood
D of the origin, and let φ_t(x0) be a bounded solution orbit of the system,
with its initial state x0 and all of its ω-limit points confined within D.
Moreover, let

    E = { x ∈ D | V̇(x) = 0 }                                   (2.3.13)

and let S ⊆ E be the largest invariant subset of E, in the sense that if the
initial state x0 ∈ S, then the entire orbit φ_t(x0) ⊆ S for all t ≥ t0.

THEOREM 2.11 (LaSalle Invariance Principle)
Under the above assumptions, for any initial state x0 ∈ D, the solution
orbit satisfies

    φ_t(x0) → S  as t → ∞.

PROOF Since V(x) is a Lyapunov function, it is non-increasing in t and
is bounded from below by zero. Hence,

    V(φ_t(x0)) → c  (t → ∞)

for a constant c ≥ 0 depending only on x0.
Let Λ(x0) be the ω-limit set of x0. Then Λ(x0) is an invariant set
(Theorem 2.10) and belongs to D, since the latter contains all limit points
of φ_t(x0).
Take a limit point of φ_t(x0), z ∈ Λ(x0). Then, there is an increasing
sequence {t_k}, with t_k → ∞ as k → ∞, such that φ_{t_k}(x0) → z as
k → ∞. Therefore, by the continuity of V, V(z) = c. Since z was arbitrarily
chosen, this holds for all z ∈ Λ(x0).
Now, since Λ(x0) is invariant, if z ∈ Λ(x0) then φ_t(z) ∈ Λ(x0) for all
t ≥ t0. Hence, V(φ_t(z)) = c for all t ≥ t0 and all z ∈ Λ(x0). This means

    V̇(z) = 0  for all z ∈ Λ(x0).

Consequently, since S is the largest invariant subset of E, Λ(x0) ⊆ S ⊆ E.
But φ_t(x0) → Λ(x0), so φ_t(x0) → S, as t → ∞.

This invariance principle is consistent with the Lyapunov theorems when-
ever they are applicable to a problem. Sometimes, when V̇ = 0 over a subset
of the domain of V, a Lyapunov theorem is not easy to apply directly, but
the LaSalle invariance principle may be convenient to use.

Example 2.17
Consider the system

    ẋ = −x + (1/3) x³ + y
    ẏ = −x.

The Lyapunov function V = x² + y² yields

    ½ V̇ = −x² (1 − (1/3) x²),

which is negative for 0 < x² < 3 but is zero for x = 0 and x² = 3, regardless
of the variable y. Thus, the Lyapunov theorems do not seem to be appli-
cable, at least not directly. However, observe that the set E defined above
consists of only three straight lines: x = −√3, x = 0, and x = √3, and that
all trajectories which intersect the line x = 0 satisfy ẋ = y there (by the
first equation) and so will not remain on the line x = 0 unless y = 0. This
means that the largest invariant subset S containing the points with x = 0
is the single point (0, 0). It then follows from the LaSalle invariance prin-
ciple that, starting from any initial state located in a neighborhood of the
origin bounded within the two strips x = ±√3, say located inside the disk

    D = { (x, y) | x² + y² < 3 },

the solution orbit will always be attracted to the point (0, 0). This means
that the system is (locally) asymptotically stable about its zero fixed point.
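The conclusion of Example 2.17 can be checked by direct simulation from
an initial state inside the disk (the particular initial point below is an
arbitrary choice):

```python
import math

# RK4 simulation of Example 2.17: from (1, 1), inside the disk
# x^2 + y^2 < 3, the orbit is attracted to (0, 0), even though Vdot = 0
# on the whole line x = 0.

def f(x, y):
    return -x + x ** 3 / 3.0 + y, -x

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, y = 1.0, 1.0                       # x^2 + y^2 = 2 < 3
h = 0.01
for _ in range(10000):                # integrate to t = 100
    x, y = rk4_step(x, y, h)

final_r = math.hypot(x, y)
print(final_r < 1e-3)
```

The orbit crosses the line x = 0 repeatedly but does not stay on it, and
the state converges to the origin, as the invariance principle predicts.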

2.4 Some Instability Theorems

Once again, consider the general autonomous system (2.2.9), namely,

    ẋ = f(x),   x(t0) = x0 ∈ Rⁿ,                                (2.4.14)

with a fixed point x* = 0. Sometimes, disproving stability is easier than
trying to prove it by finding a Lyapunov function, since for an unstable
system a Lyapunov function can never be found. To disprove stability,
when the system indeed is unstable, the following instability theorems may
be used.

THEOREM 2.12 (A Linear Instability Theorem)
In system (2.4.14), let J = ∂f/∂x |_{x = x* = 0} be its Jacobian evaluated
at x* = 0. If at least one of the eigenvalues of J has a positive real part,
then the system is unstable about the fixed point x* = 0.

For discrete-time systems, there is a similar result: the discrete-time
autonomous system

    x_{k+1} = f(x_k),   k = 0, 1, 2, …,

is unstable about its fixed point x* = 0 if at least one of the eigenvalues of
the system Jacobian is larger than 1 in absolute value.
The following two instability theorems can be easily extended to nonau-
tonomous systems in an obvious way.

THEOREM 2.13 (A General Instability Theorem)
For system (2.4.14), let V(x) be a continuously differentiable function de-
fined on a neighborhood D of the origin, such that V(0) = 0 and, in any
arbitrarily small neighborhood of 0, there always exists an x0 which satisfies
V(x0) > 0. If there is a closed subset Ω ⊆ D ∩ B containing x0, where B
is the unit ball of Rⁿ, on which V(x) > 0 and V̇(x) > 0, then the system
is unstable about its zero fixed point.

PROOF First, observe that since V is differentiable, it is continuous and
so is bounded over any closed bounded set.
It can be shown that the trajectory x(t) of the system, starting from
x(0) = x0 ∈ Ω, must leave the subset Ω as time t evolves. To show this,
note that as long as this trajectory x(t) is inside the subset Ω, one has
V(x(t)) ≥ c := V(x0) > 0, because V̇(x) > 0 on Ω by assumption. Let

    γ = inf { V̇(x) | x ∈ Ω and V(x) ≥ c }.

Then, since Ω is a closed bounded set, γ > 0, so that

    V(x(t)) = V(x0) + ∫_{0}^{t} V̇(x(τ)) dτ ≥ c + ∫_{0}^{t} γ dτ = c + γ t.

This inequality shows that x(t) cannot stay in Ω forever, because V(x) is
bounded on the closed subset Ω but now V(x(t)) ≥ c + γ t → ∞. In other
words, x(t) must leave the set Ω.
Next, note that x(t) cannot leave Ω through the surface V(x) = 0. This
is because V(x(t)) ≥ c > 0 along the trajectory, which implies that x(t)
has to leave through a path with the magnitude satisfying ||x|| ≥ ||x0||
(since it starts with x(0) = x0). According to the assumption, this x0 can
be arbitrarily close to the origin, which implies that the origin is unstable:
a trajectory starting from a point in an arbitrarily small neighborhood of
the origin will eventually move away from it.

Example 2.18
Consider the system

    ẋ = −y + x (x² + y⁴)
    ẏ = x + y (x² + y⁴),

which has the fixed point (0, 0). The system Jacobian at this fixed point
has a pair of imaginary eigenvalues, λ1,2 = ± j, so Theorem 2.2 is not
applicable. However, the Lyapunov function V = ½ (x² + y²) leads to

    V̇ = (x² + y²)(x² + y⁴) > 0  for all (x, y) ≠ (0, 0).

Hence, this system is unstable about its zero fixed point, by Theorem 2.13.
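The predicted escape from any neighborhood of the origin can be observed
numerically (the initial radius 0.1 and the forward-Euler integrator below
are arbitrary illustrative choices):

```python
import math

# Numerical check of Example 2.18: starting close to the origin, the
# radius r = sqrt(x^2 + y^2) grows past ten times its initial value,
# confirming instability.

def f(x, y):
    q = x * x + y ** 4
    return -y + x * q, x + y * q

x, y = 0.1, 0.0                       # initial state near the origin
h = 0.001
r0 = math.hypot(x, y)
steps = 0
while math.hypot(x, y) < 10 * r0 and steps < 2_000_000:
    dx, dy = f(x, y)
    x, y = x + h * dx, y + h * dy
    steps += 1

grew = math.hypot(x, y) >= 10 * r0
print(grew)
```

The growth is slow at first (the nonlinearity is quartic near the origin) but
inexorable, which is exactly what V̇ > 0 everywhere away from 0 forces.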

A variant of Theorem 2.13 is the following.

THEOREM 2.14
For system (2.4.14), let V(x) be a continuously differentiable function de-
fined on a neighborhood D of the origin, such that V(0) = 0 and, in any
arbitrarily small neighborhood of 0, there always exists an x0 which satisfies
V(x0) > 0. If there is a closed subset Ω ⊆ D ∩ B containing x0, where B
is the unit ball of Rⁿ, on which V̇(x) ≥ a V(x) for some constant a > 0
and for all x ∈ Ω, then the system is unstable about its zero fixed point.

Example 2.19
Consider the autonomous system

    ẋ = x + 2y + x y²
    ẏ = 2x + y − x² y.

Let V(x, y) = x² − y². Then V(x, y) > 0 in the subset Ω = { (x, y) : |x| >
|y| }, as shown in Fig. 2.8. Moreover,

    V̇(x, y) = 2x² − 2y² + 4x² y² = 2 V(x, y) + 4x² y² ≥ 2 V(x, y)

for all (x, y) ∈ Ω. Therefore, by Theorem 2.14, the system is unstable
about its zero fixed point.

FIGURE 2.8
The subset Ω in which |x| > |y|.

THEOREM 2.15 (Chetaev Instability Theorem)
For system (2.4.14), let V(x) be a positive and continuously differentiable
function defined on D, and let Ω be a subset of D whose boundary contains
the origin (i.e., 0 ∈ ∂Ω, Ω ⊆ D). If
(i) V(x) > 0 and V̇(x) > 0 for all x ≠ 0 in Ω,
(ii) V(x) = 0 for all x on the boundary of Ω,
then the system is unstable about its zero fixed point.

PROOF This instability theorem can be established by an argument sim-
ilar to that given in the proof of Theorem 2.13. It is illustrated by Fig. 2.9,
which graphically shows that if the conditions of the theorem are satisfied,
then there is a gap within any neighborhood of the origin, such that a
system trajectory can escape from the neighborhood of the origin along a
path within this gap.

Example 2.20

FIGURE 2.9
Illustration of the Chetaev theorem.

FIGURE 2.10
The defining region of a Lyapunov function.

Consider the system

    ẋ = x² + 2 y⁵
    ẏ = x y².

Choose the Lyapunov function

    V = x² − y⁴,

which is positive inside the region bounded by the curves

    x = y²  and  x = −y².

Let D be the right-half plane and let Ω be the shaded area shown in
Fig. 2.10. Clearly, V = 0 on the boundary of Ω, and V > 0 with V̇ =
2x³ > 0 for all x ≠ 0 in Ω ⊆ D. According to the Chetaev theorem, this
system is unstable about its zero fixed point.

2.5 Construction of Lyapunov Functions

As seen from the previous subsections, Lyapunov functions play the central
role in the Lyapunov stability theory, and finding a suitable, working Lya-
punov function is the key to determining the stability or instability of a
system with respect to a fixed point of interest. There are many successful
techniques for constructing a simple and effective Lyapunov function. This
section discusses a couple of such methods.
A relatively simple and direct approach to constructing a Lyapunov func-
tion for an autonomous system is to apply Theorem 2.5, in which a Lya-
punov function is found to be

    V(x) = fᵀ(x) P f(x),

where P and Q are two real positive definite and symmetric constant ma-
trices such that the matrix

    Jᵀ(x) P + P J(x) + Q

is negative semi-definite for all x ≠ 0 in a neighborhood D of the zero fixed
point. If these two matrices P and Q can be found, then the given system
is asymptotically stable about the fixed point in the neighborhood D; and
if, furthermore, D = Rⁿ and V(x) → ∞ as ||x|| → ∞, then this stability is
global.
A limitation of this approach is that two constant matrices, P and Q,
have to be found such that the matrix-valued function Jᵀ(x)P + PJ(x) + Q
is negative semi-definite for all x ≠ 0 in D. This, oftentimes, is very difficult
or even impossible. Therefore, some more flexible methods for constructing
a Lyapunov function in the general case are needed.
To introduce one general method, let V(x) be a Lyapunov function to
be constructed, and let gᵀ(x) be its gradient, defined as usual by

    gᵀ(x) = [ g1(x) ⋯ gn(x) ] = [ ∂V(x)/∂x1 ⋯ ∂V(x)/∂xn ].     (2.5.15)

Then,

    V̇(x) = Σ_{i=1}^{n} (∂V(x)/∂xi) ẋi
          = [ ∂V(x)/∂x1 ⋯ ∂V(x)/∂xn ] [ f1(x) ; ⋯ ; fn(x) ]
          = gᵀ(x) f(x).

LEMMA 2.16
g(x) is the gradient of a scalar-valued function if and only if

    ∂gi/∂xj = ∂gj/∂xi,   for all i, j = 1, …, n.

PROOF If g(x) is the gradient of a scalar-valued function, V(x), then

    gᵀ(x) = [ g1(x) ⋯ gn(x) ] = [ ∂V(x)/∂x1 ⋯ ∂V(x)/∂xn ].

It follows that

    ∂gi/∂xj = ∂²V/(∂xj ∂xi) = ∂²V/(∂xi ∂xj) = ∂gj/∂xi,

for all i, j = 1, …, n.
Conversely, if ∂gi/∂xj = ∂gj/∂xi for all i, j = 1, …, n, then a double
integration gives

    ∫∫ (∂gi/∂xj) dxi dxj = ∫∫ (∂gj/∂xi) dxi dxj,

or

    ∫ gi dxi = ∫ gj dxj := V(x),

which is independent of i and j (since this is true for all i, j = 1, …, n).
Thus,

    ∂V/∂xk = (∂/∂xk) ∫ gk dxk = gk,   k = 1, …, n,

implying that g(x) is the gradient of the scalar-valued function V(x).

To construct V(x), notice that V(x) can be calculated by integration of
its gradient:

    V(x) = ∫_{0}^{x} Σ_{i=1}^{n} (∂V(x)/∂xi) dxi
         = ∫_{0}^{x} Σ_{i=1}^{n} gi(x) dxi.

Then, Lemma 2.16 shows that ∂gi/∂xj = ∂gj/∂xi, i, j = 1, …, n. This
means that the above integral is taken over a conservative field, so that the
integral can be taken over any path joining 0 to the terminal state x. In
particular, it can be carried out along the principal axes, namely,

    V(x) = ∫_{0}^{x1} g1(τ1, 0, …, 0) dτ1 + ∫_{0}^{x2} g2(x1, τ2, 0, …, 0) dτ2 + ⋯
         + ∫_{0}^{xn} gn(x1, …, x_{n−1}, τn) dτn.               (2.5.16)
This formula, (2.5.16), can be used to construct a Lyapunov function, V(x),
for a given autonomous system, ẋ = f(x), in a straightforward manner,
as illustrated by the following example. This approach to constructing
Lyapunov functions is called the variational gradient method.

Example 2.21
For the damped pendulum system (1.1.1), namely,
\[ \dot{x} = y , \qquad \dot{y} = -\frac{g}{\ell}\,\sin(x) - \frac{\kappa}{m}\,y , \]
one may start with a simple assumed form,
\[ g(x,y) = \begin{bmatrix} g_1(x,y) \\ g_2(x,y) \end{bmatrix} = \begin{bmatrix} \alpha(x,y)\,x + \beta(x,y)\,y \\ \gamma(x,y)\,x + \delta(x,y)\,y \end{bmatrix} , \]
where $\alpha,\beta,\gamma,\delta$ are functions of $x$ and $y$, to be determined so as to satisfy the condition $\partial g_1(x,y)/\partial y = \partial g_2(x,y)/\partial x$; that is,
\[ \beta(x,y) + \frac{\partial\alpha(x,y)}{\partial y}\,x + \frac{\partial\beta(x,y)}{\partial y}\,y = \gamma(x,y) + \frac{\partial\gamma(x,y)}{\partial x}\,x + \frac{\partial\delta(x,y)}{\partial x}\,y . \tag{a} \]
On the other hand,
\[ \dot V(x,y) = \frac{\partial V(x,y)}{\partial x}\,\dot x + \frac{\partial V(x,y)}{\partial y}\,\dot y = \big[\,g_1(x,y)\ \ g_2(x,y)\,\big] \begin{bmatrix} f_1(x,y) \\ f_2(x,y) \end{bmatrix} \]
\[ = \alpha(x,y)\,xy + \beta(x,y)\,y^2 - \frac{\kappa}{m}\,\gamma(x,y)\,xy - \frac{g}{\ell}\,\gamma(x,y)\,x\sin(x) - \frac{\kappa}{m}\,\delta(x,y)\,y^2 - \frac{g}{\ell}\,\delta(x,y)\,y\sin(x) . \]
Then, one should choose some appropriate functions $\alpha,\beta,\gamma,\delta$ to simplify $\dot V(x,y)$, so that $V(x,y)$ can be easily found. For this purpose, one may proceed as follows:
(i) Cancel the cross-product terms, by letting
\[ \alpha(x,y)\,x - \frac{\kappa}{m}\,\gamma(x,y)\,x - \frac{g}{\ell}\,\delta(x,y)\,\sin(x) = 0 , \tag{b} \]
so that
\[ \dot V(x,y) = -\big(\kappa\,m^{-1}\delta(x,y) - \beta(x,y)\big)\,y^2 - g\,\ell^{-1}\gamma(x,y)\,x\sin(x) . \tag{c} \]
(ii) By inspection, use $\alpha = \alpha(x)$ and $\beta = \gamma = \delta =$ constant in equations (b) and (c), so that
\[ g_1(x,y) = \alpha(x)\,x + \beta\,y , \qquad g_2(x,y) = \gamma\,x + \delta\,y , \qquad\text{and}\qquad \beta = \gamma , \]
where the last equality follows from equation (a).
Thus, one has
\[ V(x,y) = \int_0^x g_1(\xi_1,0)\,d\xi_1 + \int_0^y g_2(x,\xi_2)\,d\xi_2 = \int_0^x \alpha(\xi_1)\,\xi_1\,d\xi_1 + \int_0^y \big(\gamma\,x + \delta\,\xi_2\big)\,d\xi_2 \]
\[ = \int_0^x \Big( \frac{\kappa\gamma}{m}\,\xi_1 + \frac{g\delta}{\ell}\,\sin(\xi_1) \Big)\,d\xi_1 + \int_0^y \big(\gamma\,x + \delta\,\xi_2\big)\,d\xi_2 \]
\[ = \frac{\kappa\gamma}{2m}\,x^2 + \frac{g\delta}{\ell}\,\big(1-\cos(x)\big) + \gamma\,xy + \frac{\delta}{2}\,y^2 = \frac{1}{2}\,[\,x\ \ y\,] \begin{bmatrix} \kappa\gamma/m & \gamma \\ \gamma & \delta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \frac{g\delta}{\ell}\,\big[1-\cos(x)\big] . \]
To satisfy the requirements for a Lyapunov function, namely, $V(0,0) = 0$ and $V(x,y) > 0$ with $\dot V(x,y) < 0$ for all $(x,y) \ne (0,0)$, one can simply choose the constants $\gamma$ and $\delta$ such that $0 < \gamma < \kappa\,m^{-1}\delta$. It can be easily verified that this condition is sufficient to guarantee that the $V(x,y)$ so constructed is a Lyapunov function. Consequently, the damped pendulum system is asymptotically stable about its zero fixed point in the domain $-\pi < x = \theta < \pi$, consistent with the pendulum physics.
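As a numerical sanity check of the construction above, the sketch below evaluates the constructed $V$ and $\dot V$ on a grid covering $|x| < \pi$; the parameter values ($m = \ell = \kappa = 1$, $g = 9.8$) and the choice $\gamma = 0.5$, $\delta = 1$ (so that $0 < \gamma < \kappa m^{-1}\delta$) are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

# Illustrative (assumed) pendulum parameters and gradient-method constants.
m, ell, kappa, grav = 1.0, 1.0, 1.0, 9.8
gamma, delta = 0.5, 1.0        # chosen so that 0 < gamma < kappa*delta/m

def V(x, y):
    # V = (1/2)[x y] [[kappa*gamma/m, gamma], [gamma, delta]] [x y]^T
    #     + (g*delta/ell) * (1 - cos x)
    quad = kappa * gamma / m * x**2 + 2.0 * gamma * x * y + delta * y**2
    return 0.5 * quad + grav * delta / ell * (1.0 - np.cos(x))

def Vdot(x, y):
    # dV/dt = -(kappa*delta/m - gamma) y^2 - (g*gamma/ell) x sin(x)
    return -(kappa * delta / m - gamma) * y**2 - grav * gamma / ell * x * np.sin(x)

xs = np.linspace(-3.0, 3.0, 61)      # |x| < pi, where x sin(x) >= 0
X, Y = np.meshgrid(xs, xs)
nonzero = (np.abs(X) > 1e-9) | (np.abs(Y) > 1e-9)
print(np.all(V(X, Y)[nonzero] > 0), np.all(Vdot(X, Y) <= 0))   # True True
```

On the whole grid, $V > 0$ away from the origin and $\dot V \le 0$, as the analysis predicts.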

2.6 Stability Regions: Basins of Attraction

In the damped pendulum system discussed above, the fixed point $(\theta,\dot\theta) = (0,0)$ is asymptotically stable, and any initial state satisfying the angular condition $-\pi < \theta_0 < \pi$ will eventually move to this fixed point at rest. In other words, the stability region, or basin of attraction, of the zero fixed point for this pendulum is $(-\pi,\pi)$.
In general, for a nonautonomous system,
\[ \dot{x} = f(x,t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{2.6.17} \]
with a fixed point $x^* = 0$, let $\varphi_t(x_0)$ be its solution trajectory starting from $x_0$. Suppose that this system is asymptotically stable about its zero fixed point. The interest here is to find out how large the region for the initial state $x_0$ can be, within which one can guarantee $\varphi_t(x_0) \to 0$ as $t \to \infty$.
70 Stabilities of Nonlinear Systems (I)

DEFINITION 2.22 For system (2.6.17), the region of stability, or the basin of attraction, of its stable zero fixed point is the set
\[ \mathcal{B} = \Big\{ x_0 \in \mathbb{R}^n \ \Big|\ \lim_{t\to\infty} \varphi_t(x_0) = 0 \Big\} . \]

For a stable linear system, $\dot{x} = A\,x$, it is clear that $\mathcal{B} = \mathbb{R}^n$, the entire state space. For a nonlinear nonautonomous system, however, finding its exact basin of attraction is very difficult in general. Therefore, one turns to finding a (relatively conservative) estimate of the basin, to have a sense of at least how large it would be. For this purpose, it is easy to see that if one can find a Lyapunov function, $V(x)$, defined on a neighborhood $D \subseteq \mathbb{R}^n$ of the zero fixed point, and find a region
\[ \Omega_c = \big\{ x \in \mathbb{R}^n \ |\ V(x) \le c \big\} , \]
where $c > 0$ is a constant, then all system trajectories starting from inside $\Omega_c$ will tend to zero as time evolves. In other words, the basin of attraction is at least as large as $\Omega_c$. Here, it is important to note that one may not simply use the defining domain $D$ of the Lyapunov function as an estimate of the basin of attraction, as illustrated by the following example.

Example 2.23
Consider the autonomous system
\[ \dot{x} = y , \qquad \dot{y} = -x + \frac{1}{3}x^3 - y . \]
The Lyapunov function
\[ V(x,y) = \frac{3}{4}x^2 - \frac{1}{12}x^4 + \frac{1}{2}xy + \frac{1}{2}y^2 \]
satisfies
\[ \dot V(x,y) = -\frac{1}{2}x^2\Big(1 - \frac{1}{3}x^2\Big) - \frac{1}{2}y^2 , \]
so $V(x,y)$ is a well-defined Lyapunov function on
\[ D = \big\{ (x,y) \in \mathbb{R}^2 :\ -\sqrt{3} < x < \sqrt{3}\, \big\} . \]
It is clear that $V(0,0) = 0$ and $V(x,y) > 0$ with $\dot V(x,y) < 0$ for all nonzero $(x,y) \in D$. However, it can also be seen, from a computer plot shown in Fig. 2.11, that $D$ is not a subset of $\mathcal{B}$, although they have a large overlapping region. In this example, $D$ cannot be used as an estimate of $\mathcal{B}$.
Stability Regions: Basins of Attraction 71

FIGURE 2.11
Computer plot of the region of attraction.

Thus, what is actually needed is to find a largest possible bounded subset, $\Omega_c$, within the defining domain $D$, for the given system. This can be accomplished by working through Example 2.23 below. First, a simple yet useful result is in order.

LEMMA 2.17
All the eigenvalues of the Jacobian $J$ of the autonomous system $\dot{x} = f(x)$ have negative real parts if and only if the Lyapunov equation
\[ J^\top P + P\,J + Q = 0 \]
has a unique positive definite and symmetric matrix solution $P$ for any positive definite and symmetric matrix $Q$. In this case, a Lyapunov function for the system's local asymptotic stability about its zero fixed point is given by
\[ V(x) = x^\top P\,x . \]

PROOF The first part of the lemma is well known in linear systems theory. If such a positive definite and symmetric matrix $P$ indeed exists, then the $V(x)$ so constructed is a Lyapunov function: $V(0) = 0$, $V(x) > 0$ for all $x \ne 0$, and
\[ \dot V(x) = \dot x^\top P\,x + x^\top P\,\dot x = f(x)^\top P\,x + x^\top P\,f(x) \]
\[ = \big(J\,x + \text{H.O.T.}\big)^\top P\,x + x^\top P\,\big(J\,x + \text{H.O.T.}\big) = x^\top\big(J^\top P + P\,J\big)\,x + 2\,x^\top P\cdot\text{H.O.T.} \]
\[ = -\,x^\top Q\,x + 2\,x^\top P\cdot\text{H.O.T.} < 0 \qquad \text{for small enough } \|x\| , \]
where $J$ is the Jacobian and H.O.T. represents the higher-order terms in the Taylor expansion of $f(x)$ about the zero fixed point. Therefore, $V(x)$ is a Lyapunov function for the local asymptotic stability of the system about the zero fixed point.
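Numerically, the Lyapunov equation of Lemma 2.17 is easy to solve with SciPy; note that `scipy.linalg.solve_continuous_lyapunov(a, q)` solves $aX + Xa^{\mathsf H} = q$, so one passes $a = J^\top$ and $q = -Q$. The Jacobian below is a hypothetical stable example, used only for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

J = np.array([[-1.0, 2.0], [0.0, -3.0]])   # hypothetical stable Jacobian
Q = np.eye(2)

# Solve J^T P + P J + Q = 0 by passing a = J^T and q = -Q.
P = solve_continuous_lyapunov(J.T, -Q)

print(P)                                    # [[0.5, 0.25], [0.25, 1/3]]
print(np.all(np.linalg.eigvalsh(P) > 0))    # P is positive definite: True
```

Since $J$ is stable, the computed $P$ is positive definite, in agreement with the lemma.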

Now, return to the problem of finding a largest possible bounded subset, $\Omega_c$, of the defining domain $D$, for the basin of attraction. This problem can be solved by following a general procedure, which is illustrated by working through Example 2.23 as follows:
(1) Compute the system Jacobian evaluated at the zero fixed point:
\[ J = \frac{\partial f(x)}{\partial x}\bigg|_{x=x^*=0} = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix} . \]
Make sure that all its eigenvalues have negative real parts. Here, $\lambda_{1,2} = -\frac{1}{2} \pm \frac{\sqrt{3}}{2}\,j$.
(2) Choose a constant matrix $Q > 0$ (e.g., $Q = I$); then solve the Lyapunov equation
\[ J^\top P + P\,J + Q = 0 \]
for the positive definite and symmetric matrix $P$ and its eigenvalues:
\[ P = \begin{bmatrix} 3/2 & 1/2 \\ 1/2 & 1 \end{bmatrix} , \qquad \lambda_{1,2}(P) = \frac{5 \pm \sqrt{5}}{4} . \]
(3) Estimate a constant $c > 0$ using
\[ c < \min_{\|x\| = r} V(x) , \]
where $r$ is a constant to be determined later. In so doing, since $x^\top P\,x$ is a Lyapunov function and
\[ x^\top P\,x \ge \lambda_{\min}(P)\,\|x\|^2 , \]
where $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a matrix, one has the following estimate:
\[ c < \lambda_{\min}(P)\,\|x\|^2 = \frac{5-\sqrt{5}}{4}\,r^2 . \]
(4) Enlarge the estimate of the basin of attraction: let $x = \rho\cos(\theta)$ and $y = \rho\sin(\theta)$, and compute
\[ \dot V(x) = \dot x^\top P\,x + x^\top P\,\dot x = 2\,[\,x\ \ y\,] \begin{bmatrix} 3/2 & 1/2 \\ 1/2 & 1 \end{bmatrix} \begin{bmatrix} y \\ -x + \frac{1}{3}x^3 - y \end{bmatrix} \]
\[ = -x^2 + \frac{2}{3}x^3 y + \frac{1}{3}x^4 - y^2 = -\rho^2 + \frac{1}{3}\rho^4\cos^3(\theta)\,\big(2\sin(\theta) + \cos(\theta)\big) \]
\[ \le -\rho^2 + \frac{1}{3}\rho^4\,\big|2\sin(\theta) + \cos(\theta)\big| \le -\rho^2 + \frac{1}{3}\rho^4 \cdot 2.2361 < 0 , \]
which holds for $\rho^2 < 3/2.2361$, so that
\[ c < \frac{5-\sqrt{5}}{4}\,r^2 = \frac{5-\sqrt{5}}{4}\,\rho^2 < \frac{5-\sqrt{5}}{4}\cdot\frac{3}{2.2361} \approx 0.9271 . \]
(5) An estimate of the basin of attraction for the system is then obtained as
\[ \Omega_c = \bigg\{ x \in \mathbb{R}^2 :\ x^\top \begin{bmatrix} 3/2 & 1/2 \\ 1/2 & 1 \end{bmatrix} x \le 0.9271 \bigg\} . \]

The resulting estimated basin of attraction, $\Omega_c$, is an ellipse, as shown in Fig. 2.12, which is a rather conservative estimate as compared with the true basin in the computer plot.

FIGURE 2.12
Estimation of the region of attraction.
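The quantities in steps (1)–(4) can be recomputed with a few lines of SciPy; a sketch (here $\lambda_{\min}(P) = (5-\sqrt{5})/4 \approx 0.691$, and the admissible radius comes from $\rho^2 < 3/\sqrt{5}$, since $|2\sin\theta + \cos\theta| \le \sqrt{5} \approx 2.2361$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Step (1): Jacobian at the origin; step (2): solve J^T P + P J + I = 0.
J = np.array([[0.0, 1.0], [-1.0, -1.0]])
P = solve_continuous_lyapunov(J.T, -np.eye(2))      # [[1.5, 0.5], [0.5, 1.0]]

# Step (3): lambda_min(P) = (5 - sqrt(5))/4.
lam_min = np.linalg.eigvalsh(P).min()

# Step (4): rho^2 < 3/sqrt(5), from |2 sin(theta) + cos(theta)| <= sqrt(5).
rho_sq = 3.0 / np.sqrt(5.0)
c = lam_min * rho_sq
print(round(lam_min, 4), round(c, 4))               # 0.691 0.9271
```

The printed level value $c$ defines the elliptical estimate $\Omega_c$ of step (5).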

Exercises
2.1 Verify which of the following systems is (a) only stable in the sense of Lyapunov; (b) asymptotically but not exponentially stable; (c) exponentially stable:
\[ \dot x = -\frac{x}{(1+t)^2} , \qquad \dot x = -\frac{x}{1+t} , \qquad \dot x = \frac{x}{1+t} , \qquad \dot x = -(1+t)\,x . \]
[Hint: If necessary, these equations can be solved by the standard method of separation of variables.]
2.2 Determine whether or not the following systems have a stable fixed point. If so, explain whether the stability is (a) only in the sense of Lyapunov, (b) global, (c) asymptotic, (d) uniform, or (e) exponential.
\[ \begin{bmatrix} \dot x \\ \dot y \end{bmatrix} = \begin{bmatrix} -1 & 2\sin(t) \\ 0 & -(t+1) \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} \dot x \\ \dot y \end{bmatrix} = \begin{bmatrix} -1 & e^{2t} \\ 0 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} . \]

2.3 Prove that a linear system $\dot x = A(t)\,x$ is uniformly asymptotically stable if and only if it is exponentially stable. Disprove this for nonlinear systems by studying the counterexample $\dot x = -x^3$, which has solution
\[ x(t) = x_0 \big/ \sqrt{1 + 2\,x_0^2\,(t-t_0)} . \]
2.4 Consider the system
\[ \dot x = 2x - 5y + x^2 - 4xy , \qquad \dot y = 2x - 4y + 2x^2 - 3xy + 8y^2 . \]
Use the Jacobian method to determine the stability of its zero fixed point. Then try the Lyapunov function $V(x,y) = 2x^2 - 6xy + 5y^2$, to see whether it provides the same conclusion about the stability.
2.5 Consider the nonlinear system
\[ \ddot x + 3\sqrt{1 + \dot x^2} + 2\,x = 0 . \]
(a) Find its fixed point in the $x$–$\dot x$ phase plane.
(b) Determine the stability of the fixed point.
(c) Determine the type (center, saddle, focus) of the fixed point.
2.6 Consider a time-varying damped pendulum described by
\[ \dot x = y , \qquad \dot y = -\frac{k(t)}{m}\,y - \frac{g}{\ell}\,\sin(x) , \]
where the time-varying damping coefficient satisfies
\[ a \le k(t) \le b \qquad\text{and}\qquad c \le \dot k(t) \le d \]
for some constants $a,b,c,d$. Under what conditions on $a,b,c,d$ is the zero fixed point: (a) stable in the sense of Lyapunov, (b) asymptotically stable, (c) uniformly stable (if applicable), and (d) globally and asymptotically stable?
2.7 Analyze the stability of the following dynamical system:
\[ \ddot x + 2a\,|\dot x|\,\dot x + b\,x = c , \]
where $a > 0$, $b > 0$, and $c$ are constants.

2.8 Consider the Volterra-Lotka system
\[ \dot x(t) = a\,x(t) - b\,x(t)\,y(t) , \qquad \dot y(t) = b\,x(t)\,y(t) - c\,y(t) , \]
where $a,b,c$ are constants.
(a) Clearly, $(0,0)$ is a fixed point of the system. Find the other one.
(b) Under what conditions is $(0,0)$ asymptotically stable?
(c) Linearize the system about $(0,0)$, and then about the other fixed point. Find the solutions of the two linearized systems.
2.9 Verify that the linear time-varying system
\[ \dot x = \big[-1 - 9\cos^2(6t) + 12\sin(6t)\cos(6t)\big]\,x + \big[12\cos^2(6t) + 9\sin(6t)\cos(6t)\big]\,y , \]
\[ \dot y = \big[-12\sin^2(6t) + 9\sin(6t)\cos(6t)\big]\,x - \big[1 + 9\sin^2(6t) + 12\sin(6t)\cos(6t)\big]\,y \]
has eigenvalues $\lambda_1 = -1$ and $\lambda_2 = -10$, but is not even stable about its zero fixed point, since its solution is (verify it):
\[ x = c_1\,e^{2t}\big(\cos(6t) + 2\sin(6t)\big) + c_2\,e^{-13t}\big(\sin(6t) - 2\cos(6t)\big) , \]
\[ y = c_1\,e^{2t}\big(2\cos(6t) - \sin(6t)\big) + c_2\,e^{-13t}\big(2\sin(6t) + \cos(6t)\big) . \]
2.10 Discuss the stability of the following system about its zero fixed point, first by the Jacobian method and then by another method of your choice:

x 17
34 et x
= 4
199 t .
y 4
e 334
y
2.11 Prove that a linear time-invariant system,
\[ \dot x = A\,x , \qquad x_0 \in \mathbb{R}^n , \]
is asymptotically stable about its zero fixed point if and only if, for any positive definite matrix $Q$, the Lyapunov equation
\[ A^\top P + P\,A + Q = 0 \]
has a unique positive definite matrix solution $P$.
2.12 Consider the nonhomogeneous system
\[ \ddot x(t) + 2\,\dot x(t) = f(x(t)) , \]
where $f(\cdot)$ is a nonlinear differentiable function. Find the bounds $a$ and $b$ for the input, in the sense that $a\,x(t) \le f(x(t)) \le b\,x(t)$, such that within these bounds the system is asymptotically stable about its zero fixed point.
2.13 Use the Lyapunov function $V(x,y) = x^2 + y^2$ to study the stability of the zero fixed point of the system
\[ \dot x = \lambda\,x\,\big(2 - x^2 - y^2\big) + y\,\big(x^2 + y^2 + 2\big) , \]
\[ \dot y = -x\,\big(2 + x^2 + y^2\big) + \lambda\,y\,\big(2 - x^2 - y^2\big) , \]
when (a) $\lambda = 0$ and (b) $\lambda \ne 0$.

2.14 Given a linear system, $\dot x = A\,x$, with the constant matrix
\[ A = \begin{bmatrix} 0 & 1 \\ a & b \end{bmatrix} , \]
where $a < 0$ and $b < 0$ are constants, construct a Lyapunov function, $V(x) = x^\top P\,x$, by using the positive definite and symmetric matrix solution $P$ of the Lyapunov equation
\[ A^\top P + P\,A = -Q , \]
in which $Q = \mathrm{diag}\{2,2\}$.
2.15 Determine the stability of the zero fixed points of the following systems:
(a)
\[ \dot x = -x - y + xy , \qquad \dot y = -y - y^2 ; \]
(b)
\[ \dot x = x^2 + y^3 , \qquad \dot y = -y + x^3 . \]
[Hint: Try $V(x,y) = (2x-y)^2 - y^2$ and $V(x,y) = x - y^2/2$, respectively.]
2.16 The following linear system is obviously stable in the sense of Lyapunov:
\[ \dot x = y , \qquad \dot y = -x , \]
since it is equivalent to $\ddot x = -x$, and its solution is $x(t) = c_1\sin(t) + c_2\cos(t)$. However, someone offers the following reasoning to argue that it is unstable even in the sense of Lyapunov. Explain why it is wrong:
Let $V(x,y) = (2x-y)^2 - y^2 = 4x^2 - 4xy$. Then $V(x,y) > 0$ if $x > y$ (see Fig. 2.13 for a selected region $\Omega$), and $V(x,y) = 0$ on the boundaries $x = y$ and $x = 0$ of $\Omega$, with $V(0,0) = 0$ and $(0,0) \in \partial\Omega$, the boundary of $\Omega$. However,
\[ \dot V(x,y) = 8x\dot x - 4\dot x\,y - 4x\dot y = 8xy - 4y^2 - 4x(-x) = 4\,(2x-y)\,y + 4x^2 > 0 \quad\text{inside } \Omega . \]
So, the equilibrium point $(0,0)$ is unstable in the sense of Lyapunov.
2.17 Apply the LaSalle invariance principle to show that the Liénard equation
\[ \ddot x + f(x)\,\dot x + g(x) = 0 , \]
with $g(0) = 0$, is globally asymptotically stable about its zero fixed point, if the following conditions are satisfied:
\[ f(x) > 0 , \qquad x\,g(x) > 0 \ \text{ for all } x \ne 0 , \qquad\text{and}\qquad G(x) = \int_0^x g(z)\,dz \to \infty \ \ (|x| \to \infty) . \]

FIGURE 2.13
Selected region of .

[Hint: Try $V(x,\dot x) = \frac{1}{2}y^2 + G(x)$, and note that the largest invariant set is $S = \{(x,y) : f(x)\,y^2 = 0\}$, where $y = \dot x$.]

2.18 Apply the result of the previous exercise to the system
\[ \ddot x + x^2\,\dot x + x = 0 \]
and the van der Pol equation
\[ \ddot x + \epsilon\,(x^2 - 1)\,\dot x + x = 0 \qquad (\epsilon < 0) . \]
2.19 Consider the following autonomous system:
\[ \dot x = y , \qquad \dot y = -f(x) , \]
where the function $f$ satisfies $f(0) = 0$. Let the Lyapunov function be constructed in terms of the total system energy, in the form
\[ V(x,y) = \frac{1}{2}y^2 + \int_0^x f(z)\,dz . \]
Use this Lyapunov function to discuss the stability of the system with respect to the properties of $f$.
2.20 Use the variable gradient method to find a Lyapunov function for the system
\[ \dot x = y , \qquad \dot y = -4\,(x+y) - h(x+y) , \]
where $h(\cdot)$ is a nonlinear function satisfying $h(0) = 0$ and $z\,h(z) \ge 0$ for all $|z| \le 1$. [Hint: $V(x) = 2x^2 + 2xy + y^2$ is one solution.]
2.21 Estimate the basin of attraction of the zero fixed point for the following system:
\[ \dot x = y , \qquad \dot y = -x + \big(x^2 - 1\big)\,y . \]
[Hint: Solve the Lyapunov equation to find the matrix $P > 0$ and then try the Lyapunov function $V(x) = x^\top P\,x$. Note also that $|\cos^2(\theta)\sin(\theta)| \le 0.3849$ and $|2\sin(\theta) + \cos(\theta)| \le 2.2361$.]

2.22 For the following system:




x
= 1
z+
yx
2 2

y = 1
z

xy (, > 0) ,

2 2


z = 2 1 xy
show that in a large enough neighborhood of the origin there must be an
attractor, and that this attractor is not the origin.
[Hint: Construct a Lyapunov function to argue.]
2.23 Consider the following autonomous system:
\[ \dot x = y , \qquad \dot y = -x + \big(1 - x^2 - y^2\big)\,y . \]
(a) Discuss the stability of the system about its zero fixed point.
(b) Find the limit cycle of the system.
(c) Use the Lyapunov method and the LaSalle principle to show that any system orbit, if not starting from the origin, will converge to the limit cycle.
[Hint: Polar coordinates may be convenient to use.]
3
Stabilities of Nonlinear Systems (II)

3.1 Linear Stability of Nonlinear Systems


The first method of Lyapunov provides a linear stability analysis for nonlinear autonomous systems. As can be seen from Theorems 2.2 and 2.12, the stability and instability issues for autonomous systems are quite simple, similar to those of the familiar linear time-invariant systems. As has also been emphasized in the last chapter, however, the Lyapunov first method generally cannot be applied to nonautonomous systems. One may then wonder to what extent the simple and efficient linear stability analysis methodology can be used for nonlinear nonautonomous systems.
In this section, the attention is first switched back to the general nonautonomous system:
\[ \dot x = f(x,t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{3.1.1} \]

which, as usual, is assumed to have a fixed point at $x^* = 0$.
In system (3.1.1), a Taylor expansion of the function $f$ about its zero fixed point gives
\[ \dot x = f(x,t) = J(t;t_0)\,x + g(x,t) , \tag{3.1.2} \]
where $J(t;t_0) = \partial f/\partial x\,\big|_{x=0}$ is the Jacobian and $g(x,t)$ is the residual of the expansion, which is assumed to satisfy
\[ \|g(x,t)\| \le a\,\|x\|^2 \qquad \text{for all } t \in [t_0,\infty) , \tag{3.1.3} \]
with a constant $a > 0$. Recall from (2.1.5) that the solution of equation (3.1.2) is given by
\[ x(t) = \Phi(t,t_0)\,x_0 + \int_{t_0}^{t} \Phi(t,\tau)\,g(x(\tau),\tau)\,d\tau , \tag{3.1.4} \]
where $\Phi(t,\tau)$ is the fundamental matrix associated with the matrix $J(t;t_0)$, defined in (2.1.4).


THEOREM 3.1 (A General Linear Stability Theorem)
For the nonlinear nonautonomous system (3.1.2), if there are two positive constants, $c$ and $\sigma$, such that
\[ \|\Phi(t,\tau)\| \le c\,e^{-\sigma(t-\tau)} \qquad \text{for all } t_0 \le \tau \le t < \infty , \]
and if
\[ \lim_{\|x\|\to 0} \frac{\|g(x,t)\|}{\|x\|} = 0 \]
uniformly with respect to $t \in [t_0,\infty)$, then there are two positive constants, $\delta$ and $\mu$, such that
\[ \|x(t)\| \le c\,\|x_0\|\,e^{-\mu(t-t_0)} \]
for all $\|x_0\| \le \delta$ and all $t \in [t_0,\infty)$.

This result implies that under the theorem conditions, the system is
locally, uniformly, and exponentially stable about its zero fixed point. A
proof of this theorem relies on the Gronwall Inequality, which is very useful
in its own right (see, e.g., Example 3.3 below).

LEMMA 3.2 (Gronwall Inequality)
Let $x(t)$, $y(t)$, and $z(t)$ be scalar-valued continuous functions defined on $[a,b]$, with $z(t) > 0$ and
\[ x(t) \le y(t) + \int_a^t z(\tau)\,x(\tau)\,d\tau \qquad \text{for all } t \in [a,b] . \]
Then, the following Gronwall inequality:
\[ x(t) \le y(t) + \int_a^t z(\tau)\,y(\tau)\,\exp\Big( \int_\tau^t z(s)\,ds \Big)\,d\tau , \tag{3.1.5} \]
holds for all $t \in [a,b]$.

PROOF The Gronwall inequality can be easily verified by letting
\[ r(t) = \int_a^t z(\tau)\,x(\tau)\,d\tau \]
and observing that
\[ \dot r(t) = z(t)\,x(t) \le z(t)\,y(t) + z(t)\,r(t) , \]
or
\[ \frac{d}{dt}\bigg[ r(t)\,\exp\Big( -\int_a^t z(\tau)\,d\tau \Big) \bigg] \le \exp\Big( -\int_a^t z(\tau)\,d\tau \Big)\,z(t)\,y(t) . \]
An integration of this inequality, together with $x(t) \le y(t) + r(t)$, immediately yields the result.
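As a quick numerical illustration (with hypothetical constants): when $y(t) \equiv y_0$ and $z(t) \equiv z_0 > 0$ are constants, the function $x(t) = y_0\,e^{z_0(t-a)}$ satisfies the hypothesis with equality and attains the Gronwall bound exactly:

```python
import numpy as np

# Equality case of Gronwall: x(t) = y0*exp(z0*(t-a)) satisfies
# x(t) = y0 + int_a^t z0 x(s) ds, and the bound (3.1.5) reduces to the same value.
a, b, y0, z0 = 0.0, 2.0, 1.5, 0.8          # illustrative constants
t = np.linspace(a, b, 201)
x = y0 * np.exp(z0 * (t - a))

# Hypothesis side: y0 + int_a^t z0 x(s) ds, via cumulative trapezoidal quadrature.
dt = t[1] - t[0]
lhs = y0 + z0 * np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2.0) * dt))

# Gronwall bound: y0 + int_a^t z0*y0*exp(z0*(t-s)) ds = y0*exp(z0*(t-a)).
bound = y0 * np.exp(z0 * (t - a))
print(np.allclose(x, lhs, rtol=1e-3), np.all(x <= bound + 1e-9))   # True True
```

This is the constant-coefficient special case $x(t) \le y\,e^{z(t-a)}$ used repeatedly in the proofs below.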



Now, a proof of Theorem 3.1 can readily be given.


PROOF Equation (3.1.4), or its equivalent form,
\[ x(t) = \Phi(t,t_0)\,x_0 + \int_{t_0}^{t} \Phi(t,t_0)\,\Phi(t_0,\tau)\,g(x(\tau),\tau)\,d\tau , \]
along with the bound on $\Phi(t,\tau)$ given in the theorem, yields
\[ \|x(t)\| \le c\,e^{-\sigma(t-t_0)}\,\|x_0\| + \int_{t_0}^{t} c\,e^{-\sigma(t-\tau)}\,\|g(x(\tau),\tau)\|\,d\tau . \]
Due to the limiting condition given in the theorem, which means that $g$ is a higher-order function of $x$, one has $\|g(x,t)\| \le \gamma\,\|x\|$ for a constant $\gamma > 0$, for (small enough) $\|x\| < \epsilon/\gamma$ with an $\epsilon > 0$. Multiplying through by $e^{\sigma t}$ gives
\[ e^{\sigma t}\,\|x(t)\| \le c\,e^{\sigma t_0}\,\|x(t_0)\| + \int_{t_0}^{t} c\,\gamma\,e^{\sigma\tau}\,\|x(\tau)\|\,d\tau \]
for small enough $\|x\|$. Applying the Gronwall inequality, with the constant functions $y = c\,e^{\sigma t_0}\|x(t_0)\|$ and $z = c\,\gamma$ therein, yields
\[ e^{\sigma t}\,\|x(t)\| \le c\,e^{\sigma t_0}\,\|x(t_0)\|\,e^{\gamma c\,(t-t_0)} . \]
Therefore,
\[ \|x(t)\| \le c\,\|x(t_0)\|\,e^{-(\sigma-\gamma c)\,(t-t_0)} . \]
Thus, if one chooses $\gamma < \sigma/c$ and lets $\mu = \sigma - \gamma c > 0$, and moreover restricts $\|x(t_0)\|$ to be so small that $\|x(t_0)\| < \delta := \epsilon/(\gamma c)$, then the above inequality remains valid, with
\[ \|x(t)\| \le c\,\|x_0\|\,e^{-\mu(t-t_0)} \]
for all $\|x_0\| \le \delta$ and all $t \in [t_0,\infty)$.

Example 3.1
The simple nonlinear system
\[ \dot x = -a\,x + x^3 \]
is exponentially and asymptotically stable about its zero fixed point, if $a > 0$, in a small neighborhood of zero.
The stability of this system is obvious: the linear part of the system, $\dot x = -a\,x$, is exponentially stable, while the nonlinear part, $x^3$, is dominated by $-a\,x$ for small enough $|x|$ near the origin. Therefore, the linear stability dominates the nonlinear instability in this system.
Note, however, that not all solutions of this system tend to the zero fixed point, which can be verified simply by sketching the phase portrait of its solution flow, or be understood from the fact that for large $|x|$ the term $x^3$ will dominate the linear part, $-a\,x$. Hence, starting from an initial point far away from the origin, the system orbit will diverge.
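A short simulation confirms both behaviors; the value $a = 1$ (so the nonzero equilibria sit at $x = \pm 1$) and the initial states are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = -a x + x^3 with a = 1: x = 0 is locally stable, orbits beyond |x| = 1 diverge.
f = lambda t, x: -x + x**3

near = solve_ivp(f, (0.0, 5.0), [0.5], rtol=1e-9, atol=1e-12)   # starts inside |x| < 1
far = solve_ivp(f, (0.0, 0.4), [1.2], rtol=1e-9, atol=1e-12)    # starts outside

print(abs(near.y[0, -1]) < 0.01, far.y[0, -1] > 1.2)            # True True
```

The inner orbit decays toward zero, while the outer one grows (in fact it blows up in finite time, which is why the second integration horizon is kept short).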

Example 3.2
Consider the damped pendulum (1.1.1), now in the following form:
\[ \begin{bmatrix} \dot x \\ \dot y \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\dfrac{g}{\ell} & -\dfrac{\kappa}{m} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{g}{\ell}\,\big(x - \sin(x)\big) \end{bmatrix} . \]
It is easy to verify that the constant matrix here is stable. If the following truncated expansion is used:
\[ \frac{g}{\ell}\,\big| x - \sin(x) \big| \le \frac{g}{\ell}\,\Big( \frac{|x|^3}{3!} + \cdots + \frac{|x|^{2n+1}}{(2n+1)!} + \cdots \Big) \le c\,|x|^3 \]
for some constant $c > 0$ (and bounded $|x|$), then Theorem 3.1 applies, leading to the conclusion that the zero fixed point of the damped pendulum is asymptotically stable.
This example shows that the use of a more accurate approximate model does not make a radical difference in the asymptotic stability analysis for the pendulum with a small enough angle.

Example 3.3
Consider a perturbed linear time-invariant system,
\[ \dot x(t) = \big[ A + \Delta_A(t) \big]\,x(t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \]
where the nominal matrix $A$ is stable (i.e., all its eigenvalues have negative real parts). If the perturbation of the system matrix, $\Delta_A(t)$, is continuous and satisfies
\[ \|\Delta_A(t)\| < c \qquad \text{for all } t_0 \le t < \infty \]
for a small enough constant $c > 0$, then the perturbed system remains asymptotically stable about its zero fixed point.
Clearly, the limiting condition stated in Theorem 3.1 cannot be verified here. Thus, one resorts to a direct application of the Gronwall inequality (3.1.5). The stability of the matrix $A$ implies that there are positive constants $\gamma$ and $\sigma$ such that $\|\Phi(t,t_0)\| \le \gamma\,e^{-\sigma(t-t_0)}$ for all $t \ge t_0$, so that the solution
\[ x(t) = \Phi(t,t_0)\,x_0 + \int_{t_0}^{t} \Phi(t,\tau)\,\Delta_A(\tau)\,x(\tau)\,d\tau \]
satisfies
\[ \|x(t)\| \le \gamma\,e^{-\sigma(t-t_0)}\,\|x_0\| + \int_{t_0}^{t} \gamma\,\|\Delta_A(\tau)\|\,\|x(\tau)\|\,e^{-\sigma(t-\tau)}\,d\tau \le \gamma\,\|x_0\|\,e^{-\sigma(t-t_0)}\,\exp\Big( \gamma \int_{t_0}^{t} \|\Delta_A(\tau)\|\,d\tau \Big) \le \gamma\,\|x_0\|\,e^{(\gamma c-\sigma)(t-t_0)} , \]
where the Gronwall inequality (3.1.5) has been applied to $e^{\sigma(t-t_0)}\|x(t)\|$. It then follows that
\[ \|x(t)\| \le \gamma\,\|x_0\|\,e^{-(\sigma-\gamma c)\,(t-t_0)} . \]
If $c$ is small enough that $\gamma c - \sigma < 0$, then $\|x(t)\| \to 0$ as $t \to \infty$.



Note that the system $\dot x(t) = [A + \Delta_A(t)]\,x(t)$ in this example can be viewed as
\[ \dot x(t) = A\,x(t) + g(x(t),t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \]
where $\|g(x,t)\| \le \epsilon\,\|x\|$ uniformly for all $x \in \mathbb{R}^n$ and all $t \ge t_0$, with a constant $\epsilon > 0$ satisfying $\epsilon < c$ for a small enough $c > 0$. This means that a uniformly small (even nonlinear and time-varying) perturbation does not alter the stability of the linear part of the system.
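A small simulation illustrates the conclusion; the nominal matrix and the bounded perturbation below are illustrative assumptions, not values taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stable nominal matrix (eigenvalues -1 and -2) plus a small bounded perturbation.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

def rhs(t, x, c=0.05):
    Delta = c * np.array([[0.0, np.sin(t)], [np.cos(2.0 * t), 0.0]])
    return (A + Delta) @ x

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 1.0], rtol=1e-9, atol=1e-12)
print(np.linalg.norm(sol.y[:, -1]))     # essentially zero: the origin stays stable
```

With $\|\Delta_A(t)\| \le c$ small relative to the decay rate of $A$, the state still converges to zero, as the Gronwall argument predicts.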
Next, return to system (3.1.2). If the Jacobian J(t; t0 ) = J therein is
a stable constant matrix, then the following simple criterion (as a special
case of Example 2.9) is convenient to use.

THEOREM 3.3
Suppose that, in system (3.1.2), the matrix $J(t;t_0) = J$ is a stable constant matrix and $g(0,t) = 0$. Let $P$ be a positive definite and symmetric matrix solution of the Lyapunov equation
\[ P\,J + J^\top P + Q = 0 , \]
where $Q$ is a positive definite and symmetric constant matrix. If
\[ \|g(x,t)\| \le a\,\|x\| \]
for a constant $a < \dfrac{1}{2\,\lambda_{\max}(P)}$ uniformly on $[t_0,\infty)$, where $\lambda_{\max}(P)$ is the largest eigenvalue of $P$, then system (3.1.2) is globally, uniformly, and asymptotically stable about its zero fixed point.

A proof of this theorem was actually given in Example 2.9, and is a


special case of the following slightly generalized result.

THEOREM 3.4
In system (3.1.2), suppose that both the matrix $J(t;t_0)$ and the function $g(x,t)$ are continuous, and that there is a continuous nonnegative function $\alpha(t)$ satisfying $\int_{t_0}^{\infty}\alpha(\tau)\,d\tau \le a < \infty$ and $\|g(x,t)\| \le \alpha(t)\,\|x(t)\|$. Under these conditions, if $J(t;t_0)$ is uniformly stable (with all eigenvalues having negative real parts for all $t \ge t_0$), then system (3.1.2) is locally, uniformly, and asymptotically stable about its zero fixed point.

PROOF Since the matrix $J(t;t_0)$ is uniformly stable, according to Exercise 2, there is a constant $c$ such that $\|\Phi(t,\tau)\| \le c$ for all $t_0 \le \tau \le t < \infty$. For any $t \ge t^*$,
\[ x(t) = \Phi(t,t^*)\,x(t^*) + \int_{t^*}^{t} \Phi(t,\tau)\,g(x(\tau),\tau)\,d\tau , \]
where $t_0 \le t^* \le t < \infty$. It follows from the Gronwall inequality (3.1.5) that
\[ \|x(t)\| \le c\,\|x(t^*)\| + \int_{t^*}^{t} c\,\alpha(\tau)\,\|x(\tau)\|\,d\tau \le c\,\|x(t^*)\|\,\exp\Big( c \int_{t^*}^{t} \alpha(\tau)\,d\tau \Big) \le c\,\|x(t^*)\|\,\exp\Big( c \int_{t^*}^{\infty} \alpha(\tau)\,d\tau \Big) \le c\,e^{ac}\,\|x(t^*)\| . \]
For any $\epsilon > 0$, starting from an initial state satisfying $\|x(t^*)\| < \epsilon\,e^{-ac}/(2c)$, one has $\|x(t)\| < \epsilon/2$. Then, an argument similar to that given in the proof of Theorem 3.1 shows that $\|x(t)\| < \epsilon/2$ for all $t \ge t^*$. Therefore, $\|x(t)\| < \epsilon$ for all $t_0 \le t < \infty$. Hence, the system is stable in the sense of Lyapunov about its zero fixed point.
Furthermore, since the matrix $J(t;t_0)$ is uniformly stable, $\|\Phi(t,t_0)\| \to 0$ as $t \to \infty$. Thus, for any $\delta > 0$ and any bounded $\|x_0\|$, there is a $t^* > t_0$ such that $\|\Phi(t,t_0)\,x_0\| < \delta$ for $t \ge t^*$, so that
\[ \|x(t)\| \le \|\Phi(t,t_0)\,x_0\| + \int_{t_0}^{t} \|\Phi(t,\tau)\|\,\|g(x(\tau),\tau)\|\,d\tau \le \delta + \int_{t_0}^{t} c\,\alpha(\tau)\,\|x(\tau)\|\,d\tau \le \delta\,\exp\Big( c \int_{t_0}^{\infty} \alpha(\tau)\,d\tau \Big) \le \delta\,e^{ac} \qquad (t^* \le t < \infty) . \]
Since $\delta$ is arbitrary and $e^{ac}$ does not depend on $\delta$ and $t^*$, it follows that $\|x(t)\| \to 0$ as $t \to \infty$, uniformly.

3.2 Linear Stability of Nonlinear Systems with Periodic Linearity

First, consider a linear time-varying system with a periodic coefficient matrix, in the form
\[ \dot x = A(t)\,x , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{3.2.6} \]
where $A(t+t_p) = A(t)$, with the fundamental period $t_p > 0$. It should be noted that a solution of this system need not be periodic, as can be seen from the following simple example.

Example 3.4
The system
\[ \dot x(t) = \big[ 1 + \sin(t) \big]\,x(t) \]
has a $2\pi$-periodic coefficient, but its nonzero solution
\[ x(t) = x_0\,e^{\,t - \cos(t)} \]
is aperiodic.

To further study system (3.2.6), let $\Phi(t,t_0)$ be the fundamental matrix associated with $A(t)$, which is defined by $\Phi(t,\tau) = \exp\big( \int_\tau^t A(\zeta)\,d\zeta \big)$. Recall that all columns of $\Phi(t,t_0)$ are linearly independent solution vectors of (3.2.6), and that it satisfies $\dot\Phi(t,\tau) = A(t)\,\Phi(t,\tau)$ for $t_0 \le \tau \le t < \infty$.
For the $t_p$-periodic matrix $A(t+t_p) = A(t)$, it is clear that $\dot\Phi(t+t_p,t_0) = A(t)\,\Phi(t+t_p,t_0)$, so that $\Phi(t+t_p,t_0)$ is another fundamental matrix associated with $A(t)$. Thus, the columns of $\Phi(t+t_p,t_0)$, being (linearly independent) solutions of (3.2.6), are linear combinations of those in $\Phi(t,t_0)$:
\[ \phi_{ij}(t+t_p) = \sum_{\ell=1}^{n} \phi_{i\ell}(t)\,c_{\ell j} , \]
where $\Phi = [\phi_{ij}]$ and the $c_{ij}$ are constants; that is,
\[ \Phi(t+t_p,t_0) = \Phi(t,t_0)\,C \tag{3.2.7} \]
for the matrix $C = [c_{ij}]$. Since $\det[\Phi(t+t_p,t_0)] = \det[\Phi(t,t_0)]\,\det[C]$ and $\det[\Phi(t,t_0)] \ne 0$ for all $t \ge t_0$, one has $\det[C] \ne 0$.

It should be noted that all the eigenvalues of $C$ are independent of the choice of the fundamental matrix $\Phi(t,t_0)$. To see this, let $\Phi_1(t,t_0)$ and $\Phi_2(t,t_0)$ be two fundamental matrices associated with $A(t)$. Then, since they consist of solutions of the same equation (3.2.6), there is a constant matrix, $D$, such that
\[ \Phi_1(t,t_0) = \Phi_2(t,t_0)\,D , \]
and this $D$ is nonsingular since both $\Phi_1$ and $\Phi_2$ are. Thus, it follows that
\[ \Phi_1(t+t_p;t_0) = \Phi_2(t+t_p;t_0)\,D = \Phi_2(t,t_0)\,C\,D = \Phi_1(t,t_0)\,D^{-1} C\,D := \Phi_1(t,t_0)\,H . \]
This implies that $H$ plays the same role as $C$ in equation (3.2.7). It also implies that $H$ and $C$ are similar nonsingular matrices, so they have the same eigenvalues. Therefore, a different choice of the fundamental matrix in (3.2.7) only changes the form, but not the eigenvalues, of the constant matrix $C$ therein.

DEFINITION 3.5 The eigenvalues of the constant matrix C in (3.2.7) are


called the Floquet multipliers of the system (3.2.6).

The importance of the Floquet multipliers can be appreciated from the observation that equation (3.2.7) implies
\[ \Phi(t+n\,t_p,\,t_0) = \Phi(t,t_0)\,C^n , \qquad n = 1,2,\ldots , \tag{3.2.8} \]
which, in turn, implies that the long-term dynamical behavior of a solution of system (3.2.6) is determined by the eigenvalues of $C$, namely, the Floquet multipliers.

THEOREM 3.5 (Floquet Theorem)
The system (3.2.6) has at least one nonzero solution, $\tilde x(t)$, and this solution satisfies
\[ \tilde x(t+t_p) = \rho\,\tilde x(t) , \tag{3.2.9} \]
where $\rho$ is a Floquet multiplier.
Note that this solution, $\tilde x(t)$, may not be periodic (unless $\rho = 1$).

PROOF Let $\rho$ be an eigenvalue of $C$: $\det[C - \rho I] = 0$, and let $v$ be its associated eigenvector:
\[ [\,C - \rho I\,]\,v = 0 . \]
Let $\tilde x(t) = \Phi(t,t_0)\,v$. Then, it is straightforward to verify that $\tilde x(t)$ is a nonzero solution of system (3.2.6). Moreover,
\[ \tilde x(t+t_p) = \Phi(t+t_p,t_0)\,v = \Phi(t,t_0)\,C\,v = \rho\,\Phi(t,t_0)\,v = \rho\,\tilde x(t) , \]
as claimed.

The following example shows how to calculate the Floquet multipliers


for a simple system.

Example 3.6
Consider the system
\[ \begin{bmatrix} \dot x \\ \dot y \end{bmatrix} = \begin{bmatrix} -1 & 1 \\[4pt] 0 & \dfrac{-\sin(t)+\cos(t)}{2+\sin(t)+\cos(t)} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} , \]
with initial time $t_0 = 0$. This linear system can be easily solved, yielding
\[ x(t) = a\,e^{-t} + b\,\big( 2 + \sin(t) \big) , \qquad y(t) = b\,\big( 2 + \sin(t) + \cos(t) \big) , \tag{a} \]
with constants $a$ and $b$ determined by initial conditions. Hence, a corresponding fundamental matrix is
\[ \Phi(t) = \begin{bmatrix} 2+\sin(t) & e^{-t} \\ 2+\sin(t)+\cos(t) & 0 \end{bmatrix} . \]
Clearly, $A(t)$ is $2\pi$-periodic, and the matrix $C$ in (3.2.7) must satisfy $\Phi(t+2\pi) = \Phi(t)\,C$ for all $t \ge 0$. Therefore, $\Phi(2\pi) = \Phi(0)\,C$, so that
\[ C = \Phi^{-1}(0)\,\Phi(2\pi) = \begin{bmatrix} 1 & 0 \\ 0 & e^{-2\pi} \end{bmatrix} . \]
Thus, the Floquet multipliers of the system are given by the eigenvalues of matrix $C$: $\rho_1 = 1$ and $\rho_2 = e^{-2\pi}$.
In this example, $\rho_1 = 1$ implies a $2\pi$-periodic solution of the system (see (3.2.9)), which corresponds to $a = 0$ in the solution formula (a).
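When no closed-form fundamental matrix is available, the Floquet multipliers can be computed numerically by integrating $\dot\Phi = A(t)\,\Phi$, $\Phi(0) = I$, over one period and taking the eigenvalues of the resulting monodromy matrix (which is similar to $C$, hence has the same eigenvalues). A sketch, using the coefficient matrix of this example as reconstructed here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic coefficient matrix of the example (as reconstructed in this text).
def A(t):
    return np.array([[-1.0, 1.0],
                     [0.0, (np.cos(t) - np.sin(t)) / (2.0 + np.sin(t) + np.cos(t))]])

def rhs(t, phi):
    # Matrix ODE dPhi/dt = A(t) Phi, flattened to a vector for solve_ivp.
    return (A(t) @ phi.reshape(2, 2)).ravel()

tp = 2.0 * np.pi
sol = solve_ivp(rhs, (0.0, tp), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)           # monodromy matrix Phi(tp), Phi(0) = I
mult = np.sort(np.abs(np.linalg.eigvals(M)))
print(mult)                               # approximately [e^{-2 pi}, 1]
```

The computed multipliers agree with the analytical values $\rho = e^{-2\pi}$ and $\rho = 1$.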

Now, in equation (3.2.7), let
\[ R = \frac{1}{t_p}\,\ln(C) , \]
where the principal value of the logarithm is taken, so that $C = e^{t_p R}$. This constant matrix, $R$, is very useful. For example, it can be used to transform the periodic system (3.2.6) into one having a constant coefficient matrix. Indeed, letting
\[ x = \Phi(t,t_0)\,e^{-(t-t_0)R}\,y \]
changes the system (3.2.6) into
\[ \dot y = R\,y . \]

DEFINITION 3.7 In Definition 3.5, if for a Floquet multiplier $\rho$ there is a constant $\lambda$ such that
\[ \rho = e^{\lambda\,t_p} , \]
then this $\lambda$ is called a characteristic exponent, or Floquet number, of system (3.2.6).
In this definition, observe that
\[ \lambda = \frac{1}{t_p}\,\ln(\rho) , \]
where the principal value of the logarithm is taken. Since $\rho$ is an eigenvalue of the constant matrix $C$, $\lambda$ is an eigenvalue of the constant matrix $R$. Moreover,
\[ \Re\{\lambda\} < 0 \iff |\rho| < 1 , \]
which is useful for stability determination.


Now, consider a nonlinear nonautonomous system of the form
\[ \dot x = f(x,t) = J(t)\,x + g(x,t) , \qquad x(t_0) = x_0 \in \mathbb{R}^n , \tag{3.2.10} \]
where $g(0,t) = 0$ and $J(t)$ is a $t_p$-periodic matrix, with $t_p > 0$:
\[ J(t+t_p) = J(t) \qquad \text{for all } t \in [t_0,\infty) . \]

THEOREM 3.6 (Floquet Theorem)
In system (3.2.10), assume that $g(x,t)$ and $\partial g(x,t)/\partial x$ are both continuous in a bounded region $D$ containing the origin. Assume also that
\[ \lim_{\|x\|\to 0} \frac{\|g(x,t)\|}{\|x\|} = 0 \]
uniformly over $[t_0,\infty)$. If the system's Floquet multipliers satisfy
\[ |\rho_i| < 1 , \qquad i = 1,\ldots,n , \tag{3.2.11} \]
then system (3.2.10) is globally, uniformly, and asymptotically stable about its zero fixed point. In particular, this holds for the linear system (3.2.10) with $g = 0$ therein.

PROOF First, for the linear case with $g = 0$, it follows from (3.2.9) and condition (3.2.11) that
\[ \|x(t_0 + n\,t_p)\| \le |\rho_{\max}|^{\,n}\,\|x(t_0)\| \to 0 \qquad (n \to \infty) , \]
where $|\rho_{\max}| = \max\{ |\rho_i| : i = 1,\ldots,n \}$. Then, for the general case, property (3.2.8) and condition (3.2.11) together imply that there exist constants $c > 0$ and $\sigma > 0$ such that
\[ \|\Phi(t_0 + n\,t_p,\,t_0)\| \le c\,e^{-\sigma n} , \]
so that conditions similar to those of Theorem 3.1 are satisfied. Hence, the proof of Theorem 3.1 can be suitably modified for the present theorem.

One should compare this theorem with Theorem 3.1, which applies to systems with an aperiodic coefficient matrix and so can achieve exponential stability. For systems with a periodic coefficient matrix, however, the best stability one may expect is asymptotic rather than exponential. This is due to the periodicity of the system, which, roughly speaking, yields a stable solution converging to zero at the same rate as $c\,e^{-\sigma t}\sin(\omega t + \varphi)$.

3.3 Comparison Principle and Vector Lyapunov Functions

For large-scale and interconnected nonlinear (control) systems, or systems described by differential inequalities rather than differential equations, the above stability criteria may not be directly applicable. In many such cases, the comparison principle and vector Lyapunov function methods turn out to be convenient.
To introduce the comparison principle, consider the nonautonomous system (3.1.1), namely,
\[ \dot x = f(x,t) , \qquad x(t_0) = x_0 , \tag{3.3.12} \]
where $f$ is assumed to be continuous on a neighborhood $D$ of the origin. In this case, since $f$ is only continuous (but not necessarily satisfying a Lipschitz condition), this differential equation may have more than one solution. In particular, the following differential inequality has infinitely many solutions in general:
\[ \dot x \le f(x,t) , \qquad x(t_0) \le x_0 , \tag{3.3.13} \]
where $f$ is as defined above. Any continuously differentiable solution of (3.3.13) is called a lower solution of system (3.3.12). Similarly, any continuously differentiable solution of
\[ \dot x \ge f(x,t) , \qquad x(t_0) \ge x_0 , \tag{3.3.14} \]

where f is defined above, is called an upper solution of system (3.3.12).


Obviously, system (3.3.12) has infinitely many lower and upper solutions.
Among them, under some further conditions, there exist a supremum lower
solution, called the minimum solution, and an infimum upper solution,
called the maximum solution; they are characterized by the following the-
orem, where for notational convenience only the one-dimensional case is
discussed.

THEOREM 3.7
Let $x_m(t)$ and $x_M(t)$ be a lower and an upper solution, respectively, of the scalar system (3.3.12). If there is a constant $L > 0$ such that
\[ f(x,t) - f(y,t) \le L\,(x-y) \qquad \text{for all } x \ge y \text{ and all } t \ge t_0 , \]
then
\[ x_m(t) \le x_M(t) \qquad \text{for all } t \ge t_0 . \]
Moreover, if there is a constant $\ell \ge 0$ such that
\[ |f(x,t) - f(y,t)| \le \ell\,(x-y) \]
for all $(x,y)$ satisfying $x_m(t) \le y(t) \le x(t) \le x_M(t)$ for all $t \ge t_0$, then the system has a minimum solution $x_{\min}(t)$ and a maximum solution $x_{\max}(t)$ satisfying
\[ x_m(t) \le x_{\min}(t) \le x(t) \le x_{\max}(t) \le x_M(t) \]
for every solution $x(t)$ of the system, for all $t \in [t_0,\infty)$.

PROOF [see V. Lakshmikantham et al.]

Now, return to the general system setting (3.3.12). Suppose that this system satisfies appropriate conditions and so has maximum and minimum solutions, $x_{\max}(t)$ and $x_{\min}(t)$, respectively, in the sense that
\[ x_{\min}(t) \le x(t) \le x_{\max}(t) \qquad \text{componentwise} \]
for all $t \in [t_0,\infty)$, where $x(t)$ is any solution of the system, satisfying
\[ x_{\min}(t_0) = x(t_0) = x_{\max}(t_0) = x_0 . \]



THEOREM 3.8 (The Comparison Principle)
Let z(t) be a solution of the following differential inequality:

ż(t) ≤ f (z, t)   with   z(t₀) ≤ x₀   componentwise.

If xmax(t) is the maximum solution of system (3.3.12), then

z(t) ≤ xmax(t)   componentwise

for all t ∈ [t₀, ∞).

PROOF   [see V. Lakshmikantham et al.]

The next theorem is established based on this comparison principle.

A vector-valued function, g(x, t) = [ g₁(x, t) ⋯ gn(x, t) ]ᵀ, is said to be quasi-monotonic if

xᵢ = x̃ᵢ   and   xⱼ ≤ x̃ⱼ   (j ≠ i)

implies

gᵢ(x, t) ≤ gᵢ(x̃, t) ,

for all i = 1, ⋯, n.

THEOREM 3.9 (Vector Lyapunov Function Theorem)
Let v(x, t) be a vector Lyapunov function associated with the nonautonomous system (3.3.12), with v(x, t) = [ V₁(x, t) ⋯ Vn(x, t) ]ᵀ, in which each Vᵢ is a continuous Lyapunov function for the system, i = 1, ⋯, n, satisfying ‖v(x, t)‖ > 0 for x ≠ 0. Assume that

v̇(x, t) ≤ g( v(x, t), t )   componentwise

for a continuous and quasi-monotonic function g defined on D. Then:

(i) if the system
ż(t) = g(z, t)
is stable in the sense of Lyapunov (or asymptotically stable) about its zero equilibrium z = 0, then so is the nonautonomous system (3.3.12);

(ii) if, moreover, ‖v(x, t)‖ is monotonically decreasing with respect to t and the above stability (or asymptotic stability) is uniform, then so is the nonautonomous system (3.3.12);

(iii) if, furthermore, ‖v(x, t)‖ ≥ c ‖x‖^θ for two positive constants c and θ, and the above stability (or asymptotic stability) is exponential, then so is the nonautonomous system (3.3.12).

PROOF [see V. Lakshmikantham et al.]

A simple and frequently used comparison function is

g(z, t) = A z + h(z, t) ,   with   lim_{‖z‖→0} ‖h(z, t)‖ / ‖z‖ = 0 ,

where A is a stable M-matrix (Metzler matrix). Here, A = [aᵢⱼ] is an M-matrix if

aᵢᵢ < 0   and   aᵢⱼ ≥ 0   (i ≠ j) ,   i, j = 1, ⋯, n .

For instance, A = −I is an M-matrix and is also a stable M-matrix.
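As a quick numerical illustration (a sketch, not from the text), the scalar comparison principle can be checked by direct integration: along solutions of ẋ = −x − x³, the function V = x² satisfies V̇ = −2x² − 2x⁴ ≤ −2V, so V(t) must stay below the maximum solution z(t) = V(t₀)e^{−2t} of the comparison system ż = −2z.

```python
import math

def simulate(x0, t_end=5.0, dt=1e-4):
    """Euler-integrate x' = -x - x**3 and check V = x**2 against
    the maximum solution z(t) = V(0)*exp(-2t) of the comparison system z' = -2z."""
    x, t = x0, 0.0
    ok = True
    while t < t_end:
        x += dt * (-x - x**3)
        t += dt
        V = x * x
        z = x0 * x0 * math.exp(-2.0 * t)
        # comparison principle: V(t) <= z(t) (tiny slack for floating point)
        ok = ok and (V <= z * (1.0 + 1e-6) + 1e-12)
    return ok

print(simulate(0.7))   # True: V(t) never exceeds the comparison bound
```

The forward-Euler step even preserves the bound exactly here, since each step shrinks |x| by at least the linear factor (1 − dt).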

3.4 Orbital Stability

Orbital stability differs from the Lyapunov stabilities in that it concerns the stability of a system output (or state) trajectory under small external perturbations.

Let φt(x₀) be a p-periodic solution, p > 0, of the autonomous system

ẋ(t) = f (x) ,   x(t₀) = x₀ ∈ ℝⁿ ,   (3.4.15)

and let Γ represent the closed orbit of φt(x₀) in the phase space, namely,

Γ = { y | y = φt(x₀) ,  0 ≤ t < p } .

DEFINITION 3.8   The p-periodic solution trajectory, φt(x₀), of the autonomous system (3.4.15) is said to be orbitally stable if, for any ε > 0, there exists a constant δ = δ(ε) > 0 such that for any x̃₀ satisfying

d(x̃₀, Γ) := inf_{y∈Γ} ‖x̃₀ − y‖ < δ ,

the solution of the system, φt(x̃₀), satisfies

d( φt(x̃₀), Γ ) < ε ,   for all t ≥ t₀ .

Orbital stability is visualized by Fig. 3.1.

Example 3.9
As a simple example, a stable periodic solution, and in particular a stable fixed point of a system, is orbitally stable. This is because all nearby trajectories approach it; after a small perturbation it becomes a nearby orbit and so will move back to its original position (or stay nearby). On the contrary, unstable and saddle-node types of periodic orbits are orbitally unstable.

FIGURE 3.1
Geometric meaning of the orbital stability.

3.5 Structural Stability


Consider again the autonomous system (3.4.15).

DEFINITION 3.10   Two system orbits are said to be topologically equivalent if there is a diffeomorphism that maps one orbit onto the other. Two systems are said to be topologically orbitally equivalent if their phase portraits are topologically equivalent.

If the dynamics of a system in the phase space change radically, for instance by the appearance of a new fixed point or a new periodic orbit, under small external perturbations, then the system is structurally unstable. To be more precise, consider the following set of functions:

Sε = { g(x) : ‖g(x)‖ < ε and ‖∂g(x)/∂x‖ < ε for all x ∈ ℝⁿ } .

DEFINITION 3.11   If there exists an ε > 0 such that, for any g ∈ Sε, the orbits of the two systems

ẋ = f (x)   (3.5.16)
ẋ = f (x) + g(x)   (3.5.17)

are topologically orbitally equivalent, then the autonomous system (3.5.16) is said to be structurally stable.

Example 3.12
The system ẋ = x² is not structurally stable in a neighborhood of the origin. This is because when the system is slightly perturbed, to become say ẋ = x² − ε with ε > 0, the resulting system has two equilibria,

x₁ = √ε   and   x₂ = −√ε ,

which is more fixed points than the original system possesses, namely the single one x* = 0.


In contrast, ẋ = x is structurally stable in a neighborhood of the origin, although the origin is an unstable fixed point of the system. To see this, consider its perturbed system

ẋ = x + ε g(x) ,

where g(x) is continuously differentiable and there is a constant c > 0 such that

| g(x) | < c   and   | g′(x) | < c ,   for all x ∈ ℝ .

To find the fixed points of the perturbed system, let

fε(x) := x + ε g(x) = 0

and choose ε such that ε < 1/(2c). Then,

ε | g′(x) | < ε c < 1/2 ,   for all x ∈ ℝ ,

so that

f′ε(x) = 1 + ε g′(x) > 1/2 ,   for all x ∈ ℝ .

This implies that the curve of the function fε(x) intersects the x-axis at exactly one single point. Hence, the equation

0 = fε(x) = x + ε g(x)

has exactly one solution, x*ε. Since g(x) is (uniformly) bounded, |g(x)| < c for all x ∈ ℝ, when ε → 0 one has ε g(x) → 0, so that this solution x*ε → 0 as well. This means that the perturbed fixed point approaches the unperturbed one.
Furthermore, it remains to show that the perturbed fixed point is always unstable, just like the unperturbed one. To do so, observe that the linearized equation of the perturbed system is

ẋ = fε(x) ≈ fε(x*ε) + f′ε(x)|_{x = x*ε} ( x − x*ε )
  = 0 + ( 1 + ε g′(x*ε) ) ( x − x*ε ) ,

where x*ε is the perturbed fixed point. Since

1 + ε g′(x*ε) ≥ 1 − ε | g′(x*ε) | > 1 − 1/2 = 1/2 > 0 ,

the eigenvalue of the Jacobian f′ε(x*ε) is positive, implying that the perturbed system is unstable about its fixed point x*ε. Therefore, the unperturbed and perturbed systems are topologically orbitally equivalent; or, in other words, the unperturbed system is structurally stable about its zero fixed point.
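The two cases in this example can be checked numerically. The sketch below is illustrative only: g(x) = sin(x) is an assumed perturbation (with c = 1), not one from the text.

```python
import numpy as np

eps = 0.01

# Unperturbed system x' = x**2: a single degenerate equilibrium at x = 0.
# Perturbed system x' = x**2 - eps: equilibria are the real roots of x**2 - eps.
roots = np.sort(np.roots([1.0, 0.0, -eps]).real)
print(len(roots))                    # 2 equilibria, near -sqrt(eps) and +sqrt(eps)

# In contrast, for x' = x + eps*g(x) with |g'| < c and eps < 1/(2c), the map
# f(x) = x + eps*g(x) has slope f'(x) > 1/2, hence exactly one zero.
g = np.sin                           # assumed illustrative perturbation, c = 1
f = lambda x: x + eps * g(x)
x = np.linspace(-5.0, 5.0, 100000)   # even count, so x = 0 is not a grid point
s = np.sign(f(x))
print(int(np.sum(s[:-1] * s[1:] < 0)))   # 1 sign change: a single equilibrium
```

The perturbation changes the number of equilibria of ẋ = x² but not of ẋ = x, matching the structural-stability conclusion.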

THEOREM 3.10 (Orbital Stability Theorem)
Let x(t) be a p-periodic solution of an autonomous system. Suppose that the system has Floquet multipliers ρᵢ, with ρ₁ = 1 and |ρᵢ| < 1 for i = 2, ⋯, n. Then this periodic solution x(t) is orbitally stable.

PROOF   [see F. Verhulst: p. 97]

THEOREM 3.11 (Peixoto Structural Stability Theorem)
Consider a two-dimensional autonomous system. Suppose that f is twice differentiable on a compact and connected subset D bounded by a simple closed curve, Γ, with an outward normal vector, ~n. Assume that f · ~n ≠ 0 on Γ. Then the system is structurally stable on D if and only if

(i) all equilibria are hyperbolic;
(ii) all periodic orbits are hyperbolic;
(iii) if x and y are hyperbolic saddles (possibly, x = y), then W s(x) ∩ W u(y) is empty.

PROOF   [see Glendinning: p. 92]

3.6 Total Stability: Stability Under Persistent Disturbances


Consider a nonautonomous system and its perturbed version:

ẋ = f (x, t) ,   x(t₀) = x₀ ∈ ℝⁿ ,   (3.6.18)
ẋ = f (x, t) + h(x, t) ,   (3.6.19)

where f is continuously differentiable, with f (0, t) = 0, and h is a persistent perturbation in the following sense:

DEFINITION 3.13   A function h is a persistent perturbation if, for any ε > 0, there are two positive constants, δ₁ and δ₂, such that

‖ h(x̃, t) ‖ < δ₁   for all t ∈ [t₀, ∞)

and

‖ x̃(t₀) ‖ < δ₂

together imply ‖ x̃(t) ‖ < ε.

DEFINITION 3.14   The zero fixed point of the unperturbed system (3.6.18) is said to be totally stable if the persistently perturbed system (3.6.19) remains stable in the sense of Lyapunov.

As the next theorem states, every uniformly and asymptotically stable system is totally stable under persistent perturbations; namely, a stable orbit starting from a neighborhood of another orbit will stay nearby.

THEOREM 3.12 (Malkin Theorem)
If the unperturbed system (3.6.18) is uniformly and asymptotically stable about its zero fixed point, then it is totally stable; namely, the persistently perturbed system (3.6.19) remains stable in the sense of Lyapunov.

PROOF [see Hoppensteadt: p.104]

Next, consider an autonomous system with persistent perturbations:

ẋ = f (x) + h(x, t) ,   x ∈ ℝⁿ .   (3.6.20)

THEOREM 3.13 (Perturbed Orbital Stability Theorem)
If φt(x₀) is an orbitally stable solution of the unperturbed autonomous system (3.6.20) (with h = 0 therein), then it is totally stable; that is, this solution remains orbitally stable under persistent perturbations.

PROOF [see Hoppensteadt: p.107]



Exercises
3.1 Compare and comment on the stabilities of the following two systems:

ẋ = a sin(x)   and   ẋ = a ( x − x³/3! + ⋯ + (−1)ⁿ x^{2n+1}/(2n+1)! ) ,

with constant a > 0.
3.2 Consider a linear time-varying system, ẋ = A(t) x, and let Φ(t, t₀) be its fundamental matrix. Show that this system is uniformly stable in the sense of Lyapunov about its zero fixed point if and only if ‖Φ(t, τ)‖ ≤ c < ∞ for a constant c and for all t₀ ≤ τ ≤ t < ∞.
3.3 Consider a perturbed linear time-invariant system, ẋ = [ A + ΔA(t) ] x, with ΔA(t) being continuous and satisfying ∫_{t₀}^{∞} ‖ΔA(τ)‖ dτ ≤ c < ∞. Show that this perturbed system is stable in the sense of Lyapunov about its zero fixed point.
3.4 Consider the following nonlinear nonautonomous system:

ẋ = [ A + ΔA(t) ] x + g(x, t) ,

which satisfies all conditions stated in Theorem 3.1. Assume, moreover, that ΔA(t) is continuous with ΔA(t) → 0 as t → ∞. Show that this system is asymptotically stable about its zero fixed point. [Hint: Mimic the proof of Theorem 3.1 and Example 3.3.]
3.5 Determine the stability of the fixed point of the following system:

ẋ = −2 x + f (t) y
ẏ = g(t) x − 2 y ,

where |f (t)| ≤ 1/2 and |g(t)| ≤ 1/2 for all t ≥ 0.
3.6 For the following system:

ẋ = sin(2t) x + ( cos(2t) − 1 ) y
ẏ = ( cos(2t) + 1 ) x + sin(2t) y ,

find its Floquet multipliers and Floquet numbers.
3.7 Let A(t) be a periodic matrix of period T > 0, and let {ρᵢ} and {λᵢ} be its Floquet multipliers and numbers, i = 1, ⋯, n, respectively. Verify that

∏_{i=1}^{n} ρᵢ = exp( ∫_t^{t+T} trace[ A(s) ] ds )

and

∑_{i=1}^{n} λᵢ = (1/T) ∫_t^{t+T} trace[ A(s) ] ds .
3.8 Consider the following linear time-varying system with a periodic coefficient:

ẋ = ( α + β cos(t) ) x .

Verify that its Floquet multiplier is ρ = e^{2πα}, and discuss the stability of the system about its zero fixed point with respect to the value of this multiplier.

3.9 Consider the predator-prey model

ẋ = −α x + β x² − γ x y
ẏ = −δ y + μ x y ,

where all coefficients are positive constants. Discuss the stability of this system about its zero fixed point.
3.10 Consider the Mathieu equation

ẍ + ( a + b cos(t) ) x = 0 .

Discuss the stability of its orbit about the zero fixed point in terms of the constants a and b.
3.11 A pendulum with mass m and weightless string of length a hangs from a support that is constrained to move with vertical and horizontal displacements ζ(t) and η(t), respectively. It can be verified that the motion equation of this pendulum is

a θ̈ + ( g − ζ̈ ) sin(θ) + η̈ cos(θ) = 0 .

Assume that ζ(t) = ε sin(ωt) and η(t) = ε sin(2ωt), where ω = √(g/a). Show that the linearized equation, for small amplitudes, has a solution

θ(t) = 8 cos(ωt) .

Discuss the stability of this solution.
3.12 Suppose that the Liénard equation

ẍ + f (x) ẋ + g(x) = 0

has a tp-periodic solution, φt(x₀). Show that if

∫₀^{tp} f ( φτ(x₀) ) dτ > 0 ,

then this periodic solution is orbitally stable.
4
Stabilities of Nonlinear Systems (III)

4.1 Lure Systems Formulated in the Frequency Domain


To motivate, consider a one-dimensional nonlinear autonomous system,

ẋ = f (x) ,   −∞ < t < ∞ ,   x(−∞) = 0 ,

where, if only causal signals are considered, the initial condition can be replaced by x(t₀) = 0, with t₀ ≤ t < ∞. Using the unit-step function

g(t) = 1 for t ≥ 0 ,   g(t) = 0 for t < 0 ,

the solution of this system is given by

x(t) = ∫_{−∞}^{t} f (x(τ)) dτ
    = ∫_{−∞}^{∞} g(t − τ) f (x(τ)) dτ
    = [ g ∗ f (x) ](t)   (convolution).

The system can be implemented in the time domain as shown in Fig. 4.1. In this configuration, if an exponential signal of the form e^{st} is input to the plant g(·), where s is a complex variable, then

[ g ∗ e^{s·} ](t) = ∫_{−∞}^{∞} g(τ) e^{s(t−τ)} dτ
       = ( ∫_{−∞}^{∞} g(τ) e^{−sτ} dτ ) e^{st}
       := G(s) e^{st} ,


FIGURE 4.1
Configuration of the one-dimensional system.

FIGURE 4.2
Input-output relations of the plant g(·): (a) in the time domain (parameter: s); (b) in the frequency domain (parameter: t).

in which G(s) is actually the bilateral Laplace transform of g(t), and serves
as the transfer function of the linear plant represented by g(). This is
illustrated by Fig. 4.2.
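This eigenfunction property can be verified numerically for a concrete causal plant. The sketch below is illustrative only: it assumes g(t) = e^{−t} for t ≥ 0 (so that G(s) = 1/(s + 1)) and approximates the convolution integral by a trapezoid rule.

```python
import numpy as np

# Assumed illustrative plant: g(t) = exp(-t) for t >= 0, hence G(s) = 1/(s+1).
s = 0.5 + 2.0j                      # any s with Re(s) > -1, so the integral converges
t = 1.3                             # evaluation time

# Trapezoid approximation of (g * e^{s.})(t) = int_0^inf g(tau) e^{s(t-tau)} dtau
tau = np.linspace(0.0, 40.0, 400001)
f = np.exp(-tau) * np.exp(s * (t - tau))
dtau = tau[1] - tau[0]
conv = np.sum((f[:-1] + f[1:]) * 0.5) * dtau

G = 1.0 / (s + 1.0)                 # bilateral Laplace transform of g
print(abs(conv - G * np.exp(s * t)) < 1e-5)   # True: the plant maps e^{st} to G(s)e^{st}
```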
Now, to generalize, consider a higher-dimensional nonlinear autonomous system,

ẋ = f (x) ,   −∞ < t < ∞ ,   x(−∞) = 0 ,

where, again, if only causal signals are considered, the initial condition can be replaced by x(t₀) = 0, with t₀ ≤ t < ∞. Rewrite the system in the following Lure form:

ẋ = A x + B h(y)
y = C x ,   (4.1.1)

with initial condition x(−∞) = 0 or y(−∞) = 0, where A, B, C are constant matrices, in which A is chosen by the user but B and C are determined from the given system (possibly, B = C = I), and h is a vector-valued nonlinear function generated by the reformulation. One specific example

FIGURE 4.3
Configuration of the Lure system.

is to use

B = C = I ,   y = x ,   h(y) = f (y) − A y ,

while choosing A to have some special property, such as stability, for convenience. Of course, there are other choices for the reformulation. The main purpose of this reformulation is to give the resulting system (4.1.1) a linear part, A x. In many applications, a given system may already be given in this Lure form.
Nevertheless, after the above reformulation, taking the Laplace transform, L{·}, with the notation x̂ = L{x}, the Lure system (4.1.1) yields

s x̂ = A x̂ + B L{ h(y) } ,

or

x̂ = [ sI − A ]⁻¹ B L{ h(y) } .

Consequently,

ŷ = C x̂ = C [ sI − A ]⁻¹ B L{ h(y) } := G(s) L{ h(y) } ,

where the system transfer matrix is

G(s) = C [ sI − A ]⁻¹ B .   (4.1.2)

An implementation of the Lure system (4.1.1) is shown in Fig. 4.3, where both the time-domain and frequency-domain notations are mixed. The Lure system shown in Fig. 4.3 is a closed-loop configuration, where the feedback loop is usually considered as a controller. Thus, this system is sometimes written in the following equivalent form:

ẋ = A x + B u
y = C x   (4.1.3)
u = h(y) .
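For concreteness, the transfer matrix (4.1.2) can be evaluated pointwise by direct matrix inversion. The data A, B, C below are an assumed toy example, not taken from the text.

```python
import numpy as np

# Assumed toy data for illustration only
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 0.0]])

def transfer_matrix(s):
    """Evaluate G(s) = C (sI - A)^{-1} B, cf. (4.1.2)."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B

# For this A, B, C one finds by hand G(s) = [1/(s+1), 1/((s+1)(s+2))]
s = 1.0j
G = transfer_matrix(s)
print(np.allclose(G, [1.0 / (s + 1.0), 1.0 / ((s + 1.0) * (s + 2.0))]))   # True
```

For large systems one would factor (sI − A) once per frequency rather than forming the inverse, but the direct formula is fine for illustration.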

4.2 Absolute Stability and Frequency Domain Criteria


This section introduces the concept of absolute stability about the Lure
system and derives some frequency-domain criteria for this type of stability.

4.2.1 Background and Motivation


To provide some basic background and motivation for this study, consider a single-input single-output Lure system described by

ẋ = A x + b u
y = cᵀ x   (4.2.4)
u = −h(y) .

Assume that h(0) = 0, so that x = 0 is a fixed point of the system. The transfer function of its linear part is given by G(s) = cᵀ [ sI − A ]⁻¹ b. Let Gr(jω) = G(jω) + G(−jω), which is twice the real part of G(jω), and assume that

(i) G(s) is stable and satisfies Gr(jω) ≥ 0;
(ii) u(t) = −g(x) y(t) with g(x) ≥ 0.
Based on condition (i), one can construct a rational function (another transfer function), R(s), and find a constant vector, r, such that

R(s) R(−s) = G(s) + G(−s)   (4.2.5)

with

R(s) = rᵀ [ sI − A ]⁻¹ b .   (4.2.6)

This is known as spectral factorization in matrix theory.

Then, one can solve the following Lyapunov equation,

P A + Aᵀ P = −r rᵀ ,

for a positive definite and symmetric constant matrix, P, which gives

P [ sI − A ] + [ −sI − Aᵀ ] P = r rᵀ ,

or, after multiplying [ sI − A ]⁻¹ b on its right and bᵀ [ −sI − Aᵀ ]⁻¹ on its left,

bᵀ P [ sI − A ]⁻¹ b + bᵀ [ −sI − Aᵀ ]⁻¹ P b = R(s) R(−s) .   (4.2.7)

Thus, one can identify c = P b by comparing (4.2.5) and (4.2.7). To this end, the Lyapunov function V(x) = xᵀ P x yields

V̇(x) = −xᵀ r rᵀ x − 2 xᵀ c g(x) cᵀ x
    = −( rᵀ x )² − 2 ( cᵀ x )² g(x) ≤ 0 .

Therefore, the system (4.2.4) is stable in the sense of Lyapunov about its fixed point x = 0.

It is remarked that this stability can be strengthened to be asymptotic if conditions (i) and (ii) above are modified as

(i) G(s) is stable and satisfies Gr(jω) > 0;
(ii) u(t) = −g(x) y(t) with g(x) > 0.

This is left to further discussion below, where some even more convenient stability criteria are derived.

Now, observe that in many cases either condition (i) or (ii) may not be satisfied by the given system (4.2.4). But if the function g(x) instead satisfies the following weaker condition:

α ≤ g(x) ≤ β   for all x ∈ ℝⁿ ,   (4.2.8)

for some constants α and β, then one may try to make a change of variables,

u = ( β v − α z ) / ( β − α )   and   y = ( z − v ) / ( β − α ) ,

so that

v(t) = −g̃(x) z(t) ,

where

g̃(x) = ( g(x) − α ) / ( β − g(x) ) ≥ 0 ,

which satisfies condition (ii). In this case, the new transfer function becomes

G̃(s) = ( 1 + β G(s) ) / ( 1 + α G(s) ) .

If this new transfer function satisfies condition (i), then all the above analysis and results remain valid. This motivates the study of the stability problem under condition (4.2.8) in the rest of the section.

4.2.2 SISO Lure Systems

First, single-input single-output Lure systems of the form (4.2.4) are discussed, namely,

ẋ = A x + b u
y = cᵀ x   (4.2.9)
u = −h(y) .

Assume that h(0) = 0, so that x = 0 is a fixed point of the system. In light of condition (4.2.8), the following sector condition is imposed.

The Sector Condition: The Lure system (4.2.9) is said to satisfy the local (global) sector condition on the nonlinear function h(·) if there exist two constants, α < β, such that
FIGURE 4.4
Local and global sector conditions: (a) local; (b) global.

(i) local sector condition:

α y²(t) ≤ y(t) h(y(t)) ≤ β y²(t)   (4.2.10)

for all −∞ < ym ≤ y(t) ≤ yM < ∞ and t ∈ [t₀, ∞);

(ii) global sector condition:

α y²(t) ≤ y(t) h(y(t)) ≤ β y²(t)   (4.2.11)

for all −∞ < y(t) < ∞ and t ∈ [t₀, ∞).

Here, [α, β] is called the sector for the nonlinear function h(·). Moreover, the system (4.2.9) is said to be absolutely stable within the sector [α, β] if the system is globally asymptotically stable about its fixed point x = 0 for any nonlinear function h(·) satisfying the global sector condition.

The above local and global sector conditions are visualized by Fig. 4.4 (a) and (b), respectively. It is easy to see that if a system satisfies the global sector condition then it also satisfies the local sector condition, but the converse is not necessarily true.
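A sector membership claim is easy to spot-check on a grid. The sketch below is illustrative: h(y) = y + 0.5 sin(y) is an assumed nonlinearity, which lies in the global sector [0.5, 1.5] because 0.5 ≤ 1 + 0.5 sin(y)/y ≤ 1.5 for all y ≠ 0.

```python
import numpy as np

def in_sector(h, alpha, beta, y_max=50.0, n=200001):
    """Grid test of the sector inequality alpha*y^2 <= y*h(y) <= beta*y^2."""
    y = np.linspace(-y_max, y_max, n)
    y = y[y != 0.0]                    # the inequality is trivial at y = 0
    prod, y2 = y * h(y), y * y
    tol = 1e-12
    return bool(np.all(alpha * y2 <= prod + tol) and np.all(prod <= beta * y2 + tol))

# Illustrative nonlinearity (an assumption, not from the text)
h = lambda y: y + 0.5 * np.sin(y)
print(in_sector(h, 0.5, 1.5))   # True: h lies in the sector [0.5, 1.5]
print(in_sector(h, 1.0, 1.5))   # False: alpha = 1 fails where sin(y)/y < 0
```

A grid test like this cannot prove the global condition, but it quickly falsifies a wrong sector.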
As a historical remark, Aizerman conjectured in the 1940s that if the Lure system is stable about its zero fixed point for all linear approximations u = −κ y with constant slope κ satisfying α ≤ κ ≤ β, as shown in Fig. 4.4, then the original nonlinear Lure system would be stable. In 1957, Kalman modified this to an even stronger conjecture: if the system is stable for all linear approximations with α₀ ≤ dh(y)/dy ≤ β₀ for some α₀ and β₀, then the original nonlinear system would be stable. It is now known that both conjectures are false, because counterexamples were later found. One simple counterexample is given in Exercise 4.4 and another can be found in [Hahn: 1967].
The following simple example shows how to determine a sector for a
given system.

Example 4.1
The linear system

ẋ = a x + b u
y = c x
u = −h(y) = −y

satisfies the global sector condition with α = β = 1.

In this system, if the controller is changed to

u = −h(y) = −2 y² ,

then the system becomes nonlinear, and

−2 y²(t) ≤ y(t) h(y(t)) = 2 y³(t) ≤ 2 y²(t)

for all −1 ≤ y(t) ≤ 1 and t ∈ [t₀, ∞). In this case, the system satisfies the local sector condition, with ym = −1 and yM = 1, over the sector [α, β] = [−2, 2].

THEOREM 4.1 Popov Criterion
Suppose that the SISO Lure system (4.2.9) satisfies the following conditions:

(i) A is stable and {A, b} is controllable;
(ii) the system satisfies the global sector condition with α = 0 therein;
(iii) for any ε > 0, there is a constant q > 0 such that

Re{ (1 + jωq) G(jω) } + 1/β ≥ ε   for all ω ≥ 0 ,   (4.2.12)

where G(s) is the transfer function defined by (4.1.2), and Re{·} denotes the real part of a complex number (or function).

Then, the system is globally asymptotically stable about its fixed point x = 0 within the sector.

The Popov criterion has the following geometric meaning. Separate the complex function G(jω) into its real and imaginary parts, namely,

G(jω) = Gr(ω) + j Gi(ω) ,

where

Gr(ω) = ½ [ G(jω) + G(−jω) ]   and   Gi(ω) = (1/(2j)) [ G(jω) − G(−jω) ] .

Then rewrite condition (iii) as

q ω Gi(ω) < Gr(ω) + 1/β   for all ω ≥ 0 .

FIGURE 4.5
Geometric meaning of the Popov criterion.

Then the graphical situation of the Popov criterion shown in Fig. 4.5 implies
the global asymptotic stability of the system about its zero fixed point.

PROOF   [see Khalil: 1996, pp. 419-421]

Example 4.2
Consider the linear system discussed in Example 4.1. This one-dimensional system is controllable as long as b ≠ 0. It is also easy to verify that the system satisfies the global sector condition, even with α = 0. Finally, with G(jω) = bc/(jω − a), for any ε > 0 a little algebra shows that the constant q can be chosen, depending on a, b, c, and ε, such that

Re{ (1 + jωq) G(jω) } + 1 ≥ ε   for all ω ≥ 0 .

Therefore, this linear feedback control system is globally asymptotically stable about its zero fixed point. This is consistent with the familiar linear analysis result.

FIGURE 4.6
The relay nonlinearity.

Example 4.3
Consider a Lure system with linear part described by the transfer function G(s) = 1/[ s (s + a)² ] and nonlinearity described by the relay function h(·) shown in Fig. 4.6, where a > 0 and 0 < b < 1 are constants. Clearly,

Gr(ω) = −2a / (a² + ω²)²   and   Gi(ω) = (ω² − a²) / [ ω (a² + ω²)² ] ,

and the Popov criterion requires

q (ω² − a²) / (a² + ω²)² < 1/β − 2a / (a² + ω²)²

for all ω ≥ 0, which leads to a choice of a small q > 0 with

α = 0   and   β = a³/2 .
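The sector found in this example can be spot-checked numerically: with a = 1 (an assumed value), β just below a³/2, and a small q, the Popov inequality qωGi(ω) < Gr(ω) + 1/β should hold on a frequency grid.

```python
import numpy as np

# Example 4.3 with assumed a = 1: G(s) = 1/(s*(s+a)**2), sector [0, beta]
a, beta, q = 1.0, 0.45, 0.01        # beta chosen just below a**3/2 = 0.5
w = np.linspace(0.0, 100.0, 100001)
Gr = -2.0 * a / (a**2 + w**2)**2            # real part of G(jw)
wGi = (w**2 - a**2) / (a**2 + w**2)**2      # w times the imaginary part of G(jw)
print(bool(np.all(q * wGi < Gr + 1.0 / beta)))   # True: the Popov inequality holds
```

Pushing β up to 0.5 makes the margin at ω = 0 collapse, consistent with β = a³/2 being the limiting sector.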

The Popov criterion has a natural connection to the linear Nyquist cri-
terion. A more direct generalization of the Nyquist criterion to nonlinear
systems is the following.

THEOREM 4.2 Circle Criterion
Suppose that the SISO Lure system (4.2.9) satisfies the following conditions:

(i) A has no purely imaginary eigenvalues, and has ρ eigenvalues with positive real parts;
(ii) the system satisfies the global sector condition;
(iii) one of the following situations holds:

(a) 0 < α < β: the Nyquist plot of G(jω) encircles the disk D(−1/α, −1/β) counterclockwise ρ times but does not enter it;
(b) 0 = α < β: the Nyquist plot of G(jω) stays within the open half-plane Re{s} > −1/β;
(c) α < 0 < β: the Nyquist plot of G(jω) stays within the open disk D(−1/β, −1/α);
(d) α < β < 0: the Nyquist plot of −G(jω) encircles the disk D(1/β, 1/α) counterclockwise ρ times but does not enter it.

Then, the system is globally asymptotically stable about its fixed point x = 0.

Here, D(p, q) denotes the closed disk whose diameter is the real-axis segment from p to q; the disk D(−1/α, −1/β), for the case of 0 < α < β, is shown in Fig. 4.7.

FIGURE 4.7
The disk D(−1/α, −1/β).

Example 4.4
Consider the following Lure system:

ẋ = a x + h(u)
ẏ = b y + h(u)
u = −( k₁ x + k₂ y ) ,

where a < 0, b < 0, and the nonlinear function h(·) satisfies the global sector condition with α = 0 and β = ∞. The transfer function of the linear part, from the input h(u) to the output −u, is

G(s) = k₁/(s − a) + k₂/(s − b) ,

and it follows from a direct calculation that

Re{ G(jω) } = [ −(a k₂ + b k₁)(ab − ω²) − (k₁ + k₂)(a + b) ω² ] / [ (ab − ω²)² + ω² (a + b)² ] .

Thus, the circle criterion (case (b) of Theorem 4.2, with β = ∞) requires

−(a k₂ + b k₁)(ab − ω²) − (k₁ + k₂)(a + b) ω² > 0

for all ω ≥ 0, which holds provided that

a k₂ + b k₁ < 0   and   (a k₂ + b k₁) − (k₁ + k₂)(a + b) ≥ 0 .

4.2.3 MIMO Lure Systems

Consider a multi-input multi-output (MIMO) Lure system, as shown in Fig. 4.3, of the form

ŷ(s) = G(s) û(s)
u(t) = −h( y(t) ) ,   (4.2.13)

where G(s) is defined by (4.1.2). If the system satisfies the following Popov inequality:

∫_{t₀}^{t₁} yᵀ(τ) u(τ) dτ ≥ −γ²   for all t₁ ≥ t₀   (4.2.14)

for a constant γ ≥ 0 independent of t₁, then it is said to be hyperstable.
The linear part of this MIMO system is described by the transfer matrix G(s), which is said to be positive real if

(i) G(s) has no poles located inside the open half-plane Re{s} > 0;
(ii) poles of G(s) on the imaginary axis are simple, and the residues of its entries (as a matrix) form a semi-positive definite and symmetric matrix;
(iii) the matrix Gr(jω) := ½ [ G(jω) + Gᵀ(−jω) ] is semi-positive definite for all real values of ω that are not poles of G(s).

Example 4.5
The transfer matrix

G(s) = ⎡ s/(s+1)    1/(s+2) ⎤
       ⎣ −1/(s+2)   2/(s+1) ⎦

is positive real, because (i) all its poles are located in the open left-half plane; (ii) there are no imaginary-axis poles, while at infinity

Gr(j∞) = ⎡ 1  0 ⎤
         ⎣ 0  0 ⎦

is semi-positive definite; and (iii) the matrix

Gr(jω) = ⎡ ω²/(1+ω²)    −jω/(4+ω²) ⎤
         ⎣ jω/(4+ω²)    2/(1+ω²)  ⎦

is semi-positive definite for all ω ≥ 0 (and positive definite for all ω > 0).
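Condition (iii) for the transfer matrix of Example 4.5 can be spot-checked numerically by computing the eigenvalues of the Hermitian part of G(jω) on a frequency grid; the sketch below takes the off-diagonal entries as ±1/(s + 2).

```python
import numpy as np

def G(s):
    # Transfer matrix of Example 4.5; off-diagonal entries taken as +-1/(s+2)
    return np.array([[s / (s + 1.0), 1.0 / (s + 2.0)],
                     [-1.0 / (s + 2.0), 2.0 / (s + 1.0)]])

ok = True
for w in np.linspace(0.0, 50.0, 2001):
    H = 0.5 * (G(1j * w) + G(1j * w).conj().T)   # Hermitian part, condition (iii)
    ok = ok and np.linalg.eigvalsh(H).min() >= -1e-12
print(bool(ok))   # True: the Hermitian part is semi-positive definite on the grid
```

At ω = 0 the smallest eigenvalue is exactly zero, which is why the test allows a tiny negative tolerance.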

A transfer matrix is sometimes referred to as strictly positive real, which requires the stronger conditions: (i) it has no poles on the imaginary axis; (ii) the residues of its entries form a nonzero, semi-positive definite, and symmetric matrix; and (iii) the matrix G(jω) + Gᵀ(−jω) is positive definite for all ω ≥ 0.

THEOREM 4.3 (Hyperstability Theorem)
The MIMO Lure system (4.2.13) is hyperstable if and only if its transfer matrix G(s) is positive real. Moreover, it is asymptotically hyperstable if and only if its transfer matrix is strictly positive real.

PROOF   [see Popov: 1973; Anderson and Vongpanitlerd: 1972]

4.3 Harmonic Balance Approximation and Describing Function

Consider again the SISO Lure system (4.2.9), namely,

ẋ = A x + b u
y = cᵀ x   (4.3.15)
u = −h(y) ,

where h(0) = 0, so x = 0 is a fixed point of the system. Assume also that the matrix A is invertible.

The interest here is the existence of periodic orbits (e.g., limit cycles) in the system output, y(t). Suppose that the system output has a periodic orbit, expressed in the Fourier series form:

y(t) = ∑_{k=−∞}^{∞} aₖ e^{jkωt} ,

where the aₖ = ā₋ₖ and ω are to be determined.


Observe that if y(t) is periodic then, since h(·) is a time-invariant function, h(y(t)) is also periodic, with the same period. Therefore,

u(t) = −h(y(t)) = ∑_{k=−∞}^{∞} cₖ e^{jkωt} ,

where the cₖ = cₖ(a₀, a₁, ⋯) are to be determined.

Recall also that in the frequency (s-)domain the input-output relation of the linear plant is

Y(s) = G(s) U(s) ,

where the transfer function

G(s) = cᵀ [ sI − A ]⁻¹ b := n(s)/d(s)

is equivalent to a differential equation with constant coefficients and zero initial conditions:

d(p) y(t) = n(p) u(t) ,   p := d/dt .
Since

p e^{jkωt} = (d/dt) e^{jkωt} = jkω e^{jkωt} ,

one has

d(p) y(t) = ∑_{k=−∞}^{∞} d(jkω) aₖ e^{jkωt}

and

n(p) u(t) = ∑_{k=−∞}^{∞} n(jkω) cₖ e^{jkωt} .

Consequently, the above differential equation gives

∑_{k=−∞}^{∞} [ d(jkω) aₖ − n(jkω) cₖ ] e^{jkωt} = 0 .

It then follows from the orthogonality of the Fourier basis functions that

d(jkω) aₖ − n(jkω) cₖ = 0 ,   k = 0, ±1, ±2, ⋯ ,

or

G(jkω) cₖ − aₖ = 0 ,   k = 0, ±1, ±2, ⋯ .

Here, when k = 0, G(0) is well defined since A is invertible by assumption. Observe, furthermore, that since G(−jkω) = G̅(jkω), a₋ₖ = āₖ, and c₋ₖ = c̄ₖ, for all k = 0, 1, 2, ⋯, it suffices to consider

G(jkω) cₖ − aₖ = 0 ,   k = 0, 1, 2, ⋯ .

However, this infinite-dimensional system of nonlinear algebraic equations is very difficult (if not impossible) to solve for the unknowns ω and {aₖ}, where the {cₖ} are functions of the {aₖ}. One thus resorts to some kind of approximation. Since G(s) is strictly proper,

G(jkω) → 0   (k → ∞) ,

namely, G(k j ) 0 for large values of k. So it is reasonable to truncate


the above infinite-dimensional system. The first-order approximation, with
k = 0 and 1, is (
G(0) bc0 b
a0 = 0
G(j ) b
c1 b
a1 = 0 ,
where the first equation is real and the send is complex, with b ci ci and
b
ai ai , i = 0, 1. Because bc0 and b
c1 are both functions of ba0 and b
a1 , the
above system of two equations has only two real unknowns, and b a0 , and
one complex unknown, b a1 . To solve for four real unknowns from three real
equations, one more constraint is needed. It turns out that if one only
considers the following special cases of the nonlinear feedback system, then
some useful results can be obtained: assume that
(i) the nonlinear function h() is odd: h(y) = h(y), and is time-
invariant;
(ii) if y = sin(t), 6= 0, then the first-order harmonic of h(y)
dominates the other harmonic components.
Here, assumption (i) implies that b c0 = 0, so that the first real equation
yields b
a0 = 0 as well. Assumption (ii) implies that
j t
sin(t) = e ej t ,
2j
which gives b
a1 = /(2j). Thus, the second equation above becomes
G(j ) b
c1 (0, /(2j)) /(2j) = 0
where, by the Fourier series coefficient formulas,

ĉ₁(0, α/(2j)) = −(ω/(2π)) ∫₀^{2π/ω} h(α sin(ωt)) e^{−jωt} dt
       = j (ω/π) ∫₀^{π/ω} h(α sin(ωt)) sin(ωt) dt .

Define

Ψ(α) := −ĉ₁(0, α/(2j)) / (α/(2j)) = (2ω/(απ)) ∫₀^{π/ω} h(α sin(ωt)) sin(ωt) dt ,   (4.3.16)

which is called the describing function of the odd nonlinearity h(·). Then one arrives at the first-order harmonic balance equation

G(jω) Ψ(α) + 1 = 0 .   (4.3.17)

If this complex equation is solvable, then one may solve

Gr(jω) Ψ(α) + 1 = 0
Gi(jω) = 0

for the two real unknowns α and ω, where Gr and Gi are the real and imaginary parts of G, respectively. This yields all the expected results:

ω ,   â₁ = α/(2j) ,   ĉ₁ = j (ω/π) ∫₀^{π/ω} h(α sin(ωt)) sin(ωt) dt ,   â₀ = 0 ,   ĉ₀ = 0 .

Consequently,

ŷ(t) ≈ â₁ e^{jωt} + ā̂₁ e^{−jωt} = −(jα/2) e^{jωt} + (jα/2) e^{−jωt} = α sin(ωt)

is the first-order approximation of a possible periodic orbit of the system output, which is generated by an input signal of the form

û(t) = −h(ŷ(t)) ≈ ĉ₁ e^{jωt} + c̄̂₁ e^{−jωt} .

In summary, one arrives at the following conclusion, which is helpful for predicting possible periodic orbits (limit cycles) of the system output.

THEOREM 4.4
Consider the SISO Lure system

ẋ = A x + b u
y = cᵀ x
u = −h(y) .

Assume that the nonlinear function h(·) is odd and time-invariant, and satisfies the property that for y(t) = α sin(ωt), α ≠ 0, only the first-order harmonic of h(y) is significant. Define the describing function of h(·) by

Ψ(α) = (2ω/(απ)) ∫₀^{π/ω} h(α sin(ωt)) sin(ωt) dt .

If the first-order harmonic balance equation

Gr(jω) Ψ(α) + 1 = 0
Gi(jω) = 0

has solutions α and ω, then

ŷ^{<1>}(t) = −(jα/2) e^{jωt} + (jα/2) e^{−jωt} = α sin(ωt)

is the first-order approximation of a possible periodic orbit of the system output. If this harmonic balance equation does not have solutions, then the system likely will not output any periodic orbit.

FIGURE 4.8
Two different nonlinear functions h(y): (a) the signum function; (b) the saturation function.

Example 4.6
Consider an SISO system with

G(s) = 1 / [ s (s + 1)(s + 2) ]

and

(a) As shown in Fig. 4.8 (a):

h(y) = sgn(y) = 1 for y > 0 ;   0 for y = 0 ;   −1 for y < 0 .

(b) As shown in Fig. 4.8 (b):

h(y) = sat(y) = −1 for y < −1 ;   y for −1 ≤ y ≤ 1 ;   1 for 1 < y .

First, it can be easily verified that

G(jω) = 1 / [ jω (jω + 1)(jω + 2) ] = [ −3ω − j (2 − ω²) ] / [ 9ω³ + ω (2 − ω²)² ] ,

so that the second harmonic balance equation becomes

Gi(jω) = (ω² − 2) / [ 9ω³ + ω (2 − ω²)² ] = 0 ,

which has two roots: ω₁ = √2 and ω₂ = −√2 (the latter is ignored due to symmetry).

(a) The describing function in this signum function case is


Z
2 4
() = h( sin ) sin d = ( = t) .
0

Hence, the first harmonic balance equation at = 2 becomes

3 2 4
3 2 + 1 = 0,

9 2 + 2 2 ( 2 )2

which yields = 2/(3). The conclusion is that there is possibly a periodic


orbit of the form
j j t j j t j j 2t j j 2t
yb <1> (t) = e e = e e ,
2 2 3 3

which has amplitude 2/(3) and frequency 2.
(b) The describing function in this saturation function case is

    Ψ(α) = (2 / (π α)) ∫₀^π h(α sin θ) sin θ dθ ≤ 1    for all α ,

so that the first harmonic balance equation at ω = √2 becomes

    −(3√2 / (9(√2)³ + √2 (2 − (√2)²)²)) Ψ(α) + 1 = 0 ,   i.e.,   −(1/6) Ψ(α) + 1 = 0   with   Ψ(α) ≤ 1 ,

which has no solution for any real (although unknown) α, since it would require
Ψ(α) = 6. The conclusion is that there likely does not exist a periodic orbit
in the system output. To make sure of this, higher-order harmonic balance
approximations are usually needed, so as to obtain more accurate predictions.
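The two cases of this example can be cross-checked numerically. The sketch below evaluates G(jω) at ω = √2 and solves the first-order balance equation; the closed-form describing functions used (4/(πα) for the signum, at most 1 for the saturation) are those derived above.

```python
import math

# A numerical sketch of Example 4.6.
def G(s):
    return 1.0 / (s * (s + 1.0) * (s + 2.0))

w = math.sqrt(2.0)                    # the root of Gi(j*omega) = 0
Gjw = G(complex(0.0, w))
print(abs(Gjw.imag) < 1e-9)           # True: second balance equation holds
print(round(Gjw.real, 6))             # -0.166667, i.e. Gr(j*sqrt(2)) = -1/6

# (a) signum: Gr*Psi + 1 = 0 requires Psi(alpha) = 4/(pi*alpha) = 6
alpha = 4.0 / (math.pi * (-1.0 / Gjw.real))
print(round(alpha, 6))                # 0.212207, which equals 2/(3*pi)

# (b) saturation: Psi(alpha) <= 1 for all alpha, but Psi = 6 would be needed
print(-1.0 / Gjw.real > 1.0)          # True: no first-order periodic solution
```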

When solving the equation Gr(jω) Ψ(α) = −1 graphically, one can sketch
two curves in the complex plane, Gr(jω) and −1/Ψ(α), by gradually in-
creasing ω and α, respectively, to find their crossing points:
(i) If the two curves are (visually) tangent, as illustrated by Fig. 4.9 (a),
then a conclusion drawn from the describing function method will not
be satisfactory in general.
(ii) If the two curves are (visually) transversal, as illustrated by Fig. 4.9
(b), then a conclusion drawn from the describing function analysis
will generally be reliable.

THEOREM 4.5 Graphical Stability Criterion for Periodic Orbits

FIGURE 4.9
Graphical describing function analysis: (a) tangent curves; (b) transversal curves.

Each intersection point of the two curves Gr(jω) and −1/Ψ(α) corre-
sponds to a periodic orbit, ŷ⁽¹⁾(t), of the output of system (4.3.15). If
the points near the intersection, on the side of the curve −1/Ψ(α)
where α is increasing, are not encircled by the curve Gr(jω), then the
corresponding periodic output is stable; otherwise, it is unstable.

A word of warning: the describing function method discussed in this section
is a graphical method, based on numerical computation and human
visual judgement; therefore, although convenient and successful in most
practice, there is a chance that it can lead to incorrect conclusions about
the system stability. A counterexample is given in Exercise 4.4.

FIGURE 4.10
Input-output relations: (a) open-loop; (b) closed-loop.

4.4 BIBO Stability

A relatively simple, and also relatively weak, notion of stability is discussed
in this section. This is the bounded-input bounded-output (BIBO) stability,
which refers to the property that any bounded input to a system produces
a bounded output through the system.
Consider an input-output map and its configuration shown in Fig. 4.10
(a).

Definition 1. The system S is said to be BIBO stable from the input
set U to the output set Y if, for each admissible input u ∈ U and the
corresponding output y ∈ Y, there exist two nonnegative constants, bᵢ and
bₒ, such that

    ||u||_U ≤ bᵢ  ⟹  ||y||_Y ≤ bₒ .    (4.4.18)

Note that since all norms of a finite-dimensional vector are equivalent,
it is generally insignificant under what kind of norms for the input and
output signals the BIBO stability is defined and measured. Moreover, it is
important to note that in the above definition, even if bᵢ is small and bₒ is
large, the system is still considered to be BIBO stable. Therefore, this
notion of stability is weak, and may not be very practical in some
applications.

4.4.1 Small Gain Theorem


A convenient criterion for verifying the BIBO stability of a closed-loop
control system is the small gain theorem, which applies to most systems
(linear and nonlinear, continuous-time and discrete-time, deterministic and
stochastic, time-delayed, of any dimensions), as long as the mathematical
setup is appropriately formulated to meet the theorem conditions. The
main disadvantage of this criterion is its over-conservativity.
Consider the typical closed-loop system shown in Fig. 4.10 (b), where the
inputs, outputs, and internal signals are related via the following equations:
    S₁(e₁) = e₂ − u₂
    S₂(e₂) = u₁ − e₁ .    (4.4.19)

First, it is important to note that the individual BIBO stability of S₁
and S₂ is not sufficient for the BIBO stability of the connected closed-
loop system. For instance, in the discrete-time setting of Fig. 4.10 (b),
suppose that S₁ ≡ 1 and S₂ ≡ −1, with u₁(k) ≡ 1 for all k = 0, 1, ⋯.
Then S₁ and S₂ are BIBO stable individually, but it can be easily verified
that y₁(k) = k → ∞ as the discrete-time variable k evolves. Therefore, a
stronger condition describing the interaction of S₁ and S₂ is needed.
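The divergence in this example can be reproduced with a short discrete-time sketch; a one-step delay is assumed in the feedback path to make the loop well-posed (an assumption, since the exact interconnection of Fig. 4.10 (b) is not reproduced here):

```python
# S1 is the identity and S2 the sign-reversing map; each is BIBO stable alone,
# yet the loop output grows without bound for the constant input u1 ≡ 1.
S1 = lambda e: e
S2 = lambda e: -e
y2 = 0.0
history = []
for k in range(50):
    e1 = 1.0 - y2          # e1(k) = u1(k) - y2(k-1), with u1 ≡ 1
    y1 = S1(e1)
    y2 = S2(y1)            # u2 ≡ 0, so e2 = y1
    history.append(y1)
print(history[:5])         # [1.0, 2.0, 3.0, 4.0, 5.0]: y1(k) grows like k
```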

THEOREM 4.6 Small Gain Theorem

If there exist four constants, L₁, L₂, M₁, M₂, with L₁L₂ < 1, such that

    ||S₁(e₁)|| ≤ M₁ + L₁ ||e₁||
    ||S₂(e₂)|| ≤ M₂ + L₂ ||e₂|| ,    (4.4.20)

then

    ||e₁|| ≤ (1 − L₁L₂)⁻¹ ( ||u₁|| + L₂ ||u₂|| + M₂ + L₂ M₁ )
    ||e₂|| ≤ (1 − L₁L₂)⁻¹ ( ||u₂|| + L₁ ||u₁|| + M₁ + L₁ M₂ ) ,    (4.4.21)

where the norms || · || are defined over the spaces to which the signals be-
long. Consequently, (4.4.20) and (4.4.21) together imply that if the system
inputs (u₁ and u₂) are bounded, then the corresponding outputs (S₁(e₁)
and S₂(e₂)) are bounded.

Note that the four constants, L₁, L₂, M₁, M₂, can be somewhat arbitrary
(e.g., either L₁ or L₂ can be large, and some of them can even be negative).
As long as L₁L₂ < 1, the BIBO stability conclusion follows. This inequality
is the key condition for the theorem to hold, and is used in the factor
(1 − L₁L₂)⁻¹ in the bounds (4.4.21).
BIBO Stability 119

PROOF It follows from (4.4.19) and (4.4.20) that

    ||e₁|| ≤ ||u₁|| + ||S₂(e₂)|| ≤ ||u₁|| + M₂ + L₂ ||e₂|| .

Similarly,

    ||e₂|| ≤ ||u₂|| + ||S₁(e₁)|| ≤ ||u₂|| + M₁ + L₁ ||e₁|| .

By combining these two inequalities, one has

    ||e₁|| ≤ L₁L₂ ||e₁|| + ||u₁|| + L₂ ||u₂|| + M₂ + L₂ M₁ ,

which, on the basis of L₁L₂ < 1, yields

    ||e₁|| ≤ (1 − L₁L₂)⁻¹ ( ||u₁|| + L₂ ||u₂|| + M₂ + L₂ M₁ ) ,

as claimed. The other inequality is similarly verified.
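The bounds (4.4.21) can be checked numerically on a simple loop; the affine maps S₁ and S₂ below are illustrative choices (not from the text) satisfying (4.4.20) with equality.

```python
# A numerical sketch of the small gain bounds (4.4.21) with assumed affine maps.
L1, M1 = 0.5, 1.0
L2, M2 = 0.8, 0.5
u1, u2 = 2.0, 1.0
S1 = lambda e: L1 * e + M1
S2 = lambda e: L2 * e + M2
# Solve the loop e1 = u1 - S2(e2), e2 = u2 + S1(e1) by fixed-point iteration;
# L1*L2 = 0.4 < 1 makes the iteration a contraction.
e1 = e2 = 0.0
for _ in range(200):
    e1 = u1 - S2(e2)
    e2 = u2 + S1(e1)
bound1 = (u1 + L2 * u2 + M2 + L2 * M1) / (1.0 - L1 * L2)
bound2 = (u2 + L1 * u1 + M1 + L1 * M2) / (1.0 - L1 * L2)
print(abs(e1) <= bound1, abs(e2) <= bound2)    # True True
```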

In the special case where the input-output spaces, U and Y, are both the
L₂-space, a similar criterion based on the system passivity property can be
obtained. In this case, an inner product between two signals in the space
is defined by

    ⟨ξ, η⟩ = ∫_{t₀}^∞ ξ⊤(τ) η(τ) dτ .

THEOREM 4.7 Passivity Stability Theorem

If there are four constants, L₁, L₂, M₁, M₂, with L₁ + L₂ > 0, such that

    ⟨e₁, S₁(e₁)⟩ ≥ L₁ ||e₁||₂² + M₁
    ⟨e₂, S₂(e₂)⟩ ≥ L₂ ||S₂(e₂)||₂² + M₂ ,    (4.4.22)

then the closed-loop system (4.4.19) is BIBO stable.

PROOF [see Sastry, 1999: pp. 155-156]

As mentioned, the main disadvantage of this criterion is its over-conservativity


in providing the sufficient conditions for the BIBO stability. One resolu-
tion is to transform the system into the Lure structure, and then apply the
circle or Popov criterion under the sector condition (if it can be satisfied),
which usually can lead to less-conservative stability conditions.

FIGURE 4.11
A nonlinear feedback system.

4.4.2 Relationship between BIBO and Lyapunov Stabilities


There is a close relationship between the BIBO stability of a nonlinear
feedback system and the Lyapunov stability of a related nonlinear control
system. Consider the nonlinear system

    ẋ = A x − f(x, t) ,    x(t₀) = x₀ ,    (4.4.23)

which is supposed to have a fixed point x* = 0. Assume that the system
matrix A is stable (i.e., has all eigenvalues with negative real parts), and
f : Rⁿ × R¹ → Rⁿ is a real vector-valued integrable nonlinear function
defined on [t₀, ∞). By adding and then subtracting the term Ax, a general
nonlinear system can always be written in this form. Define

    x(t) = u(t) − ∫_{t₀}^t e^{(t−τ)A} y(τ) dτ
    y(t) = f(x, t) ,    (4.4.24)

with u(t) = x₀ e^{tA}. Then system (4.4.23) can be implemented by a feedback
configuration as depicted in Fig. 4.11, where the error signal e = x, the
plant P(ξ)(t) = f(ξ(t), t), and the compensator C(ξ)(t) = ∫_{t₀}^t e^{(t−τ)A} ξ(τ) dτ.

THEOREM 4.8
Consider the nonlinear system (4.4.24) and its associated feedback con-
figuration shown in Fig. 4.11. Suppose that U = V = Lp([t₀, ∞), Rⁿ),
1 < p < ∞. Then, if the feedback system shown in Fig. 4.11 is BIBO
stable from U to V, the nonlinear system (4.4.23) is globally asymptotically
stable about the zero fixed point: ||x(t)|| → 0 as t → ∞.

PROOF Since all eigenvalues of the constant matrix A have negative real
parts, one has

    ||x₀ e^{tA}|| ≤ M e^{−σt}

for some 0 < σ, M < ∞ and for all t ∈ [t₀, ∞), so that ||u(t)|| = ||x₀ e^{tA}|| → 0
as t → ∞. Hence, in view of the first equation of (4.4.24), if it can be
proven that

    v(t) := ∫_{t₀}^t e^{(t−τ)A} y(τ) dτ → 0    (t → ∞)

in the Euclidean norm, then it will follow that

    ||x(t)|| = ||u(t) − v(t)|| → 0    (t → ∞) .

Write

    v(t) = ∫_{t₀}^{t/2} e^{(t−τ)A} y(τ) dτ + ∫_{t/2}^{t} e^{(t−τ)A} y(τ) dτ
         = ∫_{t/2}^{t−t₀} e^{τA} y(t−τ) dτ + ∫_{t/2}^{t} e^{(t−τ)A} y(τ) dτ .

Then, by the Hölder inequality with 1/p + 1/q = 1, one has

    ||v(t)|| ≤ || ∫_{t/2}^{t−t₀} e^{τA} y(t−τ) dτ || + || ∫_{t/2}^{t} e^{(t−τ)A} y(τ) dτ ||
             ≤ ( ∫_{t/2}^{t−t₀} ||e^{τA}||^q dτ )^{1/q} ( ∫_{t/2}^{t−t₀} |y(t−τ)|^p dτ )^{1/p}
               + ( ∫_{t/2}^{t} ||e^{(t−τ)A}||^q dτ )^{1/q} ( ∫_{t/2}^{t} |y(τ)|^p dτ )^{1/p}
             ≤ ( ∫_{t/2}^{∞} ||e^{τA}||^q dτ )^{1/q} ( ∫_{t₀}^{∞} |y(τ)|^p dτ )^{1/p}
               + ( ∫_{0}^{∞} ||e^{τA}||^q dτ )^{1/q} ( ∫_{t/2}^{∞} |y(τ)|^p dτ )^{1/p} .

Since all eigenvalues of A have negative real parts, and since the feedback
system is BIBO stable from U to V, so that y ∈ V = Lp([t₀, ∞), Rⁿ), one
has

    lim_{t→∞} ∫_{t/2}^{∞} ||e^{τA}||^q dτ = 0    and    lim_{t→∞} ∫_{t/2}^{∞} |y(τ)|^p dτ = 0 .

Therefore, it follows that ||v(t)|| → 0 as t → ∞.
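The decay predicted by Theorem 4.8 can be illustrated numerically. The sketch below Euler-integrates ẋ = Ax − f(x, t) for an assumed stable A and an assumed mild nonlinearity f; both are illustrative choices, not from the text.

```python
import math

# An Euler-integration sketch of x' = A*x - f(x, t): A and f are assumed,
# illustrative choices (A stable, f a mild nonlinearity with f(0, t) = 0).
A = [[-1.0, 0.5], [0.0, -2.0]]                 # eigenvalues -1 and -2
f = lambda x, t: [0.1 * math.sin(x[0]), 0.1 * x[1] ** 3]
x, dt = [1.0, -1.0], 0.001
for step in range(20000):                      # integrate up to t = 20
    t = step * dt
    fx = f(x, t)
    x = [x[0] + dt * (A[0][0] * x[0] + A[0][1] * x[1] - fx[0]),
         x[1] + dt * (A[1][0] * x[0] + A[1][1] * x[1] - fx[1])]
print(math.hypot(x[0], x[1]) < 1e-3)           # True: the state decays to zero
```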



4.4.3 Contraction Mapping Theorem


The small gain theorem discussed above is by nature a kind of contraction
mapping theorem. The contraction mapping theorem can be used to deter-
mine the BIBO stability property of a system described by a map in various
forms, provided that the system (or the map) is appropriately formulated.
The following is a typical (global) contraction mapping theorem.
As usual, define the operator norm of the input-output map S by

    ||S|| := sup_{x₁ ≠ x₂} ||S(x₂) − S(x₁)|| / ||x₂ − x₁|| .

THEOREM 4.9 Contraction Mapping Theorem

If the input-output map S satisfies S(0) = 0 and its operator norm satisfies
||S|| < 1, then the system equation

    y(t) = S(y(t)) + c

has a unique solution for any constant vector c ∈ Rⁿ. This solution satisfies

    ||y|| ≤ (1 − ||S||)⁻¹ ||c|| .

Moreover, the solution of the equation

    y_{k+1} = S(y_k) ,    y₀ ∈ Rⁿ ,    k = 0, 1, ⋯ ,

satisfies

    ||y_k|| → 0    as    k → ∞ .

PROOF In the continuous-time setting, it is clear that

    ||y|| ≤ ||S(y)|| + ||c|| ≤ ||S|| ||y|| + ||c|| ,

so that

    (1 − ||S||) ||y|| ≤ ||c|| .

Since ||S|| < 1,

    ||y|| ≤ (1 − ||S||)⁻¹ ||c||    for all t ≥ t₀ .

Therefore, taking the supremum over t ∈ [t₀, ∞) on both sides immediately
yields the expected result.
In the discrete-time case, let y₀ ∈ Rⁿ be arbitrarily given, so ||y₀|| < ∞.
By repeatedly using the given equation, one has

    ||y_k|| = ||S(y_{k−1})|| = ⋯ = ||S^k(y₀)|| ≤ ||S||^k ||y₀|| → 0    (k → ∞) ,

since ||S|| < 1.
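A short fixed-point iteration illustrates the theorem; the contraction S(y) = 0.5 sin(y) and the constant c are illustrative choices (||S|| ≤ 1/2 since |cos| ≤ 1, and S(0) = 0).

```python
import math

# Fixed-point iteration for y = S(y) + c with an assumed contraction S.
S = lambda y: 0.5 * math.sin(y)
c = 2.0
y = 0.0
for _ in range(100):
    y = S(y) + c
print(abs(y - (S(y) + c)) < 1e-9)              # True: converged to the solution
print(abs(y) <= abs(c) / (1.0 - 0.5))          # True: the bound of Theorem 4.9
```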



Exercises
4.1 The following is a model of a perfectly mixed chemical reactor with a cool-
ing coil, in which two first-order, consecutive, irreversible, and exothermic
reactions A → B → C occur:

    ẋ = −x + a(1 − x) eᶻ
    ẏ = −y + a(1 − x) eᶻ − a c y eᶻ
    ż = −(1 + β) z + a b (1 − x) eᶻ + a b c y eᶻ ,

where all coefficients are constants with certain physical meanings. Refor-
mulate this model in the Lure form; namely, assuming b = 0 and c = 1,
find A, h(·), and G(s).
4.2 Consider the following autonomous system:

    ẍ + 2ẋ + f(x) = 0 ,

where f(·) is a nonlinear differentiable function. Find bounds a and
b, with ax ≤ f(x) ≤ bx, such that the zero fixed point of the system is
asymptotically stable.
4.3 Consider the following system:

    ẋ₁ = x₂
    ẋ₂ = −2x₂ + h(y)
    y = x₁ ,

where h(·) is a nonlinear function belonging to a sector [0, k] for some
k > 0. Reformulate this system into the Lure form and then use either the
Popov or the circle criterion to discuss its absolute stability.
4.4 Verify that the following SISO Lure system is a counterexample to both
the Aizerman-Kalman conjecture and the describing function method:

    A = [  0   1 ]      b = [ 0 ]      c⊤ = [ 1  1 ] ,
        [ −1  −1 ] ,        [ 1 ] ,

and

    h(y) = { y (1 − e⁻²) / (1 + e⁻¹) ,                     |y| < 1
             (y/|y|) (1 − e^{−2|y|}) / (1 + e^{−|y|}) ,    |y| ≥ 1 .

[Hint: see Narendra and Taylor, 1973: pp. 69-72]


4.5 Show that the following transfer function

    G(s) = ωₙ² (s + a) / (s² + 2ζωₙ s + ωₙ²) ,

where ωₙ > 0 and 0 < ζ < 1, is strictly positive real if and only if 0 < a <
2ζωₙ.
4.6 Use the circle criterion to discuss the absolute stability of the following
system:

    ẋ = −x − h(x + y)
    ẏ = x − y − 2h(x + y) ,

FIGURE 4.12
A piecewise linear function for h(y).

where h(·) is a piecewise linear nonlinear function given by

    h(y) = { −a ,    y < −b < 0
              0 ,    −b ≤ y ≤ b
              a ,    b < y .
4.7 Consider the following autonomous nonlinear system:

    ẋ₁ = −a x₁ + x₂ − h(x₁)
    ẋ₂ = −x₁ + x₃
    ẋ₃ = −a x₁ + b h(x₁)
    y = x₁ ,

where a > 0, b > 0, and h(·) is a piecewise continuous function satisfying
h(0) = 0 and 0 < z h(z) < κ z² for z ≠ 0. Reformulate this system into the Lure
form and then apply the Popov or circle criterion to discuss the absolute
stability of the system. Is there any condition on the sector upper bound
κ? [Hint: Discuss three cases: (a) b < a², (b) b = a², and (c) b > a².]
4.8 Find the describing function for each of the following odd nonlinear func-
tions:

    h(y) = y⁵ ,    h(y) = y³ |y| ,    h(y) = sin(y) .
4.9 Consider the piecewise linear nonlinear function shown in Fig. 4.12, which
contains the two nonlinear functions of Fig. 4.8 as special cases. Verify
that
(a) when |α| ≤ δ, the describing function Ψ(α) = s₁;
(b) when α > δ,

    Ψ(α) = (2 (s₁ − s₂) / π) [ sin⁻¹(δ/α) + (δ/α) √(1 − (δ/α)²) ] + s₂ ;

(c) with s₁ = 1, s₂ = 0, and δ = 1, the above result reduces to that
obtained in Example 4.6 for Case (b).
4.10 Consider the nonlinear feedback system shown in Fig. 4.10 (b), with

    S₁ : e₁ ↦ (e⁻ᵗ/2) sin(e₁)    and    S₂ : e₂ ↦ e₂ + (e⁻ᵗ/2) sin(e₂) .

Determine the upper bounds for ||e₁|| and ||e₂||, respectively, using the
Small Gain Theorem.
4.11 Consider a unity feedback system, as shown by Fig. 4.10 (b), with S₁ = P
representing a given nonlinear plant and S₂ = I (the identity mapping).
Suppose that the plant has a perturbation Δ, so P becomes P + Δ. Derive
the small gain theorem corresponding to this perturbed closed-loop system
and find the norm condition on the perturbation term; namely, find the
smallest possible upper bound on ||Δ||, in terms of the bounds given in
the small gain theorem just derived, such that under this condition the
perturbed system remains BIBO stable.
5
Nonlinear Dynamics: Bifurcations and
Chaos

5.1 Some Typical Bifurcations

Consider a nonlinear autonomous system with a real variable parameter:

    ẋ = f(x; μ) ,    x ∈ Rⁿ ,    μ ∈ R ,    (5.1.1)

where the nonlinear function f : Rⁿ × R → Rⁿ is assumed to satisfy all
necessary conditions for the existence and uniqueness of a solution with
respect to any given initial state x₀ ∈ Rⁿ and any given parameter μ ∈ R.
Assume also that x* = 0 is a fixed point of the system when μ = μ₀:

    f(0; μ₀) = 0 .

The concern here is whether or not (and if so, how) this fixed point and
its stability may change as the parameter μ is gradually varied in a neigh-
borhood of μ₀. This study has a great impact on the understanding of the
dynamical behaviors and stabilities of the parametrized nonlinear system
(5.1.1).
If there is a change of stability of the system fixed point when μ is varied
and passes through the critical value μ₀ (for instance, the system is stable
about its fixed point of interest when μ > μ₀ but becomes unstable when
μ < μ₀), then the system is said to have a bifurcation at the fixed point.
The critical value μ = μ₀ is called a bifurcation value, and (0, μ₀) in the
x–μ space is called a bifurcation point. A more precise definition for the
one-dimensional case,

    ẋ = f(x; μ) ,    x ∈ R ,    μ ∈ R ,    (5.1.2)

is given as follows.


FIGURE 5.1
Equilibrium curves of a one-dimensional system.

DEFINITION 5.1 The one-parameter family of one-dimensional nonlinear
systems (5.1.2) is said to have a bifurcation at an equilibrium point (fixed
point), (x*, μ₀), if the equilibrium curve near the point x = x* for μ near
μ₀ is not qualitatively the same as (i.e., not topologically equivalent to)
the curve near x = x* at μ = μ₀.

At this point, it is illuminating to examine a few simple and yet repre-
sentative examples.

Example 5.2
Consider the one-dimensional system

    ẋ = f(x; μ) = μx − x² ,

which has two equilibrium points (fixed points): x₁* = 0 and x₂* = μ. If μ is
allowed to vary, then there are two equilibrium curves, as shown in Fig. 5.1.
The stabilities of these equilibrium points and curves are determined
as follows: first, since this is an autonomous system, one may calculate its
Jacobian,

    J = ∂f/∂x |_{x=x*} = μ − 2x* .

(i) At x* = 0, J = μ, so when μ < 0 the system is stable about this
equilibrium point and when μ > 0 it is unstable.
(ii) At x* = μ, J = −μ, so when μ < 0 the system is unstable about this
equilibrium point and when μ > 0 it is stable.
Therefore, (x*, μ₀) = (0, 0) is a bifurcation point. This type of bifurcation
is called the transcritical bifurcation.
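The exchange of stability in this example can be checked with a few lines of code, using the Jacobian J = μ − 2x* evaluated at each equilibrium (a sketch; the helper name is an illustrative choice):

```python
# Stability along the two equilibrium branches of x' = mu*x - x^2,
# judged by the sign of the Jacobian J = mu - 2*x at an equilibrium x*.
def stable(mu, x_eq):
    return (mu - 2.0 * x_eq) < 0.0

for mu in (-0.5, 0.5):
    print(mu, stable(mu, 0.0), stable(mu, mu))
# -0.5 True False   (x* = 0 stable, x* = mu unstable)
#  0.5 False True   (the two branches have exchanged stability)
```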

FIGURE 5.2
Equilibrium curves of a one-dimensional system.

Example 5.3
Consider the one-dimensional system

    ẋ = f(x; μ) = μ − x² ,

which has one equilibrium point x₀* = 0 at μ = μ₀ = 0 and an equilibrium
curve (x*)² = μ for all μ ≥ 0, which yields two branches, x₁* = √μ and
x₂* = −√μ. Clearly, these two branches contain the point x₀* = 0 at the
value of μ = 0. If μ is allowed to vary over the interval [0, ∞), then these
two equilibrium curves have the shape shown in Fig. 5.2.
The stabilities of the equilibrium curves are similarly determined as fol-
lows: the system Jacobian is

    J = ∂f/∂x |_{x=x*} = −2x* .

(i) At x* = √μ, J = −2√μ, so when μ > 0 the system is stable about
this equilibrium state.
(ii) At x* = −√μ, J = 2√μ, so when μ > 0 the system is unstable about
this equilibrium state.
Therefore, (x*, μ₀) = (0, 0) is a bifurcation point. This type of bifurcation
is called the saddle-node bifurcation.

Example 5.4
Consider the one-dimensional system

    ẋ = f(x; μ) = μx − x³ ,

which has one equilibrium point x₀* = 0 for all μ ∈ R and an equilibrium
curve (x*)² = μ for all μ ≥ 0, which yields two branches, x₁* = √μ and
x₂* = −√μ. These two branches contain the point x₀* = 0 at the value of
μ = 0. If μ is allowed to vary, these two equilibrium curves can be seen as
in Fig. 5.3.

FIGURE 5.3
Equilibrium curves of a one-dimensional system.

The stabilities of the equilibrium curves are similarly determined as fol-
lows: the system Jacobian is

    J = ∂f/∂x |_{x=x*} = μ − 3(x*)² .

(i) At x₀* = 0, J = μ, so when μ > 0 the system is unstable about this
equilibrium state, while for μ < 0 it is stable.
(ii) At (x*)² = μ, J = −2μ, so when μ > 0 the system is stable about
both of the two equilibrium branches x₁* = √μ and x₂* = −√μ.
Therefore, (x*, μ₀) = (0, 0) is a bifurcation point. This type of bifurcation
is called the pitchfork bifurcation, inspired by the shape of the bifurcation
diagram shown in Fig. 5.3.
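A quick numerical check (a forward-Euler sketch; step size and horizon are ad hoc choices) confirms that for μ > 0 orbits starting on either side of zero settle on the branches ±√μ:

```python
# Forward-Euler integration of x' = mu*x - x^3 with mu = 1: orbits starting on
# either side of zero settle on the pitchfork branches ±sqrt(mu).
mu, dt = 1.0, 0.01
finals = []
for x0 in (0.1, -0.1):
    x = x0
    for _ in range(5000):
        x += dt * (mu * x - x ** 3)
    finals.append(round(x, 6))
print(finals)   # [1.0, -1.0]
```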

At this point, it is important to note that not every nonlinear system with
a varying parameter has a bifurcation.

Example 5.5
Consider the one-dimensional system

    ẋ = f(x; μ) = μ − x³ ,

which has one and only one equilibrium curve, (x*)³ = μ, μ ∈ R. When
μ is varied, this equilibrium curve is visualized by Fig. 5.4.

FIGURE 5.4
The equilibrium curve of a one-dimensional system.

The stabilities of the equilibrium curve can be determined by examining
the system Jacobian,

    J = ∂f/∂x |_{x=x*} = −3(x*)² < 0    for all x* = ∛μ ≠ 0 .

Therefore, the equilibrium curve (x*)³ = μ is always stable; the critical
point (x*, μ₀) = (0, 0) does not change the stability of the system about
the equilibrium curve. Therefore, there is no bifurcation in this system.

It may be observed from the above typical bifurcation examples that
(x*, μ₀) is a bifurcation point for system (5.1.2) if either
(i) there is more than one curve of equilibrium solutions of the system
passing through the point (x*, μ₀) in the x–μ plane, or
(ii) in the case where there is only one equilibrium curve, this curve is
located on one side of the vertical line μ = μ₀ in a neighborhood of
the point (x*, μ₀) in the x–μ plane.
Generally speaking, this observation is correct.

5.2 Period-Doubling Bifurcations

For discrete-time systems, there is a special and interesting dynamical phe-
nomenon called period-doubling bifurcation.
As an example, consider the logistic map

    x_{k+1} = μ x_k (1 − x_k) ,    k = 0, 1, 2, ⋯ ,    (5.2.3)

with x₀ ∈ (0, 1). Let the parameter μ be gradually varied, starting from
μ = 0. Then, one can observe the following phenomena:
Case 1. 0 < μ < 1.
In this case, starting from any x₀ ∈ (0, 1), the result is x_k → 0 as k → ∞.

FIGURE 5.5
Converging orbit of the logistic map when μ = 2.8.

FIGURE 5.6
Period-2 orbit of the logistic map when μ = 3.3.

Case 2. 1 < μ ≤ 3.
Within this range of parameter values, starting from any x₀ ∈ (0, 1),
one has x_k → a steady state as k → ∞. For instance, if μ = 2.8, the result
is shown in Fig. 5.5.
Case 3. 3 < μ ≤ 3.449⋯.
Starting from any x₀ ∈ (0, 1), x_k → a period-2 cycle as k → ∞; namely,
after a transient, {x_k} = {⋯, x⁽¹⁾, x⁽²⁾, x⁽¹⁾, x⁽²⁾, x⁽¹⁾, x⁽²⁾, ⋯}, as shown
in Fig. 5.6 for the case of μ = 3.3.
Case 4. 3.449⋯ < μ ≤ 4.0.
Depending on the value of μ, x_k → a period-2ⁿ cycle for some n > 0 as
k → ∞, as shown in Fig. 5.7 for the case of μ = 3.5.
Within this range of parameter values, some very rich bifurcation phe-
nomena, called the period-doubling bifurcation, can be observed. The period-
doubling bifurcated orbits are described in Table 5.1 and shown in Fig. 5.8.
It is even more interesting to observe the self-similarity within the period-
doubling bifurcation diagram shown in Fig. 5.8, as locally enlarged in
Fig. 5.9.
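The cases above can be reproduced by direct iteration; the sketch below estimates the attractor period (the transient length and tolerance are ad hoc choices, not from the text):

```python
# Estimate the period of the logistic-map attractor for several mu values.
def attractor_period(mu, x0=0.3, transient=2000, tol=1e-6):
    x = x0
    for _ in range(transient):          # discard the transient
        x = mu * x * (1.0 - x)
    ref = x
    for p in range(1, 65):              # first return to the reference point
        x = mu * x * (1.0 - x)
        if abs(x - ref) < tol:
            return p
    return None                          # no short period detected

for mu in (2.8, 3.3, 3.5):
    print(mu, attractor_period(mu))      # periods 1, 2, 4
```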

FIGURE 5.7
Period-2² orbit of the logistic map when μ = 3.5.

TABLE 5.1
Period-doubling bifurcation of the logistic map.

    Parameter                           Period
    μ = 3.0 (period 2 is born)          2¹
    μ = 3.449⋯                          4 = 2²
    μ = 3.54409⋯                        8 = 2³
    μ = 3.5644⋯                         16 = 2⁴
    μ = 3.568759⋯                       32 = 2⁵
    ⋮                                   ⋮
    μ∞ = 3.569946⋯                      2^∞

FIGURE 5.8
Period-doubling of the logistic map.

5.3 Hopf Bifurcations in Two-Dimensional Systems

Bifurcations in two-dimensional parametrized systems can be quite compli-
cated. Some examples of the hyperbolic case are first discussed, along with
the useful normal form theorem. For the non-hyperbolic case, the
system has a very typical bifurcation, the Hopf bifurcation, which is further
studied in the following subsections.

5.3.1 The Hyperbolic Case and the Normal Form Theorem


Some examples of the hyperbolic case with rather complicated bifurcation
phenomena are first given.

Example 5.6
Consider a two-dimensional linear parametrized system,

    ẋ = −μx + y
    ẏ = −μx − 3y ,    μ ∈ R .

The eigenvalues of this system are

    λ₁,₂ = −(μ + 3)/2 ± (1/2)√((μ − 1)(μ − 9)) .

Hence, the zero fixed point is hyperbolic. When μ is varied, some bifurca-
tions occur, as depicted in Fig. 5.10.

Example 5.7
Consider the controlled damped pendulum

    ẋ = y
    ẏ = −y − sin(x) + u ,

where x = θ satisfies −π ≤ θ ≤ π and y = θ̇. This system has fixed
points determined by y* = 0 and sin(x*) = u. There are three cases:
(i) u > 1: there is no fixed point;
(ii) u = 1: there is one fixed point at (x*, y*) = (π/2, 0);
(iii) u < 1: there are two fixed points.
If u = μ is considered as the system variable parameter, then the above
shows that μ₀ = u₀ = 1 is a bifurcation value.

FIGURE 5.9
Self-similarity of the logistic period-doubling diagram.

FIGURE 5.10
Bifurcations of a two-dimensional parametric system: the eigenvalues λ₁,₂ are
real with opposite signs for μ < 0 (saddle), real with the same sign for
0 < μ < 1 (stable node), complex for 1 < μ < 9 (stable spiral), and real with
the same sign for μ > 9 (stable node); the bifurcation point is at μ = 0.

In order to determine the type of the bifurcation at μ₀, an effective way
is to transform the system to the so-called normal form, which can be done
in three steps as follows:
Step 1: Shift (x*, y*, μ₀) to (0, 0, 0).
This can be done by the change of variables

    u = 1 + v
    x = z₁ + π/2
    y = z₂ ,

which yields a new system of the form

    ż₁ = z₂
    ż₂ = −z₂ − cos(z₁) + 1 + v .

Step 2: Find the Jacobian and its eigenvalues and eigenvectors:

    J = [ 0        1 ]              = [ 0    1 ]
        [ sin(z₁)  −1 ] (z₁ = 0)      [ 0   −1 ]

and

    λ₁ = 0 ,  w₁ = [1, 0]⊤ ;    λ₂ = −1 ,  w₂ = [1, −1]⊤ .

Step 3: Find the normal form.
Use the nonsingular matrix

    P := [ w₁  w₂ ] = [ 1    1 ]
                      [ 0   −1 ]

to transform the new system, via z = P ζ, into the normal form

    ζ̇₁ = 1 + v − cos(ζ₁ + ζ₂)
    ζ̇₂ = −ζ₂ + cos(ζ₁ + ζ₂) − 1 − v .

The following theorem can now be applied.

THEOREM 5.1 (Normal Form Theorem)

Suppose that in the normal form

    ζ̇₁ = f(v, ζ₁, ζ₂)
    ζ̇₂ = −ζ₂ + g(v, ζ₁, ζ₂)    (5.3.4)

the two nonlinear functions f and g are both nontrivial functions of v.
Then:
(i) if

    (∂f/∂v)(0, 0, 0) ≠ 0    and    (∂²f/∂ζ₁²)(0, 0, 0) ≠ 0 ,

then there is a saddle-node bifurcation at v₀ = 0;
(ii) if

    v · (∂f/∂v)(0, 0, 0) · (∂²f/∂ζ₁²)(0, 0, 0) < 0 ,

then there are two hyperbolic equilibria: one is a saddle and the
other is a stable node;
(iii) if

    v · (∂f/∂v)(0, 0, 0) · (∂²f/∂ζ₁²)(0, 0, 0) > 0 ,

then there is no equilibrium point (so, no bifurcation).

PROOF [see Wiggins, 1990: p. 216]

Note that since nonsingular linear transformations do not change the sys-
tem's qualitative behaviors (topological properties), the type of bifurcation
concluded by the theorem carries back to the original system
without change; the only corresponding changes are in the state variables
and the parameter value.
Note also that one may exchange ζ₁ and ζ₂ to obtain a dual theorem
for a dual system of the normal form (5.3.4). This may be useful in some
cases.

Example 5.8
Return to Example 5.7 of the controlled pendulum. It can be easily verified
that

    (∂f/∂v)(0, 0, 0) = 1 ≠ 0 ,
    (∂²f/∂ζ₁²)(0, 0, 0) = cos(ζ₁ + ζ₂) |_{ζ₁=ζ₂=0} = 1 ≠ 0 .

Therefore, at v₀ = 0 the system in the normal form has a saddle-node
bifurcation. After transforming back to the original system, it has a saddle-
node bifurcation at u₀ = 1.
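The two partial-derivative conditions can be verified by finite differences, assuming the first normal-form equation of the pendulum example reads f(v, ζ₁, ζ₂) = 1 + v − cos(ζ₁ + ζ₂) (a reconstruction; the step size below is an ad hoc choice):

```python
import math

# Finite-difference check of the saddle-node conditions in Theorem 5.1 for the
# assumed normal-form function f(v, z1, z2) = 1 + v - cos(z1 + z2).
f = lambda v, z1, z2: 1.0 + v - math.cos(z1 + z2)
h = 1e-3
df_dv = (f(h, 0.0, 0.0) - f(-h, 0.0, 0.0)) / (2.0 * h)
d2f_dz1 = (f(0.0, h, 0.0) - 2.0 * f(0.0, 0.0, 0.0) + f(0.0, -h, 0.0)) / (h * h)
print(round(df_dv, 4), round(d2f_dz1, 4))   # 1.0 1.0: saddle-node at v0 = 0
```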

5.3.2 Decoupled Planar Systems

Some planar systems can be decoupled, so that the bifurcation analysis
becomes much easier.
Consider the planar parametrized system

    ẋ = f(x, y; μ)
    ẏ = g(x, y; μ) ,

which is supposed to have a bifurcation point (x*, y*, μ₀) = (0, 0, 0). Using
the Taylor expansion at (x*, y*) = (0, 0), one can rewrite this system as

    [ẋ]   [a(μ)]        [x]
    [ẏ] = [b(μ)] + J(μ) [y] + H.O.T. ,

where J is the Jacobian and H.O.T. represents all the higher-order terms.
Since μ = μ₀ = 0 is a bifurcation value, a(0) = b(0) = 0. Let λ₁(μ)
and λ₂(μ) be the eigenvalues of J(μ). Then, using a nonsingular lin-
ear transformation, one can assume without loss of generality that J(μ) =
diag{λ₁(μ), λ₂(μ)}. It then follows that

    ẋ = a(μ) + λ₁(μ) x + H.O.T.
    ẏ = b(μ) + λ₂(μ) y + H.O.T.    (5.3.5)

THEOREM 5.2
Consider the decoupled planar system (5.3.5).

(i) If da(μ)/dμ |_{μ=0} ≠ 0, then there exists a single branch of equilibria
of the system in a neighborhood of the bifurcation point (x*, y*, μ₀) =
(0, 0, 0), which is of saddle-node type.

(ii) If da(μ)/dμ |_{μ=0} = 0 and

    (∂²f/∂μ²)(0,0,0) · (∂²f/∂x²)(0,0,0) − [ (∂²f/∂μ∂x)(0,0,0) ]² < 0 ,

then there are two branches of equilibria, which intersect and exchange
stability at the bifurcation point, and the bifurcation is either trans-
critical or pitchfork.

(iii) If da(μ)/dμ |_{μ=0} = 0 and

    (∂²f/∂μ²)(0,0,0) · (∂²f/∂x²)(0,0,0) − [ (∂²f/∂μ∂x)(0,0,0) ]² > 0 ,

then the bifurcation point is an isolated point (the only solution of
the system in a neighborhood of this point is the point itself).

Note that one may exchange x ↔ y and f ↔ g to obtain a dual
theorem.
PROOF [see ]

Example 5.9
Consider the following decoupled two-dimensional system:

    ẋ = μ − x²
    ẏ = −y .

Observe that the second equation is linear and stable: y(t) = y₀ e⁻ᵗ → 0 as
t → ∞; and the first one is one-dimensional with a saddle-node bifurcation,
as seen in Example 5.3. This system satisfies the saddle-node bifurcation
condition stated in Theorem 5.2 above, since

    a(μ) = μ    and    da(μ)/dμ |_{μ=0} = 1 ≠ 0 .

Example 5.10
Consider the following decoupled two-dimensional system:

    ẋ = μx − x³
    ẏ = −y .

Observe that the second equation is linear and stable: y(t) = y₀ e⁻ᵗ → 0 as
t → ∞; and the first one is one-dimensional with a pitchfork bifurcation,
as seen in Example 5.4. This system satisfies the pitchfork bifurcation
condition stated in Theorem 5.2 above, since

    (∂²f/∂μ²)(0,0,0) · (∂²f/∂x²)(0,0,0) − [ (∂²f/∂μ∂x)(0,0,0) ]² = −1 < 0 .

The phase portraits of this system for different values of μ are shown in
Fig. 5.11.

FIGURE 5.11
Pitchfork bifurcation in a two-dimensional system.

Example 5.11
Consider the Lotka-Volterra system:

    ẋ = x(μ − x) − (x + 1) y²
    ẏ = y (x − 1) .

Its equilibrium curves (in the (x, y, μ) space) include

    (1)  x₁* = 0 ,  y₁* = 0 ,  μ ∈ R ;
    (2)  x₂* = μ ,  y₂* = 0 ,  μ ∈ R ;
    (3)  x₃* = 1 ,  2(y₃*)² = μ − 1 ,  μ ≥ 1 ,

as shown in Fig. 5.12.
The system Jacobian is

    J = [ μ − 2x − y²   −2(x + 1) y ]
        [ y              x − 1     ] .
FIGURE 5.12
Equilibrium curves of the Lotka-Volterra system.

Case (1). The eigenvalues of J, evaluated on curve (1), are λ₁(μ) = μ
and λ₂(μ) = −1. So there is a bifurcation point at μ₀ = 0; the equilibrium
is stable for μ < μ₀ = 0 (S in Fig. 5.12) but unstable for μ > μ₀ = 0
(U in Fig. 5.12).
Case (2). The eigenvalues of J, evaluated on curve (2), are λ₁(μ) = −μ
and λ₂(μ) = μ − 1. So there are two bifurcation points, at μ₀₁ = 0
and μ₀₂ = 1, respectively; the equilibrium is stable for μ₀₁ = 0 < μ < 1
(S in Fig. 5.12) but unstable for μ < μ₀₁ = 0 and for
μ > μ₀₂ = 1 (U in Fig. 5.12).
Case (3). The eigenvalues of J, evaluated on curve (3), are

    λ₁,₂(y) = ( (y² − 1) ± √((y² − 9)² − 80) ) / 2 ,

or, substituting 2y² = μ − 1,

    λ₁,₂(μ) = ( (μ − 3) ± √((μ − 19)² − 320) ) / 4 .

Equilibria on this branch exist only for μ ≥ 1; they are stable for
1 < μ < 3 but unstable for μ > 3.
In Case (3) above, one should note that the two eigenvalues λ₁,₂(μ) are
(a) both real and negative, if 1 < μ < 19 − √320;
(b) complex conjugate with negative real parts, if 19 − √320 < μ < 3;
(c) complex conjugate with positive real parts, if 3 < μ < 19 + √320.
Also, as shown in Fig. 5.13,
(a) there is a pitchfork bifurcation at μ = 1;
(b) there is a Hopf bifurcation at μ = 3.
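The sign change of the real part at μ = 3 can be confirmed numerically, assuming the system reads ẋ = x(μ − x) − (x + 1)y², ẏ = y(x − 1), whose Jacobian on the branch x* = 1, (y*)² = (μ − 1)/2 reduces to [[μ − 2 − y², −4y], [y, 0]]:

```python
import cmath

# Eigenvalues of the Jacobian on equilibrium branch (3) of the Lotka-Volterra
# example, computed from its trace and determinant.
def eigs(mu):
    y = ((mu - 1.0) / 2.0) ** 0.5
    tr = mu - 2.0 - y * y          # trace = (mu - 3)/2
    det = 4.0 * y * y              # determinant = 2*(mu - 1)
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

for mu in (2.0, 3.0, 4.0):
    l1, _ = eigs(mu)
    print(mu, round(l1.real, 6))   # real parts -0.25, 0.0, 0.25: Hopf at mu = 3
```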

5.3.3 Hopf Bifurcation of Two-Dimensional Systems

For a two-dimensional non-hyperbolic system, the following is a very im-
portant result.

FIGURE 5.13
Two other bifurcations of the Lotka-Volterra system.

THEOREM 5.3 (Poincaré-Andronov-Hopf)

Suppose that the two-dimensional parametrized system

    ẋ = f(x, y; μ)
    ẏ = g(x, y; μ)    (5.3.6)

has a zero fixed point, (x*, y*) = (0, 0), and that its associated Jacobian has
a pair of complex conjugate eigenvalues, λ(μ) and λ̄(μ), which are purely
imaginary at μ = μ₀. If

    d Re{λ(μ)}/dμ |_{μ=μ₀} > 0

for some μ₀, then

(i) μ = μ₀ is a bifurcation point of the system;
(ii) for close enough values μ < μ₀, the zero fixed point is asymptotically
stable;
(iii) for close enough values μ > μ₀, the zero fixed point is unstable;
(iv) for close enough values μ ≠ μ₀, the zero fixed point is surrounded by a
limit cycle of magnitude O(√|μ − μ₀|).

FIGURE 5.14
Hopf bifurcation in a planar system.

Example 5.12
Consider the planar system

    ẋ = −y + x(μ − x² − y²)
    ẏ = x + y(μ − x² − y²) .

Using polar coordinates, it can be rewritten as

    ṙ = r(μ − r²)
    θ̇ = 1 .

It is easy to verify, e.g., via computer graphics (see Fig. 5.14), that
(i) if μ ≤ 0 then the system orbit spirals in to the zero fixed point
(a stable focus);
(ii) if μ > 0 then (0, 0) becomes unstable, and a limit cycle of radius
r₀ = √μ suddenly emerges.
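The amplitude equation can be checked with a short forward-Euler sketch (step size and horizon are ad hoc choices):

```python
# Forward-Euler integration of the polar amplitude equation r' = r*(mu - r^2):
# for mu > 0 the radius settles on the Hopf limit cycle r0 = sqrt(mu), while
# for mu <= 0 it decays to the origin.
def final_radius(mu, r0=0.2, dt=0.001, steps=40000):
    r = r0
    for _ in range(steps):
        r += dt * r * (mu - r * r)
    return r

print(round(final_radius(0.25), 4))   # 0.5, i.e. sqrt(0.25)
print(final_radius(-0.5) < 1e-3)      # True: the orbit spirals into zero
```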

FIGURE 5.15
The Poincaré first return map.

5.4 Poincaré Maps

Consider a general two-dimensional autonomous system,

    ẋ = f(x, y)
    ẏ = g(x, y) ,

along with its phase plane. Let Σ be a curve starting from a fixed point
of the system, with the property that it cuts each solution orbit in the
phase plane transversely (i.e., it is nowhere tangential to the orbits).
Consider a point, P₀ = P₀(x₀, y₀), on Σ. Suppose that an orbit passing
through Σ at P₀ returns to Σ after some time, and then cuts Σ again at
a possibly different point, P₁ = P₁(x₁, y₁) (this is always assumed to
happen unless otherwise indicated), as illustrated by Fig. 5.15. This
new point, P₁, is called the first return point, and the map

    M : (x₀, y₀) → (x₁, y₁)    (or M : P₀ → P₁)

is called the Poincaré map, which is usually denoted as

    (x₁, y₁) = M(x₀, y₀)    (or P₁ = M(P₀)) .

If the orbit continues to make a second turn and cuts Σ at P₂ = P₂(x₂, y₂),
then

    (x₂, y₂) = M(x₁, y₁) = M(M(x₀, y₀)) = M²(x₀, y₀) .

In general, one has

    (xₙ, yₙ) = Mⁿ(x₀, y₀)    or    Pₙ = Mⁿ(P₀) ,

as visualized by Fig. 5.16.

FIGURE 5.16
The Poincaré map with multiple returns.


In the situation shown by Fig. 5.16, since P* = Mⁿ(P*) for all n, P*
is a fixed point on a periodic orbit. Moreover, as indicated in the figure,
since

    P₀ → P₁ → ⋯ → P*    and    P₀′ → P₁′ → ⋯ → P* ,

this periodic orbit is a stable limit cycle. This means that if a Poincaré map
M for a given autonomous system can be found, and a point satisfying
P* = M(P*) exists such that

    Mⁿ(P₀) → P*    and    Mⁿ(P₀′) → P* ,

where P₀ and P₀′ are located on the two opposite sides of P*, then one knows
that the given system has a stable limit cycle. The following example shows
the details.

Example 5.13
Consider the following system:

    ẋ = x − y − x √(x² + y²)
    ẏ = x + y − y √(x² + y²) ,

which has a fixed point (x*, y*) = (0, 0). Consider the curve Σ : x > 0, y = 0,
i.e., the positive semi-x-axis.
In polar coordinates, this system becomes

    ṙ = r (1 − r)
    θ̇ = 1 ,

FIGURE 5.17
Poincaré map and stable limit cycle in the example.

which has solutions

    r = r0 / ( r0 + (1 − r0) e^(−t) )
    θ = t + θ0 ,

or simply
    r = r0 / ( r0 + (1 − r0) e^(−(θ−θ0)) ) .
Since in polar coordinates a first return to Σ is completed for a 2π-change of
θ (here, for this example, over a time t = 2π), it can be seen that the Poincaré map
is given by
    r1 = M (r0) = r0 / ( r0 + (1 − r0) e^(−2π) ) .
Therefore, for P0 = (r0 , 0), one has
    rn = Mⁿ(r0) = r0 / ( r0 + (1 − r0) e^(−2πn) ) ,
and θn = θ0 (which can be arbitrary). Thus, if r0 > 0 and θ0 = 0 are used, then
rn → 1 and θn ≡ 0 as n → ∞. This implies that P* = (1, 0), as shown in
Fig. 5.17, where the stable limit cycle is r = 1, which passes through the point P*.
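The convergence rn → 1 can be checked directly by iterating the return map M(r) = r / (r + (1 − r)e^(−2π)) (a sketch; the constant 2π comes from the derivation above):

```python
import math

def M(r):
    """First-return (Poincare) map on the positive x-axis."""
    return r / (r + (1.0 - r) * math.exp(-2.0 * math.pi))

def iterate(r0, n=10):
    r = r0
    for _ in range(n):
        r = M(r)
    return r

# Starting points on either side of the fixed point r* = 1 converge to it.
print(iterate(0.1), iterate(3.0))
```

Because M'(1) = e^(−2π) ≈ 0.0019, the convergence is extremely fast: a handful of returns already lands on the limit cycle to machine precision.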

5.5 Strange Attractors and Chaos


Return to the general higher-dimensional autonomous system

    ẋ = f (x) ,   x(t0) = x0 ∈ Rⁿ ,                            (5.5.7)



and recall the concept of ω-limit sets (or positive limit sets) introduced in
Definition 2.15. Simply put, a set of points z ∈ Rⁿ is called an ω-limit set
of system (5.5.7) if there is a solution orbit x(t) of the system such that

    ||x(tn) − z|| → 0   as n → ∞ and tn → ∞ .

It is clear that all periodic solution orbits (e.g., limit cycles), which need not
be stable, are positive limit sets, and that every solution orbit of the linear
harmonic oscillator described by ẋ = y and ẏ = −x is a positive limit set.

DEFINITION 5.14   A set Λ in a region Ω ⊆ Rⁿ is called an attractor of
system (5.5.7) if the positive limit set of an arbitrary solution orbit of the
system that lies in Ω must lie in Λ.

It is easy to verify that all stable focus points are attractors, for which
the region Ω is the basin of attraction, and that the limit cycle of the van
der Pol oscillator (see Fig. 5.22) is an attractor, for which the region is
Ω = R² \ {0}.
It should be remarked that in a general situation, Λ in the above defi-
nition can be a set but need not be a region. The largest possible set Ω
is called the basin of attraction of Λ. In the special case where Λ = {0},
this reduces to the same notion of basin of attraction of a fixed point,
introduced earlier in Definition 2.15.

5.5.1 The Lorenz Chaotic System


It is particularly important to point out that not all attractors are stable
limit cycles or stable fixed points, as demonstrated by the next example.

Example 5.15
Consider the Lorenz system

    ẋ = σ (y − x)
    ẏ = −x z + r x − y                                         (5.5.8)
    ż = x y − b z ,

where σ, r, and b are positive constants. It is easy to verify that this system
has three fixed points:

    (1) x1 = 0    (2) x2 = √(b(r−1))    (3) x3 = −√(b(r−1))
        y1 = 0        y2 = √(b(r−1))        y3 = −√(b(r−1))
        z1 = 0        z2 = r − 1            z3 = r − 1

Here, by examining the eigenvalues of the system Jacobian at the fixed
points, it can be verified that the first fixed point is stable if r < 1 and
unstable if r > 1, while the stability of the other two fixed points depends
on the values of all three parameters σ, r, and b (e.g., they are both unstable
if r > σ (σ + b + 3)/(σ − b − 1)).
Following E. N. Lorenz (1963), the following parameter values will be
used:

    σ = 10 ,   b = 8/3 ,   r = σ (σ + b + 3)/(σ − b − 1) ≈ 24.74 .

Now, fix σ = 10 and b = 8/3, and gradually change r:
Case 1. 0 < r < 1.
    In this case, (x1 , y1 , z1 ) is a stable focus (i.e., a global point
    attractor). The other fixed points, (x2 , y2 , z2 ) and (x3 , y3 , z3 ),
    are complex conjugates (so cannot be displayed in the phase space).
Case 2. 1 < r < 24.74.
    In this case, (x1 , y1 , z1 ) becomes unstable, and the other two real
    fixed points, (x2 , y2 , z2 ) and (x3 , y3 , z3 ), are bifurcated out; they
    are both local point attractors (stable spiral points). The phase portraits
    when 1 < r < 13.926 and 13.926 < r < 24.74 are shown in
    Fig. 5.18 (a) and (b), respectively.
Case 3. r = 24.74.
    In this case, both (x2 , y2 , z2 ) and (x3 , y3 , z3 ) become unstable and a
    Hopf bifurcation appears.
Case 4. r > 24.74.
    This is the most interesting situation, where chaos emerges: the system
    orbit spirals around one of the two fixed points, (x2 , y2 , z2 ) and
    (x3 , y3 , z3 ), for a certain period of time (which is unpredictable be-
    forehand), then suddenly jumps to the vicinity of the other fixed point,
    which it spirals around for a while (again, the time period of such en-
    circling is unpredictable beforehand), then it suddenly switches back
    to the first fixed point, and so on. This switching process continues indefi-
    nitely, but the system orbit never converges to nor diverges from either
    fixed point. Thus, the two fixed points, (x2 , y2 , z2 ) and (x3 , y3 , z3 )
    together, virtually become a strange attractor, and the wandering orbit
    is said to be a chaotic orbit, while the Lorenz system, a chaotic system.
    This Lorenz chaotic attractor is shown in Fig. 5.19.
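The two hallmarks of Case 4, boundedness plus sensitive dependence on initial conditions, are easy to observe numerically. The sketch below (not from the text) integrates (5.5.8) with a hand-rolled RK4 step at the commonly used chaotic value r = 28 > 24.74, starting two orbits that differ by 10⁻⁸:

```python
def lorenz(s, sigma=10.0, b=8.0 / 3.0, r=28.0):
    x, y, z = s
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(s, dt=0.01):
    def ax(u, v, h):  # componentwise u + h*v
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = lorenz(s)
    k2 = lorenz(ax(s, k1, dt / 2))
    k3 = lorenz(ax(s, k2, dt / 2))
    k4 = lorenz(ax(s, k3, dt))
    return tuple(si + dt / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for si, c1, c2, c3, c4 in zip(s, k1, k2, k3, k4))

p = (1.0, 1.0, 1.0)
q = (1.0, 1.0, 1.0 + 1e-8)       # perturbed by 10^-8 in z
max_sep, max_amp = 0.0, 0.0
for _ in range(4000):            # integrate to t = 40
    p, q = rk4_step(p), rk4_step(q)
    max_sep = max(max_sep, max(abs(u - v) for u, v in zip(p, q)))
    max_amp = max(max_amp, max(abs(u) for u in p))
print(max_sep, max_amp)          # separation grows to O(10); orbit stays bounded
```

The separation grows by many orders of magnitude while both orbits remain confined to the attractor region, which is exactly the attracting/repelling combination described above.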

A more precise and convenient, yet not mathematically very rigorous,
definition of chaos is given below, where a trajectory is said to be quasi-
periodic if it is a finite sum of periodic orbits of different periods.

(a) (b)

FIGURE 5.18
The two local point attractors of the Lorenz system.

FIGURE 5.19
The strange attractor of the Lorenz system.

(i) (ii) (iii)

FIGURE 5.20
Three cases of a bounded orbit of a planar system.

DEFINITION 5.16   A phase orbit is said to be chaotic if it is bounded in
a confined region of the phase space but is not
(i) a fixed point;
(ii) a periodic orbit;
(iii) a quasi-periodic orbit.

5.5.2 Some Characterizations of Chaos


As mentioned above, a chaotic orbit must be bounded in the phase space.
This is intuitively necessary; otherwise the orbit diverges, so there will not
be any meaningful result. However, boundedness is also obviously not
sufficient; for instance, an orbit spiraling toward a stable focus is bounded
but it is certainly not chaotic. The following is an important result about
the consequence of boundedness for planar systems.

THEOREM 5.4 (Poincaré–Bendixson Theorem)
Consider a planar autonomous system, ẋ = f (x), x(t0) = x0 ∈ R²,
defined on a closed bounded region Ω ⊂ R². Let Γ be a solution orbit of the
system, which enters region Ω and then stays inside Ω forever. In this case,
the orbit Γ must be one of the following types: (i) Γ is a closed orbit; (ii) Γ
approaches a closed orbit; (iii) Γ approaches a fixed point.

These three possibilities are illustrated in Fig. 5.20.

PROOF [see Verhulst: 1996, p.47]

COROLLARY 5.5

FIGURE 5.21
The Duffing chaotic attractor.

For a continuous-time autonomous system to have chaos, its dimension has


to be at least three.

Note that the Poincaré–Bendixson Theorem does not apply to nonau-
tonomous systems, since a two-dimensional nonautonomous system may
have chaos. Two typical examples are the Duffing oscillator (1.4.30) and
the van der Pol oscillator (5.5.10), as summarized below:

Example 5.17
The Duffing oscillator is described by

    ẋ = y
    ẏ = a x − b x³ − c y + q cos(ω t) ,                        (5.5.9)

which is chaotic when

    (a, b, c, q, ω) = (1.1, 1.0, 0.4, 1.8, 1.8) ,

with the strange attractor shown in Fig. 5.21.

Example 5.18

FIGURE 5.22
The van der Pol chaotic attractor.

The van der Pol oscillator is described by

    ẋ = x − (1/3) x³ − y + p + q cos(ω t)                      (5.5.10)
    ẏ = c (x + a − b y) ,

which is chaotic when

    (p, q, ω, a, b, c) = (?, ?, 0.7, 0.8, 0.1, 1.0) ,

with the strange attractor shown in Fig. 5.22.

Note also that the Poincaré–Bendixson Theorem does not apply to au-
tonomous systems of dimension three or higher, since they may have chaos.
Simple examples of three-dimensional autonomous systems that have only
quadratic nonlinearity but generate chaos include the Lorenz system (see
Example 5.15), Chua's circuit (see Fig. 1.24), and the following Rössler
and Chen chaotic systems:

Example 5.19

FIGURE 5.23
The Rössler chaotic attractor.

The following Rössler system

    ẋ = −y − z
    ẏ = x + a y                                                (5.5.11)
    ż = x z − b z + c ,

is chaotic when (a, b, c) = (0.2, 5.7, 0.2), with the strange attractor shown
in Fig. 5.23.

Example 5.20
The following Chen system

    ẋ = a (y − x)
    ẏ = (c − a) x − x z + c y                                  (5.5.12)
    ż = x y − b z

is chaotic when (a, b, c) = (35, 3, 28), with the strange attractor shown in
Fig. 5.24.
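A quick integration of (5.5.12) with (a, b, c) = (35, 3, 28) shows an orbit that wanders over a large but bounded region without settling down (a sketch using a hand-rolled RK4 step; the step size and horizon are illustrative choices):

```python
def chen(s, a=35.0, b=3.0, c=28.0):
    x, y, z = s
    return (a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z)

def rk4_step(s, dt=0.002):
    def ax(u, v, h):  # componentwise u + h*v
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = chen(s)
    k2 = chen(ax(s, k1, dt / 2))
    k3 = chen(ax(s, k2, dt / 2))
    k4 = chen(ax(s, k3, dt))
    return tuple(si + dt / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for si, c1, c2, c3, c4 in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
amp = 0.0
for _ in range(20000):           # integrate to t = 40
    s = rk4_step(s)
    amp = max(amp, max(abs(u) for u in s))
print(amp)                       # tens of units: large, yet finite
```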

FIGURE 5.24
The Chen chaotic attractor.

It is important to note that although the system equations of (5.5.12) are
quite similar to those of the Lorenz system (5.5.8), they are topologically not
equivalent; namely, there is no diffeomorphism (or nonsingular coordinate
transform) that can transfer one to the other. Moreover, Chen's attractor is
seemingly more complicated than the Lorenz attractor in terms of dynamics,
since the former has prominent three-dimensional features in the phase
space.

Although a strange attractor is an indication of chaos, it may not
be easy to find in general. However, there are some other characteristics of
chaos that can be relatively easily verified, even before a strange attractor
is found.
Notice that a strange attractor is a specific kind of limit set. For higher-
dimensional continuous-time autonomous systems, the Poincaré–Bendixson
Theorem may actually be further extended to include at least one more
possibility (see Theorem 5.4 for possibilities (i)–(iii)):
(iv) Γ approaches a limit set.
Here, a limit set includes a strange attractor, because a strange attractor
is an ω-limit set. Note, however, that there are some other limit
sets that are not fixed points, limit cycles, or strange attractors. Also,
depending on the definition of chaos, a strange attractor can be non-chaotic.
In this regard, usually, a chaotic attractor refers to a strange attractor
of a system that is sensitive to initial conditions.
When Γ approaches a chaotic attractor, it must have the following fea-
tures: On the one hand, it keeps approaching a certain subset (e.g., a
point), which means it possesses a negative Jacobian eigenvalue of some
sort along that particular direction of approach, determined by the cor-
responding eigenvector; on the other hand, it keeps leaving this subset (or
point), which means it possesses a positive Jacobian eigenvalue along
this direction. Usually, when a system has a positive Jacobian eigenvalue,
it will diverge along the direction specified by the corresponding eigenvec-
tor. However, the global boundedness property of the system prohibits
total divergence. Thus, a suitable combination of these attracting and
repelling features of the strange attractor leads to very complicated dy-
namical behaviors of the system orbit, such as chaos.
For a nonlinear autonomous system, to quantify the average value of a
Jacobian eigenvalue of the system, evaluated at different operating
points throughout the entire dynamical process, the concept of Lyapunov
exponent is very useful.
For a one-dimensional autonomous system,

    ẋ = f (x) ,   x(t0) = x0 ,

defined on a domain D ⊆ R, if for almost all initial conditions x0 ∈ D (i.e.,
except perhaps for a set of measure zero) the solution x(t) = x(t; x0) of the
system behaves like

    | x(t; x0) | ~ e^(λ t)   for large enough t > t0 ,

where λ = λ(x0) can be evaluated by

    λ(x0) = lim_{t→∞} (1/t) ln | x(t; x0) | .

This λ(x0), if it is finite, is called the Lyapunov exponent of the system
orbit x(t; x0) starting from x0 .
Obviously, the concept of Lyapunov exponent is a generalization of eigen-
value for linear systems. It is also clear that the Lyapunov exponent is
sensitive to the system initial conditions.
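As a minimal sanity check of this generalization (a sketch, not from the text): for the linear equation ẋ = −x, the definition should recover the eigenvalue −1.

```python
import math

def lyapunov_exponent_1d(x0, dt=1e-3, t_end=50.0):
    """Estimate (1/t) ln|x(t; x0)| for x' = -x via Euler integration."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-x)
    return math.log(abs(x)) / t_end

lam = lyapunov_exponent_1d(2.0)
print(lam)   # close to the eigenvalue -1
```

The residual discrepancy comes from the finite horizon (the ln|x0|/t term) and the Euler discretization, both of which vanish as t_end grows and dt shrinks.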
For a higher-dimensional autonomous system,

    ẋ = f (x) ,   x(t0) = x0 ∈ Rⁿ ,

the leading (largest) Lyapunov exponent is defined by

    λ(x0) = lim_{t→∞} (1/t) ln || z(t; x0) || ,                (5.5.13)

where z(t) = z(t; x0) is a solution of the corresponding linearized equation

    ż = J(x) z ,   z(t0) = x0 ,

in which J(x) = ∂f (x)/∂x is the system Jacobian, with x = x(t; x0).


For a three-dimensional autonomous system to have chaos, it is typical
that its three Lyapunov exponents satisfy λ1 > 0, λ2 = 0, and λ3 < 0, denoted
as (+, 0, −), with λ1 + λ3 < 0. The Lyapunov exponents of several typical
chaotic systems are listed below:

TABLE 5.2
Lyapunov exponents of some typical chaotic systems.

    System      Lyapunov Exponents
    Chen        (λ1 , λ2 , λ3 ) = (1.983, 0.000, −11.986)
    Chua        (λ1 , λ2 , λ3 ) = (0.230, 0.000, −1.780)
    Lorenz      (λ1 , λ2 , λ3 ) = (0.897, 0.000, −14.565)
    Rössler     (λ1 , λ2 , λ3 ) = (0.130, 0.000, −14.100)
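The leading exponent of (5.5.13) can be estimated in practice with the classical two-trajectory renormalization scheme of Benettin et al. (a sketch, not necessarily the method behind the table; the numbers also depend on the chosen parameter values). For the Lorenz system at the standard chaotic value r = 28 it gives λ1 ≈ 0.9:

```python
import math

def lorenz(s, sigma=10.0, b=8.0 / 3.0, r=28.0):
    x, y, z = s
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(s, dt):
    def ax(u, v, h):
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = lorenz(s)
    k2 = lorenz(ax(s, k1, dt / 2))
    k3 = lorenz(ax(s, k2, dt / 2))
    k4 = lorenz(ax(s, k3, dt))
    return tuple(si + dt / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for si, c1, c2, c3, c4 in zip(s, k1, k2, k3, k4))

def leading_exponent(steps=50000, dt=0.01, d0=1e-8):
    a = (1.0, 1.0, 1.0)
    for _ in range(2000):            # discard the transient
        a = rk4_step(a, dt)
    b = (a[0] + d0, a[1], a[2])      # nearby companion trajectory
    total = 0.0
    for _ in range(steps):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
        total += math.log(d / d0)
        b = tuple(u + d0 * (v - u) / d for u, v in zip(a, b))  # renormalize
    return total / (steps * dt)

lam1 = leading_exponent()
print(lam1)   # roughly 0.9 in natural-log units
```

Renormalizing the separation back to d0 at every step keeps the companion trajectory in the linear regime, so the accumulated log-growth averages to the largest exponent.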

A classification of different situations is given in Fig. 5.25; some
other important features of chaos will not be further studied in this text.

FIGURE 5.25
Classification of Lyapunov exponents.

Lyapunov exponents in discrete-time systems are quite different, al-
though the idea behind the definition is about the same.
For a discrete-time autonomous system,

    x_{k+1} = f (x_k) ,   x0 ∈ Rⁿ ,

let its Jacobian at the kth step be

    J_k = J_k (x0) = ∂f (x_k) / ∂x_k ,

which all depend on the initial state x0, since x_k = f^k (x0). Let

    P_k = P_k (x0) := J_k J_{k−1} ··· J_2 J_1

be the kth product of Jacobians, and let e_i = e_i (x0) be the eigenvalues of
P_k , arranged according to

    |e_1| ≥ |e_2| ≥ ··· ≥ |e_n| ≥ 0 .

The ith Lyapunov exponent of the system orbit {x_k}, starting from x0 , is
defined by

    λ_i (x0) = lim_{k→∞} (1/k) ln |e_i (x0)| ,   i = 1, ···, n .   (5.5.14)
Differing from the continuous-time setting, a discrete-time system can
be chaotic even if it is one-dimensional and has only one positive Lyapunov
exponent. The logistic map (5.2.3) is a typical example of this type:

    x_{k+1} = μ x_k (1 − x_k) ,   x0 ∈ (0, 1) ,   0 < μ ≤ 4 .       (5.5.15)

This map is chaotic when μ = 4.0 and, in this case, its only Lyapunov
exponent is λ = ln 2 ≈ 0.693. Its diagram of period-doubling bifurcation
leading to chaos is shown in Fig. 5.8.
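The value ln 2 can be reproduced by averaging ln|f′(x_k)| = ln|μ(1 − 2x_k)| along an orbit, which is (5.5.14) specialized to one dimension (a sketch; the seed x0 = 0.3 is an arbitrary illustrative choice):

```python
import math

def logistic_exponent(x0=0.3, mu=4.0, n=100000):
    """Average ln|mu*(1-2x)| along an orbit of x -> mu*x*(1-x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(mu * (1.0 - 2.0 * x)))
        x = mu * x * (1.0 - x)
    return total / n

lam = logistic_exponent()
print(lam, math.log(2.0))   # the estimate is close to ln 2 = 0.693...
```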
Another example is the two-dimensional Hénon map,

    x_{k+1} = 1 − a x_k² + y_k                                 (5.5.16)
    y_{k+1} = b x_k ,

which is chaotic when (a, b) = (1.4, 0.3), with Lyapunov exponents λ1 =
0.603 and λ2 = −2.34, without a zero exponent. A typical chaotic phase
portrait of the Hénon map is shown in Fig. 5.26.
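The largest Hénon exponent can be estimated by carrying a tangent vector along the orbit with the Jacobian [[−2ax, 1], [b, 0]] (a sketch; in natural-log units the estimate comes out near 0.42, which is consistent with the 0.603 quoted above if that figure is a base-2 logarithm):

```python
import math

def henon_largest_exponent(n=200000, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    vx, vy = 1.0, 0.0            # tangent vector
    total = 0.0
    for _ in range(n):
        jx = -2.0 * a * x        # d/dx of (1 - a*x**2 + y) at the current point
        x, y = 1.0 - a * x * x + y, b * x
        vx, vy = jx * vx + vy, b * vx      # multiply by the Jacobian
        norm = math.hypot(vx, vy)
        total += math.log(norm)
        vx, vy = vx / norm, vy / norm      # renormalize to avoid overflow
    return total / n

lam1 = henon_largest_exponent()
print(lam1)   # about 0.42 in natural-log units
```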
In both continuous-time and discrete-time autonomous systems, a posi-
tive leading Lyapunov exponent is a necessary condition for chaos to exist.

FIGURE 5.26
The chaotic attractor of the Hénon map.

5.6 Chaos in Discrete-Time Systems


Chaos in discrete-time systems, or maps, has a rather precise definition.
For a one-dimensional map, the following definition is convenient to verify.

DEFINITION 5.21   A map M : S → S, where S is a nonempty set in a
bounded and compact domain, is chaotic if and only if, for any two nonempty
open subsets U and V of S, there is a point x0 ∈ S on a period-n orbit of
M whose orbit intersects both sets, i.e.,

    { Mᵏ(x0) } ∩ U ≠ ∅   and   { Mᵏ(x0) } ∩ V ≠ ∅ .

This definition, introduced by Touhey [1997], is equivalent to another
common definition of discrete chaos, introduced by Devaney [1987]:

DEFINITION 5.22   A map M : S → S, where S is a nonempty set in a
bounded and compact domain, is chaotic if

(i) it is sensitive to initial conditions;


(ii) it is topologically transitive;
(iii) it has a dense set of periodic orbits.

FIGURE 5.27
Sensitivity to initial conditions.

FIGURE 5.28
Topological transitivity property.

Here, the meaning of each condition is further interpreted as follows:
(i) Sensitivity to initial conditions means that, starting from two arbitrar-
ily close initial points x1 and x2 in S, the corresponding orbits, Mⁿ(x1)
and Mⁿ(x2), will fall far apart after a large enough number of iterations,
n. For instance, let S = [a, b] be a bounded interval in R₊, and M be a
map from S to S. The property of sensitivity to initial conditions says
that for any prescribed ε ∈ [0, b − a], as long as x1 ≠ x2 in S, no matter
how close they are, after a large enough number of iterations n one has

    | Mⁿ(x1) − Mⁿ(x2) | > ε .

This property is illustrated by Fig. 5.27.

(ii) Topological transitivity means that for any nonempty open set U ⊆ S,
no matter how small it is, an orbit of the map will sooner or later travel
into it. This property is illustrated by Fig. 5.28.
(iii) A dense set of periodic orbits means that the map has infinitely many
periodic orbits of different periods, and all these periodic orbits constitute
a dense set in S. The period-doubling bifurcation diagram of the logistic
map, Fig. 5.8, at the value μ = 4.0, best illustrates this property.
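Condition (i) is easy to observe numerically. The sketch below uses the chaotic logistic map x → 4x(1 − x) (discussed further in Example 5.23): two orbits started 10⁻¹⁰ apart separate to order one within a few dozen iterations, even though both remain confined to [0, 1].

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)

x1, x2 = 0.2, 0.2 + 1e-10
max_gap = 0.0
for _ in range(100):
    x1, x2 = logistic(x1), logistic(x2)
    max_gap = max(max_gap, abs(x1 - x2))
print(max_gap)   # of order one, despite the 1e-10 initial difference
```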

Example 5.23
Consider again the logistic map

    x_{k+1} = 4 x_k (1 − x_k) .

First, it can be verified that this map is equivalent to the map

    y_{k+1} = 2 y_k (mod 1) ,                                  (5.6.17)

since the logistic map has the (unique) solution

    x_k = (1/2) [ 1 − cos( 2π 2ᵏ y0 ) ] ,

where
    y0 = (1/2π) cos⁻¹ ( 1 − 2 x0 ) .
2
Then, one can show that the map (5.6.17) satisfies conditions (i)–(iii) of
Definition 5.22:
(i) System (5.6.17) has a Lyapunov exponent λ = ln 2 > 0. Indeed,

    λ = lim_{k→∞} (1/k) ln | J_k J_{k−1} ··· J_2 J_1 | = lim_{k→∞} (1/k) ln 2ᵏ = ln 2 ,

where J_k = 2 for all k = 1, 2, ···, and the modulo operation does not
change the derivative (as can be directly verified from the definition of the
derivative of a function). Therefore, the map is sensitive to initial conditions,
since nearby orbits diverge from each other.
(ii) Because of the mod-1 operation, the map (5.6.17) is equivalent to the
following double angle map from the unit circle to itself:

    ∠x_{k+1} = 2 ∠x_k (mod 2π) ,                               (5.6.18)

where ∠x is the angle and ∠x0 is an initial point on the unit circle, as shown
in Fig. 5.29.
Since the angle is doubled on each iteration, and the number 2 is not an
integer multiple of π, it is clear that for any nonempty open arc-segment S̃
on the circle, sooner or later (i.e., for some index k) the point ∠x_k ∈ S̃.
Therefore, this double angle map is topologically transitive, and so is the map
(5.6.17).
(iii) Since Eq. (5.6.18) is 2π-periodic, one has

    ∠x_{k+n} = 2ⁿ ∠x_k − 2π m ,   m, n, k = 0, 1, 2, ··· .

Thus, for each fixed pair (n, m), this equation yields a fixed point, ∠x*,
satisfying
    ∠x* = 2ⁿ ∠x* − 2π m ,   m, n = 0, 1, 2, ··· .

FIGURE 5.29
The double angle map from the unit circle to itself.

This gives

    ∠x* = 2π m / (2ⁿ − 1) ,   m = 0, 1, 2, ···, 2ⁿ − 2 ;   n = 1, 2, ··· .

Therefore, the map has infinitely many periodic orbits of different periods,
all located on the unit circle. Moreover, as n, m → ∞, these periodic
points become dense on the unit circle, almost uniformly; hence the same
is true for the map (5.6.17).
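The periodic points 2πm/(2ⁿ − 1) can be checked in exact rational arithmetic by working with t = ∠x/2π, so that the double angle map becomes t → 2t (mod 1) (a sketch):

```python
from fractions import Fraction

def doubled(t):
    """One step of the double angle map, written for t = angle / (2*pi)."""
    return (2 * t) % 1

def returns_after(m, n):
    """Does t0 = m/(2**n - 1) return to itself after n doublings?"""
    t0 = Fraction(m, 2 ** n - 1)
    t = t0
    for _ in range(n):
        t = doubled(t)
    return t == t0

ok = all(returns_after(m, n) for n in range(1, 9) for m in range(2 ** n - 1))
print(ok)   # each m/(2**n - 1) lies on an orbit whose period divides n
```

Using Fraction avoids the floating-point pitfall that binary doubling loses one bit of the mantissa per step, which would make every float orbit collapse to 0 after about 53 iterations.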

For a higher-dimensional map M : S → S, where S is a bounded and
compact set in Rⁿ with n > 1, there is a definition of chaos introduced
by Marotto [1978], which generalizes an earlier definition of Li and Yorke
[1975] from one-dimensional to higher-dimensional maps.
Consider an n-dimensional discrete-time autonomous system,

    x_{k+1} = f (x_k) ,                                        (5.6.19)

where f is a continuous nonlinear function. Let f ′(x) and det(f ′(x)) be the
Jacobian of f at x and its determinant, respectively, and let B_r(x) be a
closed ball in Rⁿ of radius r > 0 centered at x.

DEFINITION 5.24   A fixed point x* of (5.6.19) is said to be a snap-back
repeller if
(i) there exists a real number r > 0 such that f is differentiable, with
all eigenvalues of f ′(x) exceeding unity in absolute value, for all
x ∈ B_r(x*);
(ii) there exists an x0 ∈ B_r(x*), with x0 ≠ x*, such that for some posi-
tive integer m, f^m(x0) = x* and f^m is differentiable at x0 with
det((f^m)′(x0)) ≠ 0.

THEOREM 5.6 (Marotto)



If system (5.6.19) has a snap-back repeller, then the system is chaotic in
the sense of Li and Yorke:
(i) there exists a positive integer n such that for every integer p ≥ n,
system (5.6.19) has p-periodic points;
(ii) there exists a scrambled set (an uncountable invariant set S containing
no periodic points) such that
    (a) f (S) ⊆ S,
    (b) for every y ∈ S and any periodic point x of (5.6.19),

        lim sup_{k→∞} || f^k(x) − f^k(y) || > 0 ,

    (c) for every x, y ∈ S with x ≠ y,

        lim sup_{k→∞} || f^k(x) − f^k(y) || > 0 ;

(iii) there exists an uncountable subset S0 of S such that for every x, y ∈ S0 ,

        lim inf_{k→∞} || f^k(x) − f^k(y) || = 0 .

Exercises
5.1 For each of the following equations, determine the type of bifurcation when
μ is varied, and sketch the corresponding bifurcation diagram:
    ẋ = 1 + μ x + x² ,   ẋ = μ x + x² ,
    ẋ = μ x + x³ ,   ẋ = μ − 3 x² .
5.2 For each of the following equations, determine the type of bifurcation when
μ is varied, and sketch the corresponding bifurcation diagram:
    ẋ = μ x − x(1 − x) ,   ẋ = x − μ x(1 − x) ,
    ẋ = μ x − x³ ,   ẋ = μ x − x/(1 + x) .
5.3 Sketch all the periodic solutions of the following system, and indicate their
stabilities:
    ẍ + ẋ (x² + ẋ² − 1) + x = 0 .
5.4 Determine the limit cycles and their stabilities of the following equations:
    ρ̇ = ρ (1 − ρ²)(9 − ρ²) ,   θ̇ = 1 ,
and
    ρ̇ = ρ (ρ − 1)(ρ² − 2) ,   θ̇ = 1 .
5.5 For the following systems, discuss their bifurcations and sketch their phase
portraits when μ varies:
    ẋ = μ x − x² ,  ẏ = −y    and    ẋ = μ x + x³ ,  ẏ = −y .
5.6 For the following system, discuss its bifurcation and sketch its phase por-
trait when μ varies:
    ẋ = y − 2 x
    ẏ = μ + x² − y .
5.7 Consider the system
    ẋ = μ x − y + x y²
    ẏ = x + μ y + y³ .
Study its Hopf bifurcation as μ is varied and passes μ0 = 0. Is this bifur-
cation supercritical or subcritical?
5.8 Consider the biased van der Pol oscillator
    ẍ + μ (x² − 1) ẋ + x = c ,
where c is a constant. Find the curve in the μ–c plane on which Hopf
bifurcations occur.

5.9 Consider the following system:
    ẋ = ½ z + 2 y − α x
    ẏ = ½ z − 2 x − α y        (α, β > 0)
    ż = 2β (1 − x y) .
Use a typical quadratic Lyapunov function to argue that in a large enough
neighborhood of (0, 0, 0) there must be an attractor, and that this attractor is
not (0, 0, 0).
5.10 Consider the skew tent map
    x_{k+1} = x_k / a               if 0 ≤ x_k ≤ a ,
    x_{k+1} = (1 − x_k)/(1 − a)     if a < x_k ≤ 1 ,
with x0 ∈ (0, 1) and 0.5 ≤ a < 1. (i) Pick arbitrary x0 and a to generate a
relatively long time series of the map (e.g., 100,000 points). Then calculate
its Lyapunov exponent λ. Verify whether your result satisfies
    0 < λ = −a ln(a) − (1 − a) ln(1 − a) .
(ii) Pick an arbitrary x0 but fix a = 0.5. Find all the fixed points of the
map and classify their stabilities. (iii) Play around with different values
of x0 and a ∈ [0.5, 1) to see if you can get period-2, period-3, period-4
orbits, and chaos.
6
Lyapunov Design of Nonlinear Feedback
Controllers

6.1 Feedback Control of Nonlinear Systems


A general approach to controlling a nonlinear dynamical system can be
formulated as follows:

    ẋ(t) = f (x, u, t) ,
    y(t) = h(x, u, t) ,                                        (6.1.1)

where x(t) is the system state vector, y(t) the output vector, and u(t) the
control input vector. Here, in a general discussion, it is once again assumed
that all the necessary conditions on the vector-valued functions f and h are
satisfied, such that the system has a unique solution in a certain bounded
region of the state space for each given initial value x0 = x(t0), where the
initial time is t0 ≥ 0.
Given a reference signal, r(t), which can be either a constant (set-point)
or a function of time (target trajectory), the problem is to design a controller in
the state-feedback form
    u(t) = g(x, t) ,                                           (6.1.2)
or, sometimes, in the output-feedback form
    u(t) = g(y, t) ,
where g is a nonlinear (including linear) vector-valued function, such that
the controlled system

    ẋ(t) = f ( x, g(x, t), t ) ,
    y(t) = h ( x, g(x, t), t ) ,                               (6.1.3)

can be driven by the feedback control g(x, t) to achieve the goal of target
tracking:
    lim_{t→tf} ||y(t) − r(t)|| = 0 ,                           (6.1.4)


where the terminal time, tf , is prescribed according to the application,
and || · || is the Euclidean norm of a vector.
Since the second equation in system (6.1.1) is merely a mapping, which
can be easily handled in general, it is ignored in this discussion by simply
letting h(·) = I, the identity mapping, so that y = x.

Some Engineering Perspective about Controller Design


It is very important to point out that in feedback controller design,
particularly in finding a nonlinear controller for a given nonlinear system,
one must bear in mind that the controller should be (much) simpler than
the given system. For instance, suppose one would like to determine a nonlinear
controller, say uk in the discrete-time setting, for guiding the state vector
xk of a given nonlinear control system of the form

    x_{k+1} = f_k (x_k) + u_k

to a target trajectory satisfying a predesired constraint x_{k+1} = φ_k (x_k).
Then, mathematically, it is very easy to use

    u_k = φ_k (x_k) − f_k (x_k) ,

which will bring the original system state xk to the target trajectory in
just one step! As another example, to design a nonlinear controller u(t) in
the continuous-time setting to guide the state vector x(t) of the nonlinear
system
    ẋ(t) = f (x(t), t) + u(t)
to a target trajectory x*(t), it is mathematically correct to use

    u(t) = ẋ*(t) − f (x(t), t) + K ( x(t) − x*(t) ) ,

where K has all its eigenvalues with negative real parts. This controller
leads to
    ė(t) = K e(t) ,   with e(t) = x(t) − x*(t) ,
yielding e(t) → 0, or x(t) → x*(t), as t → ∞. One more example is the
following: for a given nonlinear controlled system in the canonical form

x 1 (t) = x2 (t)




x 2 (t) = x3 (t)
..

.



x n (t) = f x1 (t), , xn (t) + u(t) ,

suppose that one wants to find the nonlinear controller u(t) to guide the
state vector x(t) = [ x1(t) ··· xn(t) ]ᵀ to a target state, x̃, i.e.,

    x(t) → x̃   as t → ∞ .

It is mathematically straightforward to use the controller

    u(t) = −f ( x1(t), ···, xn(t) ) + kc ( xn(t) − x̃n )

with an arbitrary constant kc < 0. This controller yields

    ẋn(t) = kc ( xn(t) − x̃n ) ,

which guarantees en(t) := xn(t) − x̃n → 0 as t → ∞, since kc < 0. Note that
the resulting n-dimensional controlled system is a completely controllable
linear system, ẋ = A x + b u, with

    A = [ 0 1 0 ··· 0 ]
        [ 0 0 1 ··· 0 ]
        [ ⋮ ⋮ ⋮ ⋱ ⋮ ]          and          b = [ 0, ···, 0, 1 ]ᵀ .
        [ 0 0 0 ··· 1 ]
        [ 0 0 0 ··· 0 ]

Therefore, a suitable constant control gain exists for the state-feedback
controller, u = kc xn , such that x(t) → x̃ as t → ∞.
All such examples seem to reveal a universal controller design methodol-
ogy that works for any given system. However, a fatal problem with such a
design is that the controller is even more complicated than the given
system, and hence has no practical value: it virtually replaces the given
system by a stable one. It is hard to imagine that one would accept a controller
for a machine described by f (such as an airplane or a car) that is bigger
than the machine itself (described by the same f ), if one is talking about
engineering design rather than mathematical manipulation. Last but not
least, if this f is a simplified mathematical model of the real machine, and
the controller uses the same mathematical model, then the design merely
shows that the controller works for the mathematical model, but there is
no reason to believe that it works for the real machine as well.
Therefore, in feedback controller design, one is expected to come up
with the simplest possible satisfactory controller: if a linear controller
can be found to do the job, use a linear controller; otherwise, try a simple
nonlinear controller, with, say, a piecewise-linear or quadratic nonlinearity, and
so on. Also, oftentimes full state-feedback information is not available in
practice, so one should try to design a controller using only output feedback
(i.e., partial state feedback), which means the second equation of (6.1.1)
is essential. Whether or not one can find a simple, easily implementable,
low-cost, and effective controller for a given nonlinear system for tracking
control requires both theoretical background and design experience.

A General Approach to Controller Design


Return to the central theme of feedback control for a general nonlinear dy-
namical system, (6.1.1)–(6.1.4). A basic idea is first outlined for a tracking
control task.
Let the target trajectory (or set-point) be x̃(t), and assume that it is
differentiable, so that by denoting

    dx̃(t)/dt = z(t) ,                                         (6.1.5)

one can subtract this equation (6.1.5) from the first equation of (6.1.3), so
as to obtain
    ė = F(e, t) ,                                              (6.1.6)
where e = x − x̃ and

    F(e, t) := f ( x, g(x, t), t ) − z(t) .

If the target trajectory x̃ is a periodic orbit of the given system (6.1.1),
that is, if it satisfies

    dx̃/dt = f ( x̃, 0, t ) ,                                  (6.1.7)

then similarly a subtraction of (6.1.7) from the first equation of (6.1.1)
gives
    ė = F(e, t) ,                                              (6.1.8)
where e = x − x̃ and

    F(e, t) := f ( x, g(x, t), t ) − f ( x̃, 0, t ) .

In either case, e.g., in the second case, which is more difficult in general,
the goal of the design is to determine the controller u(t) = g(x, t) such that

    lim_{t→∞} ||e(t)|| = 0 ,                                   (6.1.9)

which implies that the goal of tracking control is achieved:

    lim_{t→∞} ||x(t) − x̃(t)|| = 0 .                           (6.1.10)

It is now clear from Eqs. (6.1.9) and (6.1.10) that if zero is a fixed point
of the nonlinear system (6.1.8), then the original controllability problem
has been converted to an asymptotic stability problem for this fixed point.
Thus, the Lyapunov first and second methods may be applied or modified
to obtain rigorous mathematical techniques for the controller design. This
is further discussed in the following.
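As a scalar illustration of how the error system (6.1.6) behaves (a sketch with a hypothetical plant nonlinearity, not an example from the text), the model-cancelling controller u = dx̃/dt − f(x) − k·e, of the kind criticized in the previous subsection, makes the error obey ė = −k e and drives x(t) to x̃(t) = sin t:

```python
import math

def f(x):
    """An illustrative plant nonlinearity (an assumption, not from the text)."""
    return math.sin(x) + x ** 3

k, dt = 5.0, 1e-3
x, t = 2.0, 0.0                           # start far from the target
for _ in range(5000):                     # Euler-integrate to t = 5
    x_ref, xdot_ref = math.sin(t), math.cos(t)
    u = xdot_ref - f(x) - k * (x - x_ref)   # cancels f, leaves e' = -k*e
    x += dt * (f(x) + u)                  # plant: x' = f(x) + u
    t += dt
err = abs(x - math.sin(t))
print(err)   # tracking error has decayed like exp(-k*t)
```

This works only because the controller contains a perfect copy of f, which is precisely why the text dismisses such designs as impractical.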

6.2 Feedback Controllers Design for Nonlinear Systems


This section discusses how a linear or nonlinear controller may be de-
signed for the control of a nonlinear dynamical system, based on rigorous
Lyapunov function arguments.

6.2.1 Linear Feedback Controllers for Nonlinear Systems


In light of the Lyapunov first method for nonlinear autonomous systems, and
the linear stability theory for nonlinear nonautonomous systems with weak
nonlinearities (see Section 3.1), it is clear that a linear feedback controller
may be able to control a nonlinear dynamical system in a very rigorous
way.
Take the nonlinear, actually chaotic, Chua's circuit as an example.
This circuit is a simple, yet very interesting, electronic system that displays
rich and typical bifurcation and chaotic phenomena. The circuit is shown
again in Fig. 6.1. It consists of one inductor L, two capacitors C1 and
C2 , one linear resistor R, and one nonlinear resistor, whose characteristic g
is a nonlinear function of the voltage across its two terminals: g = g( VC1 (t) ).
Let iL (t) be the current through the inductor L, and VC1 (t) and VC2 (t) be
the voltages across C1 and C2 , respectively, in this circuit diagram.
It follows from Kirchhoff's laws that

    C1 (d/dt) VC1 (t) = (1/R) [ VC2 (t) − VC1 (t) ] − g( VC1 (t) ) ,
    C2 (d/dt) VC2 (t) = (1/R) [ VC1 (t) − VC2 (t) ] + iL (t) ,
    L (d/dt) iL (t) = −VC2 (t) .
In considering control of the circuit behavior, it turns out to be
easier to first apply the transformation

    x(t*) = VC1 (t) ,   y(t*) = VC2 (t) ,   z(t*) = R iL (t) ,   t* = t/(C2 R) ,

to reformulate the circuit equations in the following dimensionless form:

    ẋ = p [ −x + y − f (x) ]
    ẏ = x − y + z                                              (6.2.11)
    ż = −q y ,

where p > 0, q > 0, and f (x) = R g(x) is a nonlinear function represented
by
    f (x) = m0 x + (1/2)(m1 − m0) ( |x + 1| − |x − 1| ) ,

FIGURE 6.1
Chua's circuit.

in which m0 < 0 and m1 < 0.
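The three-segment shape of f can be verified directly (a sketch, using the parameter magnitudes quoted below with the stated negative signs, m0 = −0.68 and m1 = −1.27): the slope is m1 on the inner segment |x| ≤ 1 and m0 on the two outer segments.

```python
def chua_f(x, m0=-0.68, m1=-1.27):
    """Piecewise-linear Chua characteristic f(x) = m0*x + (m1-m0)*(|x+1|-|x-1|)/2."""
    return m0 * x + 0.5 * (m1 - m0) * (abs(x + 1.0) - abs(x - 1.0))

inner_slope = (chua_f(0.5) - chua_f(-0.5)) / 1.0   # slope on |x| <= 1
outer_slope = (chua_f(3.0) - chua_f(2.0)) / 1.0    # slope on x >= 1
print(inner_slope, outer_slope)                    # m1 inside, m0 outside
```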


It is known that with p = 10.0, q = 14.87, m0 = −0.68, and m1 = −1.27, the circuit displays a chaotic attractor (see Fig. 6.2) and a limit cycle of large magnitude. This limit cycle is a large unstable (saddle-type) periodic orbit that encompasses the non-periodic attractor, and is generated due to the eventual passivity of the transistors.
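As a numerical side note (an illustrative sketch, not part of the book's development), the dimensionless model (6.2.11) is easy to integrate with a classical fourth-order Runge–Kutta scheme. The parameter values below are the ones quoted above; the step size, time horizon, and initial state are arbitrary choices:

```python
# Sketch: integrate the dimensionless Chua circuit (6.2.11) with classical RK4.
# p, q, m0, m1 are the values quoted in the text; dt, the horizon, and the
# initial state are illustrative choices.

p, q, m0, m1 = 10.0, 14.87, -0.68, -1.27

def f(x):
    # piecewise-linear characteristic: f(x) = m0*x + 0.5*(m1 - m0)*(|x+1| - |x-1|)
    return m0 * x + 0.5 * (m1 - m0) * (abs(x + 1.0) - abs(x - 1.0))

def chua(s):
    x, y, z = s
    return (p * (-x + y - f(x)), x - y + z, -q * y)

def rk4_step(s, dt):
    k1 = chua(s)
    k2 = chua(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = chua(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = chua(tuple(s[i] + dt * k3[i] for i in range(3)))
    return tuple(s[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                 for i in range(3))

state = (0.1, 0.0, 0.0)
traj = [state]
for _ in range(10000):                        # tau in [0, 50] with dt = 0.005
    state = rk4_step(state, 0.005)
    traj.append(state)

xmax = max(abs(s[0]) for s in traj)           # the orbit stays bounded ...
xtail = max(abs(s[0]) for s in traj[-2000:])  # ... but does not settle at 0
```

Plotting x against y or z for the stored trajectory should reproduce the double scroll of Fig. 6.2.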
Now, let (x̄, ȳ, z̄) be the unstable periodic orbit of the Chua circuit (6.2.11). Then, the trajectory (x, y, z) of the circuit can be driven from any current state to reach this periodic orbit by a simple linear feedback control of the form

    ( u1, u2, u3 )ᵀ = −K ( x − x̄ , y − ȳ , z − z̄ )ᵀ ,   K = diag( k11, k22, k33 ) ,     (6.2.12)

with

    k11 ≥ −p m1 ,   k22 ≥ 0 ,   and   k33 ≥ 0 ,

where the control can be applied to the trajectory at any time.
A mathematical justification is given as follows. First, one observes that the controlled circuit is

    ẋ = p [ −x + y − f(x) ] − k11 ( x − x̄ ) ,
    ẏ = x − y + z − k22 ( y − ȳ ) ,                   (6.2.13)
    ż = −q y − k33 ( z − z̄ ) .

Since the unstable periodic orbit (x̄, ȳ, z̄) is itself a (periodic) solution of the circuit, one has

    dx̄/dt = p [ −x̄ + ȳ − f(x̄) ] ,
    dȳ/dt = x̄ − ȳ + z̄ ,                              (6.2.14)
    dz̄/dt = −q ȳ ,
Feedback Controllers Design for Nonlinear Systems 173

FIGURE 6.2
The double scroll chaotic attractor of Chua's circuit.

so that a subtraction of (6.2.14) from (6.2.13), with the new notation

    X = x − x̄ ,   Y = y − ȳ ,   and   Z = z − z̄ ,

yields

    Ẋ = p [ −X + Y − f̃(x, x̄) ] − k11 X ,
    Ẏ = X − Y + Z − k22 Y ,                           (6.2.15)
    Ż = −q Y − k33 Z ,
where f̃(x, x̄) = f(x) − f(x̄), which works out, region by region, to

    f̃(x, x̄) =
        m0 (x − x̄) ,                   x ≥ 1 ,        x̄ ≥ 1 ;
        m0 x − m1 x̄ + m1 − m0 ,        x ≥ 1 ,        −1 ≤ x̄ ≤ 1 ;
        m0 (x − x̄) + 2 (m1 − m0) ,     x ≥ 1 ,        x̄ ≤ −1 ;
        m1 x − m0 x̄ − m1 + m0 ,        −1 ≤ x ≤ 1 ,   x̄ ≥ 1 ;
        m1 (x − x̄) ,                   −1 ≤ x ≤ 1 ,   −1 ≤ x̄ ≤ 1 ;
        m1 x − m0 x̄ + m1 − m0 ,        −1 ≤ x ≤ 1 ,   x̄ ≤ −1 ;
        m0 (x − x̄) − 2 (m1 − m0) ,     x ≤ −1 ,       x̄ ≥ 1 ;
        m0 x − m1 x̄ − m1 + m0 ,        x ≤ −1 ,       −1 ≤ x̄ ≤ 1 ;
        m0 (x − x̄) ,                   x ≤ −1 ,       x̄ ≤ −1 ;

with m1 < m0 < 0.
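Since the nine cases are simply f(x) − f(x̄) evaluated region by region, both the formula and the gain condition (6.2.17) derived below can be spot-checked numerically. The following sketch (an illustration, not from the book) does this on a coarse grid:

```python
# Sketch: verify that the nine-case expression equals f(x) - f(xbar), and that
# p*X*ftilde + k11*X**2 >= 0 for the borderline gain k11 = -p*m1.
# p, m0, m1 are the values quoted in the text; the grid is illustrative.

p, m0, m1 = 10.0, -0.68, -1.27

def f(x):
    return m0 * x + 0.5 * (m1 - m0) * (abs(x + 1.0) - abs(x - 1.0))

def seg(v):
    # which linear segment of the characteristic v lies on: +1, 0 (middle), -1
    return 1 if v >= 1.0 else (-1 if v <= -1.0 else 0)

def ftilde(x, xb):
    table = {
        ( 1,  1): m0 * (x - xb),
        ( 1,  0): m0 * x - m1 * xb + m1 - m0,
        ( 1, -1): m0 * (x - xb) + 2.0 * (m1 - m0),
        ( 0,  1): m1 * x - m0 * xb - m1 + m0,
        ( 0,  0): m1 * (x - xb),
        ( 0, -1): m1 * x - m0 * xb + m1 - m0,
        (-1,  1): m0 * (x - xb) - 2.0 * (m1 - m0),
        (-1,  0): m0 * x - m1 * xb - m1 + m0,
        (-1, -1): m0 * (x - xb),
    }
    return table[(seg(x), seg(xb))]

grid = [i * 0.1 for i in range(-30, 31)]
max_err = max(abs(ftilde(a, b) - (f(a) - f(b))) for a in grid for b in grid)

k11 = -p * m1                        # borderline value of condition (6.2.17)
min_val = min(p * (a - b) * ftilde(a, b) + k11 * (a - b) ** 2
              for a in grid for b in grid)
```

Here max_err should be at rounding level, and min_val nonnegative up to rounding, confirming inequality (6.2.16) under condition (6.2.17).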



Define a Lyapunov function for system (6.2.15) by

    V(X, Y, Z) = (q/2) X² + (p q/2) Y² + (p/2) Z² .
It is clear that V(0, 0, 0) = 0 and V(X, Y, Z) > 0 for all X, Y, Z not simultaneously zero. On the other hand, since p, q > 0 and k22, k33 ≥ 0, it follows that

    V̇ = q X Ẋ + p q Y Ẏ + p Z Ż
       = q X { p [ −X + Y − f̃(x, x̄) ] − k11 X }
         + p q Y [ X − Y + Z − k22 Y ] + p Z [ −q Y − k33 Z ]
       = −p q X² + 2 p q X Y − p q Y² − p q X f̃(x, x̄) − q k11 X² − k22 p q Y² − k33 p Z²
       = −p [ q ( X − Y )² + q k22 Y² + k33 Z² ] − q [ p X f̃(x, x̄) + k11 X² ]
       ≤ 0

for all X, Y, and Z, if

    p X f̃(x, x̄) + k11 X² ≥ 0                          (6.2.16)

for all x and x̄. To find the conditions under which (6.2.16) is true, by a careful examination of the nine possible cases for the function f̃(x, x̄) shown above, the following common condition is obtained:

    k11 ≥ max{ −p m0 , −p m1 } = −p m1 ,              (6.2.17)

in which m1 < m0 < 0, as indicated above. This condition guarantees the inequality (6.2.16). Hence, if the conditions stated are satisfied, then the equilibrium point (0, 0, 0) of the controlled circuit (6.2.15) is globally asymptotically stable, so that

    |X| → 0 ,   |Y| → 0 ,   |Z| → 0   as   t → ∞ ,

simultaneously. That is, starting the feedback control at any time on the chaotic trajectory, one has

    lim_{t→∞} | x(t) − x̄(t) | = 0 ,   lim_{t→∞} | y(t) − ȳ(t) | = 0 ,   lim_{t→∞} | z(t) − z̄(t) | = 0 .

The tracking result is visualized in Fig. 6.3.
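Because the proof only uses the fact that (x̄, ȳ, z̄) solves (6.2.11), the same feedback also drives the circuit onto any other solution, e.g., a second copy of the circuit started elsewhere. This gives a convenient numerical check (an illustrative sketch, not from the book; the gains below are arbitrary choices satisfying k11 ≥ −p m1 = 12.7 and k22, k33 ≥ 0):

```python
# Sketch: the linear feedback (6.2.12) drives one Chua trajectory onto a
# reference solution of (6.2.11); both copies are integrated together.
# Gains: k11 = 13 >= -p*m1 = 12.7, k22 = k33 = 1 >= 0 (arbitrary valid choices).

p, q, m0, m1 = 10.0, 14.87, -0.68, -1.27
k11, k22, k33 = 13.0, 1.0, 1.0

def f(x):
    return m0 * x + 0.5 * (m1 - m0) * (abs(x + 1.0) - abs(x - 1.0))

def rhs(s):
    xr, yr, zr, x, y, z = s          # reference state, then controlled state
    return (p * (-xr + yr - f(xr)),
            xr - yr + zr,
            -q * yr,
            p * (-x + y - f(x)) - k11 * (x - xr),
            x - y + z - k22 * (y - yr),
            -q * y - k33 * (z - zr))

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(s[i] + 0.5 * dt * k1[i] for i in range(6)))
    k3 = rhs(tuple(s[i] + 0.5 * dt * k2[i] for i in range(6)))
    k4 = rhs(tuple(s[i] + dt * k3[i] for i in range(6)))
    return tuple(s[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                 for i in range(6))

def err(s):
    return sum((s[i] - s[i + 3]) ** 2 for i in range(3)) ** 0.5

state = (0.1, 0.0, 0.0, -1.5, 0.4, 0.9)    # arbitrary initial states
err0 = err(state)
for _ in range(6000):                      # t in [0, 30] with dt = 0.005
    state = rk4(state, 0.005)
err1 = err(state)
```

The tracking error shrinks essentially monotonically, as predicted by the Lyapunov argument above.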

6.2.2 Nonlinear Feedback Controllers for Nonlinear Systems


It is not always possible to use a linear controller to drive a nonlinear dynamical system, so nonlinear feedback controllers are often necessary. Consider the Duffing oscillator (5.5.9), namely,

    ẋ = y ,
    ẏ = −p2 x − x³ − p1 y + q cos( ω t ) ,            (6.2.18)
FIGURE 6.3
Tracking the inherent unstable limit cycle of Chua's circuit.

where p1, p2, q, and ω are system parameters. It is known that with the parameter set p1 = 0.4, p2 = −1.1, q = 2.1 (or q = 1.8), and ω = 1.8, the Duffing system has a chaotic response (see Fig. 5.21). It is also known that this system has some inherent unstable limit cycles, which, however, have no analytic expressions, nor can they be displayed graphically, due to their instability.
For this system, suppose that one is interested in controlling its chaotic
trajectory to one of its inherent unstable periodic orbits (limit cycles) by
designing a conventional feedback controller.
Notationally, let (x̄, ȳ) = (x̄(t), ȳ(t)) be the target trajectory (one of its unstable periodic orbits). The goal is to control the system trajectory such that

    lim_{t→T} | x(t) − x̄(t) | = 0   and   lim_{t→T} | y(t) − ȳ(t) | = 0     (6.2.19)

for a terminal time T ≤ ∞.
For this purpose, consider a nonlinear feedback controller of the form u(t) = h(t; x, x̄). Adding the controller to the second equation of the original system gives the following controlled Duffing system:

    ẋ = y ,
    ẏ = −p2 x − x³ − p1 y + q cos( ω t ) + h(t; x, x̄) .      (6.2.20)
Since the periodic orbit (x̄, ȳ) is itself a solution of the original system, subtracting (6.2.18), with (x, y) therein replaced by (x̄, ȳ), from system (6.2.20) gives

    Ẋ = Y ,
    Ẏ = −p2 X − ( x³ − x̄³ ) − p1 Y + h(t; x, x̄) ,           (6.2.21)

where

    X = x − x̄   and   Y = y − ȳ .
Next, observe that the controlled Duffing system (6.2.21) is a nonlinear, nonautonomous system. Therefore, the Lyapunov first method may not apply. For this particular case, however, the Lyapunov second method can be applied fairly easily. Indeed, a nonlinear controller h can be designed as follows. Let

    h = −k X + 3 x̄² X + 3 x̄ X² .

Then, since x³ − x̄³ = X³ + 3 x̄ X² + 3 x̄² X, under this controller system (6.2.21) reduces to

    Ẋ = Y ,
    Ẏ = −( k + p2 ) X − p1 Y − X³ ,                   (6.2.22)

and one can easily verify that, under the condition k + p2 > 0, which gives a criterion for determining the linear control gain k, the Lyapunov function

    V(X, Y) = ( (k + p2)/2 ) X² + (1/4) X⁴ + (1/2) Y²
satisfies V̇ = −p1 Y² ≤ 0, where equality holds identically if and only if both X ≡ 0 and Y ≡ 0. This means that the zero fixed point of the controlled Duffing system (6.2.22) is asymptotically stable, so that X → 0 and Y → 0 as t → ∞, i.e., the goal

    | x − x̄ | → 0   and   | y − ȳ | → 0   (t → ∞)

is achieved. The tracking result is visualized in Fig. 6.4.

FIGURE 6.4
Tracking the inherent unstable limit cycle of the Duffing oscillator.
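The unstable limit cycle itself is not available in closed form, but since the derivation only requires (x̄, ȳ) to solve (6.2.18), the controller can be tested numerically by taking another solution of (6.2.18) as the reference (an illustrative sketch, not from the book; k = 2 is an arbitrary gain with k + p2 = 0.9 > 0):

```python
# Sketch: the controller h = -k*X + 3*xbar**2*X + 3*xbar*X**2 from the text,
# used to make a Duffing trajectory track a reference solution of (6.2.18).
# Parameters follow the text; k = 2 is an arbitrary gain with k + p2 > 0.
import math

p1, p2, q, w = 0.4, -1.1, 2.1, 1.8
k = 2.0

def rhs(t, s):
    xb, yb, x, y = s                 # reference (xbar, ybar), controlled (x, y)
    X = x - xb
    h = -k * X + 3.0 * xb**2 * X + 3.0 * xb * X**2
    return (yb,
            -p2 * xb - xb**3 - p1 * yb + q * math.cos(w * t),
            y,
            -p2 * x - x**3 - p1 * y + q * math.cos(w * t) + h)

def rk4(t, s, dt):
    k1 = rhs(t, s)
    k2 = rhs(t + dt / 2, tuple(s[i] + dt / 2 * k1[i] for i in range(4)))
    k3 = rhs(t + dt / 2, tuple(s[i] + dt / 2 * k2[i] for i in range(4)))
    k4 = rhs(t + dt, tuple(s[i] + dt * k3[i] for i in range(4)))
    return tuple(s[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                 for i in range(4))

t, dt = 0.0, 0.01
state = (0.5, 0.0, -1.0, 1.0)        # arbitrary reference / controlled states
err0 = math.hypot(state[0] - state[2], state[1] - state[3])
for _ in range(5000):                # t in [0, 50]
    state = rk4(t, state, dt)
    t += dt
err1 = math.hypot(state[0] - state[2], state[1] - state[3])
```

The error obeys exactly the autonomous system (6.2.22), so it decays regardless of which reference solution is chosen.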

6.2.3 Some General Criteria for Controllers Design


Recall the general tracking control problem described earlier by equations (??)–(6.1.10). Consider a general nonlinear and nonautonomous system of the form

    ẋ = f(x, t) ,                                     (6.2.23)

which is assumed to possess a periodic orbit x̄ of period tp > 0: x̄(t + tp) = x̄(t) for all 0 ≤ t < ∞. The goal is to design a feedback controller of the form

    u(t) = K ( x − x̄ ) + g( x − x̄ , t ) ,            (6.2.24)

where K is a constant matrix and g is a (simple) nonlinear vector-valued function; the controller is to be added to the original system, giving

    ẋ = f(x, t) + u = f(x, t) + K ( x − x̄ ) + g( x − x̄ , t ) .     (6.2.25)

The controller is required to be able to drive the trajectory of the controlled system (6.2.25) to approach the target periodic orbit x̄, in the sense that

    lim_{t→∞} || x(t) − x̄(t) || = 0 ,                 (6.2.26)

where, again, || · || is the standard Euclidean norm.


Since the target periodic orbit x̄ is itself a solution of the original system, it satisfies

    dx̄/dt = f( x̄, t ) ,                              (6.2.27)

and since the feedback controlled system with the controller (6.2.24) is given by

    ẋ = f(x, t) + K ( x − x̄ ) + g( x − x̄ , t ) ,     (6.2.28)

a subtraction of (6.2.27) from (6.2.28) gives

    Ẋ = F(X, t) + K X + g(X, t) ,                     (6.2.29)

where

    X = x − x̄   and   F(X, t) = f(x, t) − f( x̄, t ) .
It is clear that F(0, t) = 0 for all t ∈ [0, ∞).
Now, Taylor-expand the right-hand side of the controlled system (6.2.29) at X = 0 (i.e., at x = x̄), and suppose that the nonlinear controller to be designed will satisfy g(0, t) = 0. Then

    Ẋ = A( x̄, t ) X + h( X, K, t ) ,                 (6.2.30)

where A( x̄, t ) = ∂F(X, t)/∂X |_{X=0} and h(X, K, t) is the remainder of the Taylor expansion (truncated, if necessary, in a design), which is a function of t, K, and O(X).
To this end, the design is to determine both the constant control gain matrix K and the nonlinear controller g(X, t), based on the linearized model (6.2.30), such that X → 0 (i.e., x → x̄) as t → ∞. If this can be done, then when the controller is applied to the original system, as shown in (6.2.28), the goal (6.2.26) is achieved.
The following criteria are immediate from Theorems 3.1 and 3.5.

Lyapunov Design of Nonlinear Controllers 179

THEOREM 6.1
Suppose that in system (6.2.30), h(0, K, t) = 0 and A( x̄, t ) = A is a constant matrix whose eigenvalues all have negative real parts. If

    lim_{||X||→0} || h(X, K, t) || / ||X|| = 0

uniformly with respect to t ∈ [0, ∞), where || · || is the Euclidean norm, then the controller u(t) defined in (6.2.24) will drive the trajectory x of the controlled system (6.2.28) to the target orbit x̄ as t → ∞.

THEOREM 6.2
In system (6.2.30), suppose that h(0, K, t) = 0 and that h(X, K, t) and ∂h(X, K, t)/∂X are both continuous in a bounded region of X. Also assume that

    lim_{||X||→0} || h(X, K, t) || / ||X|| = 0

uniformly with respect to t ∈ [0, ∞). If all the multipliers μi of the system (6.2.30) satisfy

    | μi | < 1 ,   i = 1, …, n ,

then the nonlinear controller (6.2.24) so designed will drive the trajectory x of the controlled system (6.2.28) to the target orbit x̄ as t → ∞.

6.3 Lyapunov Design of Nonlinear Controllers


6.3.1 An Illustrative Design Example
Consider a class of Liénard equations in the form

    ẍ + b(ẋ) + c(x) = 0 ,                             (6.3.31)

which is assumed to have a zero fixed point (x, ẋ) = (0, 0), where b(·) and c(·) are nonlinear functions satisfying
(i) ẋ b(ẋ) > 0 for all ẋ ≠ 0;
(ii) x c(x) > 0 for all x ≠ 0.
Examples of this type of equation include

    ẍ + ẋ³ + x⁵ = 0 ,
    ẍ + (1/m) ẋ³ + ( g²/(m ℓ²) ) sin(x) = 0       (pendulum: −π < x < π) ,
    ẍ − ε ( 1 − x² ) ẋ + x = 0                    (van der Pol: ε > 0 , |x| > 1) .

For these systems, a valid Lyapunov function is the total energy function:

    V(x, ẋ) = (1/2) ẋ²(t) + ∫₀^{x(t)} c(ξ) dξ .

Under conditions (i) and (ii) given above, V(x, ẋ) > 0 for (x, ẋ) ≠ (0, 0), and

    V̇ = ẋ ẍ + c(x) ẋ
       = ẋ [ −b(ẋ) − c(x) ] + c(x) ẋ
       = −ẋ b(ẋ)
       < 0   for all ẋ ≠ 0 .

The only chance to have V̇ = 0 with x ≠ 0 is when ẋ = 0, i.e., on the x-axis. But this axis is not a region, so the Lyapunov instability theorems studied in Section 2.4 do not apply. Besides, whenever the system orbit is located on the x-axis with x ≠ 0, one has ẍ = −c(x) ≠ 0 there, so the orbit leaves this axis immediately, and right after that moment V̇ < 0, which forces the orbit to move toward the origin. Thus, the zero fixed point of the system is asymptotically stable. If, moreover,

    ∫₀^{x(t)} c(ξ) dξ → ∞   as   |x| → ∞ ,

then the asymptotic stability is global.
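For instance, for the first example above, ẍ + ẋ³ + x⁵ = 0, the energy function is V(x, ẋ) = ẋ²/2 + x⁶/6, and its monotone decay along solutions is easy to observe numerically (an illustrative sketch; the initial state and step size are arbitrary choices):

```python
# Sketch: along solutions of x'' + (x')**3 + x**5 = 0, the total energy
# V = 0.5*v**2 + x**6/6 is nonincreasing (its derivative is -v**4).
# Initial data and step size are arbitrary illustrative choices.

def rhs(s):
    x, v = s
    return (v, -v**3 - x**5)

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs((s[0] + dt / 2 * k1[0], s[1] + dt / 2 * k1[1]))
    k3 = rhs((s[0] + dt / 2 * k2[0], s[1] + dt / 2 * k2[1]))
    k4 = rhs((s[0] + dt * k3[0], s[1] + dt * k3[1]))
    return (s[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            s[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def energy(s):
    return 0.5 * s[1] ** 2 + s[0] ** 6 / 6.0

state = (1.2, 0.0)
energies = [energy(state)]
for _ in range(20000):               # t in [0, 100] with dt = 0.005
    state = rk4(state, 0.005)
    energies.append(energy(state))
```

The recorded energies decrease monotonically (up to integration rounding), and slowly near the origin, since the cubic damping ẋ³ is weak for small velocities.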
Now, suppose that conditions (i) and (ii) above are not simultaneously satisfied. In this case, one wants to design a controller, u, to be added to the right-hand side of the given system:

    ẍ + b(ẋ) + c(x) = u ,                             (6.3.32)

so as to stabilize the zero fixed point.


How can this be accomplished? It is natural to try a controller of the form

    u = u1(ẋ) + u2(x) ,

where u1(·) and u2(·) may be nonlinear and are to be determined. The controlled system becomes

    ẍ + [ b(ẋ) − u1(ẋ) ] + [ c(x) − u2(x) ] = 0 .

Thus, it is clear that one has to find u1 and u2 such that

(i) ẋ [ b(ẋ) − u1(ẋ) ] > 0 for all ẋ ≠ 0;
(ii) x [ c(x) − u2(x) ] > 0 for all x ≠ 0.

The following examples illustrate how these conditions can be satisfied in the design.

Example 6.1
The system

    ẍ − ẋ³ + x² = 0

does not simultaneously satisfy conditions (i) and (ii). To design a controller u = u1(ẋ) + u2(x) that forces both

    ẋ [ −ẋ³ − u1(ẋ) ] > 0   (ẋ ≠ 0) ,
    x [ x² − u2(x) ] > 0    (x ≠ 0) ,

it is mathematically straightforward to choose

    u1(ẋ) = −2 ẋ³   and   u2(x) = x² − x ,

which yields

    u = u1 + u2 = −2 ẋ³ + x² − x .

However, this controller is not desirable, since it is even more complicated than the given system: it basically cancels the given nonlinearity and then adds a stable linear part back to the resulting system. In so doing, it has actually replaced the given system by a stable one, which usually is not allowed (for instance, in linear systems control, a PID controller does not cancel the given plant and then put in a new stable plant).
A simple, easily implementable, practical design can sometimes be very technical. At least, a slightly simpler design for this example is

    u1(ẋ) = −ẋ³ − ẋ   and   u2(x) = x² − x − sgn[x] ,

which give

    ẋ [ −ẋ³ − u1(ẋ) ] = ẋ² > 0 ,
    x [ x² − u2(x) ] = x² + x sgn[x] > 0 ,

for all x ≠ 0 and ẋ ≠ 0.
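With this second design, the closed loop reduces to ẍ + ẋ + x + sgn[x] = 0, whose zero fixed point is asymptotically stable; a quick numerical check (an illustrative sketch with arbitrary initial data and step size):

```python
# Sketch: Example 6.1 with the second design u1 = -v**3 - v and
# u2 = x**2 - x - sgn(x); the closed loop reduces to x'' + x' + x + sgn(x) = 0.
# Initial data and step size are arbitrary illustrative choices.

def sgn(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def rhs(s):
    x, v = s
    u = (-v**3 - v) + (x**2 - x - sgn(x))   # u = u1(v) + u2(x)
    return (v, v**3 - x**2 + u)             # plant: x'' = (x')**3 - x**2 + u

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs((s[0] + dt / 2 * k1[0], s[1] + dt / 2 * k1[1]))
    k3 = rhs((s[0] + dt / 2 * k2[0], s[1] + dt / 2 * k2[1]))
    k4 = rhs((s[0] + dt * k3[0], s[1] + dt * k3[1]))
    return (s[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            s[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

state = (2.0, 0.0)
for _ in range(20000):               # t in [0, 20] with dt = 0.001
    state = rk4(state, 0.001)
x_final, v_final = state
```

The sgn[x] term is discontinuous at x = 0, so very fine steps (or an event-aware integrator) are advisable near the origin; the controlled state nevertheless settles at (0, 0).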

6.3.2 Adaptive Control via Lyapunov Design


The above Lyapunov design method is also useful for adaptive controller design for uncertain systems. The following very simple example conveys the basic idea of the approach.

Example 6.2
Consider an uncertain linear control system,

    ẋ + θ x = u ,

where θ is an unknown constant. One wants to design a controller, u, such that the controlled state x(t) → 0 as t → ∞.
Case 1.
If |θ| ≤ θ₀, where the bound θ₀ > 0 is known, then the linear controller

    u = −2 θ₀ x

can do the job. Indeed, the controlled system becomes

    ẋ + ( 2 θ₀ + θ ) x = 0 ,

which has the solution

    x(t) = x₀ e^{ −( 2 θ₀ + θ ) t } → 0   (t → ∞) .

Case 2.
If no such upper bound is known, then usually no linear controller can be designed, since whatever linear controller u = −k x is used, the system orbit is

    x(t) = x₀ e^{ −( k + θ ) t } ,

which does not provide any guideline for the determination of the constant gain k.
In this case, one has to resort to a different methodology. On-line estimation of θ is often necessary, for which an observer is a useful tool. Suppose that θ̂ is an estimate of the unknown θ, which satisfies a so-called observer equation,

    dθ̂/dt = f( x, θ̂ ) ,

where f(·, ·) is to be determined. Let

    u = u( x, θ̂ ) .

To find the observer f(·, ·), one may consider the Lyapunov function

    V( x, θ̂ ) = (1/2) x² + (1/2) ( θ̂ − θ )² .
One now looks for both f(·, ·) and u(·, ·) that force

    V̇ ≤ −c x²

for some constant c > 0. (Here, one actually uses three class-K functions, e.g., α(|x|) := c |x|²; see Section 2.2.) Since it is required that

    V̇ = x ẋ + ( θ̂ − θ ) dθ̂/dt = x ( u − θ x ) + ( θ̂ − θ ) dθ̂/dt ≤ −c x² ,

or

    x u + θ̂ dθ̂/dt ≤ θ x² + θ dθ̂/dt − c x² ,

one may choose the observer equation

    dθ̂/dt = −x² .

Its solution is

    θ̂(t) = θ̂(0) − ∫₀ᵗ x²(τ) dτ ,

where θ̂(0) may be chosen to be zero. Thus, the requirement becomes

    V̇ = x u − θ̂ x² ≤ −c x² ,

which suggests a controller of the form

    u = ( θ̂ − c ) x   (c > 0) .

Usually, it is preferred that the controller be a negative feedback. For this purpose, c can be any constant satisfying |θ̂(t)| ≤ c for all t ≥ 0.
The entire adaptive control system based on this observer design can be
implemented as shown in Fig. 6.5.

The methodology discussed in the above example can be further extended to uncertain nonlinear systems.
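The loop of Fig. 6.5 can be sketched numerically as follows (an illustration, not from the book; the true value of θ, the gain c, and the initial data are arbitrary choices, and θ is used only to simulate the plant, never inside the controller):

```python
# Sketch: adaptive loop of Example 6.2 -- plant x' = -theta*x + u, observer
# theta_hat' = -x**2, controller u = (theta_hat - c)*x.  The true theta is
# used only to simulate the plant, never inside the controller; theta, c,
# and the initial data are arbitrary illustrative choices.

theta = -2.0                         # "unknown": open loop x' = 2x is unstable
c = 1.0

def rhs(s):
    x, th = s
    u = (th - c) * x                 # controller, uses only the estimate th
    return (-theta * x + u,          # plant
            -x * x)                  # observer

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs((s[0] + dt / 2 * k1[0], s[1] + dt / 2 * k1[1]))
    k3 = rhs((s[0] + dt / 2 * k2[0], s[1] + dt / 2 * k2[1]))
    k4 = rhs((s[0] + dt * k3[0], s[1] + dt * k3[1]))
    return (s[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            s[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

state = (1.0, 0.0)                   # x(0) = 1, theta_hat(0) = 0
V0 = 0.5 * state[0]**2 + 0.5 * (state[1] - theta)**2
for _ in range(5000):                # t in [0, 50] with dt = 0.01
    state = rk4(state, 0.01)
x_final, theta_hat_final = state
V1 = 0.5 * x_final**2 + 0.5 * (theta_hat_final - theta)**2
```

Here V decreases monotonically (V̇ = −c x²) and x(t) → 0; note that θ̂ need not converge to the true θ, only to a value that renders the feedback stabilizing.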

FIGURE 6.5
Implementation of the observer-based adaptive control system.

6.3.3 Lyapunov Redesign of Nonlinear Controllers