Alberto d'Onofrio, Editor

Bounded Noises in Physics, Biology, and Engineering
Modeling and Simulation in Science, Engineering and Technology
Series Editor
Nicola Bellomo
Politecnico di Torino
Torino, Italy
A. Deutsch
Center for Information Services and High-Performance Computing
Technische Universität Dresden
Dresden, Germany

T. Tezduyar
Department of Mechanical Engineering & Materials Science
Rice University
Houston, TX, USA

M.A. Herrero García
Departamento de Matemática Aplicada
Universidad Complutense de Madrid
Madrid, Spain

A. Tosin
Istituto per le Applicazioni del Calcolo "M. Picone"
Consiglio Nazionale delle Ricerche
Roma, Italy
Mathematics Subject Classification (2010): 60Gxx, 60H10, 82C31, 37Hxx, 60H15, 82Cxx, 92-XX, 92C40, 34K18, 34A08, 93-XX
Since the hallmark seminal works on Brownian motion by Einstein and Langevin, Gaussian noises (GNs) have been one of the main concepts of non-equilibrium statistical physics and one of the main tools of its applications, from engineering to biology. The later, and quite dichotomic, mathematical works by Itô and Stratonovich laid a firm theoretical basis for the mathematical theory of stochastic differential equations, as well as a long-lasting and currently unresolved controversy on which of the two approaches is best suited for describing mathematical models of the real world. Other hallmarks in stochastic physics were, in the 1970s, the birth, within the school of Ilya Prigogine, of the theory of noise-induced transitions by Horsthemke and Lefever and, in the early 1980s, within the Rome school, the introduction of the concept of stochastic resonance, first proposed by Benzi, Parisi, Sutera, and Vulpiani to model climatic changes. Finally, in nonlinear analysis, a rigorous theory of stochastic bifurcations (both phenomenological and dynamical) has been under development since the 1990s.
As far as the many applications of stochastic dynamical systems are concerned, in biology and biochemistry noise and noise-induced phenomena are acquiring a (somewhat unforeseen) fundamental relevance, due to recent discoveries showing the constructive role of noise in some biological functions, for example cellular differentiation. The increasing importance of noise in understanding intra- and intercellular mechanisms can indeed be summarized with the motto "noise is not a nuisance."
The above-summarized body of research is essentially based on the use of GNs, which is grounded in the Central Limit Theorem and which is, it must be clearly said here, the best approximation of reality in many cases. However, since the 1960s an increasing number of experimental data has motivated theoretical studies stressing that many real-life stochastic processes do not follow white or colored Gaussian laws but other densities, such as fat-tailed power laws. Although this is not the topic of this book, it is important to recall the pioneering studies by Benoit Mandelbrot and his introduction of the concept of fractional Brownian motion.
Preface
The aim of this collective book is to give, through a series of contributions by world-leading scientists, an overview of the state of the art of the theory of bounded noises and of its applications in the domains of statistical physics, biology, and engineering.
Quite surprisingly, given that in the last 15 years an increasingly large body of research has been and is being published on the subtle effects of bounded noises on dynamical systems, this volume is probably the first book really devoted to the general theory and applications of bounded noises. It is a pleasure to remind the reader that a single monographic volume was published in 2000, by Springer, on a similar topic: Bounded Dynamic Stochastic Systems: Modeling and Control by Prof. Hong Wang, which was focused on industrial applications and was mainly devoted to some innovative approximation methods introduced by its author. By contrast, our collective work is a basic science book.
This volume is organized into four parts.
The first part is entitled "Modeling of Bounded Noises and Their Applications in Physics," and it includes contributions both on the definition of the main kinds of bounded noises and on their applications in theoretical physics. Indeed, at this moment, the theory of bounded stochastic processes is intimately linked to its applications in mathematical and statistical physics, and it would be extremely difficult and unnatural to separate theory from physical applications. In the first contribution of the book, Zhu and Cai illustrate two major classes of bounded noises (the randomized harmonic model and the nonlinear filter model) and their statistical properties, as well as effective algorithms to simulate them numerically. The second contribution is written by the pioneer of the theory of bounded stochastic processes, Prof. Dimentberg, who first introduced the randomized harmonic model in 1988 as a representation of a periodic process with randomized phase modulation. In his contribution, Prof. Dimentberg focuses on the dynamics of the classical linear oscillator under external or parametric bounded excitations, with an excursus into an important nonlinear case. Another major example of bounded noise is the one based on Tsallis statistics (also known as the Tsallis-Borland noise). This noise is introduced here in the contribution by Wio and Deza, who also illustrate its effects on the most important noise-induced phenomena, such as stochastic resonance and noise-induced transitions. Properties of dynamical systems driven by dichotomous Markov noise (DMN) are investigated in the contribution by Ridolfi and Laio, who also focus on the application of DMN in environmental sciences.
Stochastic oscillators are a central topic in statistical physics, as confirmed by the next two chapters. The first, by Gitterman, is devoted to the study of Brownian motion with adhesion, i.e., an oscillator with a random mass for which particles of the surrounding medium adhere to the oscillator for some random time after a collision. The second, by Bobryk, is devoted to the numerical study of energetic stability for a harmonic oscillator with fluctuating damping parameter, where the stochastic perturbation is modeled by means of the sine-Wiener noise, a particular case of the above-mentioned randomized harmonic model. In the next chapter, Hasegawa applies a moment method (MM) to the Langevin model for a Brownian particle subjected to the above-mentioned Tsallis-Borland noise.
The second article is by Field and Grigoriu, who illustrate the problem of model selection for random functions with bounded range. This is an intriguing problem because the available information on input and system properties is typically limited; as a consequence, there may be more than one model consistent with the available information. Finally, Milanese, Ruiz, and Taragna examine the filter design problem for linear time-invariant dynamical systems when no mathematical model is available, but a set of initial experiments can be performed in which the variable to be estimated is also measured.
The above division of the present volume into four parts has, however, to be understood as loose, and partially artificial, since the vast majority of the articles published here are interdisciplinary.
We hope that this volume may trigger new studies in the field of bounded
stochastic processes and that it may be read by an interdisciplinary audience, or
by readers who are willing to extend their expertise to new domains.
I finally thank Prof. Nicola Bellomo and Birkhäuser Science for having allowed this book to exist and for their cooperative attitude (and remarkable patience!) during the development of this volume.
IFIMAR, Universidad Nacional de Mar del Plata and CONICET, Mar del Plata,
Argentina
M. Dimentberg Department of Mechanical Engineering, Worcester Polytechnic
Institute, Worcester, MA, USA
Alberto d'Onofrio Department of Experimental Oncology, European Institute of Oncology, Milan, Italy
R.V. Field Sandia National Laboratories, Albuquerque, NM, USA
Alberto Gandolfi Istituto di Analisi dei Sistemi ed Informatica "A. Ruberti", CNR, Viale Manzoni 30, Roma, Italy
W.Q. Zhu
Department of Mechanics, State Key Laboratory of Fluid Transmission and Control,
Zhejiang University, Hangzhou 310027, China
e-mail: wqzhu@yahoo.com
G.Q. Cai
Department of Ocean and Mechanical Engineering, Florida Atlantic University,
Boca Raton, FL 33431, USA
e-mail: caig@fau.edu
1.1 Introduction
E[X(t_1)X(t_2)] = A^2 E\left[\sin(\omega_0 t_1 + \sigma B(t_1) + U)\sin(\omega_0 t_2 + \sigma B(t_2) + U)\right]
               = \frac{A^2}{2} E\left[\cos\left(\omega_0(t_2 - t_1) + \sigma(B(t_2) - B(t_1))\right)\right]    (1.4)
where b and u are state variables for the stochastic process B(t) and the random variable U, respectively. The convention of using a lowercase letter to represent the state variable of an uppercase random quantity will be followed hereafter. Denote Z = B(t_2) - B(t_1).
Since the Wiener process B(t) is Gaussian distributed, its increment Z is also Gaussian distributed, with

p_Z(z) = \frac{1}{\sqrt{2\pi(t_2 - t_1)}}\exp\left[-\frac{z^2}{2(t_2 - t_1)}\right]    (1.7)
Then

E[X(t_1)X(t_2)] = \frac{A^2}{2} E\left[\cos(\omega_0(t_2 - t_1))\cos(\sigma Z) - \sin(\omega_0(t_2 - t_1))\sin(\sigma Z)\right]
 = \frac{A^2}{2}\cos(\omega_0(t_2 - t_1))\int_{-\infty}^{+\infty}\frac{\cos(\sigma z)}{\sqrt{2\pi(t_2 - t_1)}}\exp\left[-\frac{z^2}{2(t_2 - t_1)}\right]dz
 = \frac{A^2}{2}\cos(\omega_0(t_2 - t_1))\exp\left[-\frac{1}{2}\sigma^2(t_2 - t_1)\right].    (1.8)
Equation (1.8) shows that X(t) is a weakly stationary process with an autocorrelation function

R_{XX}(\tau) = \frac{A^2}{2}\cos(\omega_0\tau)\exp\left(-\frac{1}{2}\sigma^2|\tau|\right), \quad \tau = t_2 - t_1    (1.9)
Carrying out the Fourier transform, we obtain the power spectral density

\Phi_{XX}(\omega) = \frac{A^2\sigma^2(\omega^2 + \omega_0^2 + \sigma^4/4)}{4\pi\left[(\omega^2 - \omega_0^2 - \sigma^4/4)^2 + \sigma^4\omega^2\right]}.    (1.10)
Figure 1.1 depicts the spectral densities in the positive frequency range for the case of ω₀ = 3 and several values of σ. It is seen that the spectral densities reach their peaks near ω = ω₀ and exhibit different bandwidths for different values of σ. The stochastic process X(t) reduces to a pure harmonic process with a random initial phase when σ = 0. As σ increases, the bandwidth of the process becomes broader, indicating an increasing randomness.

1 On Bounded Stochastic Processes

Fig. 1.1 Spectral densities of the randomized harmonic process X(t) with ω₀ = 3 and different values of σ
To find the probability density of X(t), denote
(1.14)
p_{\Theta_1}(\theta_1) = \sum_{k=-\infty}^{+\infty} p_Y(\theta_1 + 2k\pi) = \frac{1}{2\pi}\sum_{k=-\infty}^{+\infty}\int_{\theta_1 + 2(k-1)\pi}^{\theta_1 + 2k\pi} p_Y(y)\,dy = \frac{1}{2\pi}\int_{-\infty}^{+\infty} p_Y(y)\,dy = \frac{1}{2\pi}    (1.16)
In deriving (1.16), use has been made of (1.14). Equation (1.16) shows that Θ₁ is uniformly distributed in [0, 2π). According to the transformation rule of probability density functions,

p_X(x) = p_{\Theta_1}(\theta_1)\left|\frac{d\theta_1}{dx}\right| = \frac{1}{\pi\sqrt{A^2 - x^2}}, \quad -A < x < A.    (1.17)
Figure 1.2 depicts the probability density of X(t). It has very large values near the two boundaries. Note that the probability distribution depends only on A, which is determined according to the physical boundary of the underlying phenomenon; thus, the probability distribution is not adjustable. The parameters ω₀ and σ have no effect on the probability distribution; however, they can be adjusted to match the spectral density of the process X(t) to be modeled, according to the peak magnitude, peak location, and bandwidth.
The randomized harmonic model can be extended to include more terms, as given by

X(t) = \sum_{i=1}^{n} A_i\cos(\omega_i t + \sigma_i B_i(t) + U_i)    (1.18)

where A_i are positive constants, B_i(t) are mutually independent unit Wiener processes, and U_i are mutually independent random variables uniformly distributed in [0, 2π]. The spectral density of X(t) is now

\Phi_{XX}(\omega) = \sum_{i=1}^{n}\frac{A_i^2\sigma_i^2(\omega^2 + \omega_i^2 + \sigma_i^4/4)}{4\pi\left[(\omega^2 - \omega_i^2 - \sigma_i^4/4)^2 + \sigma_i^4\omega^2\right]}    (1.19)
where

p_{Y_i}(y_i) = \frac{1}{\pi\sqrt{A_i^2 - y_i^2}}    (1.21)
Fig. 1.3 Spectral densities of X(t) generated from the randomized harmonic model (1.18) with two terms for the case of A₁ = 2, A₂ = 0.8, ω₁ = 3, ω₂ = 6, σ₁ = 1.2. Taken from Ref. [4], © Elsevier Science Ltd (2004)
The peaks are located near ω₁ and ω₂, and their magnitudes depend on A₁, A₂, σ₁, and σ₂. Thus, for a process with a two-peak spectral density, the parameters in the model can be adjusted to match the targeted spectral density. Since the probability density depends only on A₁ and A₂, an identical one is found for the four cases in Fig. 1.3, and another one for the four cases in Fig. 1.4. They are drawn as a solid line and a dashed line in Fig. 1.5, respectively. Although the boundaries for the two probability distributions are the same, they have different shapes. The probability function is of a singular shape in the sense that it is infinite at ±(A₁ − A₂).
The randomized harmonic model is simple to apply and versatile in matching the spectral density by adjusting the model parameters. However, the probability distribution of the modeled process is of a singular shape and cannot be adjusted. For cases in which the probability distributions of the excitations have insignificant effects on system behaviors, for example, when stationary responses of linear or weakly nonlinear systems are of interest [2, 5], the randomized harmonic model is an advantageous choice for excitation processes.
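Because the terms in (1.18) are mutually independent, the autocorrelation of the sum is simply the sum of single-term autocorrelations of the form (1.9). The sketch below checks this for two terms; A₁, A₂, and the two center frequencies follow Fig. 1.3, while the second bandwidth parameter is an arbitrary illustrative choice:

```python
import numpy as np

# Two-term randomized harmonic model (1.18): the process is bounded by
# A1 + A2, and its autocorrelation is the sum of the single-term results.
rng = np.random.default_rng(2)
A = np.array([2.0, 0.8])        # A1, A2 as in Fig. 1.3
Om = np.array([3.0, 6.0])       # the two center frequencies
sig = np.array([1.2, 1.0])      # sigma1 as in Fig. 1.3; sigma2 is an assumption
dt, n_steps, n_paths = 0.01, 1000, 2000

t = np.arange(n_steps) * dt
X = np.zeros((n_paths, n_steps))
for Ai, Oi, si in zip(A, Om, sig):
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps - 1))
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
    U = rng.uniform(0.0, 2 * np.pi, (n_paths, 1))
    X += Ai * np.cos(Oi * t + si * B + U)

R_est = (X[:, :1] * X).mean(axis=0)
R_theory = sum(0.5 * Ai**2 * np.cos(Oi * t) * np.exp(-0.5 * si**2 * t)
               for Ai, Oi, si in zip(A, Om, sig))
print(np.abs(X).max(), A.sum())   # the bound A1 + A2 is never exceeded
print(np.max(np.abs(R_est - R_theory)))
```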
Fig. 1.4 Spectral densities of X(t) generated from the randomized harmonic model (1.18) with two terms for the case of A₁ = 1.4, A₂ = 1.4, ω₁ = 3, ω₂ = 6, σ₁ = 1.0. Taken from Ref. [4], © Elsevier Science Ltd (2004)

Fig. 1.5 Probability densities of X(t) generated from the randomized harmonic model (1.18) with two terms. Taken from Ref. [4], © Elsevier Science Ltd (2004)
12 W.Q. Zhu and G.Q. Cai
But it is this random variable U that renders the process non-ergodic, so that a large number of samples is required in Monte Carlo simulation. If the system under investigation is complex, with many degrees of freedom, the computational time for the simulation may be prohibitively long. To reduce the computational burden, an equivalent representation is proposed below.
Let
Applying the Itô differential rule [12], we obtain the following Itô differential equations [13] from (1.24):

dX = \left(\omega_0 Y - \frac{1}{2}\sigma^2 X\right)dt + \sigma Y\,dB(t)

dY = -\left(\omega_0 X + \frac{1}{2}\sigma^2 Y\right)dt - \sigma X\,dB(t)    (1.25)
The equation set (1.25) is equivalent to the stochastic differential equations in the Stratonovich sense, obtained by taking account of the Wong-Zakai correction [18],
where γ is a positive constant and B(t) is a unit Wiener process. Multiplying (1.27) by X(t − τ) and taking the ensemble average, we obtain

\frac{dR_{XX}(\tau)}{d\tau} = -\gamma R_{XX}(\tau)    (1.28)
which is the initial condition for (1.28). The solution of Eq. (1.28) is then given by
The corresponding spectral density of X(t), i.e., the Fourier transform of R_{XX}(τ), is of the low-pass type

\Phi_{XX}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} R_{XX}(\tau)e^{-i\omega\tau}d\tau = \frac{\sigma^2\gamma}{\pi(\omega^2 + \gamma^2)}    (1.31)
Equation (1.31) shows that the central frequency is ω = 0 and that the bandwidth is controlled by the parameter γ.
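The Fourier pair behind (1.31) is easy to verify numerically: taking the exponentially decaying autocorrelation that solves (1.28), R(τ) = σ²e^(−γ|τ|), numerical quadrature of the transform reproduces σ²γ/[π(ω² + γ²)]. Parameter values below are illustrative:

```python
import numpy as np

# Numerical check of the Fourier pair: R(tau) = sigma2*exp(-gamma*|tau|)
# <-> Phi(omega) = sigma2*gamma / (pi*(omega^2 + gamma^2)), as in (1.31).
sigma2, gamma = 2.0, 0.7
tau = np.linspace(-200.0, 200.0, 400_001)    # fine grid with a node at tau = 0
R = sigma2 * np.exp(-gamma * np.abs(tau))
for w in (0.0, 0.5, 1.5):
    phi_num = np.trapz(R * np.cos(w * tau), tau) / (2 * np.pi)
    phi_exact = sigma2 * gamma / (np.pi * (w**2 + gamma**2))
    print(w, phi_num, phi_exact)
```

Since R(τ) is even, only the cosine part of e^(−iωτ) contributes to the integral.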
The stationary probability density p_X(x) of X(t) is governed by the reduced Fokker-Planck equation

\frac{d}{dx}\left[\gamma x\,p_X(x)\right] + \frac{1}{2}\frac{d^2}{dx^2}\left[D^2(x)p_X(x)\right] = 0    (1.32)
Thus the stochastic process X(t) generated from (1.27), with D(X) given by (1.33), possesses a given stationary probability density and a low-pass spectral density (1.31). The parameter γ can be used to adjust the spectral density, and the function D(X) is used to match any valid probability distribution.
Consider a bounded stochastic process with the following probability density

p_X(x) = C(\Delta^2 - x^2)^{\delta} = \frac{\Gamma(2\delta + 2)}{2^{2\delta+1}\Delta\,[\Gamma(\delta + 1)]^2}\left(1 - \frac{x^2}{\Delta^2}\right)^{\delta}, \quad \delta > -1    (1.34)
where Γ(·) is the Gamma function, and Δ and δ are two parameters. It is clear from (1.34) that |X| ≤ Δ, and δ is the single parameter which determines the shape of p_X(x). Since the mean square value σ² in (1.29) is uniquely determined by Δ and δ, it is not an independent parameter. Substitution of (1.34) into (1.33) leads to

D^2(X) = \frac{\gamma}{\delta + 1}\left(\Delta^2 - X^2\right)    (1.35)

Fig. 1.6 Stationary probability densities of X(t) generated from the nonlinear filter (1.27)
The stationary probability densities of stochastic processes generated from (1.27) are depicted in Fig. 1.6 for several values of δ. It is shown that the shapes of the probability densities are diverse for different δ values. For the case of δ < 0, the shape of the probability density is similar to that of the randomized harmonic process shown in Fig. 1.2: it reaches a minimum at x = 0 and approaches infinity at the two boundaries. The case of δ = 0 corresponds to a uniform distribution. For cases of δ > 0, the probability density functions reach their maxima at zero. For a fixed γ value, the shapes of the probability densities for different δ values are diverse, yet they may share the same spectral density (1.31).
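The normalization in (1.34) can be checked numerically; the snippet below also confirms that δ = 0 reduces to the uniform density 1/(2Δ). The value of Δ is illustrative:

```python
import math
import numpy as np

# Check that the density (1.34), p(x) = C*(1 - x^2/Delta^2)^delta with the
# Gamma-function constant, integrates to one for several shape values delta.
def p_x(x, Delta, delta):
    C = math.gamma(2 * delta + 2) / (
        2 ** (2 * delta + 1) * Delta * math.gamma(delta + 1) ** 2)
    return C * (1.0 - (x / Delta) ** 2) ** delta

Delta = 1.5
# Open grid: the density is unbounded at the boundaries when delta < 0.
x = np.linspace(-Delta, Delta, 200_001)[1:-1]
for delta in (-0.5, 0.0, 1.0, 3.0):
    print(delta, np.trapz(p_x(x, Delta, delta), x))   # all close to 1
print(p_x(0.0, Delta, 0.0), 1 / (2 * Delta))          # uniform case, delta = 0
```

For δ = −0.5 the quadrature slightly underestimates the unit mass because of the integrable singularities at ±Δ.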
where ai j are parameters and B1 (t) and B2 (t) are independent unit Wiener processes.
Multiplying the two equations in (1.36) by X₁(t), taking the ensemble average, and denoting R_{ij}(τ) = E[X_i(t)X_j(t + τ)], we obtain

\frac{dR_{11}(\tau)}{d\tau} = -a_{11}R_{11}(\tau) - a_{12}R_{12}(\tau)

\frac{dR_{12}(\tau)}{d\tau} = -a_{21}R_{11}(\tau) - a_{22}R_{12}(\tau)    (1.37)
(1.37) can be solved for the correlation functions. In modeling a stochastic process, however, its spectral density is usually of interest. Following a procedure proposed in [3], the spectral densities can be obtained directly, without solving (1.37) and performing a Fourier transform. Define the following integral transformation

\Psi_{ij}(\omega) = F[R_{ij}(\tau)] = \frac{1}{\pi}\int_0^{+\infty} R_{ij}(\tau)e^{-i\omega\tau}d\tau    (1.39)
i\omega\Psi_{11}(\omega) - \frac{1}{\pi}E[X_1^2] = -a_{11}\Psi_{11}(\omega) - a_{12}\Psi_{12}(\omega)

i\omega\Psi_{12}(\omega) - \frac{1}{\pi}E[X_1X_2] = -a_{21}\Psi_{11}(\omega) - a_{22}\Psi_{12}(\omega)    (1.42)

Solutions are readily obtained from the complex linear algebraic equation set (1.42), leading to
where A₁ = a₁₁ + a₂₂ and A₂ = a₁₁a₂₂ − a₁₂a₂₁. By adjusting the parameters a_{ij}, (1.43) can represent a spectral density with a peak at a specified location and a given bandwidth.
The Fokker-Planck equation for the joint stationary probability density p_{X_1X_2}(x_1, x_2) of X₁(t) and X₂(t) corresponding to (1.36) is given by

\frac{\partial}{\partial x_1}\left[(a_{11}x_1 + a_{12}x_2)p\right] + \frac{\partial}{\partial x_2}\left[(a_{21}x_1 + a_{22}x_2)p\right] + \frac{1}{2}\frac{\partial^2}{\partial x_1^2}\left[D_1^2(x_1, x_2)p\right] + \frac{1}{2}\frac{\partial^2}{\partial x_2^2}\left[D_2^2(x_1, x_2)p\right] = 0    (1.44)

This equation can be split into the three balance conditions

a_{12}x_2\frac{\partial p}{\partial x_1} + a_{21}x_1\frac{\partial p}{\partial x_2} = 0    (1.45)

\frac{\partial}{\partial x_1}(a_{11}x_1 p) + \frac{1}{2}\frac{\partial^2}{\partial x_1^2}\left[D_1^2(x_1, x_2)p\right] = 0    (1.46)

\frac{\partial}{\partial x_2}(a_{22}x_2 p) + \frac{1}{2}\frac{\partial^2}{\partial x_2^2}\left[D_2^2(x_1, x_2)p\right] = 0    (1.47)
indicating that the system belongs to the case of detailed balance [11]. The general
solution for Eq. (1.45) is given by
and
C\left(k_1\Delta^2 - k_1x_1^2 - k_2x_2^2\right)^{\delta - 1/2}, \quad \delta > -\frac{1}{2}    (1.53)
The joint stationary probability density is

p_{X_1X_2}(x_1, x_2) = C\left(k_1\Delta^2 - k_1x_1^2 - k_2x_2^2\right)^{\delta - 1/2}    (1.54)

and integrating out x₂ gives the marginal density of X₁(t),

p_{X_1}(x_1) = 2\int_0^{\sqrt{k_1(\Delta^2 - x_1^2)/k_2}} p_{X_1X_2}(x_1, x_2)\,dx_2 = C_1(\Delta^2 - x_1^2)^{\delta}    (1.55)
The probability density (1.55) has the same form as (1.34), but with a more restrictive range for the parameter δ, due to the validity of the joint probability density (1.54) and the positivity requirement of (1.56) and (1.57). Thus, the equation set (1.36), with D₁(X₁, X₂) and D₂(X₁, X₂) given by (1.56) and (1.57), respectively, can be used to generate a stochastic process X₁(t) with the spectral density (1.43) and the probability density (1.55). The parameters a_{ij} (i, j = 1, 2) are used to adjust the spectral density, Δ is determined by the allowable range of the process X₁(t), and δ is used to match the shape of its probability distribution.
Two examples are listed below for illustration.
Example 1. a₁₁ = 0, a₁₂ = −1, a₂₁ = ω₀², a₂₂ = 2ζω₀, D₁² = 0, D₂² = [4ζω₀³/(2δ + 1)](Δ² − X₁² − X₂²/ω₀²):

\Phi_{11}(\omega) = \frac{2\zeta\omega_0^3\sigma^2}{\pi\left[(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2\right]}, \qquad p_{X_1}(x_1) = C_1(\Delta^2 - x_1^2)^{\delta}
Example 2.

\Phi_{11}(\omega) = \frac{2\zeta\omega_0\sigma^2\omega^2}{\pi\left[(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2\right]}, \qquad p_{X_1}(x_1) = C_1(\Delta^2 - x_1^2)^{\delta}
In both cases, ζ and ω₀ can be used to adjust the spectral density, and Δ and δ are used to match the probability density. Figures 1.7 and 1.8 show the spectral density functions for the two examples with ω₀ = 3 and several different values of ζ. It is seen that the two example models yield
The nonlinear filter model can also be extended to include cases with multiple peaks in the spectra. Consider the following governing equations

dX_i = -\sum_{j=1}^{n} a_{ij}X_j\,dt + D_i(\mathbf{X})\,dB_i(t), \quad i = 1, \ldots, n    (1.58)
where \mathbf{X} = \{X_1, \ldots, X_n\}^T, and B_i(t) are unit Wiener processes, mutually independent for different i. Following the same procedure as in the preceding section, we can model a bounded stochastic process X₁(t) with a probability density

p_{X_1}(x_1) = C_1(\Delta^2 - x_1^2)^{\delta}, \quad \delta > \frac{n - 3}{2}    (1.59)
and a spectral density obtained from solving the equations

i\omega\Psi_{1i}(\omega) - \frac{1}{\pi}E[X_1X_i] = -\sum_{j=1}^{n} a_{ij}\Psi_{1j}(\omega), \quad i = 1, \ldots, n    (1.60)
k_i a_{ij} + k_j a_{ji} = 0, \quad i \neq j, \quad i, j = 1, \ldots, n    (1.62)
It can be shown that the spectral density Φ₁₁(ω) has multiple peaks if n > 2. The locations of the peaks and the bandwidth of each peak are adjustable by selecting the coefficients a_{ij}. The low-pass case of n = 1 and the case of a single peak at a nonzero frequency, n = 2, are special cases of (1.58).
An example of the case n = 4 is given below for illustration. The nonlinear filter
model is governed by
dX1 = X2 dt
Fig. 1.7 Spectral densities of X₁(t) generated from the 2-D nonlinear filter model (1.36) for Example 1 with ω₀ = 3. Taken from Ref. [4], © Elsevier Science Ltd (2004)

Fig. 1.8 Spectral densities of X₁(t) generated from the 2-D nonlinear filter model (1.36) for Example 2 with ω₀ = 3. Taken from Ref. [4], © Elsevier Science Ltd (2004)
where ω₁, ω₂, ζ₁, and ζ₂ are positive parameters, a₂₄ and a₄₂ are coupling parameters with opposite signs, and

D_2^2(\mathbf{X}) = \frac{4\zeta_1\omega_1^3}{2\delta - 1}\left(\Delta^2 - X_1^2 - \frac{1}{\omega_1^2}X_2^2 + \frac{a_{24}\omega_2^2}{a_{42}\omega_1^2}X_3^2 + \frac{a_{24}}{a_{42}\omega_1^2}X_4^2\right)

D_4^2(\mathbf{X}) = -\frac{a_{42}\zeta_2\omega_2}{a_{24}\zeta_1\omega_1}D_2^2(\mathbf{X})    (1.64)
The process X₁(t) possesses a spectral density determined by (1.60) and a probability density

p_{X_1}(x_1) = C_1(\Delta^2 - x_1^2)^{\delta}, \quad \delta > \frac{1}{2}    (1.65)
Thus, the parameters Δ and δ can be used to adjust the probability density, while ω₁, ω₂, ζ₁, ζ₂, a₂₄, and a₄₂ can be used to match the spectral density. Figure 1.9 shows the spectral density functions for three cases with ω₁ = 6, ω₂ = 2, ζ₁ = ζ₂ = 0.05, and a₂₄ = −a₄₂ = 1, 3, 4. By changing the single parameter a₂₄, the spectral density takes different shapes. For a more complicated shape of a spectral density, optimization may be needed to select the set of a_{ij} parameters in the model (1.58).
It may be noted that, in all the examples given above, the bounded processes are defined on a symmetric interval [−Δ, Δ]. This interval can be shifted to an asymmetric one simply by adding a constant to the process. The terms D_i²(X) in these examples are polynomials up to the second order, although other nonnegative expressions are also admissible. In passing, we note that if one of the two spectrum peaks is located at ω = 0, then only a three-dimensional filter is required.
The bounded processes modeled by the nonlinear filters (1.27), (1.36), and (1.58) with the same probability distribution (1.34) have their diffusion coefficients given by (1.35), (1.56), (1.57), and (1.61), respectively. They are not suitable for carrying out Monte Carlo simulation directly, since the state variables may exceed their respective boundaries during the numerical calculations. Taking the one-dimensional nonlinear filter as an example, D²(X) in (1.35) will become negative during the simulation if |X| > Δ. To overcome this difficulty, transformations are proposed to obtain sets of Itô stochastic differential equations for new variables. Two cases are considered below for illustration.
Fig. 1.9 Spectral densities of X₁(t) generated from the 4-D nonlinear filter model (1.58) for ω₁ = 6, ω₂ = 2, ζ₁ = ζ₂ = 0.05. Taken from Ref. [4], © Elsevier Science Ltd (2004)
First we consider the low-pass nonlinear filter given by (1.27) and (1.35). Make the transformation

X = \Delta\sin\Phi    (1.66)

and obtain

\frac{d\Phi}{dX} = \frac{1}{\Delta\cos\Phi}, \qquad \frac{d^2\Phi}{dX^2} = \frac{\sin\Phi}{\Delta^2\cos^3\Phi}    (1.67)
Applying the Itô differential rule [12] and using (1.27) and (1.35), we obtain an Itô equation for the new variable

d\Phi = -\frac{(2\delta + 1)\gamma}{2(\delta + 1)}\tan\Phi\,dt + \sqrt{\frac{\gamma}{\delta + 1}}\,\mathrm{sgn}(\cos\Phi)\,dB(t)    (1.68)
where sgn(·) denotes the sign function. The Itô equation (1.27) is equivalent to a stochastic differential equation in the Stratonovich sense

\dot{X} = -\frac{(2\delta + 1)\gamma}{2(\delta + 1)}X + \sqrt{\Delta^2 - X^2}\,W(t)    (1.69)
where W(t) is a Gaussian white noise with a spectral density γ/[2π(δ + 1)]. Then we have from (1.69)

\dot{\Phi} = -\frac{(2\delta + 1)\gamma}{2(\delta + 1)}\tan\Phi + \mathrm{sgn}(\cos\Phi)\,W(t)    (1.70)
Either (1.69) or (1.70) can be used conveniently and effectively for simulation.
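A minimal Euler-Maruyama sketch of the transformed Itô equation (1.68): simulating the new variable and mapping back through the sine transformation keeps the generated process inside its boundaries by construction, which is exactly the point of the change of variables. The parameter values (γ, δ, Δ) are illustrative, and the stationary mean square E[X²] = Δ²/(2δ + 3), which follows from the density (1.34), serves as a check:

```python
import numpy as np

# Euler-Maruyama simulation of the transformed Ito equation (1.68) and
# recovery of the bounded process X = Delta*sin(Phi).  gamma, delta, Delta
# are illustrative.  For the density (1.34), E[X^2] = Delta^2/(2*delta + 3).
rng = np.random.default_rng(3)
gamma, delta, Delta = 0.5, 1.0, 1.0
dt, n_steps, n_paths = 1e-3, 20_000, 400

drift_c = gamma * (2 * delta + 1) / (2 * (delta + 1))
diff_c = np.sqrt(gamma / (delta + 1))

phi = np.zeros(n_paths)
samples = []
for k in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    phi = (phi - drift_c * np.tan(phi) * dt
           + diff_c * np.sign(np.cos(phi)) * dB)
    if k >= n_steps // 2:              # discard the transient
        samples.append(Delta * np.sin(phi))

X = np.concatenate(samples)
print(np.abs(X).max())                  # never exceeds Delta, by construction
print(X.var(), Delta**2 / (2 * delta + 3))
```

Note that no clipping is needed: however large a discrete step in Φ may be, the mapped value Δ sin Φ stays in [−Δ, Δ].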
For the two-dimensional nonlinear filter in Example 1 of Sect. 1.3.2, i.e., the system

dX_1 = X_2\,dt

dX_2 = -\left(\omega_0^2X_1 + 2\zeta\omega_0X_2\right)dt + \sqrt{\frac{4\zeta\omega_0^3}{2\delta + 1}\left(\Delta^2 - X_1^2 - \frac{1}{\omega_0^2}X_2^2\right)}\,dB(t)    (1.71)
make the transformation

X_1 = \Delta\sin\Phi\cos\Theta, \qquad X_2 = \omega_0\Delta\sin\Phi\sin\Theta    (1.72)

The following partial derivatives can be obtained from (1.71) and (1.72):

\frac{\partial\Phi}{\partial X_1} = \frac{\cos\Theta}{\Delta\cos\Phi}, \quad \frac{\partial\Phi}{\partial X_2} = \frac{\sin\Theta}{\omega_0\Delta\cos\Phi}, \quad \frac{\partial\Theta}{\partial X_1} = -\frac{\sin\Theta}{\Delta\sin\Phi}, \quad \frac{\partial\Theta}{\partial X_2} = \frac{\cos\Theta}{\omega_0\Delta\sin\Phi}

\frac{\partial^2\Phi}{\partial X_2^2} = \frac{1}{\omega_0^2\Delta^2}\left(\frac{\cos^2\Theta}{\sin\Phi\cos\Phi} + \frac{\sin^2\Theta\sin\Phi}{\cos^3\Phi}\right), \qquad \frac{\partial^2\Theta}{\partial X_2^2} = -\frac{2\sin\Theta\cos\Theta}{\omega_0^2\Delta^2\sin^2\Phi}    (1.73)
The Itô differential equations for the new processes Φ(t) and Θ(t) can be derived using the Itô differential rule

where h = 2ζω₀/(2δ + 1). On the other hand, taking into account the Wong-Zakai correction terms [18], the two Itô equations in (1.71) are equivalent to the following two Stratonovich stochastic differential equations
\dot{X}_1 = X_2

\dot{X}_2 = -\omega_0^2X_1 - (2\zeta\omega_0 - h)X_2 + \sqrt{\Delta^2 - X_1^2 - \frac{1}{\omega_0^2}X_2^2}\,W(t)    (1.76)
where W(t) is a Gaussian white noise with a spectral density ω₀²h/π. The corresponding equations for the new variables are

\dot{\Phi} = -(2\zeta\omega_0 - h)\tan\Phi\,\sin^2\Theta + \frac{1}{\omega_0}\mathrm{sgn}(\cos\Phi)\sin\Theta\,W(t)    (1.77)

\dot{\Theta} = -\omega_0 - (2\zeta\omega_0 - h)\sin\Theta\cos\Theta + \frac{|\cos\Phi|}{\omega_0\sin\Phi}\cos\Theta\,W(t)    (1.78)
Either the set of Itô equations (1.74) and (1.75) or the set of Stratonovich equations (1.77) and (1.78) can be used for simulation.
1.4 Conclusions
Two different models can be used for generating bounded stochastic processes: the randomized harmonic model and the nonlinear filter model. Both models are capable of generating processes with spectra having single or multiple peaks and with either narrow or broad bandwidths. The randomized harmonic model is simple to implement, introducing a random noise in the phase angle, but the probability distributions of the generated processes are of singular shapes and cannot be adjusted. Thus it is suitable for cases in which the effects of the probability distribution are not important. In the nonlinear filter model, the drift terms in the Itô equations are adjusted to match the spectral density, and the diffusion terms are determined according to the boundary of the stochastic process and the shape of its probability density. Since it is capable of covering a variety of probability distribution profiles, it may be used for cases in which the probability distribution plays an important role, such as when system transient behaviors are relevant. It is noted that the computational effort may be moderately higher when using the nonlinear filter processes than when using the randomized harmonic processes.
Acknowledgments The first author thanks the support from the National Natural Science Foundation of China under Key Grant No. 10932009 and Grants No. 11072212 and No. 11272279. The second author contributed to this work during his stay at Zhejiang University as a visiting professor. The financial support from Zhejiang University is gratefully acknowledged.
References
M. Dimentberg
2.1 Introduction
The present survey covers response studies for systems subject to randomly disordered periodic excitations, using the following basic model of temporal variations in the applied force h(t), as suggested for engineering mechanics independently in [1, 22]
M. Dimentberg
Department of Mechanical Engineering, Worcester Polytechnic Institute,
100 Institute Road, Worcester, MA 01609, USA
e-mail: diment@wpi.edu
where

\langle\zeta(t)\rangle = 0, \qquad \langle\zeta(t)\zeta(t + \tau)\rangle = D\delta(\tau)
and thus is similar to that of a Gaussian white noise passed through a second-order shaping filter with bandwidth D. On the other hand, the probability density function (PDF) p(h) of the sinusoid with random phase h(t) is drastically different from Gaussian:

p(h) = \frac{1}{\pi\sqrt{1 - h^2}}    (2.3)
The model (2.1) may also be called the PERPM (Periodic Excitation with Random Phase Modulation) model. Obviously it should be more accurate than, say, a Gaussian model for applications where temporal variations in the amplitudes of loads, if present at all, are of secondary importance compared with those in phase (frequency). These applications may include cases of excitation due to spatially periodic travelling dynamic loads and/or travelling structures (e.g., traffic loads on bridges), where imperfect periodicity should be accounted for. Thus, the first reported case of such an application [1] was the classical problem of parametric resonance in coal mine cages with potential random scatter in the distance between neighboring supports. It should also be emphasized that, in the vicinity of resonances, disregarding amplitude variations in the excitation in the framework of the PERPM model may be warranted even if these variations are not very small, because of the higher sensitivity of the response to variations in the frequency of excitation. Thus the model has been used, for example, for structural buffeting in a turbulent flow [17], and it may be used for ship rolling in rough seas.
It may be added that the basic PERPM model (2.1) has proved to be simpler for analytical studies of sophisticated parametric random vibration problems with narrow-band random excitations than the model of a filtered Gaussian white-noise excitation. In particular, it can easily be incorporated into the SDE (stochastic differential equations) calculus by using the following equivalent autonomous description of the trigonometric functions
Applying to the SDEs (2.4) the expectation operator, denoted by angular brackets, one may obtain two ODEs for the mean values m_i = ⟨z_i⟩, i = 1, 2. This is done through the use of expressions for the so-called Wong-Zakai corrections [1, 17] for cross-correlations between Gaussian white-noise excitations and state variables governed by a set of SDEs for the components of an n-dimensional state-space vector X. The general rule is as follows: if

then

\langle g_i(\mathbf{X})\zeta_i(t)\rangle = \frac{1}{2}\sum_{k=1}^{n}\left\langle g_k(\mathbf{X})\frac{\partial g_i(\mathbf{X})}{\partial X_k}\right\rangle D_{\zeta,ik}    (2.5)
In the case of linear functions g, the expected values, which appear in the RHSs of the deterministic equations (ODEs) for the expectations of the components of the state vector X, are seen to be linear in these components. Thus, for the SDEs (2.4) the resulting ODEs are

\dot{m}_1 = \nu m_2 - (D/2)m_1, \qquad \dot{m}_2 = -\nu m_1 - (D/2)m_2    (2.6)
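For the initial conditions m₁(0) = 0, m₂(0) = 1, the ODEs (2.6) are solved by m₁ = sin(νt)e^(−Dt/2) and m₂ = cos(νt)e^(−Dt/2). This can be checked against a direct Monte Carlo average over the randomized phase, here taken as νt + √D·B(t) with B a unit Wiener process; all parameter values are illustrative:

```python
import numpy as np

# Monte Carlo check of the moment ODEs (2.6): with m1(0) = 0, m2(0) = 1 they
# give m1 = sin(nu*t)*exp(-D*t/2), m2 = cos(nu*t)*exp(-D*t/2).  The phase is
# modeled as nu*t + sqrt(D)*B(t); parameters are illustrative.
rng = np.random.default_rng(4)
nu, D, t, n = 2.0, 0.8, 1.5, 200_000
B_t = rng.normal(0.0, np.sqrt(t), n)          # B(t) ~ N(0, t)
phase = nu * t + np.sqrt(D) * B_t
m1_mc, m2_mc = np.sin(phase).mean(), np.cos(phase).mean()
m1_th = np.sin(nu * t) * np.exp(-D * t / 2)
m2_th = np.cos(nu * t) * np.exp(-D * t / 2)
print(m1_mc, m1_th)
print(m2_mc, m2_th)
```

The phase diffusion damps both mean values at the rate D/2, which is the imprint of the random phase modulation on the first-order moments.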
\ddot{z} + 2\alpha\dot{z} + \Omega^2 z = h(t)    (2.7)

The mean square response analysis is then performed for the combined SDE set (2.1) and (2.7) (with the latter written in state-space form). The desired PSD of h(t) can then be found from the basic relation between the PSDs of the excitation h(t) and the response z(t) of the shaping filter, which results in

\Phi_{hh}(\Omega) = \lim_{\alpha\to 0^+}\frac{2\alpha\Omega^2}{\pi}\langle z^2\rangle.
In the following Sects. 2.2 and 2.4 the basic model (2.1) is used to describe an external force applied, respectively, to linear and nonlinear systems, and response analyses are presented, whereas in Sect. 2.3 the model describes parametric excitation of a linear system and results of stochastic stability analyses are presented; the subcritical response to an external excitation can also be studied by the method of moments.
\ddot{X} + 2\alpha\dot{X} + \Omega^2X = \lambda h(t)    (2.8)
It goes without saying that whenever only the second-order moments of the displacement and velocity response are of interest, one can just use the basic excitation/response relation for PSDs [1, 17], with the PSD of the RHS of Eq. (2.8) being λ²Φ_hh(ω) (see Eq. (2.2)). However, with the PDF (2.3) of the (scaled) excitation, the response may (although not necessarily!) be strongly non-Gaussian. Regretfully, no analytical solutions for response PDFs are known for the corresponding random vibration problems (except for the case of broadband excitation, D ≫ Ω, where X(t) becomes asymptotically normal). This lack of a benchmark analytical result may bring difficulties with reliability evaluations for those applications where the PERPM is indeed the appropriate model for dynamic loads. Two basic options for further analytical studies are then the method of moments [5] and the path integration method [12].
The first of these approaches may be greatly simplified for an important special case where the system (2.8) is lightly damped and the excitation is narrow-band with small detuning, so that α, D, and |Δ| are proportional to a small parameter, with α ≪ Ω, D ≪ Ω, and |Δ| ≪ Ω. Under these conditions the response X(t) is indeed narrow-band. This case permits analytical study by the stochastic averaging approach [1, 17, 20], with subsequent direct application of the method of moments. To apply the averaging method to the system (2.8), (2.5), first introduce two new state variables X_c(t) and X_s(t) as

The relations (2.9) are then resolved for X_c(t), X_s(t) and differentiated over time. Using Eq. (2.8), we then obtain a pair of first-order SDEs with their RHSs proportional to the small parameter. Then, upon applying averaging over the period 2π/ν, which ultimately implies neglecting terms with sin(q) and cos(q) [5, 12, 19], this set is reduced to

\dot{X}_c = -\alpha X_c - \Delta X_s - X_s\zeta(t), \qquad \dot{X}_s = -\alpha X_s + \Delta X_c + X_c\zeta(t) + \frac{\lambda}{2\Omega}    (2.10)

where Δ = (ν² − Ω²)/2ν.
The linear system (2.10) permits straightforward analysis by the method of moments. This reduction to just a pair of SDEs is especially important whenever high-order moments are sought. But it seems important also to derive a simple analytical expression for the mean square amplitude [12]. Firstly, direct calculation of the Wong-Zakai corrections brings additional terms $-(D/2)X_c$ and $-(D/2)X_s$ to the first and second of the equations (2.10), respectively. Applying then unconditional probabilistic averaging yields a set of two deterministic ODEs for the expected values $m_{c,s} = \langle X_{c,s}\rangle$, which has the constant steady-state solution
$$m_c = -\frac{\lambda\Delta}{2\nu Q},\qquad m_s = \frac{\lambda(\alpha + D/2)}{2\nu Q},\qquad Q = (\alpha + D/2)^2 + \Delta^2. \qquad (2.11)$$
Introduce now a new state variable H(t), which may be identified as a squared response amplitude as long as the detuning $|\Delta|$ is proportional to a small parameter:
$$H = X_c^2 + X_s^2. \qquad (2.12)$$
Differentiating (2.12) over time and substituting the RHSs of the SDEs (2.10) then yields
$$\dot H = -2\alpha H + (\lambda/\nu)X_s. \qquad (2.13)$$
Applying probabilistic averaging to (2.13), we obtain a first-order ODE for the mean square amplitude. As long as a stationary response with constant values of the moments is sought, using (2.11) results in
$$\langle A^2\rangle = \langle H\rangle = \frac{\lambda}{2\alpha\nu}\,m_s = \left(\frac{\lambda}{2\alpha\nu}\right)^2\frac{1 + D/2\alpha}{(1 + D/2\alpha)^2 + (\Delta/\alpha)^2}. \qquad (2.14)$$
This result clearly shows how imperfect periodicity of the excitation leads to a reduction of the response as long as the detuning $|\Delta|$ does not exceed the apparent response half-bandwidth $\alpha + D/2$. Indeed, as long as the mean excitation frequency lies within this resonant range, increasing the excitation bandwidth implies removal of excitation energy from the resonant domain, the total excitation energy being fixed. On the other hand, at higher detunings such an increase should bring more energy into the resonant domain, as can be seen from the fact that the mean square response amplitude increases with D if $1 + D/2\alpha < |\Delta|/\alpha$. This means that neglecting random imperfections in the periodicity of a nominally periodic excitation may not necessarily lead to conservative estimates of reliability.
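This dependence of the response on the bandwidth D can be checked directly from Eq. (2.14); a minimal sketch (the function name and the parameter values are ours, chosen only for illustration):

```python
def mean_square_amplitude(lam, alpha, nu, D, Delta):
    """Evaluate Eq. (2.14):
    <A^2> = (lam/(2*alpha*nu))**2 * (1 + D/(2*alpha))
            / ((1 + D/(2*alpha))**2 + (Delta/alpha)**2)."""
    u = 1.0 + D / (2.0 * alpha)
    c = Delta / alpha
    return (lam / (2.0 * alpha * nu)) ** 2 * u / (u * u + c * c)

# At resonance (Delta = 0) a wider excitation bandwidth reduces <A^2>,
# while for a detuning beyond the response half-bandwidth it increases it.
```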
The method of moments had been applied in [5] to both the exact SDE set (2.4), (2.5) (with the latter equation rewritten as the equivalent pair of first-order SDEs) and the approximate set (2.4), (2.10) to predict fourth-order moments of the response X(t) (25 independent ODEs were derived in the exact analysis upon adding 10 additional trigonometric relations). Results for the (constant in time) stationary fourth-order moment were presented as curves of the excess factor of the steady-state displacement X(t), that is, the quantity $\gamma = \langle X^4\rangle/\langle X^2\rangle^2 - 3$, as functions of the excitation/system bandwidth ratio $D/\alpha$ for various values of the scaled detuning [5]. (This ratio emerged as an important nondimensional parameter in all studies based on the model (2.1).) The value of $\gamma$ was found to be $-1.5$ for $D/\alpha\to0$; this should be expected for the almost sinusoidal response, that is, for a process with a PDF of the same general shape as (2.3) but with
Fig. 2.1 Four examples of the stationary PDF p(x) of the displacement. Full line: bimodal, for the case $D/2\alpha = 0.40$; dashed: bimodal, $D/2\alpha = 1.60$; dash-dot: transitional, $D/2\alpha = 3.60$; dotted: unimodal, $D/2\alpha = 10.0$. Expected excitation frequency $\nu = \Omega$ in all cases. Taken from Ref. [12]
The results were scaled with respect to the corresponding numbers of upcrossings for a Gaussian process with the same PSD as X(t) (and therefore the same rms responses $\sigma_X$ and $\sigma_V$),
$$\nu_u^G = \frac{\sigma_V}{2\pi\sigma_X}\exp\left(-\frac{u^2}{2\sigma_X^2}\right). \qquad (2.16)$$
The calculated values of the ratio $\nu_u/\nu_u^G$ are presented in Fig. 2.2 as functions of the excitation/system bandwidth ratio for the case of zero detuning and several values of the ratio $u/\sigma_X$ (equal to 2.0, 2.5, 3.0, 3.5, and 4.0, starting from the upper curve downwards). Each of the curves exhibits a finite range of almost zero values at small $D/2\alpha$, this range expanding with increasing $u/\sigma_X$. At higher values of $D/2\alpha$ the scaled number of upcrossings starts to increase and may eventually reach unity (normalization effect!), provided that the upcrossing level $u/\sigma_X$ is not very high. Thus it can be seen how the convergence rate to a normal PDF of the response X(t) is strongly reduced with increasing level $u/\sigma_X$; it may be very poor for the tails of the response PDF. Qualitatively similar behavior of the ratio $\nu_u/\nu_u^G$ had been obtained in [12] for other (nonzero) values of the detuning; the convergence rate of the normalization effect seems in general to increase with increasing detuning.
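The Gaussian benchmark (2.16) used for this scaling is Rice's formula; a direct transcription (sketch, with our naming):

```python
import math

def gaussian_upcrossing_rate(u, sigma_x, sigma_v):
    """Rice's formula, Eq. (2.16): expected rate of upcrossings of the level u
    by a stationary Gaussian process with displacement standard deviation
    sigma_x and velocity standard deviation sigma_v."""
    return (sigma_v / (2.0 * math.pi * sigma_x)) * math.exp(-u * u / (2.0 * sigma_x ** 2))
```

The exact upcrossing rates of the non-Gaussian response must instead be computed from the joint PDF of displacement and velocity; Fig. 2.2 plots their ratio to this Gaussian benchmark.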
The joint PDFs of response displacement and velocity were also used in [12] to calculate stationary PDFs of the amplitude, although direct use of the approximate SDEs (2.10) is possible as well. With increasing $D/2\alpha$ a smooth transition had
Fig. 2.2 Expected numbers of upcrossings of several different levels X = u by the displacement X(t) as functions of the excitation/system bandwidth ratio $D/2\alpha$ for the case $\nu = \Omega$. The numbers are scaled with respect to the corresponding upcrossing numbers for the Gaussian process with the same PSD as X(t). The levels shown correspond to values of the ratio $u/\sigma_X$ equal to 2.0, 2.5, 3.0, 3.5, and 4.0 (starting from the upper curve downwards), where $\sigma_X$ is the standard deviation of the response X. Taken from Ref. [12]
been observed from a sharp peak at a value close to the square root of the value defined by Eq. (2.14) to the Rayleigh PDF, corresponding to an asymptotically Gaussian response at high $D/2\alpha$.
The above results may be of use for reliability evaluation, for example when fatigue life is of concern. They show in particular that whenever the admissible safe level of vibration is assigned based on the endurance limit of the material, the imperfect periodicity of the excitation should in general be accounted for, since it may become a source of damage accumulation because of nonzero excursions beyond the endurance limit.
Concluding this section, certain extensions of the basic model (2.1) of h(t) should be mentioned. Firstly, as suggested in [14], the white noise in the RHS of the equation for $q$ may be multiplied by a deterministic time-variant envelope function. This makes the resulting random process nonstationary, thereby providing potential for predicting transient effects. Analysis of a linear system's response to such an extended external excitation h(t) can still be done by the method of moments [14, 16]. Thus, second- and fourth-order moments had been calculated in [14] as functions of time for envelopes being rectangular pulses of different durations.
Another extension of the basic model (2.1), introduced in [15], involves the addition of a second Gaussian white noise according to the relations (2.18), where $\dot q = 2(\nu + \xi(t))$ and $\xi(t)$ is the same white noise as defined for Eq. (2.1); the factor 2 is added just for convenience, to study the principal instability domain. The same change of variables (2.9) as in Sect. 2.2 is now applied to Eq. (2.18), and the averaging over the response period is applied similarly to the new (slowly varying) state variables $X_c(t)$, $X_s(t)$, assuming $\alpha$, $D$, and $|\Delta|$ to be proportional to a small parameter. This results in a pair of shortened SDEs, and the mean-square stability analysis leads to the critical excitation amplitude
$$\left(\frac{\lambda_*}{\alpha}\right)^2 = 1 + \frac{D}{\alpha} + \frac{(\Delta/\alpha)^2}{1 + D/\alpha}. \qquad (2.21)$$
From the first Eq. (2.23) the condition for almost sure neutral stability is seen to be
$$\alpha = \lambda_*\langle\cos 2\phi\rangle = \lambda_*\int_0^{2\pi}\cos(2\phi)\,w(\phi)\,d\phi, \qquad (2.24)$$
where $w(\phi)$ is the stationary PDF of the phase $\phi(t)$. This PDF satisfies the Fokker-Planck-Kolmogorov (FPK) equation which corresponds to the second SDE (2.23). The quadrature solution to this FPK equation had been obtained by Stratonovich and Romanovsky [20] for the original SDOF system with a different type of random parametric excitation (a perfect sinusoid plus white noise). For the present case this
solution for $w(\phi)$ yields the following relation for the critical value of the excitation amplitude, which satisfies the relation (2.24) and is denoted $\lambda_{**}$:
$$\frac{\alpha}{\lambda_{**}} = \frac{1}{2}\left[\frac{I_{i\Delta/D+1}(\lambda_{**}/D)}{I_{i\Delta/D}(\lambda_{**}/D)} + \frac{I_{-i\Delta/D+1}(\lambda_{**}/D)}{I_{-i\Delta/D}(\lambda_{**}/D)}\right]. \qquad (2.25)$$
Here the $I$'s are modified Bessel functions. In the worst case of exact tuning to resonance ($\Delta = 0$) the critical excitation amplitude for almost sure instability satisfies the relation
$$\lambda_{**}/\alpha = I_0(\lambda_{**}/D)/I_1(\lambda_{**}/D). \qquad (2.26)$$
Using in (2.26) the asymptotic expressions for the Bessel functions at high and small values of the argument, and comparing the results with Eq. (2.21), shows that with increasing $D/\alpha$ the ratio of critical excitation amplitudes $\lambda_{**}/\lambda_*$ increases from unity at $D/\alpha\ll1$ (which should be expected on approaching the perfectly periodic case) to $\sqrt2$ at $D/\alpha\gg1$. Numerical results based on Eq. (2.25) are illustrated in [6] in the form of generalized Ince-Strutt charts: sets of neutral stability curves on the detuning-amplitude plane for various values of $D/\alpha$. The full set of curves of $\lambda_{**}/\lambda_*$ vs. $D/\alpha$ for various detunings can also be found in [6].
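Equation (2.26) is transcendental in the critical amplitude, but easy to solve numerically. The sketch below implements the modified Bessel functions through their power series (in practice a library routine such as scipy.special.iv would be used) and recovers both limits discussed above; all names and parameter values are ours:

```python
import math

def bessel_i(n, x, terms=80):
    """Modified Bessel function I_n(x) via its power series
    (adequate for the moderate arguments used below)."""
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def critical_amplitude(alpha, D, iterations=200):
    """Solve Eq. (2.26), lambda*/alpha = I0(lambda*/D)/I1(lambda*/D),
    for the critical amplitude lambda* by bisection.  The upper bracket uses
    the large-bandwidth asymptote lambda* ~ sqrt(2*alpha*D)."""
    lo, hi = 0.0, 2.0 * math.sqrt(2.0 * alpha * D) + 2.0 * alpha
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        # f(lam) = lam*I1(lam/D)/I0(lam/D) - alpha is increasing in lam
        if mid * bessel_i(1, mid / D) / bessel_i(0, mid / D) > alpha:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For small $D/\alpha$ the solution approaches $\alpha$ itself, while for large $D/\alpha$ it approaches $\sqrt{2\alpha D}$.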
Similar analyses have been performed recently [4] for the so-called sum combination resonance in a two-degrees-of-freedom system governed by the equations
$$\ddot X_1 + 2\alpha_1\dot X_1 + \Omega_1^2 X_1 + \Omega_1^2\mu_{12}X_2\,h(t) = 0,$$
$$\ddot X_2 + 2\alpha_2\dot X_2 + \Omega_2^2 X_2 + \Omega_2^2\mu_{21}X_1\,h(t) = 0, \qquad (2.27)$$
where $h(t) = \lambda\sin q(t)$, $\dot q = 2(\nu + \xi(t))$, and $\xi(t)$ is the same as in Eq. (2.1). Assuming the total detuning $2\Delta = \Omega_1 + \Omega_2 - 2\nu$ as well as the damping ratios $\alpha_i/\Omega_i$ and the coefficients $\mu_{12}$, $\mu_{21}$ to be proportional to a small parameter, the KB-averaging can be applied [19]. This results in four shortened SDEs.
states (which is the eigenvalue with the largest real part) is purely real. This implies that the type of instability is divergence (as opposed to flutter) and, which is computationally important, that the point of transition from stable to unstable is associated with a zero value of a certain determinant, for which an explicit expression had been obtained. This results in the following expression for the critical excitation amplitude:
$$\left(\frac{\bar\lambda_*}{\bar\alpha}\right)^2 = 1 + \frac{D}{\bar\alpha} + \frac{(\Delta/\bar\alpha)^2}{1 + D/\bar\alpha},$$
where
$$\bar\alpha = \tfrac{1}{2}(\alpha_1 + \alpha_2),\qquad \bar\lambda = \lambda\sqrt{\mu_{12}\mu_{21}\Omega_1\Omega_2}\,/\,2\nu,\qquad \delta = \tfrac{1}{2}(\alpha_1 - \alpha_2). \qquad (2.29)$$
It can be seen that in the special symmetric case $\alpha_1 = \alpha_2 = \alpha$, $\mu_{12} = \mu_{21} = \mu$, $\delta = 0$, $\Omega_1 = \Omega_2 = \Omega$, the solution (2.29) precisely coincides with the solution (2.21) for the principal parametric resonance, with $\bar\lambda$ and $\bar\alpha$ being the excitation amplitude and damping factor of the single excited mode. But there is also something more in this case. Namely, direct inspection of the shortened SDE set shows that it is equivalent to two uncoupled pairs of second-order SDEs for the variables $X_{+c} = X_{1c} + X_{2c}$, $X_{+s} = X_{1s} + X_{2s}$ and $X_{-c} = X_{1c} - X_{2c}$, $X_{-s} = X_{1s} - X_{2s}$. This means that the condition for almost sure stability in this symmetric case is also the same as in the case of the principal parametric resonance.
Concluding this section, the example of a coal mine cage mentioned in the Introduction may be referred to: even a modest random scatter in the distances between supports, resulting in just 3% in $\sigma/\nu$, where $\sigma$ is the standard deviation of the excitation frequency, may provide a 50% increase in the critical excitation amplitude.
where Y(t) is the system's displacement from its equilibrium position. Equation (2.30) is supplemented by the impact condition (2.31) at the barrier at Y = 0. Introduce the transformation
$$Y = |X|,\qquad \dot Y = \dot X\,\mathrm{sgn}(X). \qquad (2.32)$$
This transformation effectively reduces the system to a non-impact one for the case of the elastic impact, as long as the impact condition (2.31) is transformed to just a continuity condition for X(t) if r = 1. This condition will be adopted here for simplicity, with the understanding that in the case of small impact losses, with $1 - r = O(\varepsilon)$, the impacting system is asymptotically conservative and the impact losses may be accounted for through the use of an additional equivalent viscous damping $(1-r)(\Omega/\pi)$ [1, 8, 23]. The transformed equation (2.30) for the motion between impacts is then found to be
and it was analyzed in [10] by stochastic averaging. The procedure is very similar to the one described here in Sect. 2.2, with some adjustment needed to handle the nonlinearity in the RHS. Two new slowly varying state variables $A$, $\phi$ are introduced to this end, and the solution is represented as
$$X = A\sin\psi,\qquad \dot X = A(\nu/2n)\cos\psi,\qquad \psi = q/2n + \phi.$$
Then the following Fourier series expansion can be used in the RHS of Eq. (2.33):
$$\mathrm{sgn}(X) = \mathrm{sgn}(\sin\psi) = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\sin\bigl((2k-1)\psi\bigr)}{2k-1}. \qquad (2.34)$$
The resulting mean square response amplitude is
$$\langle A^2\rangle = \left(\frac{\lambda_n}{2\alpha\nu}\right)^2\frac{1 + D/2\alpha}{(1 + D/2\alpha)^2 + (\Delta_n/\alpha)^2},\qquad\text{where}\quad \lambda_n = \frac{4n\lambda}{\pi(4n^2-1)}. \qquad (2.35)$$
This expression contains the same second cofactor as in (2.14), which describes the influence of the excitation/system bandwidth ratio. Applying this result to moored bodies excited by ocean waves (the problem considered in [21] assuming perfectly sinusoidal excitation), we may expect that for the worst-case scenario $\Delta_n = 0$ the mean square response amplitude may be up to several times less than with perfect periodicity, since the bandwidth of ocean waves may be of the order of 10% or more of their mean frequency.
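The bandwidth cofactor of Eq. (2.35) quantifies this reduction; a one-line sketch (our naming):

```python
def bandwidth_reduction_factor(D_over_2alpha, detuning_over_alpha=0.0):
    """Second cofactor of Eq. (2.35): the factor by which imperfect
    periodicity scales the mean square resonant amplitude."""
    u = 1.0 + D_over_2alpha
    return u / (u * u + detuning_over_alpha ** 2)

# At zero detuning, a bandwidth ratio D/(2*alpha) = 2 already reduces the
# resonant mean square amplitude to one third of its perfectly periodic value.
```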
It should be emphasized that a closed set of ODEs for the moments could be obtained only for this very special nonlinear system, which could be transformed to the SDE (2.33) with a linear LHS. In general the nonlinearity does not disappear, which implies the necessity of adopting some closure procedure for the infinite set of moments. This was the case with the barrier offset from the equilibrium position [10, 11].
Concluding this section, we may consider the vibroimpact system (2.30), (2.31) for a non-resonant case, whereby the mean excitation frequency is not close to any even integer multiple of the natural frequency. This case had been studied in [21] for perfectly periodic excitation, and a certain range of excitation frequencies was found where the response becomes chaotic through "breeding" of multiple frequencies; the potential application to moored bodies excited by ocean waves was mentioned once again.
The case of non-resonant perfectly periodic excitation had been considered in [9] using qualitative analysis based on an iterative scheme for the transformed system (2.33). In this way a clear description of the frequency-breeding phenomenon was obtained through the use of the series expansion (2.34), with $\psi$ replaced by $\nu t$ in the first approximation.
The influence of imperfect periodicity had then been studied in [9] for the non-resonant case by numerical (Monte Carlo) simulation. It had been shown that
2.5 Conclusions
References
Abstract In this chapter, suitable tools are developed for dealing with the stochastic
dynamics of nonlinear systems submitted to noises which are neither white nor
Gaussian. These tools are then applied to some physical problems:
stochastic resonance
Brownian motors
resonant gated trapping
noise-induced transition
which, besides being highly relevant to biology and technology, are fine instances of the fact that in nonlinear systems noise can have highly nontrivial constructive effects, often challenging our intuition. In the first three examples, the system's response is either optimized (signal enhancement) or becomes more robust (spectral broadening) when the noise is non-Gaussian. In the last one, a shift of the transition lines is observed, in the sense in which order is enhanced.
3.1 Introduction
During the last decades of the twentieth century, the scientific community recognized that noise or fluctuations can in many situations be (against everyday intuition) the trigger of new phenomena or new forms of order. A few examples are noise-induced phase transitions [1], noise-induced transport [2, 3], stochastic resonance [4, 5], and noise-sustained patterns [6, 7].
Most studies of such noise-induced phenomena have assumed the noise source to have a Gaussian distribution, either white (memoryless) or colored (i.e., with memory, a concept defined below). Although customarily accepted without criticism on the basis of the central limit theorem, the true rationale behind this assumption lies in the possibility of obtaining some analytical results while avoiding many of the difficulties that arise in handling non-Gaussian noises. There is, however, experimental evidence that at least in some cases (particularly in sensory and biological systems) non-Gaussian noise sources may add desirable features (like robustness or fault tolerance) to noise-induced phenomena. These findings add practical interest to the intrinsic one of finding viable ways to deal with non-Gaussian noises (or at least some classes thereof).
The present chapter is a brief review of recent results on some noise-induced phenomena arising when the system is submitted to a colored (or time-correlated) and non-Gaussian noise source whose statistics obeys the q-distribution found within the framework of nonextensive statistical physics [8]. In all the phenomena analyzed, the system's response is shown to be strongly affected by a departure of the noise source from Gaussian behavior (corresponding to q = 1). This translates into a shift of the transition lines, an enhancement of the system's response, or a marked broadening thereof, according to the case. In most examples, the value of the parameter q optimizing the system's response turns out to be $q \neq 1$. Clearly, this result is highly relevant for many technological applications, as well as for the understanding of some situations of biological interest.
For the time being, we disregard any explicit dependence on t of the functions f (the drift) and g (the coefficient of the noise, which yields an x-dependent diffusion in a Fokker-Planck description, both in Ito's and in Stratonovich's interpretations), but of course keep the implicit one through the stochastic process x.
Our focus here is the stochastic or noise source $\eta(t)$, which is called multiplicative because its effect on the dynamics is modulated by $g(x(t))$. Usually, $\eta$ is assumed to
3 Noise-Induced Phenomena: Effects of Noises Based on Tsallis Statistics 45
be Gaussian, for instance an OU process generated by $\tau\dot\eta = -\eta + \xi(t)$, where $\xi(t)$ is a Gaussian white noise with zero mean and intensity D. Here we assume the noise $\eta(t)$ to be of the OU type, but obeying a particular class of non-Gaussian distributions arising in nonextensive thermostatistics [8]: $\eta(t)$ is a generalization of the OU process and can be generated through the SDE
$$\dot\eta = -\frac{1}{\tau}\frac{dV_q}{d\eta} + \frac{1}{\tau}\xi(t), \qquad (3.2)$$
where $\xi(t)$ is again a Gaussian white noise of intensity D. The q- (and $\tau$-) dependent potential has the expression
$$V_q(\eta) = \frac{D}{\tau(q-1)}\ln\left[1 + \frac{\tau(q-1)}{2D}\eta^2\right],$$
and $\lim_{q\to1}V_q(\eta) = \eta^2/2$. Since this article is not the appropriate space to elaborate on all the properties of the process $\eta$, we refer to [9] for details. However, it is instructive to display the stationary probability density function (pdf) $P_q^{st}(\eta)$, which can be normalized only for q < 3 and is given by
$$P_q^{st}(\eta) = \frac{1}{Z_q}\exp_q\left[-\frac{\tau}{2D}\eta^2\right], \qquad (3.3)$$
where
$$\exp_q(x) = \left[1 + (1-q)x\right]^{\frac{1}{1-q}} \qquad (3.4)$$
(understood to vanish whenever $1 + (1-q)x \le 0$), and the normalization constant is
$$Z_q = \begin{cases}\sqrt{\dfrac{2\pi D}{\tau(1-q)}}\;\dfrac{\Gamma\!\left(\frac{1}{1-q}+1\right)}{\Gamma\!\left(\frac{1}{1-q}+\frac{3}{2}\right)} & \text{for } -\infty < q < 1,\\[2mm]\sqrt{2\pi D/\tau} & \text{for } q = 1,\\[2mm]\sqrt{\dfrac{2\pi D}{\tau(q-1)}}\;\dfrac{\Gamma\!\left(\frac{1}{q-1}-\frac{1}{2}\right)}{\Gamma\!\left(\frac{1}{q-1}\right)} & \text{for } 1 < q < 3\end{cases}$$
($\Gamma$ indicates the Gamma function). The first moment of $P_q^{st}(\eta)$ is $\langle\eta\rangle = 0$, while the second,
$$\langle\eta^2\rangle = \int_{-\infty}^{\infty}\eta^2\,P_q^{st}(\eta)\,d\eta = \frac{2D}{\tau(5-3q)} \equiv D_q,$$
is finite only for q < 5/3. The correlation time of the $\eta$-process also diverges near q = 5/3 and can be approximated over the whole range of q values by
$$\tau_q \approx \frac{2\tau}{5-3q}.$$
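A hedged numerical sketch of the process defined by Eq. (3.2): integrating the SDE with the Euler-Maruyama scheme and comparing the empirical second moment with the stationary prediction $2D/(\tau(5-3q))$. The noise convention and all names and parameter values below are our own assumptions:

```python
import math
import random

def simulate_tsallis_noise(q=1.2, tau=1.0, D=0.5, dt=1e-3,
                           n_steps=400_000, seed=7):
    """Euler-Maruyama integration of Eq. (3.2) with
    Vq'(eta) = eta / (1 + tau*(q-1)*eta**2/(2*D)), taking
    <xi(t) xi(t')> = 2*D*delta(t-t') so that q -> 1 recovers an OU
    process of stationary variance D/tau.  Returns the empirical
    second moment of eta."""
    rng = random.Random(seed)
    eta = 0.0
    acc = 0.0
    amp = math.sqrt(2.0 * D * dt) / tau
    for _ in range(n_steps):
        vq_prime = eta / (1.0 + tau * (q - 1.0) * eta * eta / (2.0 * D))
        eta += -vq_prime / tau * dt + amp * rng.gauss(0.0, 1.0)
        acc += eta * eta
    return acc / n_steps
```

For q = 1.2, τ = 1, D = 0.5, the stationary prediction is $\langle\eta^2\rangle = 2D/(\tau(5-3q)) \approx 0.71$.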
The shape of the pdf as a function of is shown in Fig. 3.1, for different values of q.
In the next section we outline the path-integral approach to obtain an effective
Markovian approximation, and in the following ones we briefly review a few non-
Gaussian noise-induced phenomena.
$$\frac{\partial P_q}{\partial t} = -\frac{\partial}{\partial x}\Bigl\{\bigl[f(x) + g(x)\eta\bigr]P_q\Bigr\} + \frac{1}{\tau}\frac{\partial}{\partial\eta}\left[\frac{dV_q}{d\eta}\,P_q\right] + \frac{D}{2\tau^2}\frac{\partial^2 P_q}{\partial\eta^2} \qquad (3.5)$$
Fig. 3.1 Stationary pdf $P_q^{st}$ as a function of $\eta$ for different values of q; the case q = 2 corresponds to a wide (Lévy-like) distribution
$$\cdots + ip_\eta(s)\left[\dot\eta(s) + \frac{1}{\tau}\frac{d}{d\eta}V_q(\eta(s))\right] + \frac{D}{2\tau^2}\,[ip_\eta(s)]^2 \qquad (3.6)$$
is the stochastic action, where the time-derivatives are interpreted as the limits of discrete differences.
In the following paragraphs we sketch the path-integration over the dynamical variables $p_\eta(s)$, $p_x(s)$, and $\eta(s)$, and the adiabatic-like elimination whereby we retrieve an effective Markovian approximation. Gaussian integration over $p_\eta(s)$ yields
$$P_q(x_b,\eta_b,t_b \mid x_a,\eta_a,t_a;\tau) = \int_{x(t_a)=x_a,\;\eta(t_a)=\eta_a}^{x(t_b)=x_b,\;\eta(t_b)=\eta_b}\mathcal D[x(t)]\,\mathcal D[\eta(t)]\,\mathcal D[p_x(t)]\;e^{-S_{q,2}}, \qquad (3.7)$$
with
$$S_{q,2} = \int_{t_a}^{t_b}ds\;ip_x(s)\bigl[\dot x(s) - f(x(s)) - g(x(s))\,\eta(s)\bigr] + \frac{\tau^2}{4D}\iint_{t_a}^{t_b}ds\,ds'\left[\dot\eta(s) + \frac{1}{\tau}\frac{d}{d\eta}V_q(\eta(s))\right]\delta(s-s')\left[\dot\eta(s') + \frac{1}{\tau}\frac{d}{d\eta}V_q(\eta(s'))\right]. \qquad (3.8)$$
48 H.S. Wio and R.R. Deza
(3.9)
with
$$S_{q,3} = \frac{\tau^2}{4D}\int_{t_a}^{t_b}ds\left[\dot\eta(s) + \frac{1}{\tau}\frac{d}{d\eta}V_q(\eta(s))\right]^2, \qquad (3.10)$$
and the $\delta$-functional $\delta\bigl[\dot x - f(x) - g(x)\eta\bigr]$ indicates that
$$\eta(s) = \frac{\dot x(s) - f(x(s))}{g(x(s))} \qquad (3.11)$$
is to be fulfilled at each instant s. With this condition, the integration over $\eta(s)$ just amounts to replacing $\eta(s)$ by Eq. (3.11) and $\dot\eta(s)$ by the time-derivative of this equation, namely
$$\dot\eta(s) = \frac{1}{g}\left(\ddot x - f'\dot x\right) - \frac{g'}{g^2}\,\dot x\left(\dot x - f\right), \qquad (3.12)$$
where the prime is a shorthand for d/dx, and $x \equiv x(s)$. The resulting stochastic action corresponds to a non-Markovian description, as it involves $\ddot x(s)$.
In order to obtain an effective Markovian approximation we resort to approximations and arguments used before in relation with colored Gaussian noise [11-13], whose results resembled those of the unified colored noise approximation (UCNA) [14, 15]. In short, we neglect all the contributions including $\ddot x(s)$ and/or $\dot x(s)^n$ with n > 1 and get the approximate relation
$$\dot\eta + \frac{1}{\tau}\frac{dV_q}{d\eta} \approx -\left(\frac{f}{g}\right)'\dot x + \frac{1}{\tau g}\,\frac{\dot x\left[1 - \dfrac{\tau(q-1)}{D}\,\dfrac{(f/g)^2}{1 + \frac{\tau(q-1)}{2D}(f/g)^2}\right] - f}{1 + \dfrac{\tau(q-1)}{2D}\left(f/g\right)^2}. \qquad (3.13)$$
As is the case for the UCNA, this approximation gives reliable results for small values of $\tau$.
The final result for the transition pdf is
$$P_q(x_b,t_b \mid x_a,t_a;\tau) = \int_{x(t_a)=x_a}^{x(t_b)=x_b}\mathcal D[x(t)]\;e^{-S_{q,4}}, \qquad (3.14)$$
with
$$S_{q,4} = \frac{1}{4D}\int_{t_a}^{t_b}ds\left\{U' + \dot x\left[\frac{1 - \frac{\tau}{2D}(q-1)U'^2}{\left[1 + \frac{\tau}{2D}(q-1)U'^2\right]^2} + \frac{\tau U''}{1 + \frac{\tau}{2D}(q-1)U'^2}\right]\right\}^2. \qquad (3.15)$$
It is immediate to recover some known limits. For $\tau > 0$ and $q\to1$ we get the known colored Gaussian noise result (the OU process), while for $\tau\to0$ we retrieve a Gaussian white noise, even for $q \neq 1$.
The FPE for the evolution of P(x,t) consistent with (3.14) is
$$\frac{\partial}{\partial t}P(x,t) = -\frac{\partial}{\partial x}\bigl[A(x)P(x,t)\bigr] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[B(x)P(x,t)\bigr], \qquad (3.16)$$
where, with the shorthand $T(x) \equiv \frac{\tau}{2D}(q-1)U'^2$,
$$A(x) = -\,U'\left[\frac{1-T}{(1+T)^2} + \frac{\tau U''}{1+T}\right]^{-1} \qquad (3.17)$$
and
$$B(x) = 2D\left[\frac{1-T}{(1+T)^2} + \frac{\tau U''}{1+T}\right]^{-2}. \qquad (3.18)$$
The corresponding stationary distribution is
$$P^{st}(x) = \frac{\mathcal N}{B(x)}\exp\left[\phi(x)\right], \qquad (3.19)$$
where $\mathcal N$ is the normalization factor and
$$\phi(x) = 2\int^x\frac{A(y)}{B(y)}\,dy. \qquad (3.20)$$
The FPE (3.16)-(3.18) and its associated stationary distribution (3.19)-(3.20) allow one to compute the mean first-passage time (MFPT) and other quantities through a Kramers-like approximation. Their analytical dependence on the different parameters in the case of a double-well potential agrees remarkably well with the results of extensive numerical simulations.
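The quadrature in (3.19)-(3.20) is straightforward to evaluate numerically for any given drift A(x) and diffusion B(x). The generic sketch below builds the stationary pdf on a grid by trapezoidal integration; as a sanity check, for A(x) = -x and B = 1 (an OU process) it reproduces a Gaussian of variance 1/2. All names are ours:

```python
import math

def stationary_pdf(A, B, xs):
    """Stationary solution (3.19)-(3.20) of the FPE (3.16):
    Pst(x) proportional to exp(phi(x))/B(x) with phi(x) = 2 * Int A/B dy,
    evaluated on the grid xs by trapezoidal quadrature and normalized."""
    phi = 0.0
    raw = []
    for i, x in enumerate(xs):
        if i > 0:
            h = x - xs[i - 1]
            # trapezoid step of the integrand 2*A/B
            phi += h * (A(xs[i - 1]) / B(xs[i - 1]) + A(x) / B(x))
        raw.append(math.exp(phi) / B(x))
    norm = sum(0.5 * (raw[i] + raw[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
    return [p / norm for p in raw]
```

Feeding in the reconstructed A(x), B(x) of (3.17)-(3.18) for a double-well U then gives the stationary pdf entering the Kramers-like MFPT estimate.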
The phenomenon of stochastic resonance (SR) is but one example of the counterintuitive role played by noise in nonlinear systems: enhancing the response of such a system to a weak external signal may require increasing the noise intensity.
The study of SR has raised considerable interest since it was first introduced by Benzi et al. to explain the periodicity of the Earth's ice ages (see [4, 5] and references therein). Among the reasons are its potential technological applications in optimizing the response of nonlinear dynamical systems, and its connection with some biological mechanisms.
A large number of the studies on SR have been done by analyzing the paradigmatic bistable one-dimensional double-well potential
$$U_0(x) = \frac{x^4}{4} - \frac{x^2}{2}. \qquad (3.21)$$
In almost all descriptions, the transition rates between the two wells were estimated
as the inverse of the Kramers time (or the typical mean first-passage time between
the wells), which was evaluated using standard techniques. Moreover, the noises
have been assumed to be Gaussian in almost all cases.
Let us return to Eqs. (3.1)-(3.2) and consider now an explicitly time-dependent drift of the form
$$f(x,t) = -\frac{\partial U(x,t)}{\partial x} = -U_0'(x) + S(t).$$
Fig. 3.2 Signal-to-noise ratio R vs. noise intensity D for the double-well potential $U_0(x) = x^4/4 - x^2/2$. Upper row: theoretical results; lower row: Monte Carlo results. Left: $\tau = 0.1$ and q = 0.25, 0.75, 1.0, 1.25 (from top to bottom); right: q = 0.75 and $\tau = 0.25, 0.75, 1.5$ (from top to bottom)
Fig. 3.3 Current J as a function of q; squares: Monte Carlo results. Calculations performed for m = 0, $\gamma = 1$, T = 0.5, F = 0.1, D = 1, and $\Omega = 100/(2\pi)$. Taken from Ref. [19] (C) Springer
Fig. 3.4 Mass separation: Monte Carlo results for the current as a function of q, for particles of masses m = 0.5 (hollow circles) and m = 1.5 (solid squares). Calculations performed for $\gamma = 2$, T = 0.1, $\tau = 0.75$, D = 0.1875, and F = 0.025 (a), F = 0.02 (b), and F = 0.03 (c). Taken from Ref. [19] (C) Springer
in real situations. Hence, the effect on the model of [21] of a noise of the class described in Sect. 3.2 was analyzed in [22], showing the relevant effects that arise when departing from Gaussian behavior (particularly regarding current enhancement) and their relevance for both biological and technological situations. Among other aspects, a value of $q \neq 1$ optimizing the current was found, in addition to the already known maximum of J as a function of the noise intensity.
Also, the combination of two different enhancing mechanisms was analyzed.
Besides non-Gaussian noises (whose effects on current and efficiency have been
described above), time-asymmetric forcing can separately enhance the efficiency
and current of a Brownian motor [23]. In [24], the effects of subjecting a Brownian
motor to both effects simultaneously were studied. The results were compared with
those obtained in [23] for the Gaussian white noise regime in the adiabatic limit,
finding that although the inclusion of the time-asymmetry parameter increases the
efficiency up to a certain extent, for the mixed case this increase is much less
appreciable than in the white noise case.
As commented before, SR has been found to play a relevant role in several biology-related problems, in particular ionic transport through cell membranes. These membranes possess voltage-sensitive ion channels that switch (randomly) between open and
closed states, thus controlling the ion current. Experiments measuring the current
through these channels have shown that ion transport depends (among other factors)
on the electric potential of the membrane, which plays the role of the barrier
height, and can be stimulated by both dc and ac external fields. Together with
related phenomena, these experiments have stimulated several theoretical studies.
Different approaches have been used, as well as different ways of characterizing SR
in ionic transport through biological cell membranes. A toy model considering the
simultaneous action of a deterministic and a stochastic external field on the trapping
rate of a gated imperfect trap was studied in [25, 26]. The main result was that even
such a simple model of a gated trapping process showed an SR-like behavior.
The study was based on the so-called stochastic model for chemical reactions, properly generalized in order to include the trap's internal dynamics. The dynamic process consists in the opening or closing of the traps according to an external field that has two contributions: one periodic with a small amplitude, and another stochastic whose intensity is (as usual) the tuning parameter. The absorption
contribution is approximately modeled as
(t) = [B sin t + c ],
Fig. 3.5 Value of J (amplitude of the oscillating part of the absorption current) as a function of $\omega_o$ for a given observational time (t = 1,140). Different values of q (triangles: q = 0.5, crosses: q = 1.0, squares: q = 1.5) and a fixed value of $\tau$ ($\tau = 0.1$). Taken from Ref. [26] (C) Elsevier Science Ltd (2002)
In [29] the same system was studied, but with $\eta$ dynamically generated through Eq. (3.2). The main result showed the persistence of the indicated reentrance effect, together with a strong shift of the transition line as q departed from q = 1: the transition was anticipated for q > 1, while it was retarded for q < 1.
In order to obtain some analytic results, a strong approximation, valid for |q - 1| < 1 (both for q < 1 and q > 1), was derived within a path-integral description. Its comparison with simulations yielded good agreement even beyond its theoretical validity range, indicating that (at least for this case) the approximation turns out to be robust. Finally, a conjecture about a possible reentrance effect with q was shown to be false.
The results discussed above clearly show that non-Gaussian noises can significantly change the system's response in many noise-induced phenomena, as compared with the Gaussian case. Moreover, in all the cases presented here, the system's response was either enhanced or altered in a relevant way for values of q departing from Gaussian behavior. In other words, the optimum response occurs for $q \neq 1$. Clearly, the study of the change in the response of other related noise-induced phenomena when subject to this kind of non-Gaussian noise will be of great interest.
Other recent related works are, for instance, studies of
(a). the stationary properties of a single-mode laser system [30],
(b). the effect of non-Gaussian noise and system-size-induced coherence resonance
of calcium oscillations in arrays of coupled cells [31],
(c). work fluctuation theorems for colored-noise driven open systems [32],
(d). multiple resonances with time delays and enhancement by non-Gaussian noise
in NewmanWatts networks of HodgkinHuxley neurons [33],
(e). effects of non-Gaussian noise and coupling-induced firing transitions of Newman-Watts neuronal networks [34],
(f). non-Gaussian noise-optimized intracellular cytosolic calcium oscillations [35],
(g). effects of non-Gaussian noise near supercritical Hopf bifurcation [36],
(h). a model of irreversible thermal Brownian refrigerator and its performance [37],
among many others.
An extremely relevant point relates to some recent work [38, 39] in which the algebra and calculus associated with nonextensive statistical mechanics have been studied. It is expected that the use of such a formalism could help to study Eq. (3.1) directly, without the need to resort to Eq. (3.2), and also to build up a nonextensive path-integral framework for this kind of stochastic process.
58 H.S. Wio and R.R. Deza
References
1. Sagués, F., Sancho, J.M., García-Ojalvo, J.: Rev. Mod. Phys. 79, 829 (2007)
2. Astumian, R.D., Hänggi, P.: Phys. Today 55(11), 33 (2002)
3. Reimann, P.: Phys. Rep. 361, 57 (2002)
4. Bulsara, A., Gammaitoni, L.: Phys. Today 49(3), 39 (1996)
5. Gammaitoni, L., Hänggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
6. Izus, G.G., Deza, R.R., Sanchez, A.D.: J. Chem. Phys. 132, 234112 (2010)
7. Izus, G.G., Sanchez, A.D., Deza, R.R.: Phys. A 391, 4070 (2012)
8. Gell-Mann, M., Tsallis, C. (eds.): Nonextensive Entropy, Interdisciplinary Applications.
Oxford University Press, New York (2004)
9. Fuentes, M.A., Wio, H.S., Toral, R.: Phys. A 303, 91 (2002)
10. Colet, P., Wio, H.S., San Miguel, M.: Phys. Rev. A 39, 6094 (1989)
11. Wio, H.S., Colet, P., Pesquera, L., Rodríguez, M.A., San Miguel, M.: Phys. Rev. A 40, 7312 (1989)
12. Castro, F., Wio, H.S., Abramson, G.: Phys. Rev. E 52, 159 (1995)
13. Abramson, G., Wio, H.S., Salem, L.D.: In: Cordero, P., Nachtergaele, B. (eds.) Nonlinear
Phenomena in Fluids, Solids, and other Complex Systems. North-Holland, Amsterdam (1991)
14. Jung, P., Hänggi, P.: Phys. Rev. A 35, 4464 (1987)
15. Jung, P., Hänggi, P.: J. Opt. Soc. Am. B 5, 979 (1988)
16. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
17. Fuentes, M.A., Tessone, C., Wio, H.S., Toral, R.: Fluct. Noise Lett. 3, 365 (2003)
18. Castro, F.J., Kuperman, M.N., Fuentes, M.A., Wio, H.S.: Phys. Rev. E 64, 051105 (2001)
19. Bouzat, S., Wio, H.S.: Eur. Phys. J. B 41, 97 (2004)
20. Bouzat, S., Wio, H.S.: Phys. A 351, 69 (2005)
21. Mateos, J.L.: Phys. A 351, 79 (2005)
22. Mangioni, S.E., Wio, H.S.: Eur. Phys. J. B 61, 67 (2008)
23. Krishnan, R., Mahato, M.C., Jayannavar, A.M.: Phys. Rev. E 70, 021102 (2004)
24. Krishnan, R., Wio, H.S.: Phys. A 389, 5563 (2010)
25. Sanchez, A.D., Revelli, J.A., Wio, H.S.: Phys. Lett. A 277, 304 (2000)
26. Wio, H.S., Revelli, J.A., Sanchez, A.D.: Phys. D 168-169, 165 (2002)
27. Horsthemke, W., Lefever, R.: Noise-Induced Transitions, 2nd printing. Springer, Berlin (2006)
28. Castro, F., Sanchez, A.D., Wio, H.S.: Phys. Rev. Lett. 75, 1691 (1995)
29. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)
30. Bing, W., Xiu-Qing, W.: Chin. Phys. B 20, 114207 (2011)
31. Yubing, G.: Phys. A 390, 3662 (2011)
32. Sen, M.K., Baura, A., Bag, B.C.: Eur. Phys. J. B 83, 381 (2011)
33. Yinghang, H., Yubing, G., Xiu, L.: Neurocomputing 74, 1748 (2011)
34. Yubing, G., Xiu, L., H.Y. et al.: Fluct. Noise Lett. 10, 1 (2011)
35. Yubing, G., Yinghang, H., L.X. et al.: Biosystems 103, 13 (2011)
36. Ruiting, Z., Zhonghuai, H., Houwen, X.: Phys. A 390, 147 (2011)
37. Lingen, C., Zemin, D., Fengrui, S.: Appl. Math. Mod. 35, 2945 (2011)
38. Borges, E.P.: Phys. A 340, 95 (2004)
39. Surayi, H.: In: Beck, C., et al. (eds.) Complexity, Metastability and Nonextensivity. World Scientific, Singapore (2005)
Chapter 4
Dynamical Systems Driven by Dichotomous Noise
In this chapter, we will focus on dynamical systems which fall in the class of one-dimensional stochastic differential equations

dφ/dt = f(φ) + g(φ) ξ_d(t),    (4.1)

Fig. 4.1 Parameters of the dichotomous noise (state values Δ1 and Δ2, transition rates k1 and k2) and an example of time series. Taken from Ref. [4], © Cambridge University Press (2011). Reprinted with permission

where ξ_d(t) is a dichotomous Markov noise that jumps between two values, Δ1 and Δ2, leaving state 1 with rate k1 and state 2 with rate k2 (see Fig. 4.1). The stationary probabilities of the two states are
P1 = k2/(k1 + k2),    P2 = k1/(k1 + k2),    (4.2)
while the mean, the variance, and the autocovariance function of the dichotomous process are [1, 4]

⟨ξ_d⟩ = (k2 Δ1 + k1 Δ2)/(k1 + k2),    (4.3)

⟨(ξ_d − ⟨ξ_d⟩)²⟩ = k1 k2 (Δ2 − Δ1)²/(k1 + k2)² = −Δ1 Δ2,    (4.4)

and

⟨ξ_d(t) ξ_d(t′)⟩ = [k1 k2 (Δ2 − Δ1)²/(k1 + k2)²] e^(−|t−t′|(k1+k2)) = −Δ1 Δ2 e^(−|t−t′|(k1+k2)).    (4.5)
The autocovariance function does not vanish for t ≠ t′, which entails that the dichotomous noise is a colored noise. A typical temporal scale of a correlated process is the integral scale, τI, defined as the ratio between the integral of the
autocovariance function with respect to the time lag and the variance of the process.
The integral scale is a measure of the memory of the process, and in the case of
dichotomous noise it reads
τI = 1/(k1 + k2) = τc.    (4.6)
Equation (4.1) is commonly written by assuming a zero-average noise process. In this case, Eq. (4.3) gives

Δ1 k2 + Δ2 k1 = Δ1/τ2 + Δ2/τ1 = 0,    (4.7)

and the (stationary) dichotomous Markov process is characterized by three independent parameters. For example, one can choose the mean durations, τ1 and τ2 (i.e., the two transition rates k1 = 1/τ1 and k2 = 1/τ2) and the value of one of the states of ξ_d, say Δ2, and obtain the other value (i.e., Δ1) using Eq. (4.7). In what follows we will refer to the case of zero-mean (Eq. (4.7)) dichotomous Markov noise.
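These relations are easy to put to work numerically. A minimal sketch in plain Python (the function names are ours, not from any library) that builds a zero-mean noise from Eq. (4.7), returns the statistics (4.2), (4.4), and (4.6), and samples a path with exponential residence times:

```python
import random

def dmn_params(k1, k2, delta2):
    """Zero-mean dichotomous Markov noise: given the rates k1, k2 and the
    state value delta2, return delta1 from Eq. (4.7) together with the
    stationary probabilities (4.2), the variance (4.4), and the integral
    scale (4.6)."""
    delta1 = -delta2 * k1 / k2          # Eq. (4.7): Delta1*k2 + Delta2*k1 = 0
    P1 = k2 / (k1 + k2)                 # Eq. (4.2)
    P2 = k1 / (k1 + k2)
    var = k1 * k2 * (delta2 - delta1) ** 2 / (k1 + k2) ** 2   # Eq. (4.4)
    tau_I = 1.0 / (k1 + k2)             # Eq. (4.6)
    return delta1, P1, P2, var, tau_I

def sample_dmn(k1, k2, delta1, delta2, t_end, rng):
    """Sample a path as a list of (switch_time, value) pairs, drawing
    exponential residence times with rates k1 (state 1) and k2 (state 2)."""
    t, state, path = 0.0, 1, []
    while t < t_end:
        value, rate = (delta1, k1) if state == 1 else (delta2, k2)
        path.append((t, value))
        t += rng.expovariate(rate)      # exponential residence time
        state = 3 - state               # switch 1 <-> 2
    return path
```

For k1 = 2, k2 = 1, and Δ2 = −1, Eq. (4.7) gives Δ1 = 2, so that P1 = 1/3 and the variance equals −Δ1Δ2 = 2, as Eq. (4.4) requires.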
Dichotomous Markov noise (DMN) is generally used in scientific modelling in two different ways: a mechanistic usage and a functional one. In the first case, dichotomous noise is introduced for its ability to model systems that randomly switch between two deterministic dynamics, while in the functional usage the DMN is adopted as a convenient representation of a colored random forcing.
The mechanistic approach applies to a class of processes characterized by two alternating dynamics of the state variable, φ(t), that can grow or decay depending on a random driver, q(t), being greater or lower than a threshold value, θ. When q is a resource for the state variable, the growth and decay are modelled by two functions, f1(φ) and f2(φ), respectively,

dφ/dt = f1(φ)  if q(t) ≥ θ,    (4.8a)
dφ/dt = f2(φ)  if q(t) < θ,    (4.8b)

with f1(φ) > 0 and f2(φ) < 0. If the random driver is a stressor, the conditions in (4.8a,b) are reversed. The overall dynamics of the variable φ can then be expressed by a stochastic differential equation forced by a dichotomous Markov noise, ξ_d(t), assuming the (constant) values Δ1 and Δ2,

dφ/dt = f(φ) + g(φ) ξ_d(t),    (4.9)

with

f(φ) = [Δ1 f2(φ) − Δ2 f1(φ)]/(Δ1 − Δ2),    g(φ) = [f1(φ) − f2(φ)]/(Δ1 − Δ2).    (4.10)
The transition rates between dynamics f1 and f2 are k1 = PQ(θ) and k2 = 1 − k1 = 1 − PQ(θ), where PQ(·) is the cumulative distribution function of the random
62 L. Ridolfi and F. Laio
Fig. 4.2 Noise path and the corresponding evolution of the φ(t) variable for Example I (panel a, α = 1) and Example II (panel b), described by Eq. (4.9) with functions (4.11), and by Eq. (4.14), respectively. Taken from Ref. [4], © Cambridge University Press (2011). Reprinted with permission
forcing q. Notice that in this mechanistic usage, the rates k1 and k2 are the only relevant characteristics of the DMN, while the other noise characteristics (e.g., its mean, Δ1 k2 + Δ2 k1, and variance, −Δ1 Δ2) have no influence on the representation of the dynamics. In fact, in this case φ switches between two dynamics (f1(φ) and f2(φ)) that are independent of Δ1 and Δ2. As a consequence, Δ1 and Δ2 may assume arbitrary values.
A simple example of the mechanistic approach (in the following we refer to it as Example I) is when DMN is used to switch between the two dynamics

f1(φ) = α(1 − φ)  and  f2(φ) = −αφ,    (4.11)

where α determines the rates of growth and decay. Therefore, φ(t) exponentially increases (decreases) toward the asymptote φ = 1 (φ = 0) when the noise is in the Δ1 (Δ2) state. A realization of the corresponding φ(t) dynamics is shown in Fig. 4.2a.
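Example I is also a convenient test bed for simulation. A sketch (the discretization, step size, and parameter values are our choices) that alternates the two dynamics of Eq. (4.8) according to a discretized Markov switch:

```python
import random

def simulate_example1(alpha, k1, k2, dt, n_steps, rng, phi0=0.5):
    """Euler integration of Example I: dphi/dt = f1 = alpha*(1 - phi) in
    state 1 and dphi/dt = f2 = -alpha*phi in state 2; the active dynamics
    switches with probability k1*dt (state 1) or k2*dt (state 2) per step,
    a first-order approximation valid for k*dt << 1."""
    phi, state, traj = phi0, 1, [phi0]
    for _ in range(n_steps):
        drift = alpha * (1.0 - phi) if state == 1 else -alpha * phi
        phi += dt * drift
        if rng.random() < (k1 if state == 1 else k2) * dt:
            state = 3 - state          # random switch between the two dynamics
        traj.append(phi)
    return traj

traj = simulate_example1(1.0, 0.5, 0.5, 0.01, 5000, random.Random(0))
```

Since f1 pushes φ toward 1 and f2 toward 0, the trajectory can never leave [0, 1] (for dt < 1/α), mirroring the domain found below for the steady-state pdf.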
Different from the mechanistic usage, the functional interpretation of the DMN is commonly introduced to investigate how an autocorrelated random forcing, ξ_d(t) (whose effect on the dynamics can in general be modulated by a function g(φ) of the state variable), affects the dynamics of a deterministic system, dφ/dt = f(φ). The temporal dynamics are therefore modelled by the stochastic differential equation (4.1), and in this case none of the parameters k1, k2, Δ1, and Δ2 has an arbitrary value. These parameters need to be determined by adapting the DMN to the characteristics of
the driving noise: for example, by matching the mean, variance, skewness, and
correlation scale. Moreover, the functions f(φ) and g(φ) are in this case assigned a priori, while f1(φ) and f2(φ) are obtained from (4.10) and depend on the noise characteristics,

f1(φ) = f(φ) + g(φ) Δ1,    f2(φ) = f(φ) + g(φ) Δ2.    (4.13)
An example of the functional usage (Example II in what follows) is the Verhulst-like dynamics

dφ/dt = φ(β − φ) + φ ξ_d(t).    (4.14)

An example of the resulting φ(t) dynamics, with β = 1, is shown in Fig. 4.2b.
The steady-state probability density function for the process described by the Langevin equation (4.1) can be obtained by taking the limit as t → ∞ in the master equation for the process (i.e., the forward differential equations that relate the state probabilities at different points in time) and by solving the resulting forward differential equation to find the steady-state probability density function (a less rigorous but simpler approach is described in [4]). The steady-state probability density function, p(φ), for the state variable, φ, reads [3, 6, 7]

p(φ) = C [1/f1(φ) − 1/f2(φ)] exp{ −∫^φ [k1/f1(φ′) + k2/f2(φ′)] dφ′ },    (4.15)

where C is a normalization constant. For Example I (with α = 1), Eq. (4.15) gives the Beta distribution

p(φ) = [Γ(k1 + k2)/(Γ(k1) Γ(k2))] φ^(k2 − 1) (1 − φ)^(k1 − 1),    (4.18)
The two dynamics can also be described through the potentials V1(φ) and V2(φ), defined by

f1(φ) = −dV1(φ)/dφ,    f2(φ) = −dV2(φ)/dφ;    (4.20)
the stable (unstable) stationary points correspond in fact to the minima (maxima) of the potentials. The dynamics of φ can then be represented as those of a particle moving along the φ-axis, driven by the switching between the two potentials. It is evident that the particle remains trapped between any pair of nearby stable points (minima of the potentials V1(φ) and V2(φ)) that are not separated by an unstable point (i.e., a maximum of either V1(φ) or V2(φ)). These pairs of minima define the domain, [φinf, φsup], of the steady-state pdf. Note that the same criteria for the determination of the extremes of the steady-state domain apply when the minima of the potential are at infinity. Finally, if the stable points are coincident, the pdf reduces to a Dirac delta function centered at the two overlapping stable points.
Boundaries can also be externally imposed. For example, a frequent case in the bio-geosciences is when the variable φ is positive-valued, or it has a boundary at a certain threshold value, φth. This corresponds to changing the potential of the deterministic dynamics by setting Ṽ1(φ) = V1(φ) and Ṽ2(φ) = V2(φ) for φ ≤ φth, and Ṽ1(φ) = Ṽ2(φ) = ∞ for φ > φth (if φth is assumed to be an upper bound). The general rule described in the previous paragraph to determine the boundaries of the domain can now be applied to the modified potentials Ṽ1(φ) and Ṽ2(φ). Notice that the external bound may create a new minimum in the potential and affect the original boundaries of the domain if the sign of the derivative of V1(φ) and V2(φ) is negative at φth.
Fig. 4.3 Steady-state probability density functions for Example I with α = 1 (panel a; curves for (k1, k2) = (0.5, 0.5), (5, 5), and (0.8, 2)) and Example II with β = 1 and Δ = 0.5 (panel b; curves for (k1, k2) = (0.5, 0.5), (1.5, 1.5), and (5, 5)). Taken from Ref. [4], © Cambridge University Press (2011). Reprinted with permission
We now have the elements (i.e., the general expression of the pdf and the conditions for the determination of the boundaries) to obtain the pdfs for our examples. In the case of Example I, both f1(φ) = 1 − φ and f2(φ) = −φ have a single stable fixed point. The boundaries of the domain correspond to the minima of the two potentials, V1(φ) = −φ + φ²/2 and V2(φ) = φ²/2, i.e., φinf = 0 and φsup = 1. The expression for the steady-state pdf (4.18) is therefore valid for φ ∈ [0, 1] (see Fig. 4.3a).
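A quick numerical cross-check of Eq. (4.18) is straightforward (plain Python; the midpoint rule and the tolerances are our choices): the pdf should integrate to one over [0, 1], and for k1, k2 > 1 its maximum should fall at the interior mode predicted later by Eq. (4.29).

```python
import math

def pdf_example1(phi, k1, k2):
    """Steady-state pdf (4.18) of Example I with alpha = 1:
    a Beta(k2, k1) density in phi."""
    C = math.gamma(k1 + k2) / (math.gamma(k1) * math.gamma(k2))
    return C * phi ** (k2 - 1.0) * (1.0 - phi) ** (k1 - 1.0)

def midpoint_integral(f, a, b, n=20000):
    """Midpoint rule, adequate here because it never evaluates the endpoints."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

k1, k2 = 2.0, 2.0
total = midpoint_integral(lambda p: pdf_example1(p, k1, k2), 0.0, 1.0)
phi_m = (1.0 - k2) / (2.0 - k1 - k2)   # interior mode, Eq. (4.29)
```

For k1 = k2 = 2 the pdf is 6φ(1 − φ), which integrates to one and peaks at φm = 1/2.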
In Example II, in the case of a symmetric dichotomous noise (Δ1 = −Δ2 = Δ), one has f1,2(φ) = φ(β − φ ± Δ) and V1,2(φ) = φ³/3 − (β ± Δ) φ²/2. If β > Δ, the domain is therefore [β − Δ, β + Δ], while the domain is [0, β + Δ] in the reverse case. An example of pdf with β = 1 and Δ = 0.5 is reported in Fig. 4.3b.
We conclude this analysis of the pdf by recalling some tools for investigating the behavior of the steady-state pdf near the boundaries of the domain. Assume that the boundary φi (with i = inf or i = sup) is a stable point of the f1(φ) dynamics, i.e., f1(φi) = 0. If f2(φi) ≠ 0, the steady-state pdf in the vicinity of φi is determined as a limit of Eq. (4.15) for f1(φ) → 0,

p(φ) ≈ C [1/f1(φ)] exp{ −∫^φ [k1/f1(φ′)] dφ′ }.    (4.21)

If f1(φ) is expanded around φi and the expansion is truncated to the first order (i.e., f1(φ) = (φ − φi) df1(φ)/dφ|φ=φi), using Eq. (4.21) the pdf can be represented as

p(φ) ∝ |φ − φi|^(−[1 + k1/(df1(φ)/dφ|φ=φi)]),    φ → φi.    (4.22)
This limit behavior makes evident the competition between the time scale characteristic of the switching between the two deterministic dynamics and the time scale of the deterministic dynamics f1(φ) near the attractor. In fact, when the random switching (i.e., the transition rate) is relatively slow with respect to the deterministic dynamics for φ → φi, the particle tends to spend much time near the boundary and the pdf diverges at the boundary φi (being |df1(φ)/dφ|φi| > k1). Vice versa, when the switching between the two dynamics is sufficiently fast to prevent φ from remaining much time near the attractor, the pdf becomes null at the boundary, because |df1(φ)/dφ|φi| < k1.
Notice that these results are valid only when f1(φi) = 0 and f2(φi) ≠ 0, which excludes the cases when the bound is externally imposed. Moreover, Eq. (4.21) is not valid when φi is also an unstable stationary point of f2(φ).
A particular case refers to the state-dependent DMN [9], where one (or more) of the parameters (k1, k2, Δ1, and Δ2) depends on the state variable, φ. While a possible φ-dependency of Δ1 and/or Δ2 can be accounted for easily through a suitable modification of the g(φ) function, the state-dependency of k1 and k2 profoundly affects the dynamics. The solution in this state-dependent case is simply obtained from Eq. (4.15) by setting k1 = k1(φ) and k2 = k2(φ),

p(φ) = C [1/f1(φ) − 1/f2(φ)] exp{ −∫^φ [k1(φ′)/f1(φ′) + k2(φ′)/f2(φ′)] dφ′ },    (4.23)

where C is the usual normalization constant, calculated by imposing that the integral of p(φ) over the pdf domain is equal to 1. The zeros of f1(φ) and f2(φ) are the natural boundaries for the dynamics and represent the limits of the domain.
In the mechanistic approach, the dynamics switch between the two deterministic processes,

dφ/dt = f1(φ) > 0  and  dφ/dt = f2(φ) < 0,    (4.24)

depending on whether the value of a stochastic external driver, q, is greater or smaller than a given threshold, θ, respectively. If the variance of the driving force, q, is decreased while maintaining its mean, μq, constant, in the zero-variance limit q becomes a constant deterministic value, q = μq. The corresponding deterministic stationary state is determined by the position of μq relative to θ. If μq > θ, the deterministic steady state, φst,1, is obtained as a solution of the first of equations (4.24), i.e., f1(φst,1) = 0. Instead, if μq < θ the deterministic steady state, φst,2, is obtained by setting f2(φst,2) = 0.
Once the deterministic counterpart of the dynamics is identified, it is possible to investigate how the noise modifies the modes and antimodes, φm, of the pdf of the process. These are obtained by setting equal to zero the first-order derivative of (4.16) or (4.15), depending on the interpretation adopted for the DMN. In the functional interpretation, the modes and antimodes are the solutions of the equation

f(φm) + τc Δ1 Δ2 g(φm) g′(φm) + τc (Δ1 + Δ2) f′(φm) g(φm) + τc [2 f(φm) f′(φm) − f²(φm) g′(φm)/g(φm)] = 0,    (4.25)

where

g′(φm) = dg(φ)/dφ|φ=φm  and  f′(φm) = df(φ)/dφ|φ=φm.    (4.26)
The impact of the noise properties on the shape of the pdf is evident from Eq. (4.25). In fact, apart from the first term, which is independent of the noise parameters, the second term expresses the effect of the multiplicative nature of the noise (i.e., the fact that g(φ) ≠ const), the third term results from the asymmetry of the noise (i.e., Δ1 ≠ −Δ2), while the fourth term is due to the noise autocorrelation.
If the mechanistic interpretation is adopted, it is convenient to rewrite Eq. (4.25) in terms of the functions f1(φ) and f2(φ),

[f1²(φm) f2′(φm) − f2²(φm) f1′(φm)]/[f2(φm) − f1(φm)] − k1 f2(φm) − k2 f1(φm) = 0,    (4.27)

where

f1′(φm) = df1(φ)/dφ|φ=φm  and  f2′(φm) = df2(φ)/dφ|φ=φm,    (4.28)
which clearly shows how the stable points of the noisy dynamics, φm, can be very different from their deterministic counterparts, φst,1 and φst,2.
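The two formulations can be checked against each other numerically: substituting f1 = f + Δ1 g and f2 = f + Δ2 g into Eq. (4.27), together with the zero-mean condition (4.7) and τc = 1/(k1 + k2), must reproduce Eq. (4.25) up to the factor −1/τc. A sketch with arbitrary smooth test functions (our choice, not a specific model):

```python
def check_mode_equations(phi):
    # zero-mean DMN: Delta1*k2 + Delta2*k1 = 0  (Eq. 4.7)
    d1, d2, k1, k2 = 2.0, -1.0, 1.4, 0.7
    tau_c = 1.0 / (k1 + k2)
    # arbitrary smooth f, g with g > 0 (test functions only)
    f  = lambda p: p * (1.5 - p)
    fp = lambda p: 1.5 - 2.0 * p
    g  = lambda p: 0.5 + p * p
    gp = lambda p: 2.0 * p
    # functional form, Eq. (4.25)
    e25 = (f(phi) + tau_c * d1 * d2 * g(phi) * gp(phi)
           + tau_c * (d1 + d2) * fp(phi) * g(phi)
           + tau_c * (2.0 * f(phi) * fp(phi)
                      - f(phi) ** 2 * gp(phi) / g(phi)))
    # mechanistic form, Eq. (4.27), built from f1 = f + d1*g, f2 = f + d2*g
    f1, f2 = f(phi) + d1 * g(phi), f(phi) + d2 * g(phi)
    f1p, f2p = fp(phi) + d1 * gp(phi), fp(phi) + d2 * gp(phi)
    e27 = (f1 ** 2 * f2p - f2 ** 2 * f1p) / (f2 - f1) - k1 * f2 - k2 * f1
    return e25, e27, tau_c
```

The identity e25 = −τc e27 holds at every φ, so the two conditions share exactly the same roots φm.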
To show an example of how noise may profoundly affect the dynamical properties of a system through noise-induced transitions, one can consider the dynamics described in Example I. In this case (with α = 1) Eq. (4.27) becomes

φm = (1 − k2)/(2 − k1 − k2).    (4.29)
Thus, the mode or antimode, φm, lies within the interval ]0, 1[ if either k1 < 1 and k2 < 1, or k1 > 1 and k2 > 1. In the first case φm is an antimode, while in the second case φm is a mode. It is useful also to explore the behavior of the pdf close to the boundaries: using Eq. (4.22), when k1 < 1 (k2 < 1) the pdf has a vertical asymptote at φ = 1 (φ = 0), while it vanishes at φ = 1 (φ = 0) when k1 > 1 (k2 > 1).
Figure 4.4 collects the possible shapes of the pdf as a function of the parameters k1 and k2. When k1 < 1 and k2 > 1, or k1 > 1 and k2 < 1, the noise is unable to create new states, in that the preferential state of the stochastic system coincides with the stable state of the underlying deterministic dynamics. In this case noise creates only disorder, in the form of random fluctuations about the stable deterministic state. Conversely, when the switching rates, k1 and k2, both exceed one, a new noise-induced state exists at φm and, then, a noise-induced transition emerges. Finally, when k1 < 1 and k2 < 1 the noise allows for the coexistence of the two steady states of the underlying deterministic dynamics. Thus, noise induces a bistable (i.e., bimodal) behavior that is not observed in the deterministic counterpart of the process, where only one steady state can exist for a given set of parameters.
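The regimes just described depend only on whether k1 and k2 exceed one. A small helper (names are ours) that makes this classification explicit for Example I with α = 1:

```python
def pdf_shape_example1(k1, k2):
    """Classify the steady-state pdf (4.18) from its boundary exponents,
    k2 - 1 at phi = 0 and k1 - 1 at phi = 1 (cf. Eq. (4.22))."""
    div0, div1 = k2 < 1.0, k1 < 1.0       # vertical asymptotes at phi = 0, 1
    if div0 and div1:
        return "bimodal"                  # noise-induced coexistence
    if not div0 and not div1:
        return "unimodal-interior"        # noise-induced state at phi_m
    return "single-boundary-peak"         # fluctuations around one state
```

For instance, (k1, k2) = (0.5, 0.5) gives a bimodal pdf, (5, 5) an interior noise-induced mode, and (0.8, 2) a single peak at φ = 1, matching the three curves of Fig. 4.3a.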
Fig. 4.5 Scenario of the steady-state pdfs for the Verhulst model driven by a symmetric multiplicative noise. Taken from Ref. [4], © Cambridge University Press (2011). Reprinted with permission
When the difference f1(φ) − f2(φ) does not depend on φ, Eq. (4.10) gives

g(φ) = [f1(φ) − f2(φ)]/(Δ1 − Δ2),    (4.31)

a constant. It follows that noise-induced transitions can emerge even with this simple form of dichotomous noise: in this case, the noise being symmetric and additive, transitions are due to the autocorrelation of the dichotomous noise.
It is instructive to describe also a case of transitions induced by a multiplicative noise. To this aim, let us consider the Verhulst model and concentrate on the case in which the noise term is a linear function of φ (Example II), i.e.,

dφ/dt = φ(β − φ) + φ ξ_d = [(β + ξ_d) − φ] φ.    (4.32)

The deterministic steady state is φst = β, while, for a symmetric noise (Δ1 = −Δ2 = Δ and k1 = k2 = k, so that τc = 1/(2k)), the modes and antimodes are found from Eq. (4.25),

φm [(β − φm)(2k − 3φm + β) − Δ²]/(2k) = 0,    (4.33)

with solutions

φm,1 = 0,    φm,2,3 = (1/3) [2β + k ± √((k − β)² + 3Δ²)].    (4.34)
Consider, as an example, a biomass B whose dynamics switch between growth, dB/dt = f1(B) = a1(1 − B), and decay, dB/dt = f2(B) = −a2 B, where a1 and a2 are two positive coefficients determining the rates of growth and decay, respectively. With probability P1 the dynamics are in state 1 (i.e., q(t) ≥ θ) with dB/dt = f1(B), while with probability 1 − P1 the dynamics are in state 2 (i.e., q(t) < θ) with dB/dt = f2(B). Dichotomous noise determines the rate of switching between these two states. The probability density function of B reads

p(B) = C [a1(1 − B) + a2 B] (1 − B)^(−1 + (1 − P1)/a1) B^(−1 + P1/a2),    (4.36)

where C is the normalization constant and B ∈ [0, 1], the roots of f1(B) = 0 and f2(B) = 0 (i.e., B = 1 and B = 0) being the natural boundaries of the dynamics.
[In the (P1, a2) plane of Fig. 4.6, regions I-V are separated by the curves a2 = P1, P1 = 1 − a1, and a2 = (4 a1 P1 − 1)/[4(P1 − 1 + a1)]; the insets show the corresponding shapes of p(B).]
Fig. 4.6 Qualitative behavior of the probability distributions of biomass, B, in the parameter space {P1, a2} (a1 is constant and equal to 0.2). A variety of shapes emerges: L-shaped distributions with preferential state at B = 0 (case I); J-shaped distributions with preferential state at B = 1 (case II); bistable dynamics with bimodal (U-shaped) distribution (case III); dynamics with only one stable state located between the extremes of the domain of B (case IV); bimodal distributions with a preferential state at B = 0 and the other at B < 1 (case V). Taken from Ref. [4], © Cambridge University Press (2011). Reprinted with permission
Figure 4.6 shows how the probability distribution of B changes in the parameter space. For a2 > P1 the distribution, p(B), has a singularity at B = 0 and p(B) is L-shaped (Fig. 4.6, case I). Similarly, p(B) is J-shaped (i.e., it has a singularity at B = 1) for P1 > 1 − a1 (Fig. 4.6, case II). When both conditions are met, p(B) is U-shaped and two spikes of probability at B = 0 and B = 1 occur (Fig. 4.6, case III). When neither condition is met, the probability distribution of B has only one mode within the interval [0, 1] and no spikes of probability at B = 0 or B = 1 (Fig. 4.6, case IV). When p(B) has a singularity at B = 0 (but not at B = 1) and a2 < (4 a1 P1 − 1)/[4(P1 − 1 + a1)], p(B) has both a mode and an antimode in [0, 1], as in Fig. 4.6 (case V).
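These shape conditions follow from reading off the exponents of Eq. (4.36), and the interior mode of a regular (case IV) distribution can be located numerically. A sketch (grid search; all parameter values are our choices):

```python
def p_unnorm(B, a1, a2, P1):
    """Unnormalized steady-state pdf (4.36) of the biomass B."""
    return ((a1 * (1.0 - B) + a2 * B)
            * (1.0 - B) ** (-1.0 + (1.0 - P1) / a1)
            * B ** (-1.0 + P1 / a2))

def interior_mode(a1, a2, P1, n=20000):
    """Grid search for the maximum of p(B) on the open interval ]0, 1[."""
    grid = [(i + 0.5) / n for i in range(n)]
    return max(grid, key=lambda B: p_unnorm(B, a1, a2, P1))

# case IV of Fig. 4.6: a2 < P1 and P1 < 1 - a1, no boundary singularity
B_mode = interior_mode(0.2, 0.3, 0.5)
```

With a1 = 0.2, a2 = 0.3, and P1 = 0.5 neither boundary is singular and the mode falls near B ≈ 0.35, i.e., strictly inside the domain.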
Figure 4.6 demonstrates that the preferential states of B vary across the parameter space. For relatively low (high) rates of decay, a2, and high (low) probability, P1, of occurrence of unstressed conditions, the dynamics have a preferential state (i.e., a spike of probability) at B = 1 (B = 0). In intermediate conditions the system may show either one (case IV) or two (cases III and V) statistically stable states. This bistability (i.e., bimodality in p(B)) emerges as a noise-induced effect and is a clear example of the ability of noise to induce new states, which do not exist in the underlying deterministic system [4, 5]. The deterministic counterpart of these dynamics is in fact a system that is either always unstressed (B = 1) or always stressed (B = 0), depending on whether the constant level, q, of available resources is greater or smaller than the minimum value, θ, required for survival. Thus, the deterministic dynamics are not bistable, and it is the random driver that induces bistability in the stochastic dynamics of B.
Different from the case described in the previous subsection, let us now consider the case where the noise is able to stabilize the system around an intermediate state between two deterministically stable states. We refer to the case of dryland plant ecosystems, which can exhibit a bistable behavior with two stable states corresponding to unvegetated (desert) and vegetated land-surface conditions [17, 18]. The existence of these two stable states is usually due to positive feedbacks between vegetation and water availability [17, 19-21].

Natural and anthropogenic disturbances acting on bistable dynamics may induce abrupt transitions from the stable vegetated state to the unvegetated one [22]. When this transition occurs, a significant increase in water availability (i.e., rainfall) is necessary to destabilize the desert state and reestablish a vegetation cover. This picture of drylands as deterministic bistable systems contrasts with the existence of intermediate states between desert and completely vegetated landscapes. Spatial heterogeneities and lateral redistribution of resources can explain the emergence of patchy distributions of vegetation [23-25], but a similar result can be induced also by temporal fluctuations in environmental conditions [26], like the random interannual rainfall fluctuations typical of arid climates. In order to show this constructive action of noise, let us express the dynamics of dryland vegetation v as [26, 27]

dv/dt = −v³  (if R < R1),    (4.37a)
dv/dt = v(1 − v)(v − c)  (if R ≥ R1),    (4.37b)
Fig. 4.7 Deterministic stable (solid thick lines) and unstable (dashed thick lines) states of Eq. (4.37) (R1 = 260 mm and R2 = 360 mm). The dotted line shows the (analytically calculated) noise-induced statistically stable states of the stochastic dynamics, while crosses (σR = 0.4⟨R⟩) and squares (σR = 0.6⟨R⟩) correspond to numerically evaluated values of the modes of v. Taken from Ref. [32], © Elsevier Science Ltd (2008)
where B is the species biomass, κ is the carrying capacity (i.e., the maximum sustainable value of B), and the coefficients a1 and a2 give the growth and decay rates, respectively.
The stochastic dynamics resulting from the random switching between the two Eqs. (4.39) are modelled as a dichotomous Markov process. When the environmental variable, R, falls within the niche (this happens with probability P1 = ∫ from R0 to R0+δ of p(R) dR, where R0 is the lower limit of the niche and δ is the niche width), the species is not stressed and its growth is expressed by (4.39a). Vice versa, with probability 1 − P1 the species is stressed and its dynamics are modelled by (4.39b). The solution of the stochastic differential equation associated with these dynamics provides the probability distribution, p(B). In particular, when [33]

P1 ≤ Plim = a/(a + 1),    a = a2/a1,    (4.40)

B is zero with probability tending to one and the species goes extinct. In fact, low values of P1 correspond to conditions in which the environmental variable remains too often
Fig. 4.8 Probability distribution of the resource, R. The interval [Rl, Ru] defines the range where species with niche width δ remain unstressed for a sufficient fraction of time to avoid extinction (after [32]). Taken from Ref. [32], © Elsevier Science Ltd (2008)
outside the niche to allow for the survival of that species. Therefore, a species can survive only when P1 > Plim. Given a distribution of resources p(R) and a niche width δ, Fig. 4.8 shows that there are two limit positions, Rl and Ru, in which the condition P1 = Plim is met. They correspond to the conditions

∫ from Rl to Rl+δ of p(R) dR = Plim  and  ∫ from Ru−δ to Ru of p(R) dR = Plim.    (4.41)

For a given distribution, p(R), of the environmental variable (Fig. 4.8) one can determine the interval [Rl, Ru] on the R-axis in which species with niche width, δ, remain unstressed for a sufficient fraction of time to avoid extinction. With Λ = Ru − Rl the interval width, Λ/δ is a proxy of the biodiversity potential that could be sustained in the ecosystem: large values of Λ/δ are associated with a broader range of species that are able to have access to favorable environmental conditions. When the variance of R is zero the process becomes deterministic, with R = ⟨R⟩ and Λ = 2δ.
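The construction (4.41) is straightforward to reproduce numerically. A sketch assuming a Gaussian p(R) (the chapter does not prescribe a distribution; the Gaussian, the bisection solver, and all parameter values are our assumptions):

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def niche_occupancy(R0, delta, mu, sigma):
    """P1 of Eqs. (4.40)-(4.41): probability that R falls in [R0, R0 + delta]."""
    return normal_cdf(R0 + delta, mu, sigma) - normal_cdf(R0, mu, sigma)

def biodiversity_interval(delta, mu, sigma, P_lim):
    """Solve the first of Eqs. (4.41) for R_l by bisection; R_u follows by
    symmetry of the Gaussian. Returns None if no niche position reaches
    P_lim (noise-induced extinction of all species)."""
    lo, hi = mu - 8.0 * sigma - delta, mu - 0.5 * delta
    if niche_occupancy(hi, delta, mu, sigma) < P_lim:
        return None                       # even the best-placed niche fails
    for _ in range(200):                  # occupancy is increasing on [lo, hi]
        mid = 0.5 * (lo + hi)
        if niche_occupancy(mid, delta, mu, sigma) < P_lim:
            lo = mid
        else:
            hi = mid
    R_l = 0.5 * (lo + hi)
    R_u = 2.0 * mu - R_l                  # symmetry of p(R) about mu
    return R_l, R_u

# a = a2/a1 = 0.25 gives P_lim = a/(1 + a) = 0.2  (Eq. 4.40)
res = biodiversity_interval(50.0, 300.0, 20.0, 0.2)
```

With δ = 50, ⟨R⟩ = 300, σR = 20, and Plim = 0.2, one finds Λ = Ru − Rl ≈ 134 > 2δ = 100 (the constructive effect), while σR = 150 yields no surviving niche at all (noise-induced extinction).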
In order to investigate the effect of environmental variability on biodiversity, the values of the parameters a, δ, and ⟨R⟩ are kept constant and the dependence of the biodiversity potential on the standard deviation σR is investigated. The results are shown in Fig. 4.9 for different values of the niche width, δ. Two effects of the environmental noise are evident. Firstly, moderate levels of environmental fluctuations enhance the biodiversity potential with respect to the deterministic case (i.e., Λ > 2δ). In this case, noise plays a constructive role in the dynamics by favoring biodiversity. Secondly, relatively large noise intensities limit the ability of the system to support diverse communities of individuals (i.e., Λ < 2δ). In this second case, noise has a destructive effect, namely noise-induced extinctions occur. These results are consistent with the so-called intermediate disturbance hypothesis [34, 35], i.e., that moderate disturbances can be beneficial to an ecosystem.
Figure 4.9 shows also that generalist species (i.e., species with high δ) are better adapted than specialist species (with low δ) to benefit from environmental fluctuations.
References
23. von Hardenberg, J., Meron, E., Shachak, M., Zarmi, Y.: Phys. Rev. Lett. 87, 198101 (2001)
24. Rietkerk, M., Boerlijst, M.C., van Langevelde, F., HilleRisLambers, R., van de Koppel, J., Kumar, L., Klausmeier, C.A., Prins, H.H.T., de Roos, A.M.: Am. Nat. 160, 524 (2002)
25. van de Koppel, J., Rietkerk, M.: Am. Nat. 163, 113 (2004)
26. D'Odorico, P., Laio, F., Ridolfi, L.: Proc. Natl. Acad. Sci. USA 102, 10819 (2005)
27. Borgogno, F., D'Odorico, P., Laio, F., Ridolfi, L.: Water Resour. Res. 43(6), W06411 (2007)
28. Chesson, P.L.: Theor. Popul. Biol. 45, 227 (1994)
29. Yachi, S., Loreau, M.: Proc. Natl. Acad. Sci. USA 96, 1463 (1999)
30. Mackey, R.L., Currie, D.J.: Ecology 82(12), 3479 (2001)
31. Hughes, A.R., Byrnes, J.E., Kimbro, D.L., Stachowicz, J.J.: Ecol. Lett. 10, 849 (2007)
32. D'Odorico, P., Laio, F., Ridolfi, L., Lerdau, M.T.: J. Theor. Biol. 255, 332 (2008)
33. Camporeale, C., Ridolfi, L.: Water Resour. Res. 42, W10415 (2006)
34. Connell, J.H.: Science 199, 1302 (1978)
35. Huston, M.A.: Am. Nat. 113(1), 81 (1979)
Chapter 5
Stochastic Oscillator: Brownian Motion with Adhesion
M. Gitterman
Abstract We consider an oscillator with a random mass, for which the particles of the surrounding medium adhere to the oscillator for some random time after the collision (Brownian motion with adhesion). This is another form of stochastic oscillator, different from the usually studied oscillators subject to a random force or having a random frequency or random damping. A comparison is performed for the first two moments, the stability analysis, and different resonance phenomena (stochastic resonance, vibrational resonance) for stochastic oscillators subject to an external periodic force as well as to linear and quadratic, white, dichotomous, and trichotomous noises.
5.1 Introduction
The classical Langevin equation describes a harmonic oscillator subject to an additive random force η(t),

d²x/dt² + γ dx/dt + ω² x = η(t),    (5.1)

with the correlation function

⟨η(t1) η(t2)⟩ = 2D δ(t1 − t2).    (5.2)
M. Gitterman ()
Department of Physics, Bar Ilan University, Ramat Gan 52900, Israel
e-mail: gittem@mail.biu.ac.il
Usually one considers the Brownian motion of a free particle (ω² = 0). Our analysis covers the more general problem of a stochastic harmonic oscillator. The random force η(t) enters Eq. (5.1) additively. Other forms of stochastic oscillator contain multiplicative random forces, which are connected with fluctuations of the potential energy or of the damping [1]. These models have been applied in physics, chemistry, biology, sociology, etc., everywhere from quarks to cosmology. In fact, a person who is worried by oscillations of prices in the stock market (described by the stochastic oscillator model) can be relaxed by classical music produced by the oscillations of string instruments!
We consider an oscillator with a random mass [2]. Such a model describes, among other phenomena, Brownian motion with adhesion, where the molecules of the surrounding medium not only randomly collide with the Brownian particle, which produces its well-known zigzag motion, but also stick to the Brownian particle for some (random) time, thereby changing its mass. The appropriate equation of motion of the Brownian particle subject to an external periodic field has the following form,

[1 + ξ(t)] d²x/dt² + γ dx/dt + ω² x = η(t) + A sin(Ωt).    (5.3)

Since the same molecules take part in colliding with and adhering to the Brownian particle, we assume that ξ(t) and η(t) are delta correlated,

⟨ξ(t1) η(t2)⟩ = 2R δ(t1 − t2).    (5.4)
There are many applications of an oscillator with a random mass, such as ion-ion reactions, electrodeposition, granular flow, cosmology, film deposition, traffic jams, and the stock market. Specific to these mass fluctuations, as distinct from other noises in oscillator equations, is that they must not reach large negative values, which would lead to a negative mass. Therefore, the simplest form of these fluctuations is not white noise, but the so-called dichotomous (or trichotomous) noise, which jumps randomly between two (three) different restricted values. Its correlation function has an exponential Ornstein-Uhlenbeck form,

⟨ξ(t1) ξ(t2)⟩ = σ² exp(−λ |t1 − t2|).    (5.5)
For dichotomous noise, ξ(t) = ±σ, one has [1 + ξ(t)]⁻¹ = [1 − ξ(t)]/(1 − σ²), so that Eq. (5.3) can be rewritten as

d²x/dt² + [γ(1 − ξ(t))/(1 − σ²)] dx/dt + [ω²(1 − ξ(t))/(1 − σ²)] x = [(1 − ξ(t))/(1 − σ²)] [η(t) + A sin(Ωt)].    (5.6)

If the mass fluctuations enter quadratically,

[1 + ξ(t)]² d²x/dt² + γ dx/dt + ω² x = η(t) + A sin(Ωt),    (5.7)

then, for dichotomous noise,

ξ²(t) = σ²,    (5.8)

and Eq. (5.7) takes the form

[1 + σ² + 2ξ(t)] d²x/dt² + γ dx/dt + ω² x = η(t) + A sin(Ωt).    (5.9)
There are many situations in which chemical and biological solutions contain small particles which not only collide with a large particle but may also adhere to it. The diffusion of clusters with randomly growing masses has also been considered [3], and there are some applications of a variable-mass oscillator [4]. Modern applications of such a model include a nano-mechanical resonator which randomly absorbs and desorbs molecules [5]. The aim of this note is to describe a general and simplified form of the theory of an oscillator with a random mass, which is a useful model for describing different phenomena in Nature.
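Before turning to the moment equations, the model is easy to explore by direct simulation. A sketch (Euler scheme; all parameter values, and the restriction to no forcing, A = 0, and dichotomous mass noise, are our choices) of Eq. (5.3) with additive white noise of intensity D:

```python
import math
import random

def simulate_random_mass(gamma=1.0, omega2=1.0, D=0.5, sigma0=0.3,
                         lam=5.0, dt=0.01, n_steps=200000, seed=1):
    """[1 + xi(t)] x'' + gamma x' + omega^2 x = eta(t), with xi = +/- sigma0
    switching at rate lam and <eta(t) eta(t')> = 2 D delta(t - t').
    Returns the time average of x^2 over the second half of the run."""
    rng = random.Random(seed)
    x, v, xi, acc = 0.0, 0.0, sigma0, 0.0
    for i in range(n_steps):
        if rng.random() < lam * dt:          # dichotomous mass fluctuation
            xi = -xi
        drift = -gamma * v - omega2 * x
        v += (drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)) / (1.0 + xi)
        x += v * dt
        if i >= n_steps // 2:                # discard the transient
            acc += x * x
    return acc / (n_steps - n_steps // 2)

x2 = simulate_random_mass()
# for weak mass noise, comparable to D/(gamma*omega^2) from Eq. (5.31)
```

With the parameters above the estimate stays close to the reference value D/(γω²) = 0.5, illustrating that weak, bounded mass fluctuations only mildly perturb the stationary second moment.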
For generality we consider trichotomous noise, which takes the values ±a and 0; in the stationary state the corresponding probabilities are P(±a) = q and P(0) = 1 − 2q, so that ⟨ξ²⟩ = 2qa².
Equation (5.3) can be rewritten as the system

dx/dt = y,
dy/dt = −γ y − ω² x − ξ(t) dy/dt + η(t) + A sin(Ωt),    (5.12)

which after averaging takes the following form,

d⟨x⟩/dt = ⟨y⟩;    d⟨y⟩/dt = −(d/dt + λ)⟨ξy⟩ − γ⟨y⟩ − ω²⟨x⟩ + A sin(Ωt),    (5.13)
where the Shapiro-Loginov formula for splitting the correlations [6] (here with n = 1), which holds for exponentially correlated noise, has been used,

⟨ξ(t) dⁿg/dtⁿ⟩ = (d/dt + λ)ⁿ ⟨ξg⟩,    (5.14)

or, equivalently for n = 1,

d⟨ξg⟩/dt = ⟨ξ dg/dt⟩ − λ⟨ξg⟩,    (5.15)
and for stationary states (d/dt⟨...⟩ = 0) and white noise (λ → ∞ and σ² → ∞ with σ²/λ = D) one gets a closed relation for g = y.
An additional relation between averaged values can be obtained by multiplying the first of Eqs. (5.12) by 2x and the second by 2y, which yields

dx²/dt = 2xy;    dy²/dt + ξ(t) dy²/dt + 2γ y² + 2ω² xy = 2y η(t) + 2yA sin(Ωt).    (5.18)
$$\frac{d\langle x^2\rangle}{dt} = 2\langle xy\rangle,$$
$$\frac{d\langle y^2\rangle}{dt} + \left(\frac{d}{dt}+\lambda\right)\langle \xi y^2\rangle + 4\gamma\langle y^2\rangle + 2\omega^2\langle xy\rangle = 2D + 2\langle y\rangle A\sin(\Omega t) \qquad (5.19)$$
Analogously, multiplying Eqs. (5.12) by y and x, respectively, and summing leads to
$$\frac{d(xy)}{dt} = y^2 - \xi\left[\frac{d(xy)}{dt} - y^2\right] - 2\gamma xy - \omega^2 x^2 + x\,\eta(t) + xA\sin(\Omega t) \qquad (5.20)$$
Averaging Eq. (5.20), using ⟨xη⟩ = 0, leads to
$$\frac{d\langle xy\rangle}{dt} = \langle y^2\rangle - \left(\frac{d}{dt}+\lambda\right)\langle \xi xy\rangle + \langle \xi y^2\rangle - 2\gamma\langle xy\rangle - \omega^2\langle x^2\rangle + \langle x\rangle A\sin(\Omega t) \qquad (5.21)$$
Additional equations for the correlators can be obtained by multiplying Eqs. (5.12)
and (5.20) by 2ξx, 2ξy, and ξ, respectively, and averaging,
$$\left(\frac{d}{dt}+\lambda\right)\langle \xi x^2\rangle = 2\langle \xi xy\rangle,$$
$$\left(\frac{d}{dt}+\lambda+4\gamma\right)\langle \xi y^2\rangle + 2\omega^2\langle \xi xy\rangle = 2\langle \xi y\rangle A\sin(\Omega t),$$
$$\left(\frac{d}{dt}+\lambda+2\gamma\right)\langle \xi xy\rangle = \langle \xi y^2\rangle - 2qa^2\,\frac{d\langle xy\rangle}{dt} + 2qa^2\langle y^2\rangle + R\langle x\rangle - \omega^2\langle \xi x^2\rangle + \frac{2R}{\lambda(\lambda+2\gamma)+\Omega^2}\,A\sin(\Omega t) \qquad (5.22)$$
The splitting-of-correlations formula, which is exact for Ornstein–Uhlenbeck noise,
has been used in the last equation of (5.22).
For stationary states (d/dt⋯ = 0), Eqs. (5.13), (5.17), (5.19), (5.21),
and (5.22) become algebraic; by this means we obtain eight equations (5.24) for the
eight correlators ⟨x⟩, ⟨x²⟩, ⟨y²⟩, ⟨ξx⟩, ⟨ξy⟩, ⟨xy⟩, ⟨ξx²⟩, and ⟨ξy²⟩.
From these one finds the first moment,
$$\langle x\rangle = \frac{2R}{\omega^2\left[\lambda(\lambda+2\gamma)+\omega^2\right]} + \frac{A}{\omega^2}\sin(\Omega t) \qquad (5.25)$$
where U and V (Eqs. (5.27) and (5.28)) are rational combinations of the oscillator
parameters γ and ω², the noise parameters λ and qa², and the driving frequency Ω,
and ⟨x⟩ is given in (5.25), while ⟨ξx⟩ and ⟨ξy⟩ are equal to
$$\langle \xi x\rangle = \frac{R}{\lambda(\lambda+2\gamma)+\omega^2}; \qquad \langle \xi y\rangle = \frac{\lambda R}{\lambda(\lambda+2\gamma)+\omega^2} \qquad (5.29)$$
The last terms in Eqs. (5.25) and (5.26) describe the oscillator response to the
external periodic force, while the other terms describe the combined action on the
oscillator of the additive and multiplicative forces.
In the limiting case of the absence of both an external field (A = 0) and a
correlation between the additive and multiplicative noises (R = 0), Eq. (5.26) reduces to
the following form,
$$\langle x^2\rangle = \frac{D}{2\gamma\omega^2} - \frac{2Dqa^2\,U}{\omega^2\,V} \qquad (5.30)$$
which for white-noise mass fluctuations reduces to
$$\langle x^2\rangle = \frac{D}{2\gamma\omega^2} \qquad (5.31)$$
This result coincides with the well-known result for free Brownian motion with
ω² = 0: for a free Brownian particle, ω² → 0, one obtains ⟨x²⟩ → ∞, as it
should be for Brownian motion. The independence of the stationary results of the
mass fluctuations is due to the fact that the multiplicative random force appears in
Eq. (5.1) in front of the highest derivative. It is remarkable that these results are
significantly different from the stationary second moments for the cases when the
random frequency and the random damping are white noises of strength D₁,
$$\langle x^2\rangle = \frac{D}{2\omega^2\left(\gamma - D_1\omega^2\right)}; \qquad \langle x^2\rangle = \frac{D}{2\gamma\omega^2\left(1 - 2\gamma D_1\right)} \qquad (5.32)$$
showing the energetic instability [1]. It turns out that for symmetric dichotomous
noise the stationary second moment ⟨x²⟩ for the mass fluctuations, in contrast to its
white-noise form (5.31), may lead to instability, ⟨x²⟩ < 0.
It is interesting to compare Eq. (5.26), obtained for trichotomous noise, with that
for dichotomous noise, when the random variable ξ(t) jumps between the two values
±a rather than among the three values ±a and zero. The second moment ⟨x²⟩ for
dichotomous noise is obtained from (5.26) by setting 2q = 1. The variable q enters
Eq. (5.26) for ⟨x²⟩ through expressions in V of the form q/(a − bq), which is a
monotonically increasing function for 0 < q < 1/2. Therefore, for trichotomous
noise the second moment ⟨x²⟩ is always smaller than for dichotomous
noise.
86 M. Gitterman
We start from the traditional model of Brownian motion, where the Brownian
particle is subject to the systematic damping force −2γv and either the linear random
force ξ(t) or the quadratic random force ξ²(t),
$$\frac{dv}{dt} + 2\gamma v = \xi(t) \qquad (5.33)$$
$$\frac{dv}{dt} + 2\gamma v = \xi^2(t) \qquad (5.34)$$
Multiplying Eq. (5.33) by 2v and averaging, one obtains for stationary states
(d/dt⋯ = 0)
$$\langle v^2\rangle = \frac{1}{2\gamma}\,\langle \xi v\rangle \qquad (5.35)$$
Multiplying Eq. (5.33) by ξ(t) and using the Shapiro–Loginov procedure for
splitting the correlations [6],
$$\left\langle \xi\,\frac{dg}{dt}\right\rangle = \left(\frac{d}{dt}+\lambda\right)\langle \xi g\rangle \qquad (5.36)$$
one gets for the stationary state, with g = v,
$$\langle \xi v\rangle = \frac{\sigma^2}{2\gamma+\lambda} \qquad (5.37)$$
Combining Eqs. (5.35) and (5.37), one gets
$$\langle v^2\rangle = \frac{\sigma^2}{2\gamma\,(2\gamma+\lambda)} \qquad (5.38)$$
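Equation (5.38) can be checked by direct simulation. The sketch below — using the notation assumed in this reconstruction: damping 2γ, dichotomous noise of amplitude σ and correlation decay rate λ — integrates dv/dt + 2γv = ξ(t) with an Euler scheme and compares the time- and ensemble-averaged second moment with σ²/[2γ(2γ + λ)]:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, lam, sigma = 0.5, 1.0, 1.0
dt, steps, paths = 0.01, 20_000, 500

xi = sigma * rng.choice([-1.0, 1.0], size=paths)   # dichotomous noise states
v = np.zeros(paths)
acc, count = 0.0, 0
for n in range(steps):
    v += (-2.0 * gamma * v + xi) * dt              # Euler step for dv/dt + 2*gamma*v = xi
    flip = rng.random(paths) < 0.5 * lam * dt      # flip rate lam/2 -> exp(-lam*t) correlation
    xi = np.where(flip, -xi, xi)
    if n >= steps // 2:                            # discard the transient
        acc += np.mean(v * v)
        count += 1

v2_est = acc / count
v2_exact = sigma**2 / (2.0 * gamma * (2.0 * gamma + lam))  # Eq. (5.38); 0.5 here
assert abs(v2_est - v2_exact) < 0.05
```

The ensemble of 500 paths keeps the statistical error well below the 10% tolerance; the residual deviation is dominated by the O(dt) Euler bias.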
Let us turn now to the analysis of Eq. (5.34), which can be rewritten, using (5.8), as
$$\frac{dv}{dt} + 2\gamma v = \sigma^2 + \Delta\,\xi(t) \qquad (5.39)$$
For the stationary state, the averaging of Eq. (5.39) leads to
$$\langle v\rangle = \frac{\sigma^2}{2\gamma} \qquad (5.40)$$
Multiplying Eq. (5.39) by 2v and averaging gives, for the stationary state,
$$\langle v^2\rangle = \frac{\sigma^2}{2\gamma}\,\langle v\rangle + \frac{\Delta}{2\gamma}\,\langle \xi v\rangle \qquad (5.41)$$
with
$$\langle \xi v\rangle = \frac{\Delta\,\sigma^2}{2\gamma+\lambda} \qquad (5.42)$$
As one can see from Eqs. (5.38) and (5.43), the stationary second moment ⟨v²⟩
is positive for both linear and quadratic noise, i.e., the system remains stable.
Up to now we have analyzed classical Brownian motion. In order to put it into the
stochastic-oscillator framework considered here, let us consider Brownian motion
in the parabolic potential V(x) = ω₀²x²/2, which is described by Eq. (5.1) for linear
noise, and by the equation
$$\frac{d^2x}{dt^2} + 2\gamma\,\frac{dx}{dt} + \omega_0^2\,x = \xi^2(t) \qquad (5.44)$$
for quadratic noise.
The stationary second moment for Eq. (5.1) with dichotomous internal noise (5.5)
has the following form [7],
$$\langle x^2\rangle = \frac{\sigma^2\,(\lambda+2\gamma)}{2\gamma\omega_0^2\left[\lambda(\lambda+2\gamma) + \omega_0^2\right]} \qquad (5.45)$$
According to Eqs. (5.46) and (5.47), the stability condition (positivity of ⟨x²⟩)
for quadratic noise has the form
$$\frac{\Delta^2\lambda^2\omega_0^2\,(4\gamma+\lambda)^2}{2\omega_0^2\,(1+\sigma^2)(2\gamma+\lambda)\left[4\gamma\omega_0^2(1+\sigma^2) + \lambda(4\gamma+\lambda)\right]} < 1 \qquad (5.48)$$
$$\frac{\Delta^2\lambda^2\omega_0^2\,(4\gamma+\lambda)^2}{2\omega_0^2\,(2\gamma+\lambda)\left[4\gamma\omega_0^2 + \lambda(4\gamma+\lambda)\right]} < 1 \qquad (5.49)$$
Comparison of (5.48) and (5.49) shows that for a small strength of the quadratic
noise the oscillator, as in the case of linear noise, becomes unstable when
inequality (5.49) is not obeyed. However, by increasing the strength of the quadratic
noise one can attain the fulfilment of (5.48), i.e., stabilize the oscillator with the
help of noise (noise-induced stability).
where
$$B_2 = \lambda + 2\gamma(1+\sigma^2); \qquad B_4 = \lambda + 4\gamma(1+\sigma^2)$$
In the limiting case of linear noise, 1 + σ² → 1 and Δ = 1, the latter equation reduces to
$$\langle x^2\rangle = D\,\frac{2\omega^2 + (\lambda+2\gamma)(\lambda+4\gamma) + 2\lambda^2\sigma^2 + 8\gamma^2\lambda^2\sigma^2}{4\gamma\omega^2\left[2\omega^2 + (\lambda+2\gamma)(\lambda+4\gamma)\right] - 16\gamma^2\lambda^2\sigma^2\left[2\omega^2 + \lambda(\lambda+2\gamma)\right]} \qquad (5.51)$$
The stability condition for linear noise is
$$2\omega^2 + (\lambda+2\gamma)(\lambda+4\gamma) > 4\gamma\lambda\sigma^2\left[2\omega^2 + \lambda(\lambda+2\gamma)\right] \qquad (5.52)$$
As in the previous cases, we replace Eq. (5.9) with A = 0 by two first-order
differential equations,
$$\frac{dx}{dt} = y; \qquad \left[1+\sigma^2+\Delta\,\xi(t)\right]\frac{dy}{dt} = -2\gamma y - \omega^2 x + \eta(t) \qquad (5.54)$$
Multiplying the first equation in (5.54) by 2x and the second by 2y gives, after
averaging and using Eq. (5.36),
$$\frac{d\langle x^2\rangle}{dt} = 2\langle xy\rangle \qquad (5.55)$$
$$(1+\sigma^2)\,\frac{d\langle y^2\rangle}{dt} + \Delta\left(\frac{d}{dt}+\lambda\right)\langle \xi y^2\rangle + 4\gamma\langle y^2\rangle + 2\omega^2\langle xy\rangle = 4D \qquad (5.56)$$
$$\langle x^2\rangle = \frac{4D - 2A_1\Delta^2\sigma^2\,(\lambda+B_1)}{2\omega^2\left[4\gamma A_1 - \Delta^2\sigma^2 B_1\right]} \qquad (5.61)$$
with
$$A_1 = \lambda + 2\gamma + 2B_1 + \Delta^2\sigma^2; \qquad B_1 = \frac{\Delta^2\sigma^2}{2\gamma+\lambda} \qquad (5.62)$$
Comparing Eqs. (5.59)–(5.62) shows that A > A₁ and B < B₁, i.e., as in the previous
cases, the replacement of linear noise by quadratic noise makes the system more
stable.
For white noise, Eq. (5.59) reduces to
$$\langle x^2\rangle = \frac{D}{2\gamma\omega^2} \qquad (5.63)$$
Here we consider the more complicated problem of the stability of the solutions. For
a deterministic equation, the stability of the fixed points is determined by the sign of μ,
found from a solution of the form exp(μt) of the equation linearized near the fixed
points. The situation is quite different for a stochastic equation: the first moment
⟨x(t)⟩ and the higher moments become unstable at different values of the parameters,
so that the usual linear stability analysis, which leads to instability thresholds,
gives different thresholds for different moments, making the moments unsuitable for a
stability analysis. A rigorous mathematical analysis of random dynamical systems shows [8]
that, in analogy with the order–chaos transition in nonlinear deterministic
equations, the stability of a stochastic differential equation is defined by the sign
of the Lyapunov exponent Λ. This means that for a stability analysis one has to go
over from the Langevin-type equations to the associated Fokker–Planck equations, which
describe the properties of statistical ensembles, and to calculate the Lyapunov index
Λ, defined by [8]
$$\Lambda = \frac{1}{2}\left\langle \frac{\partial \ln x^2}{\partial t}\right\rangle = \left\langle \frac{\partial x/\partial t}{x}\right\rangle \qquad (5.64)$$
One can see from Eq. (5.64) that it is convenient to replace the variable x in the
Langevin equations with the variable z = (dx/dt)/x,
where Pst(z) is the stationary solution of the Fokker–Planck equations corresponding
to the Langevin equations expressed in terms of the variable z.
Replacing the variable x in Eq. (5.65) by the variable z leads to
$$\frac{dz}{dt} = A(z) + \xi_1(t)\,B(z) \qquad (5.67)$$
where
$$A(z) = -z^2 - \frac{1}{R}\,B(z); \qquad B(z) = (1+\sigma^2)\,z + \frac{2\gamma}{R}; \qquad \xi_1(t) = \frac{\xi(t)}{1+\sigma^2} \qquad (5.68)$$
(5.68)
According to [10], the stationary solution of the FokkerPlanck equation, corre-
sponding to the Langevin equation (5.67), has the following form
B 1 z 1 1
Pst (z) = N 2 2 exp dx +
B A2 2 A (x) B (x) A (x) + B (x)
(5.69)
Equation (5.69) has been analyzed for different forms of the functions A(x) and B(x):
A = x, B = 1 [11]; A = x, B = x [12]; A = x − xᵐ, B = x [13]; A = x − x³,
B = 1 [14]; A = x³, B = x [15, 16]; A = x − x², B = x [10].
For
$$A(z) = -\left(z^2 + \alpha_1 z + \beta_1\right); \qquad B(z) = \gamma_1\,(z + \delta_1), \qquad (5.70)$$
with the constants α₁, β₁, γ₁, δ₁ determined by the oscillator and noise parameters,
inserting (5.70) into (5.69) gives
$$P_{st}(z) = N\,(z-x_1)^{-1-\lambda/[2(x_1-x_2)]}\,(z-x_2)^{-1+\lambda/[2(x_1-x_2)]}\,(z-x_3)^{-1+\lambda/[2(x_3-x_4)]}\,(z-x_4)^{-1-\lambda/[2(x_3-x_4)]} \qquad (5.71)$$
where x₁, x₂ and x₃, x₄ are the roots of A(z) − B(z) and A(z) + B(z), respectively.
Equation (5.71) defines the boundary of stability of the fixed point x = 0, which
depends on the characteristics (γ, ω²) of the oscillator and (λ, σ, q) of the noise.
$$\frac{d^2x}{dt^2} + \gamma\,\frac{dx}{dt} + \omega_0^2 x + \xi(t)\,x + bx^3 = \eta(t) + A\sin(\omega t) + C\sin(\Omega t) \qquad (5.72)$$
phenomenon, in which the noise amplifies a weak input signal. SR occurs when
a deterministic time scale of the external periodic field is synchronized
with a stochastic time scale, determined by the Kramers transition rate over the
barrier.
4. Stochastic resonance in a linear overdamped oscillator (d²x/dt² = b =
C = 0), as distinct from the nonlinear case, allows an exact solution [19, 20].
However, this effect occurs only when the multiplicative noise ξ(t) is colored
and not white.
5. Vibrational resonance (ξ = η = 0), which occurs in a deterministic system,
manifests itself in the enhancement of a weak periodic signal by a high-frequency
periodic field, instead of by noise as in the case of stochastic
resonance.
where the coefficients A₀, A₁, A₂, and A₃ (Eq. (5.75)) are polynomial combinations of
the damping γ, the frequency ω₀², the noise rate λ, and the noise strength σ².
In a similar way one can obtain the equation for the second moment ⟨x²⟩,
associated with Eq. (5.73), which is transformed into six equations for the six variables
⟨x²⟩, ⟨y²⟩, ⟨xy⟩, ⟨ξx²⟩, ⟨ξy²⟩, and ⟨ξxy⟩, but we shall not write
down these cumbersome equations.
Analogously to the cases of random frequency and random damping [26], we seek
the solution of Eq. (5.74) in the form (5.76)–(5.77), with
$$f_1 = 1+\sigma^2; \quad f_2 = 1+\sigma^2+\Delta^2; \quad f_3 = \lambda + 2\gamma f_2; \quad f_4 = \omega^2 + \lambda(\lambda + f_2), \qquad (5.78)$$
and f₅–f₈ the corresponding polynomial combinations of f₁–f₄, the field amplitude A,
and the frequency Ω.
One can compare Eqs. (5.76)–(5.78) with the equations for the first moment ⟨x⟩
obtained [26] for the cases of random frequency and random damping, respectively,
subject to symmetric dichotomous noise, and extended afterwards [27, 28] to the
case of asymmetric noise. All these equations are of fourth order, with the same
dependence on the frequency of the external field but with a slightly different
dependence on the parameters of the noise.
The amplitude a of the output signal depends on the characteristics λ, σ, Δ
of the asymmetric dichotomous noise and on the amplitude A and the frequency Ω
of the input signal. The signal-to-noise ratio, which involves the second moments,
is frequently used in the analysis of stochastic resonance. For simplicity,
we call stochastic resonance the non-monotonic behavior of the ratio a/A of the
amplitude of the output signal a to the amplitude A of the input signal (the
output/input ratio, OIR).
Figures 5.1 and 5.2 show the dependence of the OIR on the external frequency Ω
and confirm the existence of the phenomenon of stochastic resonance. Moreover,
the presence of noise, which usually plays a destructive role, here results in an
increase of the output signal, thereby improving the efficiency of the system in the
amplification of a weak signal. In the absence of noise, the usual dynamic resonance
occurs when the frequency of the external force approaches the eigenfrequency of
the oscillator. Figures 5.1 and 5.2 show the Ω-dependence of the OIR for parameters
γ = σ = λ = 1, Δ = 0.5 and different eigenfrequencies ω < 1 (Fig. 5.1) and ω > 1
(Fig. 5.2). The values of the maxima increase as ω decreases on both plots,
although the positions of the maxima shift to the right with decreasing ω for
ω < 1 and to the left for ω > 1.
$$\frac{d^2x}{dt^2} + \gamma\,\frac{dx}{dt} - \omega_0^2 x + bx^3 = A\sin(\omega t) + C\sin(\Omega t). \qquad (5.79)$$
The amplitude of the output signal as a function of the amplitude C of the high-
frequency field has a bell shape, demonstrating the phenomenon of vibrational resonance.
For ω close to the frequency ω₀ of the free oscillations, there are two resonance
peaks, whereas for smaller ω there is only one resonance peak. These different
results correspond to two different oscillatory processes: jumps between the two
wells, and oscillations inside one well.
Assuming that Ω ≫ ω, resonance-like behavior (vibrational resonance [29])
manifests itself in the response of the system at the low frequency ω, and depends
on the amplitude C and the frequency Ω of the high-frequency signal. The latter
plays a role similar to that of noise in SR. If the amplitude C is larger than the
barrier height d, the field during each half-period π/Ω transfers the system from
one potential well to the other. Moreover, the two frequencies ω and Ω are analogous
to the frequency of the periodic signal and the Kramers rate of jumps between the
two minima of the underdamped oscillator. Therefore, by choosing an appropriate
relation between the input signal A sin(ωt) and the amplitude C of the large signal
(or the strength of the noise), one can obtain a non-monotonic dependence of the
output signal on the amplitude C (vibrational resonance) or on the noise strength
(stochastic resonance). To put this another way [30], both the noise in SR and the
high-frequency signal in vibrational resonance change the parameters of the system's
response to a low-frequency signal.
Let us now pass to an approximate analytical solution of Eq. (5.79). In accordance
with the two time scales in this equation, we seek a solution of Eq. (5.79) in
the form
$$x(t) = y(t) - \frac{C\sin(\Omega t)}{\Omega^2} \qquad (5.81)$$
where the first term varies significantly only over long times, while the second term
varies much more rapidly. On substituting Eq. (5.81) into (5.79), one can average
over a single cycle of sin(Ωt). Then odd powers of sin(Ωt) vanish upon averaging,
while the sin²(Ωt) term gives 1/2. In this way one obtains the following equation
for y(t),
$$\frac{d^2y}{dt^2} + \gamma\,\frac{dy}{dt} - \left(\omega_0^2 - \frac{3bC^2}{2\Omega^4}\right)y + by^3 = A\sin(\omega t) \qquad (5.82)$$
with
$$y_0 = 0; \qquad y_\pm = \pm\sqrt{\frac{\omega_0^2 - 3bC^2/(2\Omega^4)}{b}}; \qquad d = \frac{\left[\omega_0^2 - 3bC^2/(2\Omega^4)\right]^2}{4b} \qquad (5.83)$$
One can say that Eq. (5.82) is the coarse-grained (with respect to time) version
of Eq. (5.79). For 3bC²/(2Ω⁴) > ω₀², the phenomenon of dynamic stabilization [31]
occurs: the high-frequency external field transforms the previously unstable
position y = 0 into a stable position.
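The single averaging step behind Eq. (5.82) — odd powers of sin(Ωt) vanish over a cycle and sin²(Ωt) averages to 1/2, so the cubic term b(y − C sin(Ωt)/Ω²)³ turns into by³ + [3bC²/(2Ω⁴)]y — can be verified numerically:

```python
import numpy as np

b, C, Omega = 0.7, 1.3, 5.0
u = C / Omega**2                        # amplitude of the fast component in (5.81)
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

for y in (-1.0, 0.3, 2.0):
    # cycle average of b*(y - u*sin(theta))**3 over one period
    avg = np.mean(b * (y - u * np.sin(theta)) ** 3)
    predicted = b * y**3 + 1.5 * b * u**2 * y   # b*y^3 + (3*b*C^2/(2*Omega^4))*y
    assert abs(avg - predicted) < 1e-9
```

The uniform-grid mean is exact (to rounding) for trigonometric polynomials, so the agreement is at machine precision.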
A resonance in the linearized equation (5.82) occurs when [32]
$$\omega^2 = \frac{3bC^2}{2\Omega^4} - \omega_0^2 + \frac{3bA^2}{4\Omega^2\omega^2} \qquad (5.84)$$
For an oscillator with random mass one has to perform the preceding analysis of
Eq. (5.79), based on splitting its solution into two time scales (Eq. (5.81)), followed
by the linearization of Eq. (5.82) for the slowly varying component. The subsequent
analysis of the oscillator equation with one periodic force is quite analogous to the
analysis of Eq. (5.73), which describes the stochastic-resonance phenomenon.
5.7 Conclusions
References
1. Gitterman, M.: The Noisy Oscillator: The First Hundred Years, from Einstein Until Now.
World Scientific, Singapore (2005)
2. Gitterman, M.: J. Phys. Conf. Ser. 012049 (2010)
3. Luczka, J., Hanggi, P., Gadomski, A.: Phys. Rev. E 51, 5762 (1995)
4. Sewbawe Abdalla, M.: Phys. Rev. A 34, 4598 (1986)
5. Portman, J., Khasin, M., Shaw, S.W., Dykman, M.I.: Bull. APS, March Meeting 2010
6. Shapiro, V.E., Loginov, V.M.: Phys. A 91, 563 (1978)
7. Hwalisz, L., Jung, P., Hänggi, P., Talkner, P., Schimansky-Geier, L.: Z. Phys. B 77, 471 (1989)
8. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
9. Leprovost, N., Aumaitre, S., Mallick, K.: Eur. Phys. J. B 49, 453 (2006)
10. Kitahara, K., Horsthemke, W., Lefever, R.: Phys. Lett. A 70, 377 (1979); Progr. Theor. Phys.
64, 1233 (1980)
11. Klyatskin, V.I.: Radiophys. Quant. Electron. 20, 381 (1977)
12. Berdichevsky, V., Gitterman, M.: Phys. Rev. E 60, 1494 (1999)
13. Sasagawa, F.: Progr. Theor. Phys. 69, 790 (1983)
14. Ouchi, K., Horita, T., Fujisaka, H.: Phys. Rev. E 74, 031106 (2006)
15. Jia, Y., Zheng, X.-P., Hu, X.-M., Li, J.-R.: Phys. Rev. E 63, 031107 (2001)
16. Ke, S.Z., Wu, D.J., Cao, L.: Eur. Phys. J. B 12, 119 (1999)
17. Gitterman, M.: J. Phys. A 23, 119 (2002)
18. Dykman, M.I., Mannella, R., McClintock, P.V.E., Moss, F., Soskin, S.M.: Phys. Rev. A 37, 1303
(1988)
19. Fulinski, A.: Phys. Rev. E 52, 4523 (1995)
20. Berdichevsky, V., Gitterman, M.: Europhys. Lett. 36, 161 (1996)
21. Benzi, R., Sutera, A., Vulpiani, A.: J. Phys. A 14, L453 (1981)
22. Nicolis, G.: Tellus 34, 1 (1982)
23. Gammaitoni, L., Hanggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
24. Stocks, N.G., Stein, N.D., McClintock, P.V.E.: J. Phys. A 26, L385 (1993)
25. Marchesoni, F.: Phys. Lett. A 231, 61 (1997)
26. Gitterman, M.: Phys. A 352, 309 (2005)
27. Jiang, S.-Q., Wu, B., Gu, T.-X.: J. Electr. Sci. China 5(4), 344 (2007)
28. Jiang, S., Guo, F., Zhou, Y., Gu, T.: In: Communications, Circuits and Systems, 2007. ICCCAS
2007, pp. 1044–1047.
29. Landa, P.S., McClintock, P.V.E.: J. Phys. A 33, L433 (2000)
30. Braiman, Y., Goldhirsch, I.: Phys. Rev. Lett. 66, 2545 (1991)
31. Kim, Y., Lee, S.Y., Kim, S.Y.: Phys. Lett. A 275, 254 (2000)
32. Gitterman, M.: J. Phys. A 34, L355 (2001)
Chapter 6
Numerical Study of Energetic Stability
for Harmonic Oscillator with Fluctuating
Damping Parameter
Roman V. Bobryk
6.1 Introduction
Gaussian processes play a very important role in the theory of stochastic processes,
and without them the theory would be incomplete. However, they have two features that
may not fit real modelling. The first is the lack of the boundedness property:
a Gaussian process may take arbitrarily large values with positive probability. The
second is the unimodality of the Gaussian distribution. There are, of course, random
processes without these features, but their analytical study is much more
complicated than in the Gaussian case. In many cases the above features do not have
a significant impact, but there are important situations where the assumption of
Gaussianity is not appropriate. Interesting and important examples of such cases are
noise-induced transitions [1–3], stochastic resonance [4] and Brownian motors [5].
Noise-induced transitions are very interesting phenomena in nonlinear stochastic
systems [6]. Most often, Gaussian excitations have been considered in such systems,
particularly white noise. It is pointed out in [1] that non-Gaussian excitations
may lead to new important effects. A simple example of a non-Gaussian process is
the sine-Wiener (SW) process:
$$\xi(t) = \sqrt{2}\,\sigma\,\sin\!\left[\sqrt{2\tau^{-1}}\,w(t)\right], \qquad (6.1)$$
where w(t) is the standard Wiener process, and σ and τ are the intensity and correlation
time, respectively. Using the well-known properties of the Wiener process and the
Euler representation of the sine function one can easily show that
$$E[\xi(t)] = 0, \qquad E[\xi(t)\,\xi(s)] = \sigma^2 \exp\!\left(-\frac{t-s}{\tau}\right)\left[1 - \exp\!\left(-\frac{4s}{\tau}\right)\right], \quad t \ge s. \qquad (6.2)$$
Therefore the SW process has the following correlation function in the stationary
regime:
$$K(t) = E[\xi(t+u)\,\xi(u)] = \sigma^2\exp(-|t|/\tau). \qquad (6.3)$$
One can modify the SW process by introducing a random phase φ, uniformly
distributed on [0, 2π):
$$\xi(t) = \sqrt{2}\,\sigma\,\sin\!\left[\varphi + \sqrt{2\tau^{-1}}\,w(t)\right], \qquad (6.4)$$
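A short Monte Carlo sketch (parameter names as in the reconstruction above) confirms the two properties of the randomized SW process (6.4) used below: it is bounded by √2 σ, and its stationary correlation function is σ² exp(−t/τ), Eq. (6.3):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, tau, N = 1.0, 1.0, 400_000
phi = rng.uniform(0.0, 2.0 * np.pi, N)     # random phase of Eq. (6.4)
xi0 = np.sqrt(2.0) * sigma * np.sin(phi)   # xi(0)

for t in (0.5, 1.0, 2.0):
    w = rng.normal(0.0, np.sqrt(t), N)     # w(t) ~ N(0, t)
    xit = np.sqrt(2.0) * sigma * np.sin(phi + np.sqrt(2.0 / tau) * w)
    assert np.max(np.abs(xit)) <= np.sqrt(2.0) * sigma + 1e-12  # bounded noise
    corr = np.mean(xi0 * xit)
    assert abs(corr - sigma**2 * np.exp(-t / tau)) < 0.02       # Eq. (6.3)
```

Averaging over the random phase makes the process stationary, which is why a single pair of time points per realization suffices for the correlation estimate.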
than a hundred years ago with the well-known paper of A. Einstein on the investigation
of Brownian motion [8], and since then it has attracted the attention of many
researchers (see, e.g., [9] and references therein).
We consider a harmonic oscillator described by the following equation:
$$\frac{d^2x}{dt^2} + 2\gamma\,[1+\xi(t)]\,\frac{dx}{dt} + \omega^2 x = 0, \quad t > 0. \qquad (6.5)$$
The condition
$$4\gamma\sigma^2\tau < 1 \qquad (6.6)$$
gives a necessary and sufficient condition for energetic stability when Eq. (6.5)
is interpreted in the Stratonovich sense [10–12]. It is difficult to obtain analytically
the necessary and sufficient stability conditions when the excitation is not white
noise; in the case of the OU and SW noises it seems impossible. Note that if τ tends to
zero, the considered excitations tend to white noise. Therefore it is interesting
to compare the limiting condition (6.6) with the stability conditions for the
real fluctuations. The TG and SW processes are special cases of bounded processes,
but the OU process is not in this class. It is important to investigate the implications
of the boundedness of the excitation for the stability conditions. In this paper we present
efficient numerical methods for the investigation of energetic stability. Stability
diagrams for these three cases of excitation are presented.
$$\frac{dy}{dt} = A\,y + \xi(t)\,B\,y, \quad t > 0, \qquad (6.7)$$
102 R.V. Bobryk
where
$$A = \begin{pmatrix} -4\gamma & -2\omega^2 & 0 \\ 1 & -2\gamma & -\omega^2 \\ 0 & 2 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} -4\gamma & 0 & 0 \\ 0 & -2\gamma & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
It is known [13] that in the case of the TG excitation the mean E[y] of the solution of
Eq. (6.7) satisfies the following system:
$$\frac{dE[y]}{dt} = A\,E[y] + B\,y_1,$$
$$\frac{dy_1}{dt} = -\frac{y_1}{\tau} + A\,y_1 + \sigma^2 B\,E[y]. \qquad (6.8)$$
Therefore the problem of energetic stability for Eq. (6.5) with TG noise reduces
to an eigenvalue problem for the coefficient matrix of the system (6.8).
From a numerical point of view this is quite a simple task. Unfortunately, in the case of
the OU and SW excitations the problem is more complicated, and we cannot obtain a
closed system of equations for E[y].
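The eigenvalue computation for the TG case can be sketched as follows. The 6 × 6 coefficient matrix of the closed system is assembled from the moment matrices written out explicitly in the code (their form, and the implied ordering y = (⟨ẋ²⟩, ⟨xẋ⟩, ⟨x²⟩)ᵀ, are assumptions of this sketch); stability holds when all eigenvalues have negative real parts:

```python
import numpy as np

def tg_growth_rate(gamma, omega, sigma, tau):
    # Moment matrices for the energy vector of the damped oscillator.
    A = np.array([[-4*gamma, -2*omega**2, 0.0],
                  [1.0,      -2*gamma,   -omega**2],
                  [0.0,       2.0,        0.0]])
    B = np.array([[-4*gamma, 0.0,      0.0],
                  [0.0,     -2*gamma,  0.0],
                  [0.0,      0.0,      0.0]])
    I3 = np.eye(3)
    # Coefficient matrix of the closed system for (E[y], y1).
    M = np.block([[A, B], [sigma**2 * B, A - I3 / tau]])
    return np.max(np.linalg.eigvals(M).real)

# Weak, rapidly switching noise (4*gamma*sigma^2*tau << 1): stable.
assert tg_growth_rate(1.0, 1.0, 0.5, 0.1) < 0.0
# Strong, slowly switching noise (damping intermittently negative): unstable.
assert tg_growth_rate(1.0, 1.0, 2.0, 10.0) > 0.0
```

The second case is unstable because with σ > 1 the instantaneous damping 2γ(1 + ξ) is negative in one noise state, and for large τ the system dwells there long enough for the energy to grow.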
Let us first consider the SW noise. Because we are interested in the behaviour
of the system (6.5) for large t, we can work with the stationary version (6.4). Using
the Cameron–Martin formula for the density of the Wiener measure under translation
[14], we obtain for the mean E[y(t)] the following infinite hierarchy of linear
differential equations (see [15] for details):
$$\frac{dE[y]}{dt} = A\,E[y] + B\,u_1,$$
$$\frac{du_1}{dt} = -\frac{u_1}{2\tau} + A\,u_1 + B\,u_2 + \frac{\sigma^2}{2}\,B\,E[y],$$
$$\frac{du_k}{dt} = -\frac{k^2}{2\tau}\,u_k + A\,u_k + B\,u_{k+1} + \frac{\sigma^2}{4}\,B\,u_{k-1}, \qquad k = 2, 3, \ldots,$$
$$E[y(0)] = y(0), \qquad u_k(0) = 0, \quad k = 1, 2, 3, \ldots. \qquad (6.9)$$
Here
$$u_k(t) := \frac{\sigma^k}{(\sqrt{2}\,i)^k}\,\exp\!\left\{-\frac{k^2 t}{2\tau}\right\}\Bigl(E\bigl[e^{ik\varphi}\,y(t;\,w(s)+iks/\sqrt{2\tau})\bigr] + (-1)^k\,E\bigl[e^{-ik\varphi}\,y(t;\,w(s)-iks/\sqrt{2\tau})\bigr]\Bigr), \quad k \in \mathbb{N},$$
where y(t; w(s) ± iks/√(2τ)) is the solution of Eqs. (6.4), (6.7) with w(t) replaced by
w(t) ± ikt/√(2τ). In this way we reduce the problem of the energetic stability for
Eq. (6.5) to the asymptotic stability of this hierarchy. Note that in the case of the
nonstationary sine-Wiener process (6.1) we again obtain the chain (6.9), but with
6 Numerical Study of Energetic Stability for Harmonic Oscillator. . . 103
The solution of this equation is a functional of ξ(t), and there exist functional
derivatives of all orders,
$$\frac{\delta^k y(t)}{\delta\xi(s_1)\cdots\delta\xi(s_k)}, \qquad \frac{\delta^{k-1} y(s_i)}{\delta\xi(s_1)\cdots\delta\xi(s_{i-1})\,\delta\xi(s_{i+1})\cdots\delta\xi(s_k)}, \qquad k = 1, 2, 3, \ldots, \qquad (6.10)$$
where R[ξ] is a functional of ξ(t). Applying this formula to Eqs. (6.7), (6.10), we
obtain for E[y(t)] the following infinite hierarchy of integro-differential equations:
$$\frac{dE[y]}{dt} = A\,E[y] + B\int_0^t K(t-s)\,E\!\left[\frac{\delta y(t)}{\delta\xi(s)}\right]ds,$$
$$E\!\left[\frac{\delta^k y(t)}{\delta\xi(s_1)\cdots\delta\xi(s_k)}\right] = A\int_0^t E\!\left[\frac{\delta^k y(t_1)}{\delta\xi(s_1)\cdots\delta\xi(s_k)}\right]dt_1 + B\int_0^t\!\!\int_0^{t_1} K(t_1-s_{k+1})\,E\!\left[\frac{\delta^{k+1} y(t_1)}{\delta\xi(s_1)\cdots\delta\xi(s_{k+1})}\right]ds_{k+1}\,dt_1$$
$$\qquad + B\sum_{i=1}^k \theta(t-s_i)\,E\!\left[\frac{\delta^{k-1} y(s_i)}{\delta\xi(s_1)\cdots\delta\xi(s_{i-1})\,\delta\xi(s_{i+1})\cdots\delta\xi(s_k)}\right], \qquad k = 1, 2, 3, \ldots. \qquad (6.11)$$
Introducing
$$v_k(t) := \int_0^t\!\!\cdots\!\!\int_0^t K(t-s_1)\cdots K(t-s_k)\,E\!\left[\frac{\delta^k y(t)}{\delta\xi(s_1)\cdots\delta\xi(s_k)}\right]ds_1\cdots ds_k, \qquad k = 1, 2, 3, \ldots,$$
we rewrite the hierarchy (6.11) as the following infinite hierarchy of coupled linear
differential equations [18]:
$$\frac{dE[y]}{dt} = A\,E[y] + B\,v_1,$$
$$\frac{dv_1}{dt} = -\frac{v_1}{\tau} + A\,v_1 + B\,v_2 + \sigma^2 B\,E[y],$$
$$\frac{dv_k}{dt} = -\frac{k\,v_k}{\tau} + A\,v_k + B\,v_{k+1} + k\,\sigma^2 B\,v_{k-1}, \qquad k = 2, 3, \ldots,$$
$$E[y(0)] = y(0), \qquad v_k(0) = 0, \quad k = 1, 2, 3, \ldots.$$
It is important that the coefficients in this hierarchy are constant. In the equation for
vₙ we neglect vₙ₊₁; we then obtain a closed set of first-order linear differential
equations with constant coefficients. It is proved in [19] that the solution of this closed
set converges to E[y(t)] as n → ∞. The stability of the closed set is determined by
the signs of the real parts of the eigenvalues of its coefficient matrix.
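The truncation procedure can be sketched as follows, using the hierarchy coefficients stated in the code (which follow this reconstruction and are therefore an assumption) together with the moment matrices A and B written out explicitly:

```python
import numpy as np

def ou_growth_rate(gamma, omega, sigma, tau, n):
    A = np.array([[-4*gamma, -2*omega**2, 0.0],
                  [1.0,      -2*gamma,   -omega**2],
                  [0.0,       2.0,        0.0]])
    B = np.array([[-4*gamma, 0.0,     0.0],
                  [0.0,     -2*gamma, 0.0],
                  [0.0,      0.0,     0.0]])
    dim = 3 * (n + 1)                 # blocks: E[y], v_1, ..., v_n
    M = np.zeros((dim, dim))
    for k in range(n + 1):            # k = 0 stands for E[y]
        M[3*k:3*k+3, 3*k:3*k+3] = A - (k / tau) * np.eye(3)
        if k < n:                     # coupling up:   + B v_{k+1}
            M[3*k:3*k+3, 3*k+3:3*k+6] = B
        if k > 0:                     # coupling down: + k sigma^2 B v_{k-1}
            M[3*k:3*k+3, 3*k-3:3*k] = k * sigma**2 * B
    return np.max(np.linalg.eigvals(M).real)

g20 = ou_growth_rate(1.0, 1.0, 0.5, 0.1, 20)
g30 = ou_growth_rate(1.0, 1.0, 0.5, 0.1, 30)
assert abs(g20 - g30) < 1e-6          # truncation has converged
assert g20 < 0.0                      # weak fast OU noise: stable
```

Increasing n moves the extra blocks further into the left half-plane (diagonal shifts −k/τ), which is why the largest real part stabilizes after a modest truncation index, in line with the observation below that n = 30 and n = 90 give identical diagrams.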
Here we present stability diagrams for the three considered cases of random
excitation. They were obtained numerically by applying the methods of the
previous section. In the cases of the SW and OU noises, the truncation index n is
chosen in such a way that a further increase does not change the diagrams. In Fig. 6.1
Fig. 6.1 Energetic stability diagrams for Eq. (6.5): for σ = 0.5, ω = 1 and τ ∈ [0.01, 3]
(left), and for σ = ω = 1, τ ∈ [0.01, 3] (right). Dotted, solid and dashed curves separate the
stability and instability regions for the TG, SW and OU noise cases, respectively
Fig. 6.2 As in Fig. 6.1 but for the values σ = 0.5, ω = 3 and τ ∈ [0.001, 3] (left), and for the
values σ = ω = 1, τ ∈ [0.1, 3] (right)
they are presented in two parameter planes (left and right panels). Dotted,
solid and dashed curves separate the stability and instability regions for the TG, SW and
OU noise cases, respectively. One can observe significant differences in the stability
regions as the correlation time grows.
In Fig. 6.2 the stability diagrams are shown in two further parameter planes.
We also present the stability curve obtained from condition
(6.6) (dotted-dashed curve). It is interesting to note that the stability curve in this
limiting case lies below the other curves. For the OU noise this fact was rigorously proven
in [20]; in a similar way the property can also be proven for
the TG and SW noise cases. The numerical method is quite efficient: the stability
diagrams for the truncation indices n = 30 and n = 90 are the same.
Fig. 6.3 Energetic stability diagrams for Eq. (6.5) with the three types of noise for the fixed
parameter value 1. The surfaces separate the stability (below) and instability (above) regions.
Upper-left panel: TG noise; upper-right panel: SW noise; lower panel: OU noise
In Fig. 6.3 the stability diagrams for Eq. (6.5) are shown in a three-dimensional
parameter space for the TG, SW and OU excitations; Figs. 6.4 and 6.5 show the
analogous diagrams for two other choices of the fixed parameter (0.5 and 1,
respectively).
Fig. 6.4 Energetic stability diagrams for Eq. (6.5) with the three types of noise for the fixed
parameter value 0.5. The surfaces separate the stability (below) and instability (above) regions.
Upper-left panel: TG noise; upper-right panel: SW noise; lower panel: OU noise
Fig. 6.5 Energetic stability diagrams for Eq. (6.5) with the three types of noise for the fixed
parameter value 1. The surfaces separate the stability (below) and instability (above) regions.
Upper-left panel: TG noise; upper-right panel: SW noise; lower panel: OU noise
6.5 Conclusion
In this paper we have presented energetic stability diagrams for the harmonic
oscillator with a randomly fluctuating damping parameter. Three cases of zero-mean
random excitation with the same correlation function are considered. It is shown that
the random excitations can have an important influence on the stability regions,
especially if the correlation time is not small. It follows from the numerical
computations that the stability regions in the case of the TG excitation are larger than
in the case of the SW one; in turn, the stability regions for the SW excitation are
larger than for the OU one. It is interesting to note the similarity of the surfaces
separating the stability and instability regions in all cases of excitation. The proposed
numerical methods are quite efficient and can be applied to other stability problems.
References
Hideo Hasegawa
7.1 Introduction
Stochastic systems have been extensively studied with the use of the
Langevin model, where Gaussian white (or colored) noise is usually adopted because
of its computational simplicity. In recent years, however, there has been growing interest in
dynamical systems driven by non-Gaussian colored noise (NGCN). This
is motivated by the fact that NGCN is quite ubiquitous in natural phenomena. For
example, experimental results for crayfish and rat skin offer strong indications that
there could be NGCN in these sensory systems [16, 20]. It has been theoretically
shown that the peak of the signal-to-noise ratio (SNR) in stochastic resonance
for NGCN becomes broader than that for Gaussian noise [7]. This result has been
confirmed by an analog experiment [6]. Effects of NGCN on the mean first-passage
H. Hasegawa ()
Tokyo Gakugei University, 4-1-1, Nukui-kita machi, Koganei, Tokyo 184-8501, Japan
e-mail: hideohasegawa@goo.jp
time [3], on Brownian motors with a ratchet potential [4], on a supercritical Hopf bifurcation
[23], and on spike coherence in a Hodgkin–Huxley neuron [22] have been studied.
Stochastic systems with Gaussian colored noise are originally expressed by a
non-Markovian process, which is transformed into a Markovian one by extending
the number of variables and equations. The Fokker–Planck equation (FPE) for
colored noise involves a probability distribution function (PDF) expressed in
terms of multiple variables. We may transform this multivariate FPE into a univariate
FPE, or obtain an effective Langevin equation, with the use of approximation
methods such as the universal colored noise approximation (UCNA) [9, 15] and
functional-integral methods (FIMs) [7, 8, 14, 21]. The purpose of this paper is to
study an application of a moment method (MM) to the Langevin model with NGCN
[13], which is simpler and more transparent than the UCNA and FIMs.
The paper is organized as follows. The MM is explained in Sect. 7.2, where
we derive the effective Langevin equation, eliminating the variables relevant to the NGCN
[13]. A comparison among the MM, the UCNA, and the FIMs is made in Sect. 7.3, where
results of direct simulation (DS) are also presented. Section 7.4 is devoted to
the conclusion.
We consider a Brownian particle subjected to the NGCN ε(t) and white noise ξ(t),
whose equations of motion are expressed by [5]
$$\frac{dx}{dt} = F(x) + \varepsilon(t) + \beta\,\xi(t) + I(t), \qquad (7.1)$$
$$\frac{d\varepsilon}{dt} = -\frac{1}{\tau}\,K(\varepsilon) + \frac{\alpha}{\tau}\,\eta(t), \qquad (7.2)$$
with
$$K(\varepsilon) = \frac{\varepsilon}{1 + (q-1)\,(\tau/\alpha^2)\,\varepsilon^2}. \qquad (7.3)$$
Here F(x) = −U′(x), where U(x) is a potential; I(t) is an external input; τ signifies
the relaxation time; β and α denote the magnitudes of the noises; and ξ and η are
zero-mean white noises with the correlations ⟨ξ(t)ξ(t′)⟩ = δ(t − t′),
⟨η(t)η(t′)⟩ = δ(t − t′), and ⟨ξ(t)η(t′)⟩ = 0. The stationary PDF of the Langevin
equation (7.2) has been extensively discussed [1, 2, 11, 17] in the context of
nonextensive statistics [18, 19]. The stationary PDF for ε is given by [1, 2, 11, 17]
$$p(\varepsilon) \propto \left[1 + (q-1)\,\frac{\tau}{\alpha^2}\,\varepsilon^2\right]_+^{-1/(q-1)}, \qquad (7.4)$$
where [x]₊ = x for x ≥ 0 and zero otherwise. Equation (7.4) for q = 1.0 yields the Gaussian
PDF,
$$p(\varepsilon) \propto e^{-(\tau/\alpha^2)\,\varepsilon^2}. \qquad (7.5)$$
7 A Moment-Based Approach to Bounded Non-Gaussian Colored Noise 111
The PDF given by Eq. (7.4) for q > 1.0 is a non-Gaussian distribution with a long tail,
while that for q < 1.0 is a non-Gaussian distribution bounded on [−εc, εc] with
εc = α/√(τ(1 − q)). The expectation values of ε and ε² are given by
$$\langle \varepsilon\rangle = 0, \qquad \langle \varepsilon^2\rangle = \frac{\alpha^2}{\tau\,(5-3q)}. \qquad (7.6)$$
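Both statements in Eq. (7.6) can be checked by numerical quadrature of the stationary PDF (7.4); for q < 1 the support is bounded by εc = α/√(τ(1 − q)):

```python
import numpy as np

alpha, tau, q = 1.0, 1.0, 0.5
eps_c = alpha / np.sqrt(tau * (1.0 - q))       # support boundary for q < 1
eps = np.linspace(-eps_c, eps_c, 200_001)
p = (1.0 + (q - 1.0) * (tau / alpha**2) * eps**2) ** (-1.0 / (q - 1.0))

mean = np.sum(eps * p) / np.sum(p)             # uniform-grid quadrature; p -> 0 at the edges
var = np.sum(eps**2 * p) / np.sum(p)
assert abs(mean) < 1e-9                        # <eps> = 0 by symmetry
assert abs(var - alpha**2 / (tau * (5.0 - 3.0 * q))) < 1e-6
```

For q = 0.5, α = τ = 1 the exact variance is 1/(5 − 3q) = 2/7, and the bounded support makes the quadrature trivially convergent.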
$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\Bigl\{[F(x)+\varepsilon+I(t)]\,p\Bigr\} + \frac{\beta^2}{2}\,\frac{\partial^2 p}{\partial x^2} + \frac{1}{\tau}\,\frac{\partial}{\partial \varepsilon}\bigl[K(\varepsilon)\,p\bigr] + \frac{\alpha^2}{2\tau^2}\,\frac{\partial^2 p}{\partial \varepsilon^2}, \qquad (7.7)$$
for the distribution p = p(x, ε, t).
By using the MM for the Langevin model given by Eqs. (7.1) and (7.2), we obtain
the equations of motion [11, 12]
$$\frac{d\langle x\rangle}{dt} = \langle F(x)\rangle + \langle \varepsilon\rangle + I(t), \qquad (7.8)$$
$$\frac{d\langle \varepsilon\rangle}{dt} = -\frac{1}{\tau}\,\langle K(\varepsilon)\rangle, \qquad (7.9)$$
$$\frac{d\langle x^2\rangle}{dt} = 2\langle xF(x)\rangle + 2\langle x\varepsilon\rangle + 2\langle x\rangle I(t) + \beta^2, \qquad (7.10)$$
$$\frac{d\langle \varepsilon^2\rangle}{dt} = -\frac{2}{\tau}\,\langle \varepsilon K(\varepsilon)\rangle + \frac{\alpha^2}{\tau^2}, \qquad (7.11)$$
$$\frac{d\langle x\varepsilon\rangle}{dt} = \langle \varepsilon F(x)\rangle + \langle \varepsilon^2\rangle + \langle \varepsilon\rangle I(t) - \frac{1}{\tau}\,\langle xK(\varepsilon)\rangle. \qquad (7.12)$$
In order to close the equations of motion within the second moment, we approximate
K(ε) by [21]

K(ε) ≃ ε/r_q   with   r_q = 2(2 − q)/(5 − 3q),   (7.13)

which reduces Eq. (7.2) to

dε/dt = −ε/(r_q τ) + (λ/τ) η(t),   (7.14)

which generates Gaussian noise with a variance depending on q and τ, but not
non-Gaussian noise in a strict sense.
112 H. Hasegawa
Expanding x and ε around the means μ = ⟨x⟩ and γ = ⟨ε⟩, and introducing the second
moments σ = ⟨(x − μ)²⟩, ρ = ⟨(ε − γ)²⟩ and κ = ⟨(x − μ)(ε − γ)⟩, Eqs. (7.8)-(7.12)
become

dμ/dt = f_0 + f_2 σ + γ + I(t),   (7.16)

dγ/dt = −γ/(r_q τ),   (7.17)

dσ/dt = 2(f_1 σ + κ) + β²,   (7.18)

dρ/dt = −2ρ/(r_q τ) + λ²/τ²,   (7.19)

dκ/dt = f_1 κ + ρ − κ/(r_q τ),   (7.20)

where f_ℓ = (1/ℓ!) ∂^ℓ F(μ)/∂x^ℓ.
When we adopt the stationary values for γ, ρ, and κ:

γ_s = 0,   ρ_s = r_q λ²/(2τ),   κ_s = r_q² λ² / [2(1 − r_q τ f_1)],   (7.21)
the equations of motion for μ and σ become

dμ/dt = f_0 + f_2 σ + I(t),   (7.22)

dσ/dt = 2 f_1 σ + r_q² λ²/(1 − r_q τ f_1) + β²,   (7.23)

where r_q is given by Eq. (7.13).
We may obtain the effective Langevin equation given by

dx/dt = F(x) + λ_eff η(t) + β ξ(t) + I(t),   (7.24)

with

λ_eff = r_q λ / √(1 − r_q τ f_1),   (7.25)

from which Eqs. (7.22) and (7.23) are derived [11, 12]. Equations (7.24) and (7.25),
which are the main results of this study, clearly express the effect of non-Gaussian
colored noise. The effective magnitude of noise λ_eff depends on q and τ.
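For the linear force F(x) = −x (so f_0 = −μ, f_1 = −1, f_2 = 0) and I(t) = 0, the closed moment equations (7.22)-(7.23) can be integrated directly. The following sketch (ours; arbitrary parameters) checks that the variance relaxes to (λ_eff² + β²)/2 with λ_eff from Eq. (7.25):

```python
import numpy as np

# Forward-Euler integration of the closed moment equations (7.22)-(7.23) for
# F(x) = -x, i.e. f0 = -mu, f1 = -1, f2 = 0, with I(t) = 0 (illustrative).
q, tau, lam, beta = 0.8, 1.0, 1.0, 0.5
r_q = 2.0 * (2.0 - q) / (5.0 - 3.0 * q)            # Eq. (7.13)
lam_eff = r_q * lam / np.sqrt(1.0 + r_q * tau)     # Eq. (7.25) with f1 = -1

mu, sigma = 1.0, 0.0                               # initial mean and variance
dt = 1e-3
for _ in range(20_000):                            # integrate up to t = 20
    dmu = -mu                                      # Eq. (7.22)
    dsigma = -2.0 * sigma + r_q**2 * lam**2 / (1.0 + r_q * tau) + beta**2
    mu += dmu * dt                                 # Eq. (7.23) for dsigma
    sigma += dsigma * dt

sigma_s = (lam_eff**2 + beta**2) / 2.0             # predicted stationary variance
print(mu, sigma, sigma_s)
```

The fixed point of the σ-iteration coincides with the stationary variance of the effective Langevin equation (7.24), illustrating the consistency of Eqs. (7.23) and (7.25).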
7.3 Discussion
We will compare the results of the MM with those of several analytical methods,
such as the universal colored noise approximation (UCNA) [9, 15] and functional-integral
methods (FIM1 [21] and FIM2 [7, 8]).
(a) UCNA. Jung and Hänggi [9, 15] proposed the UCNA, interpolating between
the two limits τ = 0 and τ = ∞ of colored noise, and it has been widely adopted
for studying the effects of Gaussian and non-Gaussian colored noises. Employing the
UCNA, we may derive the effective Langevin equation. Taking the time derivative
of Eq. (7.1) with β = 0 and using Eq. (7.14) for ε, we obtain the effective Langevin
equation given by [13]

dx/dt = F_eff(x) + λ_eff η(t) + I_eff(t),   (7.26)

with

F_eff^U(x) = F(x)/(1 − r_q τ F′),   λ_eff^U = λ r_q/(1 − r_q τ F′),
I_eff^U(t) = [I + r_q τ (dI/dt)]/(1 − r_q τ F′),   (7.27)

where F = F(x), F′ = dF(x)/dx, and r_q is given by Eq. (7.13). It is noted that λ_eff^U given
by Eq. (7.27) generally depends on x, yielding multiplicative noise in Eq. (7.26).
(b) Functional-Integral Method (FIM1). Wu, Luo, and Zhu [21] started from the
formally exact expression for P(x, t) of Eqs. (7.1) and (7.14) with I(t) = 0 given by

∂P(x, t)/∂t = −(∂/∂x)[F(x) P(x, t)] − (∂/∂x)⟨ε(t) δ(x(t) − x)⟩
  − β(∂/∂x)⟨ξ(t) δ(x(t) − x)⟩,   (7.28)

where ⟨·⟩ denotes the average over the probability P(x, t) to be determined. They
obtained the effective Langevin equation, which yields Eq. (7.26) but with [13]

F_eff^W(x) = F(x),   λ_eff^W = λ r_q/√(1 − r_q τ F′_s),   I_eff^W(t) = 0,   (7.29)

where F′_s denotes the stationary average of F′(x).
Table 7.1 A comparison among various approaches to the Langevin model given
by Eqs. (7.1) and (7.2) [or (7.14)], which yield the effective Langevin equation
given by dx/dt = F_eff + λ_eff η(t) + I_eff(t), where r_q = 2(2 − q)/(5 − 3q) and
s_q = 1 + (q − 1)(τ/2λ²)⟨F²⟩: (1) MM [13], (2) FIM1 [21], (3) UCNA [9, 15] and (4)
FIM2 [7, 8] (see text)

F_eff                λ_eff                       I_eff                                    Method
F                    λ r_q/√(1 − r_q τ f_1)      I(t)                                     MM (1)
F                    λ r_q/√(1 − r_q τ F′_s)     0                                        FIM1 (2)
F/(1 − r_q τ F′)     λ r_q/(1 − r_q τ F′)        [I(t) + r_q τ dI/dt]/(1 − r_q τ F′)      UCNA (3)
F/(1 − s_q τ F′)     λ s_q/(1 − s_q τ F′)        0                                        FIM2 (4)
(c) Functional-Integral Method (FIM2). The functional-integral calculation of
Refs. [7, 8] also leads to the form of Eq. (7.26), but with [13]

F_eff^F(x) = F(x)/(1 − s_q τ F′),   λ_eff^F = λ s_q/(1 − s_q τ F′),   I_eff^F(t) = 0,   (7.30)

with

s_q = 1 + (q − 1)(τ/2λ²)⟨F²⟩.   (7.31)

λ_eff^F in Eq. (7.30) depends on x in general and yields multiplicative noise in
Eq. (7.26).
Results of the various methods are summarized in Table 7.1. We note that the result
of the MM agrees with that of FIM1, but it is different from those of the UCNA and
FIM2. Even for q = 1.0 (Gaussian noise), the UCNA does not agree with the MM
within O(τ).
Figure 7.1 shows stationary PDFs calculated with F(x) = −x, λ = 1.0, β = 0.5,
and I = 0.0 for several sets of (q, τ) by using the various methods as well as
DS, which is performed for Eqs. (7.1) and (7.2). We note that the widths of the PDFs for
τ = 1.0 are narrower than those for τ = 0.5, because the effective noise strength
λ_eff [= λ r_q/√(1 + r_q τ)] decreases with increasing τ. The widths of the PDFs for q = 0.8
are slightly narrower than those for q = 1.0. This is due to a reduced r_q ≈ 0.92 for
q = 0.8, which is smaller than r_q = 1.0 for q = 1.0.
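A direct simulation in the same spirit can be sketched as follows (ours, not the chapter's DS code; q = 1.0 is chosen so that, for the linear force, the moment-method stationary variance is exact and can serve as a reference):

```python
import numpy as np

# Euler-Maruyama direct simulation of Eqs. (7.1)-(7.2) with F(x) = -x and
# I(t) = 0, compared with the stationary variance predicted by the effective
# Langevin equation (7.24)-(7.25).  Parameters are illustrative.
rng = np.random.default_rng(1)
q, tau, lam, beta = 1.0, 0.5, 1.0, 0.5
dt, n_steps = 1e-3, 1_000_000
sqdt = np.sqrt(dt)

x = eps = 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    K = eps / (1.0 + (q - 1.0) * (tau / lam**2) * eps**2)   # Eq. (7.3)
    x += (-x + eps) * dt + beta * sqdt * rng.standard_normal()
    eps += -(K / tau) * dt + (lam / tau) * sqdt * rng.standard_normal()
    xs[i] = x
xs = xs[n_steps // 10:]                  # discard the transient

r_q = 2.0 * (2.0 - q) / (5.0 - 3.0 * q)
var_mm = (r_q**2 * lam**2 / (1.0 + r_q * tau) + beta**2) / 2.0
print(xs.var(), var_mm)
```

For q ≠ 1 the same loop provides a numerical testbed for the accuracy of the MM prediction.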
7.4 Conclusion
Fig. 7.1 Stationary PDFs P(x) for F(x) = −x calculated by the MM (solid curves), UCNA (chain
curves), FIM2 (dotted curves), and DS (dashed curves), results of FIM1 being the same as those of
the MM: (a) (q, τ) = (0.8, 0.5), (b) (0.8, 1.0), (c) (1.0, 0.5), and (d) (1.0, 1.0) (λ = 1.0, β = 0.5 and
I = 0.0)
Acknowledgments This work is partly supported by a Grant-in-Aid for Scientific Research from
the Japanese Ministry of Education, Culture, Sports, Science, and Technology.
References
8.1 Introduction
distinct from phase transitions that need spatially extended systems [4]. Genuine
noise-induced phase transitions have been, instead and not surprisingly, found in
many spatiotemporal dynamical systems [5-7].
Many studies in the field of noise-induced phenomena both in zero-dimensional
and in spatially extended systems were, respectively, based on temporal [3] or
spatiotemporal white noises [7-10]. This important model of noise is, however,
mainly appropriate when modeling internal hidden degrees of freedom of
microscopic nature. On the contrary, extrinsic fluctuations (i.e., originating externally
to the system in study) may exhibit both temporal and spatial structures [6, 11],
which may induce new effects. For example, it was shown that zero-dimensional
systems perturbed by colored noises exhibit correlation-dependent properties that
are missing in the case of null autocorrelation time, such as the emergence of stochastic
resonance also for linear systems, and reentrance phenomena, i.e. transitions from
monostability to bistability and back to monostability [2, 4, 12]. Even more striking
effects are observed in spatially extended systems that are perturbed by spatially
white but temporally colored noises. These phenomena are induced by a complex
interplay between noise intensity, spatial coupling, and autocorrelation time [4].
García-Ojalvo, Sancho, and Ramírez-Piscina introduced in [13] the spatial version
of the Ornstein-Uhlenbeck noise, which we shall call GSR noise, characterized
by both a temporal scale τ and a spatial scale λ [14]. The Ginzburg-Landau
field model (one of the best-studied amplitude equations representing universal
nonlinear mechanisms), additively perturbed by the GSR noise, was investigated
in [6, 15], where the existence of a non-equilibrium phase transition
controlled by both the correlation time τ and the correlation length λ was shown [6, 15].
In order to generate a temporal bounded noise, two basic recipes have been
adopted so far. The first consists in generating the noise by means of an appropriate
stochastic differential equation [16, 17], whereas the second one consists in applying
a bounded function to a standard Wiener process. In the purely temporal setting,
two relevant examples of noises obtained by implementing the first recipe are the
Tsallis-Borland [16] and the Cai-Lin [17] noises, whereas an example generated
by following the second recipe is the zero-dimensional sine-Wiener noise [18].
Our aim here is twofold. First, we want to review the definitions and properties
of three simple spatiotemporal bounded noises we recently introduced [19, 20]. The
first two noises extend the above-mentioned Tsallis-Borland and Cai-Lin noises
[19], the third extends the sine-Wiener bounded noise [20].
Second, we want to assess the effects of such bounded stochastic forces (i.e., of
additive bounded noises) on the statistical properties of the spatiotemporal dynamics
of the Ginzburg-Landau (GL) equation.
Phase transitions induced in the GL model by additive and multiplicative unbounded
noises were extensively studied in the last 20 years [2, 6, 13, 21-27]. Thus, our aim
here is uniquely to focus on the effects related to the boundedness of the noises.
8 Spatiotemporal Bounded Noises 119
A family of Langevin equations generating bounded noises that extend the Tsallis-
Borland noise [16, 28] is the following:

ξ̇(t) = f(ξ) + (√(2D)/τ) η(t),   (8.5)

where η(t) is a Gaussian white noise with ⟨η(t)η(t₁)⟩ = δ(t − t₁), and f(ξ) is
a continuous decreasing function such that: (i) f(0) = 0; (ii) f(−ξ) = −f(ξ); (iii)
f(+B) = −∞ and f(−B) = +∞; (iv) the potential U(ξ) associated with f(ξ) is such that
120 S. de Franciscis and A. dOnofrio
A pre-assigned stationary density P(ξ) of Eq. (8.5) determines f through

f(ξ) = (D/τ²) P′(ξ)/P(ξ).   (8.6)

For example, one may take

f(ξ) = −(1/τ) ξ(t)/[1 − (ξ(t)/B)²].   (8.8)

We recall that in the temporal case the Tsallis-Borland noise is such that: (i) its
stationary distribution of ξ(t) is a Tsallis q-statistics [16, 28]:

P_TB(ξ) = A (B² − ξ²)_+^{1/(1−q)},   q ∈ (−∞, 1);   (8.9)
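The construction can be checked numerically. The sketch below (ours; it assumes the noise normalization of Eqs. (8.5)-(8.6), and all parameter values are illustrative) integrates the drift of Eq. (8.8) and compares the sample statistics with the stationary density P(ξ) ∝ (1 − (ξ/B)²)^p, p = τB²/(2D), that Eq. (8.6) then implies, whose variance is B²/(2p + 3):

```python
import numpy as np

# Euler-Maruyama integration of the Tsallis-Borland-type bounded noise of
# Eq. (8.8): xi' = -(1/tau) xi / (1 - (xi/B)**2) + (sqrt(2 D)/tau) eta(t).
# With this normalization, Eq. (8.6) gives P(xi) ~ (1 - (xi/B)**2)**p with
# p = tau * B**2 / (2 D); the variance of that density is B**2 / (2 p + 3).
rng = np.random.default_rng(5)
B, tau, D = 1.0, 1.0, 0.1              # p = 5: density vanishes fast at +/-B
dt, n_steps = 1e-3, 1_000_000
sqdt = np.sqrt(dt)

xi = 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    drift = -(xi / tau) / (1.0 - (xi / B) ** 2)
    xi += drift * dt + (np.sqrt(2.0 * D) / tau) * sqdt * rng.standard_normal()
    xs[i] = xi
xs = xs[n_steps // 10:]

p = tau * B**2 / (2.0 * D)
var_theory = B**2 / (2.0 * p + 3.0)
print(np.abs(xs).max(), xs.var(), var_theory)
```

The trajectory stays strictly inside (−B, B): the drift diverges at the boundary, which is what enforces the bound.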
In [17, 29, 30], the following family of bounded noises was introduced:

ξ̇(t) = −(1/τ_c) ξ(t) + g(ξ) η(t),   (8.10)

where g(±B) = 0 and η(t) is a Gaussian white noise with ⟨η(t)η(t₁)⟩ = δ(t − t₁).
If g(ξ) is symmetric, then the process ξ(t) has zero mean and the same
autocorrelation as the OU process [17, 29], i.e., τ_c denotes the actual autocorrelation
time of the process ξ(t).
As shown in [17, 29], a pre-assigned stationary distribution P(ξ) can be obtained
from Eq. (8.10) by setting
g(ξ)² = −(2/(τ_c P(ξ))) ∫_{−B}^{ξ} u P(u) du.   (8.11)
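For a density of the Cai-Lin form P(ξ) ∝ (B² − ξ²)^δ (cf. Eq. (8.14) below), the integral in Eq. (8.11) can be carried out in closed form, giving g(ξ)² = (B² − ξ²)/((1 + δ)τ_c). The following sketch (ours; illustrative values) verifies this identity by evaluating Eq. (8.11) numerically on a grid:

```python
import numpy as np

# Check Eq. (8.11) for P(xi) ~ (B**2 - xi**2)**delta: the closed form is
# g(xi)**2 = (B**2 - xi**2) / ((1 + delta) * tau_c).  Note that Eq. (8.11) is
# homogeneous of degree zero in P, so the normalization constant cancels.
B, delta, tau_c = 1.0, 0.5, 2.0

u = np.linspace(-B, B, 200_001)
du = u[1] - u[0]
P = (B**2 - u**2) ** delta                       # unnormalized density

cum = np.cumsum(u * P) * du                      # Riemann sum of int u P du
with np.errstate(divide="ignore", invalid="ignore"):
    g2_num = -2.0 * cum / (tau_c * P)            # Eq. (8.11)
g2_exact = (B**2 - u**2) / ((1.0 + delta) * tau_c)

interior = slice(1000, -1000)                    # avoid endpoints, where P -> 0
err = np.max(np.abs(g2_num[interior] - g2_exact[interior]))
print(err)
```

In particular g(±B) = 0, as required for Eq. (8.10) to keep the process inside [−B, B].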
The spatiotemporal version of this noise is defined by adding a lattice diffusion term:

∂_t ξ(x, t) = −(1/τ_c) ξ(x, t) + (λ²/2τ_c) ∇² ξ(x, t) + g(ξ) η(x, t),   (8.12)

where η(x, t) is a space-time white noise. For the choice

g(ξ)² = (B² − ξ²)/((1 + δ) τ_c),   (8.13)

Eq. (8.11) is satisfied by P(ξ) ∝ (B² − ξ²)^δ, implying, in the purely temporal case, the
following stationary distribution for ξ:

P_CL(ξ) = A (B² − ξ²)_+^δ,   δ > −1.   (8.14)

For δ > 0, P_CL(ξ) is unimodal; for δ ∈ (−1, 0) it is bimodal with P_CL(±B) = +∞.
Suppose now that at a given time t₁ a lattice variable ξ_p attains a value a close to B,
so that g(a) ≈ 0 and

∇²_L ξ_p(t₁) ≤ 0.   (8.15)

Then

ξ_p(t₁ + dt) = a + dt [−a/τ_c + (λ²/2τ_c) ∇²_L ξ_p(t₁)] ≤ a ≤ B,   (8.16)

and, dt being infinitesimal, ξ_p(t₁ + dt) ≤ B. Similar reasoning holds when ξ_p(t₁) is very
close to −B.
The sine-Wiener noise is obtained by applying the bounded function h(u) =
B sin(√(2/τ) u) to a standard Wiener process W(t), yielding:

ξ(t) = B sin(√(2/τ) W(t)).   (8.17)

In the purely temporal case, its asymptotic equilibrium distribution is the arcsine density

P_eq(ξ) = 1/(π √(B² − ξ²)),   (8.18)
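A minimal sketch (ours; arbitrary parameters) generates a sine-Wiener path and compares its empirical statistics with the arcsine law, whose variance is B²/2:

```python
import numpy as np

# Sine-Wiener noise, Eq. (8.17): push a single Wiener path through
# h(u) = B * sin(sqrt(2/tau) * u).  After a transient, the one-time law
# relaxes to the arcsine density of Eq. (8.18), with variance B**2 / 2.
rng = np.random.default_rng(2)
B, tau = 1.0, 1.0
dt, n_steps = 1e-3, 1_000_000

W = np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))   # Wiener path
xi = B * np.sin(np.sqrt(2.0 / tau) * W)                     # Eq. (8.17)
xi = xi[n_steps // 10:]                                     # discard transient

print(np.abs(xi).max(), xi.var(), B**2 / 2.0)
```

By construction |ξ(t)| ≤ B for all t, with no drift singularity needed to enforce the bound; this is the second recipe mentioned in the introduction.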
In order to characterize the properties of the bounded noises defined in the previous
sections, we may study the global behavior of the noise by means of the equilibrium
heuristic probability density of the noise lattice variables ξ_p, P_eq(ξ_p) (Fig. 8.1).
Fig. 8.1 Equilibrium distribution P_eq(ξ_p) for some spatiotemporal bounded noises, on a 40 × 40
lattice system with B = 1. Panel (a): Cai-Lin noise with τ_c = 1, δ = 0.5 and 2D = 1;
panel (b): sine-Wiener noise with τ_c = 2 and λ = 0
∂_t φ_p = (1/2)(φ_p − φ_p³) + (λ/2) ∇²_L φ_p + A_p(t),   (8.20)

where A_p(t) is a generic bounded or unbounded additive noise. If A_p(t) is the GSR
noise, it was shown [6] that both the spatial and temporal correlation parameters (λ and
τ) shift the transition point towards larger values.
In the following we will illustrate some analytical and numerical results for the
case where A_p(t) is one of the three bounded noises described above. Our aim is to
provide a testbed for our novel spatiotemporal bounded noises, and not to evidence
some unknown aspects of the much-studied GL model.
In line with [6], phase transitions in the GL equation will be characterized by means
of the order parameter global magnetization M and of its relative fluctuation σ_M:

M ≡ ⟨|Σ_p φ_p|⟩/N²,   σ_M ≡ (⟨|Σ_p φ_p|²⟩ − ⟨|Σ_p φ_p|⟩²)/N².   (8.21)

Again in line with [6], we define a transition from large to small values of the order
parameter as an order-to-disorder transition.
All simulations have been performed on a 40 × 40 lattice for a time interval
[0, 250], and the temporal averages were computed in the interval [125, 250]. In all
cases, the noise initial condition was set to 0. Moreover, the initial condition of the GL
system was the ordered phase, i.e., φ(x, 0) = 1 for all x; thus we measured the robustness
of order against the presence of the bounded noise.
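The protocol above can be sketched in a few lines (ours; lattice size, times, and parameters are scaled down for speed, and we use a per-site sine-Wiener drive as the simplest bounded noise, with Eq. (8.20) read with the normalization used here):

```python
import numpy as np

# Scaled-down sketch of the simulation protocol: a 16x16 Ginzburg-Landau field,
# Eq. (8.20), driven by an additive per-site sine-Wiener noise of amplitude
# B < B* = 1/(3*sqrt(3)), starting from the ordered phase phi = 1.  For such
# small B the analytical argument of the next subsection predicts that the
# magnetization M stays trapped near s_c(B): no order-to-disorder transition.
rng = np.random.default_rng(3)
N, lam, B, tau_c = 16, 1.0, 0.1, 1.0
dt, n_steps = 1e-2, 4000

phi = np.ones((N, N))
W = np.zeros((N, N))
mags = []
for step in range(n_steps):
    W += np.sqrt(dt) * rng.standard_normal((N, N))
    A = B * np.sin(np.sqrt(2.0 / tau_c) * W)             # bounded: |A| <= B
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)      # periodic Laplacian
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    phi += dt * (0.5 * (phi - phi**3) + 0.5 * lam * lap + A)
    if step >= n_steps // 2:                             # second-half averages
        mags.append(abs(phi.sum()) / N**2)               # Eq. (8.21)

M = float(np.mean(mags))
print(M)
```

Raising B well above B* in this sketch reproduces qualitatively the loss of order discussed below.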
If at some time a lattice variable ψ_k attains the minimum over the lattice, then its
discrete Laplacian satisfies

∇²_L ψ_k ≥ 0.   (8.22)

This property and the fact that A_p(t) ≥ −B imply that φ_p(t) ≥ ψ_p(t), where

∂_t ψ_p = (1/2)(ψ_p − ψ_p³) + (λ/2) ∇²_L ψ_p − B,   ψ_p(0) = φ_p(0).   (8.23)

Now, if 0 < B < B* = 1/(3√3), then the equation

s − s³ = 2B   (8.24)

has three solutions s_a(B) < 0, s_b(B) ∈ (0, 1) and s_c(B) ∈ (0, 1) such that s_b(B) <
s_c(B). For example, for B = 0.19 < B* it is: s_a(0.19) ≈ −1.15306, s_b(0.19) ≈
0.52331 and s_c(0.19) ≈ 0.62975. In particular, if B ≪ 1, then it is s_c(B) ≈ 1 − B
and s_a(B) ≈ −1 − B. It is an easy matter to show that if φ_p(0) > s_b(B) then
ψ_p(t) > s_b(B), also implying φ_p(t) > s_b(B) and of course that M(t) > s_b(B) and
M_s(t) > s_b(B). Indeed, suppose that at a given time instant t₁ all ψ_p(t₁) ≥ s_b(B), except
at a point q where ψ_q(t₁) = s_b(B). Thus, it is

∂_t ψ_q(t₁) = (1/2)(ψ_q − ψ_q³) + (λ/2) ∇²_L ψ_q − B = 0 + (λ/2) ∇²_L ψ_q ≥ 0.   (8.25)
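The numerical values of the three roots quoted above are easily checked (a one-line verification, ours):

```python
import numpy as np

# Roots of s - s**3 = 2B for B = 0.19 < B* = 1/(3*sqrt(3)), i.e. the real
# zeros of s**3 - s + 2B.
B = 0.19
roots = np.sort(np.roots([1.0, 0.0, -1.0, 2.0 * B]).real)
s_a, s_b, s_c = roots
print(s_a, s_b, s_c)
```

The three real roots exist precisely because 2B is below the local maximum 2/(3√3) of s − s³, which is the origin of the threshold B*.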
The vector Φ_c(B) = s_c(B)(1, . . . , 1) is a locally stable equilibrium point for the
differential system ruling the dynamics of φ_p(t). Indeed, Φ_c is a minimum of the
associated energy. However, the system might be multistable, similarly to the GL
model with total coupling in the lattice [32]. By adopting a Weiss mean-field
approximation, one can proceed as in [32] and infer that the equilibrium is unique
for N ≫ 1. Namely, defining the auxiliary variable m_p as the average of φ over the
nearest neighbors of p, m_p = (1/4) Σ_{j∈ne(p)} φ_j, the equilibrium equations read

φ_p³ + 3 φ_p = 4 m_p − 2B.   (8.26)

We are only interested in the subset φ_p ≥ s_b(B), which also implies m_p ≥ s_b(B). Note
now that the equation s³ + 3s = x for x > 0 has a unique positive solution s = k(x);
thus

φ_p = k(4 m_p − 2B).

Now, by the approximation m_p ≈ (1/N) Σ_{j=1}^{N} φ_j ≡ m, one gets the equation

m = k(4m − 2B),

which has to be solved under the constraint m > s_b(B). As it is easy to verify, the
above equation has only one solution, m = s_c(B).
In any case, for B ≪ 1 the initial point φ_p(0) = 1 should be such that φ_p(t) remains
in the basin of attraction of Φ_c(B), so that for large times φ_p(t) ≈ s_c(B), implying that
M(t) ≈ s_c(B).
From the inequality A_p(t) ≤ B, by using similar methods one may infer that for
small B it is φ_p(t) ≲ u_c(B), where u_c(B) > 1 is the unique positive solution (for B < B*) of the equation

u − u³ = −2B.   (8.31)

Note that it is u_c(B) = −s_a(B), due to the anti-symmetry of the function s − s³. Summing
up, we may say that for small B, and probably for all B ∈ (0, B*), it is asymptotically

s_c(B) ≲ φ_p(t) ≲ u_c(B).   (8.32)
We also solved numerically the equilibrium system

(1/2)(φ_p − φ_p³) + (λ/2) ∇²_L φ_p − B = 0   (8.33)

for various values of B in the interval (0.01, B*), and in all cases we found only
one equilibrium with components greater than s_b(B): Φ = Φ_c(B) = s_c(B)(1, . . . , 1).
Similarly, when setting A_p(t) = +B in Eq. (8.20), we found only one equilibrium
value: u_c(B)(1, . . . , 1).
Figure 8.2a shows the effect of the noise amplitude B on the curve of M vs. τ_c. For small
B, in line with our analytical calculations, no phase transition occurs. For larger B,
a phase transition is observed, whose transition point decreases with increasing B.
Based on the analytical study of the previous subsection, it is excluded that for small
values of B a phase transition could be observed for any value of τ_c.
In the absence of spatial coupling (λ = 0), the magnetization M is a decreasing
function of the autocorrelation time τ_c (see Fig. 8.2b). This finding suggests that
bounded noises promote the disordered phase for the GL system. If the perturbation
is the GSR noise, then the phenomenology is the opposite: τ_c enhances the ordered
phase [21].
The differences of P_eq(ξ_p) between the bounded and the unbounded noises may
roughly explain these behaviors. Indeed, in the GSR noise the standard deviation of
Fig. 8.2 Panel (a): effect of the noise amplitude B on the curve M vs. τ_c for the GL model perturbed
by additive spatiotemporal sine-Wiener noise. Here the initial condition is φ(x, 0) = 1. Other
parameters: λ = 1 and 2D = 1. Panel (b): effects of the autocorrelation parameter τ_c on the GL
model perturbed by additive spatiotemporal bounded noises. Other parameters: B = 2.4, λ = 0
and 2D = 1
Fig. 8.3 Effects of the Cai-Lin parameter δ (panel (a)) and of the Tsallis-Borland parameter q (panel (b))
on the reentrant transition. Other parameters: 40 × 40 lattice, B = 2.6 and τ_c = 0.3. Taken from
Ref. [19]. © American Physical Society (2011)
the noise scales as 1/√τ_c. Thus, the related disorder-to-order transition with τ_c in the GL field
could be caused by this noise-amplitude reduction. On the contrary, in both the Cai-
Lin and Tsallis-Borland noises the equilibrium standard deviation is independent of
τ_c, while in the sine-Wiener noise it is weakly dependent. As a consequence, the field
is driven by an increasingly quenched, τ_c-dependent noise with a constant broad
distribution, enhancing the disordered phase.
The behavior of the system is deeply affected by the spatial coupling. In fact,
for the Cai-Lin and Tsallis-Borland noises one observes in some cases a reentrant
transition order/disorder/order in λ (Fig. 8.3). It is possible to explain the emergence
of the reentrant transition by the double role that λ has on the noise equilibrium
distribution: on one side λ enhances the spatial quenching of the noise, while
from the other it reduces the noise amplitude in terms of the standard deviation of
P(ξ_p). The intrinsic dynamical process that generates bounded noise, and not
Fig. 8.4 Reentrant phase transition in the GL model perturbed by additive spatiotemporal sine-Wiener
noise for varying white-noise strength 2D. Initial condition is φ(x, 0) = 1. Panel (a): global
magnetization M. Panel (b): relative fluctuation σ_M. Other parameters: B = 2.6 and τ_c = 2
Fig. 8.5 Stationary distribution of the field for the GL model perturbed by additive spatiotemporal
sine-Wiener noise, in response to changes in the noise parameters B (left panel) and D (right panel).
Other parameters are, respectively, (τ_c = 2, λ = 1, 2D = 0.75) and (τ_c = 2, λ = 1, B = 2.6)
In the first part of this work, we defined three classes of spatiotemporal colored
bounded noises, which extend the zero-dimensional Tsallis-Borland noise, the
Cai-Lin noise, and the sine-Wiener noise. We analyzed the role of the spatial coupling
parameter λ and of the temporal correlation parameter τ_c on the distribution
of the noise by studying the noise equilibrium distribution. Unlike the case of the GSR
noise, the equilibrium distributions of the noises introduced here do not depend
on τ_c (or have only a weak dependence on it), while in some cases the increase of λ induces
transitions from bimodality to unimodality or trimodality in the distributions. These
features could be important in the study of bounded noise-induced transitions of
stochastically perturbed nonlinear systems.
In the second part we employed the above-mentioned bounded noises to investigate
the phase transitions of the Ginzburg-Landau model under additive stochastic
perturbations. Our simulations showed a phenomenology quite different from the
one induced by colored unbounded noises. To start, in the presence of spatially
uncoupled bounded noises, the increase of the temporal correlations enhances the
quenching of the noise, eventually producing an order-to-disorder transition in the
GL model.
If the perturbation is unbounded, an opposite transition is observed. Furthermore,
spatial coupling induces contrasting effects on the spatiotemporal fluctuations of the
noise, resulting for some kinds of noises in a reentrant transition (order-disorder-
order) in the GL field. This specific dependence of the transition on the
type of noise has not been observed previously, to the best of our knowledge, in
spatiotemporal dynamical systems, and it is in line with previous observations in
zero-dimensional systems.
We studied the effect of bounded perturbations on GL transitions and stressed,
with both numerical simulations and analytical considerations, that the boundedness
of the noise is crucial for the stability of the ordered state.
In general, the observed phenomenologies in GL systems turned out to depend
strongly on the specific model of noise that has been adopted. Thus, in the absence of
experimental data on the distribution of the stochastic fluctuations for the problem
under study, it could be necessary to compare multiple kinds of possible models of
stochastic perturbations. This is in line with similar observations concerning bounded
noise-induced transitions in zero-dimensional systems [33].
Acknowledgements This research was performed under the partial support of the Integrated EU
project "P-medicine: From data sharing and integration via VPH models to personalized medicine"
(Project No. 270089), which is partially funded by the European Commission under the Seventh
Framework program.
References
1. Gammaitoni, L., Hänggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
2. Ridolfi, L., D'Odorico, P., Laio, F.: Noise-Induced Phenomena in the Environmental Sciences.
Cambridge University Press, Cambridge (2011)
3. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry, and Biology. Springer Series in Synergetics. Springer, New York (1984)
4. Wio, H.S., Lindenberg, K.: Modern challenges in statistical mechanics. AIP Conf. Proc. 658,
1 (2003)
5. Ibanes, R.T.M., García-Ojalvo, J., Sancho, J.M.: Lect. Notes Phys. 557/2000, 247 (2000)
6. García-Ojalvo, J., Sancho, J.M.: Noise in Spatially Extended Systems. Springer, New York
(1996)
7. Sagués, F., Sancho, J.M., García-Ojalvo, J.: Rev. Mod. Phys. 79(3), 829 (2007)
8. Wang, Q.Y., Lu, Q.S., Chen, G.R.: Phys. A Stat. Mech. Appl. 374(2), 869 (2007)
9. Wang, Q.Y., Lu, Q.S., Chen, G.R.: Eur. Phys. J. B Condens. Matter Complex Syst. 54(2),
255 (2006)
10. Wang, Q.Y., Perc, M., Lu, Q.S., Duan, S., Chen, G.R.: Int. J. Mod. Phys. B 24, 1201 (2010)
11. Sancho, J.M., García-Ojalvo, J., Guo, H.: Phys. D Nonlinear Phenom. 113(2-4), 331 (1998)
12. Jung, P., Hänggi, P.: Phys. Rev. A 35, 4464 (1987)
13. García-Ojalvo, J., Sancho, J.M., Ramírez-Piscina, L.: Phys. Rev. A 46, 4670 (1992)
14. Lam, P.M., Bagayoko, D.: Phys. Rev. E 48, 3267 (1993)
15. García-Ojalvo, J., Sancho, J.M.: Phys. Rev. E 49, 2769 (1994)
16. Wio, H.S., Toral, R.: Phys. D Nonlinear Phenom. 193(1-4), 161 (2004)
17. Cai, G.Q., Lin, Y.K.: Phys. Rev. E 54, 299 (1996)
18. Bobryk, R.V., Chrzeszczyk, A.: Phys. A Stat. Mech. Appl. 358(2-4), 263 (2005)
19. de Franciscis, S., d'Onofrio, A.: Phys. Rev. E 86, 021118 (2012)
20. de Franciscis, S., d'Onofrio, A.: arXiv:1203.5270v2 [cond-mat.stat-mech] (2012)
21. García-Ojalvo, J., Sancho, J.M., Ramírez-Piscina, L.: Phys. Lett. A 168(1), 35 (1992)
22. García-Ojalvo, J., Parrondo, J.M.R., Sancho, J.M., Van den Broeck, C.: Phys. Rev. E 54,
6918 (1996)
23. Carrillo, O., Ibanes, M., García-Ojalvo, J., Casademunt, J., Sancho, J.M.: Phys. Rev. E 67,
046110 (2003)
24. Maier, R.S., Stein, D.L.: Proc. SPIE Int. Soc. Opt. Eng. 5114, 67 (2003)
25. Komin, N., Lacasa, L., Toral, R.: J. Stat. Mech. Theory Exp. P12008 (2010). doi:10.1088/1742-
5468/2010/12/P12008
26. Scarsoglio, S., Laio, F., D'Odorico, P., Ridolfi, L.: Math. Biosci. 229(2), 174 (2011)
27. Ouchi, K., Tsukamoto, N., Horita, T., Fujisaka, H.: Phys. Rev. E 76, 041129 (2007)
28. Borland, L.: Phys. Lett. A 245(1-2), 67 (1998)
29. Cai, G., Suzuki, Y.: Nonlinear Dyn. 45, 95 (2006)
30. Cai, G., Wu, C.: Probab. Eng. Mech. 19(3), 197 (2004)
31. Coppel, W.: Asymptotic Behavior of Differential Equations. Heath, Boston (1965)
32. Komin, N., Lacasa, L., Toral, R.: J. Stat. Mech. Theory Exp. 10, 12008 (2010)
33. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
Part II
Bounded Noises in the Framework
of Discrete and Continuous Random
Dynamical Systems
Chapter 9
Bifurcations of Random Differential Equations
with Bounded Noise
Abstract We review recent results from the theory of random differential equations
with bounded noise. Assuming the noise to be sufficiently robust in its effects,
we discuss the feature that any stationary measure of the system is supported on a
Minimal Forward Invariant (MFI) set. We review basic properties of the MFI sets,
including their relationship to attractors in systems where the noise is small. In the
main part of the paper we discuss how MFI sets can undergo discontinuous changes
that we have called hard bifurcations. We characterize such bifurcations for systems
in one and two dimensions, and we give an example of the effects of bounded noise
in the context of a Hopf-Andronov bifurcation.
A.J. Homburg
KdV Institute for Mathematics, University of Amsterdam, Science park 904,
1098 XH Amsterdam, The Netherlands
Department of Mathematics, VU University Amsterdam, De Boelelaan 1081,
1081 HV Amsterdam, The Netherlands
e-mail: a.j.homburg@uva.nl
T.R. Young ()
Department of Mathematics, Ohio University, Morton Hall, Athens, OH 45701, USA
e-mail: youngt@ohio.edu
M. Gharaei
KdV Institute for Mathematics, University of Amsterdam, Science park 904, 1098 XH
Amsterdam, The Netherlands
e-mail: m.gharaei@uva.nl
9.1 Introduction
ẋ = A(x) + B(x) ξ_t,
where the dependence on the noise is linear. Bounded noise, in contrast, may be much
more general but is less well understood. In recent years the effects of bounded noise
have received increasing attention for dynamical systems generated by both maps
and differential equations. One type of bounded noise that has been of interest is
Dichotomous Markov Noise (see the review article [11]). This type of noise is often
accessible to analysis and arises naturally in various applications (e.g., [21, 37]).
In these pages we review aspects of dynamics and bifurcations in another type of
bounded noise system, namely, random differential equations (RDEs) with bounded
noise. We will consider random differential equations of the form
ẋ = f(x, ξ_t),   (9.1)

One example is the system

dx = X_0(x) dt + Σ_{i=1}^{m} f_i(ξ) X_i(x) dt,

dξ = Y_0(ξ) dt + Σ_{j=1}^{l} Y_j(ξ) ∘ dW_j,
given by differential equations for the state space variable x, driven by a stochastic
process defined by a Stratonovich stochastic differential equation on a bounded
manifold, see, e.g., [31]. Another example is by random switching between solution
curves of a finite number of ordinary differential equations [9], a generalization
of dichotomous Markov noise. Under some conditions such noise is sufficiently
rich to fit into the framework of this paper. Reference [15] also discusses some
constructions of stochastic processes with bounded noise.
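The random-switching construction mentioned above can be sketched in a few lines (an illustrative toy of ours, not the construction of [9]): two vector fields ẋ = −x ± ε are selected at exponentially distributed switching times, producing a bounded stochastic drive whose trajectories, once inside [−ε, ε], never leave it:

```python
import numpy as np

# Toy bounded noise by random switching between the solution curves of the two
# ODEs x' = -x + eps and x' = -x - eps, with Poisson switching of the driving
# state -- a simple instance of dichotomous-Markov-type bounded noise.  The
# interval [-eps, eps] is forward invariant (and minimal for this toy system).
rng = np.random.default_rng(4)
eps, rate = 0.5, 2.0              # noise level and switching rate (illustrative)
dt, n_steps = 1e-3, 200_000

x, sign = 0.0, 1.0
xs = np.empty(n_steps)
for i in range(n_steps):
    if rng.random() < rate * dt:  # Poisson switching of the driving state
        sign = -sign
    x += (-x + sign * eps) * dt   # forward Euler step of the selected ODE
    xs[i] = x

print(np.abs(xs).max(), eps)
```

The forward-Euler step is a convex combination x(1 − dt) + (±ε)dt, so the invariance of [−ε, ε] holds exactly for the discretized trajectory as well.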
We will discuss the fact that, under mild conditions on the noise, such RDEs
admit a finite number of stationary measures with absolutely continuous densities.
The stationary measures provide the eventual distributions of typical trajectories.
Their supports are the regions accessible to typical trajectories in the long run. It is
important to note that in the case of bounded noise, there may exist more than one
stationary measure.
It was observed that under parameter variation, stationary measures of RDEs can
experience dramatic changes, such as a change in the number of stationary measures
or a discontinuous change in one of their supports. The RDEs we consider possess a
finite number of absolutely continuous stationary measures. The stationary measures
therefore have probability density functions. We distinguish the following changes
in the density functions:
1. The density function of a stationary measure might change discontinuously
(including the possibility that a stationary measure ceases to exist), or
2. The support of the density function of a stationary measure might change
discontinuously.
A discontinuous change in the density function is with respect to the L1 norm
topology. A discontinuous change of the support of a stationary measure is with
respect to the Hausdorff metric topology. It is appropriate to call such changes
hard in reference to hard loss of stability in ordinary differential equations. In [7]
a loss of stability of an invariant set is called hard if it involves a discontinuous
change, in the Hausdorff topology, of the attractor. There is an obvious analogy
with discontinuous changes in (supports of) density functions. The examples studied
later show how adding a small amount of noise to a family of ordinary differential
equations unfolding a bifurcation can lead to a hard bifurcation of density functions.
We note that these hard bifurcations may not be captured by Arnold's notion of
dynamical bifurcation.
Hard bifurcations are related to almost or near invariance in random dynamical
systems, and the resulting effect of metastability. This phenomenon found renewed
interest in [13, 14, 39]. In the context of control theory near invariance was studied
136 A.J. Homburg et al.
in [16, 24] for RDEs and [17] for random diffeomorphisms. One approach, taken in
[17, 44], to study near invariance is through bifurcation theory. It is then important
to describe mechanisms that result in hard bifurcations.
The following sections will contain an overview of the theory of RDEs, in
particular their bifurcations, along the lines of [12, 26, 27]. We do not touch on
the similar theory for iterated random maps. Here are some pointers to the literature
developing the parallel theory for randomly perturbed iterated maps. A description
in terms of finitely many stationary measures can be found in [1, 44]. Aspects of
bifurcation theory are considered in [33,44,45], see [25] for an application in climate
dynamics. References [17, 23, 38, 44] consider quantitative aspects of bifurcations
related to metastability and escape; we do not address such issues here.
In this section we describe the precise setup of the random differential equations
discussed in this chapter. Let M be a compact, connected, smooth d-dimensional
manifold and consider a smooth RDE

ẋ = f(x, ξ_t),   (9.2)

driven by noise paths ω in a space U, on which the shift flow

θ: R × U → U,   θ_t(ω)(·) = ω(· + t),

acts. Solutions Φ_t(x, ω) exist for any ω ∈ U and all initial conditions x in M, and the solutions are absolutely
continuous in t. Furthermore, solutions depend continuously on ω in the space U.
By the assumptions, Φ_t(·, ω): M → M is a diffeomorphism for any ω and t ≥ 0.
Further, if ω is continuous, then t ↦ Φ_t is a classical solution. We also consider the
skew-product flow on U × M given by S_t = (θ_t, Φ_t).
9 Bifurcations of Random Differential Equations with Bounded Noise 137
A set C ⊂ M is called forward invariant if

Φ_t(C, U) ⊂ C for all t ≥ 0.   (9.3)

A stationary measure μ satisfies the time-average identity

lim_{T→∞} (1/T) ∫_0^T φ(Φ_t(x, ω)) dt = ∫_M φ dμ   (9.5)

for P-a.e. (ω, x) ∈ U × M.
We say that a point x ∈ M is μ-generic if (9.5) holds for every φ ∈ C⁰(M, R)
and for P-a.e. ω ∈ U. The set of generic points of a stationary ergodic measure μ
is called the ergodic basin of μ and will be denoted E(μ). An ergodic stationary
probability measure μ whose basin has positive volume, m(E(μ)) > 0, will be called
a physical measure.
Theorem 2 ([22, 26]). Let (9.2) be a random differential equation with ε-level noise
whose flow satisfies (H1) and (H2) on a compact manifold M. Then there are a finite
number of physical, absolutely continuous invariant probability measures μ_1, . . . , μ_k
on M. Each μ_i is supported on the closure of a minimal forward invariant set E_i.
Further, given any x ∈ M and almost any ω ∈ U, there exists t* = t*(x, ω) such that
Φ_t(x, ω) ∈ E_i for some i and all t > t*.
We end this general introduction with a simple but important example of how
MFI sets may occur. Suppose that the random differential equation (9.2) is a small
perturbation of a deterministic system. In this case, attractors generally become
minimal forward invariant sets. Consider a random differential equation:

ẋ = f(x, ξ_t),   (9.6)
is forward invariant [26]. Since A is asymptotically stable and x is inside its basin
(for = 0) it follows from (H1) that O+ (x) intersects A. Since A is an attractor,
any of the conditions A3 (a), (b), or (c) with (H1) implies that A O+ (x). Since
a forward invariant set must contain the forward orbits of all of its points, every
forward invariant set in U contains A. Therefore, there is only one MFI set in U
and it contains A.
Now consider a trapping region U′ ⊃ A. Suppose that U is small enough that
U ⊂ U′ and that ε_1 is small enough that the previous conclusion holds for U. Note that
K = U′ \ U is compact and that the Lyapunov function is strictly decreasing on it.
Thus there exists ε_2 such that the Lyapunov function is also decreasing for the perturbed
flow on K for ε ≤ ε_2. Thus there can be no forward invariant set in K for any ε less
than the minimum of ε_1 and ε_2, and the conclusion holds.
Corollary 1. If x₀ is an asymptotically stable equilibrium for ε = 0, then for all sufficiently small ε > 0 the system has a small MFI set that contains x₀. If γ is an asymptotically stable limit cycle for ε = 0, then for small ε > 0 the system has an MFI set that is a small neighborhood of γ.
140 A.J. Homburg et al.
Proof. If not, then there is an x ∈ (a, b) such that either f(x, λ) ≤ 0 for all λ or f(x, λ) ≥ 0 for all λ. In the first case the forward invariance of (a, b) implies that (a, x) is forward invariant. In the second case we obtain that (x, b) is forward invariant. Either case contradicts the minimality of (a, b).
Proposition 2. If (a, b) is an MFI set, then

f(a, λ) ≥ 0 and f(b, λ) ≤ 0   (9.9)

for all λ ∈ Δ = [−ε, ε], and f₋(a) = 0 and f₊(b) = 0. Further, f₋′(a) ≤ 0 and f₊′(b) ≤ 0.
Proof. The inequalities (9.9) are necessary for a and b to be boundary points of an MFI set. The claim that f₋(a) = f₊(b) = 0 follows from (H1). The final claim f₋′(a) ≤ 0 and f₊′(b) ≤ 0 then follows from the assumption that f is C¹.
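A minimal worked example (ours, not taken from the chapter) illustrates the proposition. For the scalar RDE ẋ = −x + ξ_t with noise bounded by |ξ_t| ≤ ε, the extremal vector fields are

```latex
f_-(x) = \min_{|\lambda|\le \varepsilon}\,(-x+\lambda) = -x-\varepsilon,
\qquad
f_+(x) = \max_{|\lambda|\le \varepsilon}\,(-x+\lambda) = -x+\varepsilon,
```

so f₋(−ε) = 0 and f₊(ε) = 0: the interval (−ε, ε) is the MFI set, the inequalities (9.9) hold at its endpoints, and f₋′(−ε) = f₊′(ε) = −1 < 0, so both endpoints are hyperbolic.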
We can distinguish the following types for the endpoints a and b based on the properties of f₋ and f₊. We say that a is hyperbolic if f₋′(a) ≠ 0, and similarly for b. Otherwise, a or b is said to be non-hyperbolic. For one-dimensional RDEs the following stability result is straightforward.
Fig. 9.1 (a) A stable one-dimensional MFI set. Both endpoints of E = (a, b) are hyperbolic.
(b) A random saddle-node in one dimension. E = (b, c) is minimal forward invariant. Taken from
Ref. [27]
Proposition 3. Given any f satisfying (H1), (H3), suppose that (a, b) is an MFI set with both a and b hyperbolic. Then (a, b) is isolated with some isolating neighborhood W. If f̃ is sufficiently close to f in the C¹ topology, then f̃ has a unique MFI set (ã, b̃) inside W. Further, ã and b̃ are close to a and b, respectively, and are each hyperbolic.
Proof. If a is hyperbolic, it follows that f(x, λ) > 0 for all x in some neighborhood (c, a) and all λ. Similarly, there is a neighborhood (b, d) on which f(x, λ) < 0. It follows that W = (c, d) is an isolating neighborhood for (a, b).
Now let δ > 0 be sufficiently small so that f₋′(x) < f₋′(a)/2 for all x ∈ [a − δ, a + δ]. If f̃ is within |f₋′(a)|/2 of f in the C¹ topology, then the conclusion holds.
We continue with families of RDEs and consider equations

ẋ = f(x, ξ_t),   (9.10)
Fig. 9.2 Extremal flow lines near a stationary point (left picture) or wedge point (right picture) on
the boundary of an MFI set. Taken from Ref. [27]
Definition 8. For each x ∈ P denote by ηᵢ(x), i = 1, 2, the two local solution curves to the extremal direction fields. Denote by ηᵢ± the forward and backward portions of these curves.
We will build up a description of the possible boundary components of an MFI set. To begin, for a point on the boundary either (1) K(x) is less than a half plane, or (2) K(x) is an open half plane, in which case x must be a stationary point, i.e. f(x, λ) = 0 for some λ. We begin by classifying points of type (1).
Lemma 1. If x ∈ ∂E for an MFI set E and K(x) is less than a half plane, then either:
One of the local solution curves ηᵢ(x) coincides locally with ∂E, or,
Both backward solution curves ηᵢ⁻(x) belong to the boundary ∂E.
Definition 9. We call a boundary point x of an MFI set E regular if one of the ηᵢ(x) coincides locally with ∂E. We call a segment of the boundary of E a solution arc if it consists of regular points. If both ηᵢ⁻ belong to ∂E locally, then we call x a wedge point.
The following theorem describes the geometry of MFI sets for typical RDEs on
compact surfaces. Figure 9.2 depicts parts of the boundary and extremal flow lines
near stationary and wedge points.
Theorem 5 ([27]). There is an open and dense set V ⊂ R so that for any random differential equation in V, an MFI set E has piecewise smooth boundary consisting of regular curves, a finite number of wedge points, and a finite number of hyperbolic points that belong to disks of stationary points inside E. Further, if a component of ∂E is a periodic cycle, it has Floquet multiplier less than one. Any RDE in V is stable.
Codimension one bifurcations in families of RDEs on compact surfaces are
described in the following result.
Theorem 6 ([27]). There exists an open dense set O of one-parameter families of
RDEs in R such that the only bifurcations that occur are one of the following:
1. Two sets of stationary points collide at a stationary point on the boundary ∂E, which undergoes a saddle-node bifurcation.
Fig. 9.3 Images of the invariant densities for system (9.13) with ε = 0.1 and increasing values of α. From top left, α = 0.004, 0.020, 0.041, 0.204, 0.407, 0.448. The bottom middle plot (α = 0.407) is immediately after the hard bifurcation. In all six plots the circle exterior to the visible density is the outer boundary of the MFI set. In the last two plots the interior circle is the inner boundary of the MFI set. Figures taken from [12], © American Institute for Mathematical Sciences 2012
ẋ = αx − y − x(x² + y²) + εu,   (9.13)
ẏ = x + αy − y(x² + y²) + εv.
The noise terms u and v are generated via the stochastic system

du = dW1,   (9.14)
dv = dW2,

where dW1 and dW2 are independent (of each other) normalized white noise processes. Equations (9.14) are interpreted in the usual way as Itô integral equations. In this setting, in order to ensure boundedness, (u, v) are restricted to the unit disk by imposing reflective boundary conditions.
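The system (9.13)–(9.14) can be integrated with a simple Euler–Maruyama scheme. The sketch below is ours, not from [12]: the step size, parameter values, and the radial-fold implementation of the reflecting boundary are all assumptions, the fold being one simple choice among several.

```python
import numpy as np

def simulate(alpha, eps=0.1, dt=1e-3, steps=10_000, seed=0):
    """Euler-Maruyama sketch of the noisy Hopf system (9.13)-(9.14).

    The bounded noise (u, v) is Brownian motion confined to the unit
    disk; the reflecting boundary is implemented here by folding the
    radius back (r -> 2 - r), one simple choice among several."""
    rng = np.random.default_rng(seed)
    x, y, u, v = 0.1, 0.0, 0.0, 0.0
    for _ in range(steps):
        # Hopf normal form driven by the bounded noise terms u, v
        dx = (alpha * x - y - x * (x**2 + y**2) + eps * u) * dt
        dy = (x + alpha * y - y * (x**2 + y**2) + eps * v) * dt
        x, y = x + dx, y + dy
        # noise dynamics du = dW1, dv = dW2, reflected at the unit circle
        u += np.sqrt(dt) * rng.standard_normal()
        v += np.sqrt(dt) * rng.standard_normal()
        r = np.hypot(u, v)
        if r > 1.0:                       # fold the excursion back inside
            u, v = u * (2.0 - r) / r, v * (2.0 - r) / r
    return x, y, u, v

x, y, u, v = simulate(alpha=0.4)          # noise ends inside the unit disk
```

Running this for the increasing values of α used in Fig. 9.3 and histogramming (x, y) over long runs is one way to approximate the invariant densities shown there.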
The deterministic Hopf bifurcation involves the creation of a limit cycle. In the
remaining part of this section we discuss the occurrence of attracting random cycles.
Random cycles are closed curves that are invariant for the skew-product system
and thus have a time-dependent position in state space depending on the noise
realization. The following material fits into the philosophy advocated by Arnold
in [2] of studying random dynamical systems through a skew product dynamical
systems approach, so as to capture dynamics with varying initial conditions.
with

θ_t ∘ θ_s = θ_{t+s}

for t, s ∈ R.
Recall that a random fixed point is a map R : U → R² that is flow invariant,

Φ_t(R(ω), ω) = R(θ_t ω),

while a random cycle S satisfies

Φ_t(S(ω, S¹), ω) = S(θ_t ω, S¹).
noise. Such maps without noise, i.e. with ε = 0, are known to possess invariant circles for small positive values of μ. We follow their construction as elaborated in [35]. With a normal form transformation, applied to the map without noise, a map F on the complex plane C is obtained. The reasoning in [35] continues with the following steps. Apply a rescaling and change to polar coordinates to write z = f(μ) e^{iφ}(1 + u). Expressing F in (φ, u) coordinates gives a map of the form

F_{n,ω} = F_{ω_{n−1}} ∘ · · · ∘ F_{ω_1} ∘ F_{ω_0}.   (9.18)
for any w ∈ Lip₁(S¹, [−1, 1]). The graph of the limit function is called the pull-back attractor; its orbit under the flow θ_t is the random limit cycle. Note that this is the point where two-sided time is needed.
The computations to check convergence in (9.19) are most easily carried out by writing ξ = re^{iψ} for the noise and expanding F_ξ in ξ for small ξ: writing F = Ae^{iθ} and ξ = re^{iψ} we get

F_ξ = (A + O(r)) e^{i(θ + A^{−1}O(r))}.   (9.20)

Finally, the contraction properties of the graph transform, uniform in the random parameter, imply that the random cycle is attracting.
We have confined ourselves to a statement on Lipschitz continuous random cycles; the graph transform techniques, however, allow establishing more smoothness [35]. The result does not discuss the dynamics on the random cycle; it is still possible to find an attracting random fixed point on it, compare [4, 6].
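The pull-back idea is easy to demonstrate on a toy model (ours, not the Hopf map of this section): for an affine contraction driven by bounded noise, iterating a fixed noise realization from ever earlier starting times yields a value at time 0 that is independent of the initial condition, the random fixed point.

```python
import numpy as np

def pullback_fixed_point(noise, a=0.5):
    """Pull-back construction for the affine random map
    x_{n+1} = a * x_n + xi_n with |a| < 1 and bounded noise xi.

    Iterating a fixed past-noise realization forward to time 0 gives
    R(omega) = sum_k a**k * xi_{-1-k}; the initial condition is
    forgotten, which is why two-sided time is needed."""
    x = 0.0
    for xi in noise:                # push the past noise forward to time 0
        x = a * x + xi
    return x

rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, size=60)      # one bounded-noise realization
r0 = pullback_fixed_point(xi)             # started from x = 0
r1 = 10.0                                 # same noise, started from x = 10
for z in xi:
    r1 = 0.5 * r1 + z
# both starting points converge to the same random fixed point R(omega)
```

The graph transform of this section plays the same role for circles of initial conditions that the scalar contraction plays here for points.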
References
1. Araújo, V.: Ann. Inst. Henri Poincaré, Analyse non linéaire 17, 307 (2000)
2. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
3. Arnold, L.: IUTAM Symposium on Nonlinearity and Stochastic Structural Dynamics, Madras,
1999. Solid Mech. Appl., vol. 85, pp. 15. Kluwer Academic, Dordrecht (2001)
4. Arnold, L., Bleckert, G., Schenk-Hoppé, K.R.: In: Crauel, H., Gundlach, M. (eds.) Stochastic Dynamics (Bremen, 1997), pp. 71. Springer, Berlin (1999)
5. Arnold, L., Kliemann, W.: On unique ergodicity for degenerate diffusions. Stochastics 21, 41
(1987)
6. Arnold, L., Sri Namachchivaya, N., Schenk-Hoppé, K.R.: Int. J. Bifur. Chaos Appl. Sci. Eng. 6, 1947 (1996)
7. Arnold, V.I., Afraimovich, V.S., Ilyashenko, Yu.S., Shilnikov, L.P.: Bifurcation Theory and
Catastrophe Theory. Springer, Berlin (1999)
8. Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
9. Bakhtin, Y., Hurth, T.: Nonlinearity 25, 2937 (2012) (unpublished)
10. Bashkirtseva, I., Ryashko, L., Schurz, H.: Chaos Solit. Fract. 39, 72 (2009)
11. Bena, I.: Int. J. Modern Phys. B 20, 2825 (2006)
12. Botts, R.T., Homburg, A.J., Young, T.R.: Discrete Contin. Dyn. Syst. Ser. A 32, 2997 (2012)
13. Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: J. Eur. Math. Soc. 6, 399 (2004)
14. Bovier, A., Gayrard, V., Klein, M.: J. Eur. Math. Soc. 7, 69 (2005)
15. Colombo, G., Pra, P.D., Krivan, V., Vrkoc, I.: Math. Control Signals Syst. 16, 95 (2003)
16. Colonius, F., Gayer, T., Kliemann, W.: SIAM J. Appl. Dyn. Syst. 7, 79 (2007)
17. Colonius, F., Homburg, A.J., Kliemann, W.: J. Differ. Equat. Appl. 16, 127 (2010)
18. Colonius, F., Kliemann, W.: In: Crauel, H., Gundlach, M. (eds.) Stochastic Dynamics. Springer,
Berlin (1999)
19. Colonius, F., Kliemann, W.: The Dynamics of Control. Birkhäuser, Boston (2000)
20. Crauel, H., Imkeller, P., Steinkamp, M.: In: Crauel, H., Gundlach, M. (eds.) Stochastic
Dynamics. Springer, Berlin (1999)
21. d'Onofrio, A., Gandolfi, A., Gattoni, S.: Phys. A Stat. Mech. Appl. 91, 6484 (2012)
22. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
23. Froyland, G., Stancevic, O.: ArXiv:1106.1954v2 [math.DS] (2011) (unpublished)
24. Gayer, T.: J. Differ. Equat. 201, 177 (2004)
25. Ghil, M., Chekroun, M.D., Simonnet, E.: Phys. D 237, 2111 (2008)
26. Homburg, A.J., Young, T.R.: Regular Chaotic Dynam. 11, 247 (2006)
27. Homburg, A.J., Young, T.R.: Topol. Methods Nonlin. Anal. 35, 77 (2010)
28. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry, and Biology. Springer Series in Synergetics, vol. 15. Springer, Berlin (1984)
29. Johnson, R.: In: Crauel, H., Gundlach, M. (eds.) Stochastic Dynamics. Springer, Berlin (1999)
30. Kliemann, W.: Ann. Probab. 15, 690 (1987)
31. Kunita, H.: Stochastic Flows and Stochastic Differential Equations. Cambridge University
Press, Cambridge (1990)
32. Kuznetsov, Yu.A.: Elements of Applied Bifurcation Theory. Springer, Berlin (2004)
33. Lamb, J.S.W., Rasmussen, M., Rodrigues, C.S.: ArXiv:1105.5018v1 [math.DS] (2011)
(unpublished)
34. Mallick, K., Marcq, P.: Eur. Phys. J. B 36 119 (2003)
35. Marsden, J.E., McCracken, M.: The Hopf Bifurcation and Its Applications. Springer, Berlin
(1976)
36. Nadzieja, T.: Czechoslovak Math. J. 40, 195 (1990)
37. Ridolfi, L., D'Odorico, P., Laio, F.: Noise-Induced Phenomena in the Environmental Sciences.
Cambridge University Press, Cambridge (2011)
38. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S.: Phys. Rev. E 82, 046217 (2010)
39. Schütte, C., Huisinga, W., Meyn, S.: Ann. Appl. Probab. 14, 419 (2004)
40. Tateno, T.: Phys. Rev. E 65, 021901 (2002)
41. Wieczorek, S.: Phys. Rev. E 79, 036209 (2009)
42. Zeeman, E.C.: Nonlinearity 1, 115 (1988)
43. Zeeman, E.C.: Bull. Lond. Math. Soc. 20, 545 (1988)
44. Zmarrou, H., Homburg, A.J.: Ergod. Theor. Dyn. Syst. 27, 1651 (2007)
45. Zmarrou, H., Homburg, A.J.: Discrete Contin. Dyn. Syst. Ser. B 10, 719 (2008)
Chapter 10
Effects of Bounded Random Perturbations
on Discrete Dynamical Systems
10.1 Introduction
are many examples of such processes arising from different areas of knowledge. One could, for instance, study the time-dependent scattering of a plankton population by the sea streaming around an island, the behaviour of charged particles travelling through a magnetic field, the dynamics of the intervals between neural spikes, or the oscillation of share prices in the stock market, among many other phenomena.
Their dynamical behaviour is described in terms of observable quantities. We say
that our system evolves in a space of states or phase space M, that is, the collection
of relevant variables describing the dynamics. In our examples, it could be the
concentration of plankton, the position and speed of charged particles, and so on. In
many cases the phase space M is a subset of a Euclidean space Rn.1
The dynamical behaviour of such quantities can be modelled, in general, by
systems of ordinary differential equations. Thus, the model is thought to evolve
in continuous time. Alternatively, we can think of analysing snapshots of the
continuous time dynamics, which gives rise to discrete time models. In this case,
the present state, say given by the possibly multidimensional variable x M, evolves
every unit of time under a given rule f : M M to the state f (x). For a given
initial condition x0 , its associated orbit is the sequence of points xn , obtained by the
iteration of our rule, such that, for n = 1, 2, . . ., we have xn+1 = f (xn ).
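In code the iteration is a few lines; the logistic rule used below is our illustrative choice of f, not one taken from the chapter.

```python
def orbit(f, x0, n):
    """Return the orbit x_0, x_1, ..., x_n of the iteration x_{k+1} = f(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# phase space M = [0, 1], rule f(x) = 4x(1 - x) (the chaotic logistic map)
xs = orbit(lambda x: 4.0 * x * (1.0 - x), 0.2, 100)
```

The questions of the following paragraphs concern not one such orbit but the statistics of most of them as n grows.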
Dynamically, we would like to reach robust conclusions from a model by seeking methods and tools which describe the behaviour of most orbits as time goes to infinity, rather than focusing on a single trajectory. Another important concern
is the understanding of how stable the asymptotic behaviour is under random
perturbations. It represents a natural point of view of dynamics, since observations of phenomena in nature are always subject to small fluctuations, that is, to some level of noise. Therefore, the physical perception would intrinsically correspond to some noise-contaminated process rather than to a purely deterministic one. The
concept of random dynamical systems is relatively recent, although the interest in
random perturbation of dynamical systems goes back to Kolmogorov. The main
purpose of this chapter is to give an overview of the dynamics of discrete-time systems under bounded random perturbations. Obviously the treatment here is not complete, since
the subject is very broad. Instead, we present the general framework and discuss
some examples of applications and the effects of such perturbations. The remaining
part of this chapter is divided as follows. In Sect. 10.2, we discuss generalities of dynamics under bounded random perturbations. Then we present some applications to
scattering dynamics in Sect. 10.3, and to escape from attracting sets in Sect. 10.4.
Each application uses a different approach to randomly perturb the dynamics,
and we comment on the choice of perturbation scheme based on what we are
interested in. We finish this chapter by presenting some further applications and
possible directions.
There are different ways in which randomness comes into play in dynamical systems.
We shall present two different frameworks to study the random perturbation of dy-
namics. In both cases we consider bounded perturbations. The choice between either
mechanism basically depends on the phenomena one is interested in measuring or
modelling, as we shall discuss in the following sections. In general, we also want
some sort of regularity in the class of functions to be considered; for example, it is common to deal with smooth functions whose inverses are differentiable.
F(x_j) = f(x_j) + ξ_j,

with ‖ξ_j‖ < ε, where ξ_j is the vector of random noise added to the deterministic dynamics at the iteration j, and ε is its maximum amplitude. Note that the sequences
of perturbations applied to each trajectory are independent. We illustrate this in
Fig. 10.1. The idea of perturbations not having preferential directions is to ensure
that the perturbed trajectory should scatter uniformly around the unperturbed one.
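A sketch of this perturbation scheme follows; the code is ours, and the uniform-in-a-disk noise distribution and the Hénon map as the deterministic rule f are illustrative assumptions, not choices made in the chapter.

```python
import numpy as np

def noisy_orbit(f, x0, n, eps, rng):
    """Iterate F(x) = f(x) + xi_j, drawing an independent perturbation
    xi_j uniformly from the disk ||xi|| < eps at every step, so the
    noise has no preferential direction."""
    x = np.asarray(x0, dtype=float)
    out = [x]
    for _ in range(n):
        while True:                       # rejection-sample the disk
            xi = rng.uniform(-eps, eps, size=x.shape)
            if np.linalg.norm(xi) < eps:
                break
        x = f(x) + xi
        out.append(x)
    return np.array(out)

henon = lambda z: np.array([1.0 - 1.4 * z[0]**2 + z[1], 0.3 * z[0]])
rng = np.random.default_rng(0)
orb = noisy_orbit(henon, [0.0, 0.0], 200, eps=1e-3, rng=rng)
# each call to noisy_orbit draws its own independent noise sequence
```

Because every trajectory receives its own noise sequence, two calls with the same initial condition scatter independently around the same unperturbed orbit.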
xn+1 = fn (xn ),
where we randomly choose slightly different maps fn for each iteration n (see
Eq. (10.4) below). It is known that such dynamics has well-defined (in the ensemble
sense) values of dynamical invariants such as fractal dimensions and Lyapunov
exponents [13]. Note that we associate the choice of the map with the iteration.
Therefore, all initial conditions in a given iteration are mapped by the same sequence
of random maps.
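The random-maps scheme, in contrast, draws one map per iteration and applies it to every orbit. The sketch below is ours: a logistic family whose parameter is jittered by at most ε, with the family and all values chosen for illustration.

```python
import numpy as np

def iterate_ensemble(xs, n, eps, rng, r0=3.8):
    """Random-maps scheme: at each iteration draw one map f_n (here a
    logistic map with parameter r0 + delta_n, |delta_n| < eps) and apply
    it to *all* ensemble members, so every orbit sees the same maps."""
    xs = np.asarray(xs, dtype=float)
    for _ in range(n):
        r = r0 + rng.uniform(-eps, eps)   # one map for the whole ensemble
        xs = r * xs * (1.0 - xs)
    return xs

rng = np.random.default_rng(2)
out = iterate_ensemble(rng.uniform(0.1, 0.9, size=1000), 100, 0.01, rng)
```

The single shared parameter draw per iteration is what distinguishes this scheme from the additive-noise scheme above, where each orbit has its own perturbation sequence.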
In the following sections we shall discuss some effects of random bounded
perturbation to discrete time dynamics.
2 Within the mathematical literature the random perturbations are defined in terms of spaces of
maps. In this setting, we have a family of maps and the iteration is obtained by randomly selecting
them. Thus it is said to be a family of random maps even when different sequences are applied to
different orbits.
10 Effects of Bounded Random Perturbations on Discrete Dynamical Systems 155
the dynamics is non-trivial. Trajectories3 can either come from infinity into the interaction region before being scattered again towards infinity, or be initialised inside the interaction region and escape towards infinity.
The way trajectories are scattered fundamentally depends on the characteristics of the scattering region. We call it chaotic scattering whenever the dynamics inside the scattering region is chaotic, that is, whenever characteristic quantities associated with the particles, or trajectories, during the scattering are sensitive to their initial conditions.
There are many ways of detecting such sensitivity. For example, one may be
interested in measuring the time that a set of particles takes to leave a given region
after being sprinkled in. When the scattering is chaotic, such time delay, according
to the initial position of the particles, diverges on a fractal set [4, 5]. Another way
of characterising such sensitivity is by measuring the scattering angle as a function
of the initial angle (or position) of the particles. These are two examples of what
is more generally called scattering functions [4]. The chaotic behaviour of such
scattering is due to the presence of a chaotic non-attracting set containing periodic
orbits of arbitrarily large periods as well as aperiodic orbits distributed on a fractal
geometric structure in the phase spacethe chaotic saddle [4].
From the ergodic point of view, the dynamics of hyperbolic chaotic scattering is
explained by the existence of a chaotic saddle in invertible maps, or a chaotic
repeller in non-invertible ones. It is a zero-measure non-attracting fractal set [4].
Therefore, a randomly chosen initial condition has full probability of escaping the
scattering region. Accordingly, the dynamics of an initial ensemble of particles with
smooth distribution is characterised by orbits that leave the region after a very short
transient, followed by a distribution of particles that leave the scattering region after
a long time [4].
Recall that for hyperbolic systems, the dynamics can be decomposed into
complementary stable and unstable subdynamics.4 As a consequence, points which
are infinitesimally displaced from each other approach or diverge exponentially,
if they are in the stable or unstable invariant directions, respectively. When the
scattering dynamics has an associated hyperbolic structure, we can imagine that
trajectories that approach the saddle along its stable direction and leave it along
its unstable direction will be displaced exponentially too. The overall implication
is that hyperbolic scattering is also associated with an exponential escape rate of
particles. Suppose that there are no disjoint saddles; then, due to the hyperbolic splitting,
3 Because scattering dynamics are so closely related to scattering of particles in physical systems,
we shall refer to the dynamics of initial conditions in a region of the phase space as dynamics of
particles started in such region.
4 The term subdynamics is used here as a simplification of the splitting of the tangent bundle. See,
all initial non-vanishing distributions around the stable manifold decay with an escape rate independent of the initial density [4]. Thus, all orbits close to the chaotic saddle are unstable. There is also a short transient due to some orbits escaping
without approaching the stable manifold. If a given number N₀ of initial conditions is randomly chosen within the scattering region, the decay of the number of particles still in that region after time t scales as N(t) ∝ e^{−γt}; hence,

P(t) ∝ e^{−γt},   (10.1)

where P(t) is the probability of particles still remaining in the interaction region after time t, and γ is a constant5 [4].
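The exponential law (10.1) is easy to reproduce with a toy open system (our choice of map, not one from the chapter): for the open tent map f(x) = s·min(x, 1 − x) with s > 2, a fraction 2/s of a uniform density survives each iteration, so the escape rate is γ = ln(s/2).

```python
import numpy as np

def survival(n_particles, steps, s=3.0, seed=0):
    """Fraction P(t) of orbits of the open tent map f(x) = s*min(x, 1-x)
    still inside [0, 1] after t steps.  For s > 2 a uniform density is
    mapped to a uniform density with mass 2/s, so P(t) = (2/s)**t and
    gamma = log(s/2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n_particles)
    p = []
    for _ in range(steps):
        x = s * np.minimum(x, 1.0 - x)
        x = x[x <= 1.0]                   # drop the escaped particles
        p.append(x.size / n_particles)
    return np.array(p)

p = survival(1_000_000, 10)
gamma = -np.log(p[-1]) / 10               # crude escape-rate estimate
```

The estimate converges to ln(3/2) ≈ 0.405 as the ensemble grows, an instance of the density-independent escape rate discussed above.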
N(t) ∝ t^{−z}.   (10.2)
Fig. 10.2 Simplified representation of the orbits on the torus. (a) A single torus and its centre. (b) The effect of the perturbation: the difference between the dashed and the continuous line is due to the noise, which shifts the centre of the torus from O1 to O2. (c) What would be expected after 4 iterations: each continuous line represents the orbit of a particle on the torus for one iteration, hence different values of the perturbation at each iteration. The centre is expected to move around O1 at each iteration and, for long enough times, the distribution of centres would fill densely the area within the dashed circle. From the paper [9]. Copyright: American Physical Society 2010
In order to derive a heuristic model for the escape, suppose we choose a slightly different map at each iteration, although they are all chosen within the non-hyperbolic
range of parameters. The average effect along the orbits is to shift the invariant tori
in the phase space, such that we end up with a sort of random walk of the KAM
structures around the location for the unperturbed parameters. See the illustration in
Fig. 10.2. This random motion can be thought to cause orbits to acquire motion in
the direction transversal to the tori. The magnitude of this transversal component of
the motion is proportional to the intensity of the perturbation (in the case of small
perturbations). Since only the component of the motion transversal to the tori can
cause an orbit initialised in a KAM island to escape, we focus on this component
of the motion alone, which can be idealised as a one-dimensional random walk.
158 C.S. Rodrigues et al.
The size of the step of the random walk is proportional to the amplitude ε of the perturbation. After n steps, the typical distance D from the starting position reached by the walker is D ∝ ε√n [16]. Let D₀ be a typical transversal distance a particle needs to traverse in order to cross the last KAM surface and escape. Thus the average time (number of steps) τ it takes for particles to escape scales as τ ∝ D₀² ε⁻². So τ scales as ε⁻². The conclusion is that our simple model predicts an exponential decay of particles, with a decay rate γ ∝ τ⁻¹ scaling as [9]

γ ∝ ε².   (10.3)
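The τ ∝ ε⁻² step of this argument can be checked by direct Monte Carlo. The sketch below is ours; the unit escape distance D₀ = 1 and the symmetric ±ε steps are modelling assumptions.

```python
import numpy as np

def mean_escape_time(eps, d0=1.0, n_walkers=2000, seed=0):
    """Mean first time a random walk with step size eps (the transversal
    kick per iteration) reaches the distance d0 needed to cross the last
    KAM surface.  The model predicts tau ~ (d0 / eps)**2."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    while alive.any():
        pos[alive] += eps * rng.choice([-1.0, 1.0], size=alive.sum())
        t[alive] += 1.0
        alive &= np.abs(pos) < d0
    return t.mean()

t1 = mean_escape_time(0.05)               # theory: (d0/eps)**2 = 400 steps
t2 = mean_escape_time(0.025)              # halving eps ...
ratio = t2 / t1                           # ... roughly quadruples tau
```

Halving the step size close to quadruples the mean escape time, which is the ε⁻² scaling behind (10.3).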
Fig. 10.3 Phase space for the map (10.4), for parameter value 6.0. The inset shows a blow-up of the region (x, y) ∈ [2.05, 2.20] × [0.44, 0.47]. From the paper [9]. Copyright: American Physical Society 2010
there after a given time T0 should form a Cantor set with fractal dimension d < 1.
On the other hand, given their algebraic decay, non-hyperbolic chaotic scattering
is characterised by maximal value of the fractal dimension, d = 1, or very close
to it, within limited precision [5, 7, 17, 18]. We estimated the fractal dimension of
the time-delay function T (x), for initial conditions chosen inside a KAM structure.
We chose y0 = 0.465, and different values of x0 were randomly chosen to belong
to the interval [2.05, 2.15]. As is known, the box-counting fractal dimension d is given by d = 1 − α, where α is the uncertainty exponent [19]. Since the larger the amplitude of the perturbation, the further the statistical behaviour of our system is supposed to be from non-hyperbolic behaviour, we expect that the larger the amplitude of the perturbation, the further d is from 1, the value corresponding to the non-hyperbolic limit. This is exactly what we have obtained, as shown in Fig. 10.5b [9].
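The uncertainty-exponent procedure can be sketched on a toy hyperbolic system (our choice, not the map of the chapter): for the open tent map with slope 3 the never-escaping set is the middle-third Cantor set, so d = ln 2 / ln 3 ≈ 0.63 and α = 1 − d ≈ 0.37.

```python
import numpy as np

def delay_time(x, s=3.0, tmax=60):
    """Iterations of the open tent map f(x) = s*min(x, 1-x) before the
    orbit leaves [0, 1] (tmax if it never does): a time-delay function."""
    for t in range(tmax):
        if x < 0.0 or x > 1.0:
            return t
        x = s * min(x, 1.0 - x)
    return tmax

def uncertain_fraction(delta, n=20_000, seed=0):
    """Fraction of pairs (x, x + delta) with different delay times;
    it scales as delta**alpha with alpha the uncertainty exponent."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(0.0, 1.0 - delta, size=n)
    return np.mean([delay_time(x) != delay_time(x + delta) for x in xs])

f1, f2 = uncertain_fraction(1e-3), uncertain_fraction(1e-5)
alpha = np.log(f1 / f2) / np.log(100.0)   # slope between the two scales
d = 1.0 - alpha                           # box-counting dimension estimate
```

In the non-hyperbolic case discussed in the text the same procedure yields α close to 0, i.e. d close to 1.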
Fig. 10.4 Probability distribution of escape time n from the region W = {|x| < 5.0 and |y| < 5.0} for different values of ε. Initial conditions were randomly chosen on the line x ∈ [2.05, 2.07], y = 0.465 inside the nested KAM structures. For each value of ε, we present the exponent γ that best fits the exponential region of the probability distribution. (a) shows all used values of ε; (b) shows small values of n. From the paper [9]. Copyright: American Physical Society 2010
dynamics, which attract their neighbouring points, that is, f^n(x) tends to A as n → ∞; these are the attractors of the system. The points whose orbits eventually come close and converge to A form its basin of attraction, which we denote by
Fig. 10.5 (a) Different values of the exponent γ as a function of ε from Fig. 10.4, as well as the power law that best fits the distribution. In the inset, we consider only values of ε < 0.02. (b) Estimated fractal dimension of the time-delay function T, for different values of ε. We notice that, when the amplitude of the perturbation is decreased, the dynamics approaches the non-hyperbolic limit, and so do the estimated values of the fractal dimension, which approach 1 as ε → 0. The inset shows the value of α as a function of ε. From the paper [9]. Copyright: American Physical Society 2010
Although the dynamics may seem more complicated under the presence of noise,
it is actually well characterised from a statistical point of view. In other words,
for small enough amplitude of noise, there exists a distribution of probability
for the orbit to stay close to the original attractor of the deterministic dynamics.
Furthermore, the probability distribution for the perturbed system converges to the original distribution of the deterministic system as the level of noise decreases to zero [20].
We can picture the noisy dynamics inside the basin of attraction for small enough
amplitude of noise as that of a closed system. The effect of the noise beyond a
threshold is that these perturbations introduce a hole in the basin of attraction, from
where the orbits can escape [20]. Consider our deterministic dynamics xn+1 = f (xn ).
As in Sect. 10.2.1, we shall add some random perturbation of maximum size ε. The hole in the noisy dynamics will be defined in terms of points which might escape. To see that, consider the subset I_ε of the phase space neighbouring the boundary of the basin of attraction. We define it as the set of points x for which f(x) is within a distance ε from some point in the boundary ∂ of the basin of attraction,

I_ε = {x ∈ M; B(f(x), ε) ∩ ∂ ≠ ∅},   (10.5)

where B(f(x), ε) is the ball of radius ε around f(x). See the illustration in Fig. 10.6a.
Finally, imagine that, starting from the deterministic dynamics, we begin to add noise of a very small amplitude. As the amplitude of noise increases, the probability density becomes more spread out around the deterministic one. When the amplitude is large enough, meaning the probability density is so spread out that it comes close to the boundary, it eventually overlaps the set I_ε. This is exactly how the hole is defined,
Fig. 10.6 (a) A basin of attraction W^s(A) and its basin boundary ∂ (dashed line). We illustrate that the iteration z → f(z), from the point z initially in W^s(A), brings the orbit within a distance ε from the boundary. Therefore, the random perturbation ξ applied to f(z), with some ‖ξ‖ < ε, could push the random orbit outside the basin. On the other hand, for the point x ∈ W^s(A) the iteration x → f(x) brings it farther than the maximum perturbation away from the boundary ∂. Therefore, in our illustration z ∈ I_ε but x ∉ I_ε. (b) The inverse of the mean escape time scaling with the amplitude of noise for the Map (10.9), black circles, and for the Map (10.10), blue squares. For each map, the values of the mean escape time were obtained by iterating 10³ random orbits for each value of ε. The dashed lines show the expected scaling (ε − ε_c)^{3/2} and the thick continuous lines show the best fits, exponent ≈ 1.7 for the Rotor map and ≈ 1.6 for the Hénon map. In the insets, we show the transition. For ε < ε_c, we have T = ∞; the random orbits do not escape, therefore 1/T = 0. For ε ≥ ε_c, the escape time scales as Eq. (10.8). For the parameters used, ε_c = 0.086 ± 0.006 for the Rotor map, inset (a), and ε_c = 0.021 ± 0.002 for the Hénon map, inset (b). From the paper [19]. Copyright: American Physical Society 2010
Ĩ_ε = I_ε ∩ supp μ,   (10.6)

where supp μ is the support of the measure. Given that we use bounded noise, for very small noise amplitudes Ĩ_ε = ∅. As the amplitude of the noise is increased beyond a critical amplitude ε = ε_c, we have Ĩ_ε ≠ ∅, and escape takes place for any ε > ε_c. We call Ĩ_ε the conditional boundary. The importance of Ĩ_ε stems from the fact that it represents the set of points which one iteration of the map f can potentially send close enough to the boundary ∂, such that a random perturbation with amplitude ε may send them out of the basin of attraction. The dynamics of escape is thus governed by this set, and it can be understood as a hole that sucks orbits from the basin if they land on it.
Under some assumptions, it is actually possible to estimate the size of this hole, or its measure, μ(Ĩ_ε) > 0, in terms of ε and ε_c; see the details in [20]. For noise amplitudes greater than the critical value, it scales as [20]
μ(Ĩ_ε) ∝ (ε − ε_c)^ζ,   (10.7)

and correspondingly the mean escape time T scales as

T ∝ (ε − ε_c)^{−ζ}.   (10.8)
where x and y are real numbers and, again, ν represents the dissipation parameter. We used ν = 0.02 for both maps, as for this value they present very rich dynamics [33]. For each map, we computed the time that random orbits took to escape from their respective main attractors for a range of noise amplitudes. In each case, the mean escape time was obtained from 10³ random orbits for each value of ε. The results are shown in Fig. 10.6b. For the parameters used here, we obtained ε_c = 0.086 ± 0.006 for the perturbed Rotor map and ε_c = 0.021 ± 0.002 for the perturbed Hénon map, as shown in the insets. In both cases, we obtained good agreement between our simulations and the predictions of our theory over a range of several decades; see the details in [20]. It has been proved that similar power laws in the unfolding parameters are in fact lower bounds for the average escape time scaling in the one-dimensional case [34]. From the numerical perspective, it is also difficult to accurately estimate the value of ε_c.
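The threshold behaviour itself can be reproduced with a one-dimensional toy model (entirely ours; the Rotor and Hénon maps of the text are not reproduced here): f(x) = x/2 + x³ has an attractor at 0 with basin (−1/√2, 1/√2), and under uniform noise of amplitude ε the noisy orbit stays confined until ε exceeds ε_c = max_r (r/2 − r³) ≈ 0.136, the amplitude at which the last invariant interval disappears.

```python
import numpy as np

def escape_time(eps, tmax=100_000, seed=0):
    """Steps until a noisy orbit of f(x) = x/2 + x**3 (attractor at 0,
    basin boundary at +-1/sqrt(2)) leaves the basin under additive noise
    uniform in (-eps, eps); returns tmax if it never escapes."""
    rng = np.random.default_rng(seed)
    b = 1.0 / np.sqrt(2.0)
    xi = rng.uniform(-eps, eps, size=tmax)
    x = 0.0
    for t in range(tmax):
        x = x / 2.0 + x**3 + xi[t]
        if abs(x) > b:
            return t
    return tmax

# below eps_c an interval [-r, r] with r/2 - r**3 >= eps is invariant,
# so the hole is empty and the orbit never escapes; above eps_c it does
t_below = escape_time(0.10)               # returns tmax: no escape
t_above = escape_time(0.40)               # finite escape time
```

Sweeping ε over (ε_c, 1) and averaging escape times over many seeds would trace out a transition curve analogous to the insets of Fig. 10.6b.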
As a last remark, we note that the idea of opening a closed system by adding an artificial hole6 was first considered in [38] and has been used by many others; see further references in [20]. In the case of artificial holes, the authors of [37] show that random perturbations may have an interesting decreasing effect on the escape rate, by making the trajectories miss the hole.
6 We say artificially placed hole when the hole is defined a priori as a region [35–37]. Our intention is just to contrast with the case discussed in [20] and presented in Sect. 10.4, where the hole is given by the random perturbations.
Billiards are a very interesting class of dynamical systems used to model many different physical phenomena. These paradigmatic dynamics connect many areas of mathematics, with questions ranging over several levels of difficulty. Some of them, however, are also suitable for numerical treatment. In a recent paper [39],
the authors considered the effects of random perturbations to a billiard with mixed
phase-space where a hole is placed. They combined the possible effect of missing
the hole [37] and the random walk model to characterise the possible decay regimes
of survival probability. They used, however, uncorrelated noise as in Sect. 10.2.1,
and chose initial conditions outside KAM structures. Since the use of random
maps are more likely to distinguish between structural characteristics given by the
dynamics itself from those added by the uncorrelated noise, it would be interesting
to see the effect of using random maps as well as choosing initial conditions
inside invariant islands. Furthermore, it may be possible to capture changes in the
behaviour from hyperbolic to non-hyperbolic measuring how recurrence properties
are modified using random maps.
A more subtle mathematical topic, but one with strong implications for applied
dynamics and modelling, consists in understanding the relation between Markov
chains and random maps. Consider the framework of Sect. 10.2.1. Then, for
perturbations of maximum size ε, a Markov chain is defined by a family {p_ε(·|x)}
of probability distributions such that every p_ε(·|x) is supported inside the ε-
neighbourhood of f(x). In other words, given a subset U of M, each
p_ε(U|x) is the conditional probability that, given x, the perturbed image of x
is found in U. It turns out that it is actually possible to consider an intrinsic
distance and probability in the collection of maps we choose as a perturbation of
our deterministic system. The idea of representing this Markov chain in terms of
randomly perturbed systems consists in finding the equivalence between this probability
on our collection of maps and the Markov chain. For any sequence of randomly
perturbed systems, one can prove that it is always possible to find a Markov chain
which is represented by this scheme. The opposite question, however, involves many
subtle mathematical issues. In other words, given a Markov chain model, can one
find a sequence of random maps such that the points evolving under both schemes
coincide? The answer strongly depends, for example, on the shape of the probability
density of the Markov chain. Some of these issues have recently been
addressed from a rigorous point of view in [43].
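The forward direction — building the Markov chain associated with a given random map — can be made concrete numerically. The sketch below (a logistic map with additive uniform noise; all choices are illustrative, not those of [43]) estimates p_ε(U|x) by Monte Carlo and verifies that the kernel is supported inside the ε-neighbourhood of f(x).

```python
import random

def f(x):
    """Deterministic logistic map f(x) = 4x(1-x) on [0, 1]."""
    return 4.0 * x * (1.0 - x)

def sample_step(x, eps, rng):
    """One step of the random map: uniform bounded perturbation of f(x).

    (The perturbed point may leave [0, 1]; for this one-step kernel
    demonstration no folding back into the phase space is needed.)
    """
    return f(x) + rng.uniform(-eps, eps)

def p_eps(U, x, eps, n=20000, seed=1):
    """Monte Carlo estimate of the transition kernel p_eps(U | x)."""
    rng = random.Random(seed)
    a, b = U
    hits = 0
    for _ in range(n):
        y = sample_step(x, eps, rng)
        assert abs(y - f(x)) <= eps   # support inside the eps-neighbourhood
        hits += (a <= y <= b)
    return hits / n

# For x = 0.2, f(x) = 0.64; with eps = 0.1 the kernel is uniform on
# [0.54, 0.74], so p_eps([0.5, 0.6] | 0.2) = overlap/(2 eps) = 0.06/0.2 = 0.3.
print(p_eps((0.5, 0.6), 0.2, 0.1))
```

The opposite direction — recovering a random map from a prescribed kernel — is exactly where the subtleties discussed above arise: not every family of conditional densities can be realized by a measurable family of nearby maps.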
References
1. Falconer, K.J.: The Geometry of Fractal Sets. Cambridge University Press, Cambridge (1986)
2. Arnold, L.: Random Dynamical Systems. Springer, New York (1998)
3. Romeiras, F.J., Grebogi, C., Ott, E.: Phys. Rev. A 41, 784 (1990)
4. Ott, E.: Chaos in Dynamical Systems, 2nd edn. Cambridge University Press, Cambridge (2002)
5. Lau, Y.T., Finn, J.M., Ott, E.: Phys. Rev. Lett. 66, 978 (1991)
6. Robinson, C.: Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, 2nd edn. CRC
Press, FL (1999)
7. Motter, A.E., Lai, Y.-C., Grebogi, C.: Phys. Rev. E 68, 056307 (2003)
8. Motter, A.E., Lai, Y.-C.: Phys. Rev. E 65, 015205 (2002)
9. Rodrigues, C.S., de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 82, 026211 (2010)
10. Poon, L., Grebogi, C.: Phys. Rev. Lett. 75, 4023 (1995)
11. Feudel, U., Grebogi, C.: Chaos 7, 597 (1997)
12. Seoane, J.M., Huang, L., Sanjuan, M.A.F., Lai, Y.-C.: Phys. Rev. E 79, 047202 (2009)
13. Kraut, S., Grebogi, C.: Phys. Rev. Lett. 92, 234101 (2004)
14. Kraut, S., Grebogi, C.: Phys. Rev. Lett. 93, 250603 (2004)
15. Altmann, E.G., Kantz, H.: Europhys. Lett. 78, 10008 (2007)
16. Feller, W.: An Introduction to Probability Theory and Its Applications. Wiley, New York (2001)
17. de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 70, 36216 (2004)
18. Seoane, J.M., Sanjuan, M.A.F.: Int. J. Bifurcat. Chaos 20, 2783 (2008)
19. Grebogi, C., McDonald, S.W., Ott, E., Yorke, J.A.: Phys. Lett. A 99, 415 (1983)
20. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S.: Phys. Rev. E 82, 046217 (2010)
21. Hänggi, P.: J. Stat. Phys. 42, 105 (1986)
22. Demaeyer, J., Gaspard, P.: Phys. Rev. E 80, 031147 (2009)
23. Kramers, H.A.: Physica (Utrecht) 7, 284 (1940)
24. Grassberger, P.: J. Phys. A 22, 3283 (1989)
25. Kraut, S., Feudel, U., Grebogi, C.: Phys. Rev. E 59, 5253 (1999)
26. Kraut, S., Feudel, U.: Phys. Rev. E 66, 015207 (2002)
27. Beale, P.D.: Phys. Rev. A 40, 3998 (1989)
28. Nagao, N., Nishimura, H., Matsui, N.: Neural Process. Lett. 12, 267 (2000); Schiff, S.J., Jerger,
K., Duong, D.H., et al.: Nature 370, 615 (1994)
29. Peters, O., Christensen, K.: Phys. Rev. E 66, 036120 (2002); Bak, P., Christensen, K., Danon,
L., Scanlon, T.: Phys. Rev. Lett. 88, 178501 (2002); Anghel, M.: Chaos Solit. Fract. 19, 399
(2004)
30. Billings, L., Bollt, E.M., Schwartz, I.B.: Phys. Rev. Lett. 88, 234101 (2002); Billings, L.,
Schwartz, I.B.: Chaos 18, 023122 (2008)
31. Kac, M.: Probability and Related Topics in Physical Sciences, Chap. IV. Interscience
Publishers, New York (1959)
32. Zaslavskii, G.M.: Phys. Lett. A 69, 145 (1978); Chirikov, B.: Phys. Rep. 52, 263 (1979)
33. Rodrigues, C.S., de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 80, 026205 (2009)
34. Zmarrou, H., Homburg, A.J.: Ergod. Theor. Dyn. Sys. 27, 1651 (2007); Discrete Cont. Dyn.
Sys. B10, 719 (2008)
35. Altmann, E.G., Tél, T.: Phys. Rev. Lett. 100, 174101 (2008)
36. Altmann, E.G., Tél, T.: Phys. Rev. E 79, 016204 (2009)
37. Altmann, E.G., Endler, A.: Phys. Rev. Lett. 105, 255102 (2010)
38. Pianigiani, G., Yorke, J.A.: Trans. Am. Math. Soc. 252, 351 (1979)
39. Altmann, E.G., Leitao, J.C., Lopes, J.V.: Pre-print arXiv:1203.1791v1 (2012). To appear in the
Chaos special issue: Statistical Mechanics and Billiard-Type Dynamical Systems
40. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S., Klages, R.: Pre-print (2011)
41. Kruscha, A., Kantz, H., Ketzmerick, R.: Phys. Rev. E 85, 066210 (2012)
42. Schelin, A.B., Károlyi, Gy., de Moura, A.P.S., Booth, N.A., Grebogi, C.: Phys. Rev. E 80,
016213 (2009)
43. Jost, J., Kell, M., Rodrigues, C.S.: Pre-print: arXiv:1207.5003
44. Lamb, J.S.W., Rasmussen, M., Rodrigues, C.S.: Pre-print: arXiv:1105.5018
Part III
Bounded Stochastic Fluctuations in Biology
Chapter 11
Bounded Stochastic Perturbations May Induce
Nongenetic Resistance to Antitumor
Chemotherapy
Abstract Recent deterministic models suggest that for solid and nonsolid tumors
the delivery of constant continuous infusion therapy may induce multistability in
the tumor size. In other words, therapy, when not able to produce tumor eradication,
may at least lead to a small equilibrium that coexists with a far larger one. However,
bounded stochastic fluctuations affect the drug concentration profiles, as well as
the actual delivery scheduling, and other factors essential to tumor viability (e.g.,
proangiogenic factors). Through numerical simulations, and under various regimens
of delivery, we show that the tumor volume during therapy can undergo transitions
to the higher equilibrium value induced by a bounded noise perturbing various
biologically well-defined parameters. Finally, we propose to interpret the above
phenomena as a new kind of resistance to chemotherapy.
A. d'Onofrio (✉)
Department of Experimental Oncology, European Institute of Oncology,
Via Ripamonti 435, I-20141 Milan, Italy
e-mail: alberto.donofrio@ieo.eu
A. Gandolfi
Istituto di Analisi dei Sistemi ed Informatica "A. Ruberti" - CNR,
Viale Manzoni 30, I-00185 Roma, Italy
e-mail: alberto.gandolfi@iasi.cnr.it
11.1 Introduction
Clonal resistance (CR) to chemotherapy, i.e. the emergence through fast mutations
of drug-insensitive cells in a tumor under therapy, was up to the recent past, and to
some extent still is, the main paradigm used to explain the high rate of
relapses during chemotherapeutic treatments of tumors [1, 2].
However, in the last decade, a number of investigations [3-5] revealed that a
significant fraction of cases of resistance to therapy is actually linked to phenomena
that may, broadly speaking, be defined as physical resistance (PR) to drugs [6, 7].
This means that resistance cannot be imputed only to a sort of Darwinian evolution
of the cancerous population through the birth of new clones, but also to the
dynamics of the drug molecules in the tumor. A non-exhaustive list of such
physical phenomena is the following: (a) limited ability of the drug to penetrate into
the tumor tissue because of ineffective vascularization [8] and poor or nonlinear
diffusivity [9]; (b) binding of drug molecules to the surface of tumor cells or to the
extracellular matrix [10]; (c) prevalence of lowly proliferating and quiescent tumor
cells [11]; (d) collapse of blood vessels [12].
We recently proposed [13-15] two deterministic population-based models, describing
the chemotherapy of vascularized solid tumors and of nonsolid tumors, that may
exhibit multistability under constant continuous drug infusion, unlike other models
of tumor growth, which in such a case predict a unique equilibrium.
The multistability is the consequence of the interplay between the nonlinear
pharmacodynamics of the drug at the tissue level and the population dynamics of
the tumor cells. In particular, we have shown that multistability can also derive from
the well-known Norton-Simon hypothesis [16].
In [14, 15] we suggested the possible existence of a third path to the emergence
of resistance, different from CR and bearing some relationship to PR, due
to the interaction between the multistability of the tumor and the unavoidable fluc-
tuations of the blood concentration of the delivered drug, through the well-known
mechanisms of equilibrium metastability [17] and of noise-induced transitions [18].
This novel kind of resistance thus comes from the complex interplay among
the pharmacodynamics and pharmacokinetics of the agent and the physiological
condition of the patient. In the case of vascularized solid tumors, a major role is played by
the physical barriers caused by the abnormal nature of tumor blood vessels, and by
the interaction between the tumor and the endothelial cell populations.
However, in contrast to classical non-equilibrium statistical physics, we shall
not assume that the noise affecting the drug concentration is gaussian. In [19-21]
we stressed that possible biological inconsistencies might derive from the use
of gaussian noise, and here we shall therefore consider only bounded noises, whose
theoretical study has recently attracted the attention of a number of physicists [21-26].
Concerning the origin of those fluctuations, we shall consider in this chapter three
separate and different settings. In the first, we shall consider a therapy periodically
delivered by means of boli. Here we may have two different irregularities: the first
is inherent to the intra-subject temporal variability of pharmacokinetic parameters,
among them the clearance rate constant(s) of the drug. The other source of
fluctuations is linked to irregularities in the delivery times. Note that in the case
of boli-based therapy there is the copresence of both stochastic fluctuations and
periodic deterministic fluctuations, due to the periodicity of the administration of the
agent. In the second setting, random changes occur in the nonlinear effectiveness of the
antitumor drug. Finally, the third scenario involves oscillations in the proliferation
rate of the vessels.
Let us consider a tumor whose size (biomass, number of viable cells, etc.) at time t
is denoted by V, and which is growing according to a saturable growth law [27]:

V′ = V f(V/K),
where K > 0 and f (u) is a decreasing function of u such that f (1) = 0. The constant
K is usually called the carrying capacity, and it depends on the available nutrients and/or
space for which the tumor cells compete. Another important parameter is the value
α = f(0), which we shall call the baseline growth rate (BGR). α can be read as a
measure of the intrinsic growth rate of the tumor, in the absence of any competition. Of
course, since f(u) is decreasing, the BGR is also the maximal growth rate. Although
very simple, the above class of models revealed itself to be very effective in capturing the
main qualitative [27-31] and quantitative [27, 32-34] aspects of tumor growth.
Two well-known growth laws are the Gompertz law, where f(V/K) = −α log(V/K),
and the generalized logistic law, f(V/K) = α(1 − (V/K)^ν) with ν > 0. Note,
however, that in the Gompertz case the BGR is infinite, which is not realistic, as
pointed out in [27, 31] (and references therein).
Let the tumor be under the delivery of a cytotoxic therapy with a drug whose
blood concentration, denoted by c(t), may be periodic or constant. What is the effect
of c(t) on the tumor growth? The log-kill hypothesis [35] prescribes that the killing
rate of tumor cells is proportional to the product c(t)V(t):

V′ = V f(V/K) − μ c(t)V(t), (11.1)

where μ > 0. In the case of a bounded intrinsic growth rate, i.e. f(0) < ∞, the condition c(t) >
f(0)/μ implies that V(t) → 0, independently of V(0) > 0.
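A quick numerical check of the eradication condition, with illustrative values of our own choosing (the logistic special case f(u) = α(1 − u), α = 0.5, K = 10⁴, and log-kill coefficient μ = 1, so that eradication requires c > f(0)/μ = 0.5):

```python
def simulate(c, v0=100.0, alpha=0.5, K=1e4, mu=1.0, t_end=200.0, dt=0.01):
    """Euler integration of V' = V*alpha*(1 - V/K) - mu*c*V, i.e. Eq. (11.1)
    with logistic growth (nu = 1); parameter values are illustrative."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (v * alpha * (1.0 - v / K) - mu * c * v)
    return v

print(simulate(c=0.0))   # no therapy: V approaches the carrying capacity K
print(simulate(c=0.6))   # c > alpha/mu = 0.5: eradication, V -> 0
```

Any constant concentration below α/μ, on the other hand, merely lowers the asymptotic tumor size without eradicating it.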
However, since the 1970s Norton and Simon [16] have stressed, as a potential pitfall
of the log-kill hypothesis, the fact that the relative killing rate is simply taken pro-
portional to c(t). According to the log-kill hypothesis, the same drug concentration
is indeed able to kill the same relative number of cells per unit time, independently
of the tumor burden. Moreover, the absolute velocity of the regression caused by c(t)
would be greater in larger tumors. This is often unrealistic. On the contrary,
in clinics it is often observed that the effort needed to make a large tumor regress is
greater than that needed for a small one.
Assume then, following the Norton-Simon hypothesis, that the relative killing rate is
a decreasing function g(V; p) of the tumor size, so that the killing term reads g(V; p)c(t)V.
It is trivial to verify that if c(t) > α/g(0; p), then the tumor-free equilibrium Ve =
0 is locally stable, whereas in the case of constant continuous infusion, c(t) = C, if
g(V; p)C > f(V/K) for all V, then the tumor-free equilibrium Ve = 0 is globally stable. In
the general case, since g(K; p) > f(1) = 0, if α > g(0; p)C there will be an odd
number N ≥ 1 of equilibria: V1(C, K, p), …, VN(C, K, p), with Vi < Vj if i < j. It is
an easy matter to verify that the odd-numbered equilibria are locally stable, whereas
the even-numbered ones are unstable. By varying C or K or p, one may obtain one or
more hysteresis bifurcations.
Solid tumors in their first phase of growth are small aggregates of proliferating
cells that receive oxygen and nutrients only through diffusion from external blood
vessels. In order to grow beyond 1-2 mm³, the formation of new blood vessels inside
the tumor mass is required. Poorly nourished tumor cells start producing a series
of molecular factors that stimulate and also control the formation of an internal
vascular network [36-38]. This process, called neo-angiogenesis, is sustained by
a variety of mechanisms [36-38], such as the cooption of existing vessels and the
formation of new vessels from pre-existing ones. As far as the tumor-driven
control of the vessel growth is concerned, endogenous antiangiogenic factors have
been both evidenced experimentally [39-41] and studied theoretically [42-44].
To describe the interplay between the tumor and its vasculature, we further
generalize a family of models previously proposed in [13] that includes as par-
ticular cases the models in [42, 43, 45-48] (for different modeling approaches, see
[49-58]). We assume that (a) the carrying capacity mirrors (through a proportionality
coefficient, or in any case through an increasing function) the size of the tumor
vasculature, and as such it is a state variable K(t); (b) the specific growth and
apoptosis rates of the tumor and the specific proliferation rate of the vessels depend
on the ratio φ = K/V between the carrying capacity and the tumor size. Following
Hahnfeldt et al. [42, 43], the growth of the neo-vasculature is antagonized by
endogenous factors that depend on the tumor volume. Since the ratio φ may be
interpreted as proportional to the tumor vessel density, assumption (b) agrees with
the model proposed in [59]. As a consequence, in the absence of therapy we can write
V′ = P(K/V)V − μ(K/V)V, (11.3)

K′ = K[γ(K/V) − ψ(V) − τ], (11.4)
where P(u) is the (specific) proliferation rate of the tumor, with P(0) = 0, P′(u) > 0,
P(+∞) < ∞; μ(u) is the apoptosis rate, with μ′(u) < 0, μ(+∞) = 0; γ(u) is the
proliferation rate of the vessels, with γ(0) ≤ +∞, γ′(u) < 0, γ(+∞) = 0; ψ(V),
with ψ′(V) > 0, models the vessel loss induced by endogenous anti-angiogenic
factors secreted by the tumor cells; and τ represents the natural loss of vessels.
We prescribe P(1) = μ(1) so that at the equilibrium Ke/Ve = 1.
As an example of possible expressions of the net proliferation rate F(u) =
P(u) − μ(u), we may consider the generalized logistic F(u) = α(1 − u^{−ν}), ν > 0.
The function γ(u) may include power laws γ(u) = b u^{−w}, w > 0, functions such as
γ(u) = γM/(1 + k u^n), n ≥ 1, i.e. Hill functions in the variable u^{−1}, and combinations
of the above two expressions: γ(u) = γ1 u^{−w} + γ2/(1 + k u^n). The power law
with w = 1 yields Kγ(K/V) = bV, as proposed in [42, 43]. The combination
function with w = 1 is such that Kγ(K/V) distinguishes between the endothelial
cell proliferation and the input of new endothelial cells from outside the tumor.
Concerning the function ψ, we recall that ψ(V) = dV^{2/3} has been assumed in
[42, 43].
As it is easy to show, the model predicts a unique equilibrium point, which is
globally attractive.
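The global convergence can be checked by direct integration. The sketch below assumes, for illustration, the Hahnfeldt-type kinetic functions used in the simulations of this chapter, F(φ) = (ln 2/1.5)(1 − φ^{−1/2}), γ(φ) = 4.64/φ, ψ(V) = 0.01 V^{2/3}, with τ = 0; the unique equilibrium then has K/V = 1 and γ(1) = ψ(Ve), i.e. Ve = 464^{3/2} ≈ 9995. The Euler scheme and step sizes are our own choices.

```python
import math

def simulate(v0, k0, t_end=1000.0, dt=0.01):
    """Euler integration of the untreated model (11.3)-(11.4):
    V' = V F(K/V),  K' = K (gamma(K/V) - psi(V)),  with tau = 0."""
    a = math.log(2.0) / 1.5
    F = lambda phi: a * (1.0 - phi ** -0.5)     # net proliferation rate
    gamma = lambda phi: 4.64 / phi              # vessel proliferation rate
    psi = lambda v: 0.01 * v ** (2.0 / 3.0)     # anti-angiogenic vessel loss
    v, k = v0, k0
    for _ in range(int(t_end / dt)):
        phi = k / v
        v += dt * v * F(phi)
        k += dt * k * (gamma(phi) - psi(v))
    return v, k

v, k = simulate(3900.0, 8000.0)
print(round(v), round(k))  # both close to 464**1.5 ≈ 9995, with K/V ≈ 1
```

Starting from other positive initial conditions gives the same endpoint, consistent with the global attractivity stated above.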
The antiproliferative or cytotoxic efficacy of a blood-borne agent on the
tumor cells depends on its actual concentration at the cell site, and thus it
is influenced by the geometry of the vascular network and by the extent of
blood flow. The efficacy of a drug will be higher if vessels are close to each other
and sufficiently regular to permit fast blood flow; it will be lower if the vessels
are sparse, or if they are irregular and tortuous so as to hamper the flow.
To represent these phenomena simply, we assumed in [13] that the drug action
included in Eq. (11.3) depends on the vessel density, i.e. on the ratio φ = K/V.
If c(t) is the concentration of the agent in the blood, we assumed that its effectiveness is
modulated by an increasing, or initially increasing and then decreasing, function
η(φ).
In the case of delivery of cytotoxic drugs, Eq. (11.3) will then be modified by adding
the log-kill term −η(φ)c(t)V; but Eq. (11.4) also has to be modified, since
cytotoxic agents may often also disrupt the vessels [60]. This leads to the following
model [13]:

V′ = V[F(K/V) − η(K/V)c(t)], (11.5)

K′ = K[γ(K/V) − ψ(V) − τ − δc(t)], (11.6)
provided, of course, that M(C; δ) > 0. Thus, also here there
is a threshold drug level C*, defined by M(C*; δ) = 0, such that C > C* implies
tumor eradication. We note that if δ > 0, eradication is easier to reach,
whereas if δ = 0 eradication is difficult or impossible, since the relevant effectiveness appears to be
very small. The vessel-disrupting action of a chemotherapeutic agent thus appears very
important for the cure.
Also for such a therapy model it is an easy matter to show that under constant
continuous chemotherapy the system exhibits multistability [13, 15], as shown in
Fig. 11.1.
For the tumor dynamics, we assume in Fig. 11.1 and in all the simulations the
following kinetic functions: F(φ) = (ln(2)/1.5)(1 − φ^{−0.5}), γ(φ) = 4.64/φ, τ = δ = 0,
η(φ) = 1/(1 + ((φ − 2)/0.35)²), and ψ(V) = 0.01 V^{2/3}. With these values, there
is a range of constant drug concentrations for which three nontrivial equilibria coexist.
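The bistability under constant infusion is easy to reproduce by direct integration of (11.5)-(11.6) with the kinetic functions just listed and c(t) = 0.15, the drug level whose equilibria E1 ≈ (3323, 6924) and E3 ≈ (8794, 9577) are quoted later in the chapter. The integration scheme, horizon, and initial conditions below are our own illustrative choices.

```python
import math

A = math.log(2.0) / 1.5

def F(phi):      # net proliferation rate, F(1) = 0
    return A * (1.0 - phi ** -0.5)

def eta(phi):    # unimodal drug-effectiveness function, peaked at phi = 2
    return 1.0 / (1.0 + ((phi - 2.0) / 0.35) ** 2)

def gamma(phi):  # vessel proliferation rate
    return 4.64 / phi

def psi(v):      # loss induced by endogenous anti-angiogenic factors
    return 0.01 * v ** (2.0 / 3.0)

def simulate(v0, k0, c=0.15, t_end=1000.0, dt=0.01):
    """Euler integration of (11.5)-(11.6) with tau = delta = 0."""
    v, k = v0, k0
    for _ in range(int(t_end / dt)):
        phi = k / v
        v += dt * v * (F(phi) - eta(phi) * c)
        k += dt * k * (gamma(phi) - psi(v))
    return v, k

low, _ = simulate(3900.0, 8000.0)    # basin of the smaller equilibrium E1
high, _ = simulate(9000.0, 9600.0)   # basin of the larger equilibrium E3
print(round(low), round(high))       # two coexisting stable tumor sizes
```

The same constant therapy thus sustains either a small or a large tumor, depending only on where the state starts — the multistability that the bounded noises below will allow the system to jump between.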
It might seem that the two different models we proposed in the previous sections
are somewhat unrelated. Our aim here is to show that for solid vascularized tumors,
and under a well-defined approximation, the Norton-Simon hypothesis, and a
generalization of it, can be derived from the assumptions we stated for the model
(11.5)-(11.6).
Indeed, most often in humans the dynamics of the vessels is faster than the tumor
dynamics. As a consequence, we may consider K(t) at quasi-equilibrium. Setting
K′ ≈ 0 in Eq. (11.6) yields:

γ(K/V) − ψ(V) − τ − δc(t) ≈ 0,

and in turn:

φ*(V, τ, c(t)) = γ⁻¹(ψ(V) + τ + δc(t)) if γ(0) − ψ(V(t)) − τ − δc(t) ≥ 0, and
φ*(V, τ, c(t)) = 0 if V > V*(t),

where V*(t) is the size at which ψ(V*(t)) + τ + δc(t) = γ(0).
As a consequence, for targeted drugs such that δ = 0, and supposing that φ*(0) is
smaller than the value maximizing η, one gets:

V′ = V f(V) − η̃(V)c(t)V, (11.7)

where both the net growth rate f(V) = F(φ*(V)) and the effectiveness of the drug
η̃(V) = η(φ*(V)) are decreasing functions of V.
Quite interestingly, if δ > 0 one finds that the cytotoxic chemotherapeutic drug
has, thanks to its side effect of killing vessels, not only its main direct effect but
also an indirect antiproliferative and proapoptotic action.
Note that in the case where φ*(0) is larger than the value maximizing η, the
approximation employed here suggests that for small tumors the cytotoxic effect
is initially an increasing function of V.
However, there is an important difference between the reduced one-dimensional
model (11.7), which we recall is valid for solid vascularized tumors, and the Norton-
Simon-like (NSL) model of Sect. 11.2. Indeed, in the model of Eq. (11.7) the effects
of fluctuations in the parameters affecting the carrying capacity are present both
in the net tumor growth rate and in the pharmacodynamic term describing drug
effectiveness. On the contrary, in the NSL model the carrying capacity influences
only the net growth rate. In other words, growth and drug pharmacodynamics
are independent.
The hysteresis bifurcations, such as that in Fig. 11.1, are characterized by the existence of
two values of the bifurcation parameter such that infinitesimal changes around them
produce a sudden change in the behavior of the solutions.
This means that near those two points the behaviour of the system is "extremely
sensitive to any kind of perturbations … As a result the treatment … requires that
the fluctuations be explicitly incorporated into the model" [18, 61].
These and other observations led Horsthemke and Lefever to define the theory of
noise-induced transitions (NITs) [18], which studies the stochastic bifurcations
induced by zero-mean noises in nonequilibrium systems. Those transitions depend
on characteristics of the noise, such as its variance, and have the effect of changing
the nature of the stationary probability density functions of the state variables, for
example from unimodal to bimodal, or vice versa.
The NIT theory is of the utmost interest in biomedicine, since in vivo "the envi-
ronmental situations are … extremely complex and thus likely to present important
fluctuations" [62]. For applications in the field of oncology, see [62, 63].
The properties of our models strongly suggest that such noise-induced transitions
may also occur in the therapy of tumors, because of the unavoidable presence of
stochastic fluctuations in some parameters. The most remarkable point is that such
transitions would correspond to sudden tumor relapses during therapy that are due
neither to genetic causes nor to physical resistance.
These transitions may be caused by fluctuations of any of the parameters appearing in the
equation modeling the dynamics of V(t). In particular, fluctuations strongly affect
chemotherapy: for example, the constant infusion therapy c(t) = C is an
idealization. Moreover, nonconstant therapies are also affected by various kinds
of noises, in particular by perturbations in the drug pharmacokinetics. Finally,
other parameters may randomly fluctuate as well, e.g. the parameters of the drug
effectiveness function.
Thus, in order to give a more realistic description, given a parameter θ we set

θ(t) = θm(1 + ξ(t)),

where ξ(t) is a bounded zero-mean stochastic process. The first noise we shall consider
is the sine-Wiener noise, defined by ξ(t) = B sin(√(2/τ) W(t)), where W(t) is the
Wiener process. The stationary density for this process is [65, 66]:

PSW(ξ) = 1/(π √(B² − ξ²)).

The second is the Tsallis-Borland noise, obtained as the solution of a nonlinear
Langevin equation driven by a gaussian white noise with zero mean and unitary
variance [22-25]. The stationary density of this noise is a Tsallis q-statistics:

PTS(ξ) = A(q, B)(1 − ξ²/B²)^{1/(1−q)} for |ξ| < B (and zero otherwise),

which is thus bounded for q < 1. The third noise we shall consider is the Cai-Lin
noise [65, 66], which is defined by the following Langevin equation:

ξ′(t) = −ν ξ + √(ν(B² − ξ²)/(δ + 1)) ζ(t), (11.10)

with δ > −1, where ζ(t) is a gaussian white noise. As a consequence, if ξ(0) ∈ [−B, +B],
then the noise is non-gaussian with zero mean, and such that −B < ξ(t) < B. Moreover,
the process ξ(t) has exactly the same autocorrelation function as the Ornstein-Uhlenbeck
process, and thus its autocorrelation time is τ = 1/ν. The stationary density of the Cai-Lin
noise is:

Pst(ξ) = N(1 − ξ²/B²)^δ for |ξ| < B,

where N is a normalization constant. Note that the density is unimodal for δ > 0
and bimodal for δ < 0.
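Sample paths of the sine-Wiener and Cai-Lin noises are easy to generate. The sketch below uses an Euler-Maruyama discretization of Eq. (11.10), with clipping to keep the discretized path inside [−B, B] (a numerical safeguard of our own: the exact process never leaves the interval, but a finite-step scheme can overshoot); all step sizes and parameter values are illustrative.

```python
import math, random

def sine_wiener(B, tau, t_end, dt, seed=0):
    """xi(t) = B sin(sqrt(2/tau) W(t)), W a Wiener process: bounded by B."""
    rng = random.Random(seed)
    w, path = 0.0, []
    for _ in range(int(t_end / dt)):
        w += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(B * math.sin(math.sqrt(2.0 / tau) * w))
    return path

def cai_lin(B, tau, delta, t_end, dt, seed=0):
    """Euler-Maruyama for Eq. (11.10):
    xi' = -nu*xi + sqrt(nu*(B^2 - xi^2)/(delta+1)) * zeta(t), nu = 1/tau."""
    rng = random.Random(seed)
    nu = 1.0 / tau
    xi, path = 0.0, []
    for _ in range(int(t_end / dt)):
        g = math.sqrt(max(0.0, nu * (B * B - xi * xi) / (delta + 1.0)))
        xi += -nu * xi * dt + g * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xi = max(-B, min(B, xi))   # clip: the discretization can overshoot
        path.append(xi)
    return path

sw = sine_wiener(B=1.0, tau=1.0, t_end=1000.0, dt=0.01)
cl = cai_lin(B=1.0, tau=1.0, delta=1.0, t_end=1000.0, dt=0.01)
print(max(abs(x) for x in sw) <= 1.0, max(abs(x) for x in cl) <= 1.0)
# True True: both paths stay inside [-B, B], unlike any gaussian noise
```

Histogramming long paths of the Cai-Lin noise reproduces the unimodal (δ > 0) versus bimodal (δ < 0) stationary densities noted above.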
With a slight abuse of notation,1 we shall call such qualitative changes noise-induced
transitions at time tref.
In all simulations (if not explicitly noted) we set (V(0), K(0)) = (3900, 8000),
which is a point belonging to the basin of attraction of the smaller equilibrium state
of system (11.5)-(11.6) in the case of a continuous constant therapy c(t) = C = 0.36,
i.e. Ve ≈ 3315. As reference time, we set tref = 365 days.
As far as the drug administration is concerned, although continuous infusion
therapies are increasingly important from the biomedical point of view, the majority
of therapies are still scheduled by means of periodic delivery of boli of an
antitumor agent. Thus, if the agent has monoexponential pharmacokinetics, then
1 Indeed, the noise-induced transitions theory usually refers to transitions to/from multimodality in
the stationary probability densities.
c′ = −a c, (11.11)

c(nT⁺) = c(nT⁻) + S, n = 0, 1, 2, …, (11.12)
where S is the ratio between the delivered dose and the distribution volume of the
agent, T is the constant interval between two consecutive boli, and a is the clearance
rate constant [67].
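The scheme (11.11)-(11.12) can be simulated directly. The sketch below uses the parameter values employed later in the chapter (a = 1/7 day⁻¹, T = 6 days, Cm = 0.18, so S = aT·Cm ≈ 0.154); at steady state the period-averaged concentration equals S/(aT) = Cm, and the peak is S/(1 − e^{−aT}). The discretization is our own illustrative choice.

```python
import math

def bolus_pk(a, T, S, n_periods=60, dt=0.001):
    """Simulate c' = -a c between boli, c(nT+) = c(nT-) + S (Eqs. 11.11-11.12).
    Returns the samples of c(t) over the last inter-bolus period."""
    decay = math.exp(-a * dt)       # exact decay factor over one small step
    c, last = 0.0, []
    for period in range(n_periods):
        c += S                      # bolus delivery at t = nT
        for _ in range(int(T / dt)):
            c *= decay
            if period == n_periods - 1:
                last.append(c)
    return last

a, T, Cm = 1.0 / 7.0, 6.0, 0.18
S = a * T * Cm                      # ≈ 0.154
last = bolus_pk(a, T, S)
print(round(sum(last) / len(last), 3))        # ≈ 0.18 = S/(a T): mean level
print(round(max(last), 3), round(min(last), 3))  # sawtooth ≈ 0.268 / 0.114
```

The concentration profile is thus a periodic sawtooth around the mean Cm; the stochastic factors examined next perturb this deterministic regime.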
We start our analysis by examining the major stochastic factors that could perturb
system (11.11)-(11.12), apart from drug dosing, which is nowadays very accurate.
The first relevant phenomenon we shall consider is the presence of stochastic
fluctuations in the clearance rate of the drug [68], which are due to changes
affecting the physiologic mechanisms of drug elimination by the body. This kind
of noise arises from manifold factors of disparate endogenous and exogenous
nature, including, for example, meals [69].
As a consequence, we consider a stochastically time-varying clearance rate

a(t) = am + ξa(t),

where ξa(t) is a bounded noise such that am + ξa(t) > 0. Moreover, we suppose that
am, T, S are such that, in the absence of noise, the tumor size asymptotically oscillates
around a low value, i.e. in the deterministic setting there is a steady control of the
tumor.
Note that, given the structure of the pharmacokinetic equations, the noise here is
filtered, which might superficially lead one to think that noise-induced phenomena
are not possible.
We started by simulating a cytotoxic therapy characterized by am = 1/7 day⁻¹,
T = 6 days, and Cm = 0.18, so that the delivered bolus is S = am T Cm ≈ 0.154. The
initial conditions of the tumor were V(0) = 3900, K(0) = 8000. In the case of Tsallis
noise with q = 0 and τcorr = 0.5 days, we observed the onset of NITs at B ≈ 0.1 am.
The bimodal PDF of the r.v. V(365) for B = 0.2 am is shown in the central
panel of Fig. 11.2, whereas the left panel shows the unimodal PDF for
B = 0.08 am. In the case of sine-Wiener noise, the density is bimodal at B = 0.11 am
(not shown).
In a second simulation, we changed the scheduling, passing to a more time-dense
(metronomic [70, 71]) scheduling without decreasing the total quantity of delivered
drug. Namely, we halved both the period, T = 3 days, and the dose of the bolus, S ≈
0.077. The effect obtained is the almost total suppression of the bimodality in the
PDF at B = 0.2 am, as illustrated in the right panel of Fig. 11.2. Suppression of
the bimodality was also observed in the case of sine-Wiener noise, where at B = 0.2 am
the PDF turned out to be unimodal.
This result suggests that metronomic schedulings might not only have the
beneficial effects of reducing side effects and of being more effective
Fig. 11.2 Stochastically varying clearance rate a(t) = am + ξa(t) of a cytotoxic agent. Parameters:
T = 6, am = 1/7, S = 0.154. ξa(t) is a Tsallis noise with q = 0 and τcorr = 0.5 days. Left panel:
plot of the PDF of the tumor volume at 1 year for B = 0.08 am; central panel: plot of the bimodal
PDF for B = 0.2 am; right panel: suppression of the bimodality for B = 0.2 am by metronomic
scheduling with T = 3 and S = 0.077. Tumor volumes in mm³. From the paper [15]. Copyright:
American Physical Society 2010
in reducing the tumor mass, but might even reduce the possibility of the relapse
suggested here, due to the nonlinear interplay between tumor and vessels.
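At the purely pharmacokinetic level, the metronomic halving of T and S preserves the steady-state mean exposure S/(aT) while halving the peak-trough ripple (which at steady state equals exactly S). A minimal sketch, with the same monoexponential kinetics and illustrative step sizes:

```python
import math

def steady_stats(a, T, S, n_periods=60, dt=0.001):
    """Mean, peak and trough of c(t) at steady state for periodic boli
    (Eqs. 11.11-11.12), via step-by-step exponential decay."""
    decay = math.exp(-a * dt)
    c, last = 0.0, []
    for period in range(n_periods):
        c += S
        for _ in range(int(T / dt)):
            c *= decay
            if period == n_periods - 1:
                last.append(c)
    return sum(last) / len(last), max(last), min(last)

a = 1.0 / 7.0
m6, p6, t6 = steady_stats(a, T=6.0, S=0.154)   # standard scheduling
m3, p3, t3 = steady_stats(a, T=3.0, S=0.077)   # metronomic: T and S halved
print(round(m6, 3), round(m3, 3))            # same mean exposure ≈ 0.18
print(round(p6 - t6, 3), round(p3 - t3, 3))  # ripple ≈ S: 0.154 vs 0.077
```

The smaller ripple means smaller excursions of the concentration around its mean, which is consistent with the suppression of the noise-induced bimodality observed in the simulations above.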
We now pass to consider another major phenomenon, one more directly
related to human behavior: the irregularities of the drug delivery. Indeed, it is
well known that the delivery times may be subject to unpredictable delays and
anticipations [72]. Here we shall assume that the clearance rate is constant, whereas
the delivery times are slightly irregular, which implies that Eq. (11.12) becomes
c(tn⁺) = c(tn⁻) + S, where the delivery times tn randomly fluctuate around the
nominal times nT.
To model random perturbations of the drug effectiveness, we set

η(φ, t) = 1/[1 + ξu(t) + ((φ − 2)/0.35)²(1 + ξw(t))],

where ξu(t) and ξw(t) are bounded noises.
In the simulations of this section we assumed a drug profile such that c(t) =
0.15, whose associated equilibrium points are E1 = (3323, 6924), E2 = (4053, 7398),
and E3 = (8794, 9577). Also in this case we assumed as initial condition (V0, K0) =
(3900, 8000), which belongs to the basin of attraction of E1.
Figure 11.3 illustrates the statistical response (tumor size) of the system for
the case where ξw(t) = 0 and ξu(t) is a Cai-Lin noise with τcorr = 1 day and δ = 1. In the left
panel (where B = 0.125) the smaller deterministic equilibrium is simply perturbed,
and the density is unimodal; in the right panel (where B = 0.25), one may observe a
second mode roughly centered at the second, larger deterministic equilibrium
size. The transition threshold is at B ≈ 0.155.
No transition is instead observed in the case where ξu(t) = 0 and ξw(t) is a Cai-
Lin noise with δ = 1 and τcorr = 1 or 5 days, nor when ξw(t) is a sine-Wiener
noise (with τcorr = 1 or 5 days).
Up to now we have dealt with the impact on the outcome of antitumor chemotherapies
of perturbations concerning the pharmacokinetics, the drug delivery, or the drug
effectiveness in killing the neoplastic cells.
Here, instead, we are interested in assessing the consequences of irregular
oscillations (around an average value) of the proliferation rate of the vessels, due to
the irregular production of the related proangiogenic factors.
To this aim, we performed some simulations where we assumed that the tumor is
undergoing a constant continuous therapy similar to the one considered in Sect. 11.7,
and that, as a consequence of the aforementioned random oscillations, the growth
rate of the vessels is given by

γ(φ, t) = (1 + ξ(t))γm(φ),

where the noise ξ(t) is bounded and such that 1 + ξ(t) > 0. Namely, we assumed that
Cm = 0.15 and that γm(φ) = 4.64/φ.
Moreover, since the biochemical oscillations are far faster than the
tumor dynamics, and the vessel growth is also faster than the tumor cell proliferation
(note that (4.64)⁻¹ ≈ 0.21 days), we assumed that the autocorrelation time of the
noise ξ(t) is small, taking τcorr = 0.1 days.
Both in the case of Cai-Lin and of sine-Wiener noise, we obtained that noise-induced
transitions occur even for small B (see Fig. 11.4). The transition thresholds are
B ≈ 0.15 for the Cai-Lin noise and B ≈ 0.1 for the sine-Wiener noise. This suggests
that not only the average production rate of the proangiogenic factors matters,
but also its random variability.
The assumption of boundedness for the noise, in contrast to the use of Gaussian
noises, allows a more faithful modeling of real biological phenomena and avoids
artifactual results deriving from the temporary negativity of parameters.
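To make the boundedness argument concrete, a sine-Wiener noise ξ(t) = B sin(√(2/τ) W(t)) can be sampled in a few lines of Python; this is a minimal sketch, and the parameter values below are illustrative, not those of the simulations above:

```python
import math
import random

def sine_wiener(B, tau, dt, n_steps, seed=0):
    """Sample the sine-Wiener bounded noise xi(t) = B*sin(sqrt(2/tau)*W(t)),
    where W is a standard Wiener process and tau the autocorrelation time."""
    rng = random.Random(seed)
    w, path = 0.0, []
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))  # Wiener increment over dt
        path.append(B * math.sin(math.sqrt(2.0 / tau) * w))
    return path

# Unlike a Gaussian perturbation, xi can never push a positive parameter
# below zero as long as B is smaller than the unperturbed value.
xi = sine_wiener(B=0.15, tau=0.1, dt=1e-3, n_steps=20000)
assert all(abs(v) <= 0.15 for v in xi)
assert all(1.0 + v > 0.0 for v in xi)
```

By construction |ξ(t)| ≤ B at every instant, which is exactly the property that Gaussian noises, white or colored, cannot guarantee.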
The interplay of the stochastic fluctuations with the intrinsic multistability
of the system may generate noise-induced transitions at the end of the therapy.
In other words, stochastic perturbations may induce a form of resistance to therapies
potentially able to lead to a stable disease, in a variety of biologically meaningful
scenarios, which can be divided into some classes: (a) drug delivery-related fluctuations
(continuous infusion therapy and bolus-based therapy irregularly delivered);
(b) stochasticity of pharmacokinetics; (c) stochasticity of nonlinear pharmacody-
namics; (d) fluctuations in the production of pro-angiogenic factors.
In all the above cases, multistability in our models originates from the drug
effectiveness which, based on some biological background, is nonlinear and unimodal.
Concerning the control of the effects of fluctuations in the drug clearance rate,
in order to reduce the possibility of relapse (i.e., of noise-induced transitions) our
simulations suggest that a possible beneficial option is the so-called metronomic
scheduling of the therapeutic agent. Moreover, our simulations of the irregular
intake of the therapy show that rigorous adherence to the prescribed scheduling
can avoid therapeutic failures. The control of other fluctuation sources, such as
the distribution volume of the drug, appears more difficult and would probably
require a feedback adaptation of the delivered dose.
Summarizing, we may say that the possible multistability of tumors under
constant continuous infusion chemotherapy, suggested by our models, calls for more
efforts in monitoring the drug delivery, also in view of therapy optimization.
Acknowledgments The work of A. d'Onofrio was conducted within the framework of the EU
Integrated Projects Advancing Clinico-Genomic Trials on Cancer (ACGT) and P-Medicine. This
work was also partially supported by MIUR-Italy, PRIN 2008RSZPYY.
References
1. Tuerk, D., Szakacs, G.: Curr. Op. Drug Disc. Devel. 12, 246 (2009)
2. Kimmel, M., Swierniak, A.: Lect. Note Math. 1872, 185 (2006)
3. Tunggal, J.K., Cowan, D.S.M., Shaikh, H., Tannock, I.F.: Clin. Cancer Res. 5, 1583 (1999)
4. Cowan, D.S.M., Tannock, I.F.: Int. J. Cancer 91, 120 (2001)
5. Jain, R.K.: Ann. Rev. Biom. Eng. 1, 241 (2001)
6. Jain, R.K.: J. Contr. Release 74, 7 (2001)
7. Wijeratne, N.S., Hoo, K.A.: Cell Prolif. 40, 283 (2007)
8. Carmeliet, P., Jain, R.K.: Nature 407, 249 (2000)
9. Tzafriri, A.R., Levin, A.D., Edelman, E.R.: Cell Prolif. 42, 348 (2009)
10. Netti, P.A., Berk, D.A., Swartz, M.A., Grodzinsky, A.J., Jain, R.K.: Cancer Res. 60,
2497 (2000)
11. Cosse, J.P., Ronvaux, M., Ninane, N., Raes, M.J., Michiels, C.: Neoplasia 11, 976 (2009)
12. Araujo, R.P., McElwain, D.L.S.: J. Theor. Biol. 228, 335 (2004)
13. d'Onofrio, A., Gandolfi, A.: J. Theor. Biol. 264, 253 (2010)
14. d'Onofrio, A., Gandolfi, A., Gattoni, S.: Phys. A 391, 6484–6496 (2012)
15. d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82, Art. n. 061901 (2010)
16. Norton, L., Simon, R.: Cancer Treat. Rep. 61, 1303 (1977)
17. Kramers, H.A.: Physica 7, 284 (1940)
18. Horsthemke, W., Lefever, R.: Noise-Induced Transitions in Physics, Chemistry and Biology.
Springer, Heidelberg (2007)
19. d'Onofrio, A.: Noisy oncology. In: Venturino, E., Hoskins, R.H. (eds.) Aspects of Mathematical
Modelling. Birkhäuser, Boston (2006)
20. d'Onofrio, A.: Appl. Math. Lett. 21, 662 (2008)
21. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
22. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
23. Fuentes, M.A., Wio, H.S., Toral, R.: Phys. A 303, 91 (2002)
24. Revelli, J.A., Sanchez, A.D., Wio, H.S.: Phys. D 168–169, 165 (2002)
25. Wio, H.S., Toral, R.: Phys. D 193, 161–168 (2004)
26. Bobryk, R.B., Chrzeszczyk, A.: Phys. A 358, 263 (2005)
27. Wheldon, T.: Mathematical Models in Cancer Research. Hilger Publishing, Boston (1989)
28. Castorina, P., Zappala, D.: Phys. A Stat. Mech. Appl. 365, 14 (2004)
29. Molski, M., Konarski, J.: Phys. Rev. E 68, Art. No. 021916 (2003)
30. Waliszewski, P., Konarski, J.: Chaos Solit. Fract. 16, 665–674 (2003)
31. d'Onofrio, A.: Phys. D 208, 220–235 (2005)
32. Kane Laird, A.: Br. J. Cancer 18, 490–502 (1964)
33. Marusic, M., Bajzer, Z., Freyer, J.P., Vuk-Pavlovic, S.: Cell Prolif. 27, 73–94 (1994)
34. Afenya, E.K., Calderon, C.P.: Bull. Math. Biol. 62, 527–542 (2000)
35. Skipper, H.E.: Bull. Math. Biol. 48, 253 (1986)
36. Folkman, J.: Adv. Cancer Res. 43, 175 (1985)
37. Carmeliet, P., Jain, R.K.: Nature 407, 249 (2000)
38. Yancopoulos, G.D., Davis, S., Gale, N.W., Rudge, J.S., Wiegand, S.J., Holash, J.: Nature 407,
242 (2000)
39. O'Reilly, M.S., et al.: Cell 79, 315 (1994)
40. O'Reilly, M.S., et al.: Cell 88, 277 (1997)
41. Folkman, J.: Ann. Rev. Med. 57, 1 (2006)
42. Hahnfeldt, P., Panigrahy, D., Folkman, J., Hlatky, L.: Cancer Res. 59, 4770 (1999)
43. Sachs, R.K., Hlatky, L.R., Hahnfeldt, P.: Math. Comput. Mod. 33, 1297 (2001)
44. Ramanujan, S., et al.: Cancer Res. 60, 1442 (2000)
45. d'Onofrio, A., Gandolfi, A.: Math. Biosci. 191, 159 (2004)
46. d'Onofrio, A., Gandolfi, A.: Appl. Math. Comput. 181, 1155 (2006)
47. d'Onofrio, A., Gandolfi, A., Rocca, A.: Cell Prolif. 43, 317 (2009)
48. d'Onofrio, A., Gandolfi, A.: Math. Med. Bio. 26, 63 (2008)
49. Capogrosso Sansone, B., Scalerandi, M., Condat, C.A.: Phys. Rev. Lett. 87, 128102 (2001)
50. Scalerandi, M., Capogrosso Sansone, B.: Phys. Rev. Lett. 89, 218101 (2002)
51. Arakelyan, L., Vainstein, V., Agur, Z.: Angiogenesis 5, 203 (2003)
52. Stoll, B.R., et al.: Blood 102 2555 (2003); Tee, D., DiStefano III, J.: J. Cancer Res. Clin. Oncol.
130, 15 (2004)
53. Chaplain, M.A.J.: The mathematical modelling of the stages of tumour development. In:
Adam, J.A., Bellomo, N. (eds.) A Survey of Models for Tumor-Immune System Dynamics.
Birkhäuser, Boston (1997)
54. Anderson, A.R.A., Chaplain, M.A.J.: Bull. Math. Biol. 60, 857 (1998)
55. De Angelis, E., Preziosi, L.: Math. Mod. Meth. Appl. Sci. 10, 379 (2000)
56. Jackson, T.L.: J. Math. Biol. 44, 201 (2002)
57. Forys, U., Kheifetz, Y., Kogan, Y.: Math. Biosci. Eng. 2, 511 (2005)
58. Kevrekidis, P.G., Whitaker, N., Good, D.J., Herring, G.J.: Phys. Rev. E 73, 061926 (2006)
59. Agur, Z., Arakelyan, L., Daugulis, P., Ginosar, Y.: Discr. Cont. Dyn. Syst. B4, 29 (2004)
60. Kerbel, R.S., Kamen, B.A.: Nat. Rev. Cancer 4, 423 (2004)
61. Horsthemke, W., Lefever, R.: Phys. Lett. 64A, 19 (1977)
11 Bounded Noises and Nongenetic Resistance to Antitumor Chemotherapies 187
62. Lefever, R., Horsthemke, W.: Bull. Math. Biol. 41, 469 (1979)
63. d'Onofrio, A., Tomlinson, I.P.M.: J. Theor. Biol. 24, 367 (2007)
64. Deza, R., Wio, H.S., Fuentes, M.A.: Noise-induced phase transitions: effects of the noises
statistics and spectrum. In: Nonequilibrium Statistical Mechanics and Nonlinear Physics: XV
Conference on Nonequilibrium Statistical Mechanics and Nonlinear Physics, AIP Conf. Proc.
913, pp. 62–67 (2007)
65. Cai, G.Q., Wu, C.: Probabilist. Eng. Mech. 19, 197–203 (2004)
66. Cai, G.Q., Lin, Y.K.: Phys. Rev. E 54, 299–303 (1996)
67. Rescigno, A.: Pharm. Res. 35, 363 (1997)
68. Lansky, P., Lanska, V., Weiss, M.: J. Contr. Release 100, 267 (2004); Ditlevsen, S., de Gaetano,
A.: Bull. Math. Biol. 67, 547 (2005)
69. Csajka, C., Verotta, D.: J. Pharmacokin. Pharmacodyn. 33, 227 (2006)
70. Browder, T., Butterfield, C.E., Kraling, B.M., Shi, B., Marshall, B., O'Reilly, M.S., Folkman,
J.: Cancer Res. 60, 1878 (2000)
71. Hahnfeldt, P., Folkman, J., Hlatky, L.: J. Theor. Biol. 220, 545 (2003)
72. Li, J., Nekka, F.: J. Pharmacokin. Pharmacodyn. 34, 115 (2007)
Chapter 12
Interplay Between Cross Correlation and Delays
in the Sine-Wiener Noise-Induced Transitions
12.1 Introduction
Here τ₁ and τ₂ are the correlation times of ξ₁(t) and ξ₂(t), respectively; A and
B are their noise intensities; W₁(t) and W₂(t) are two standard Wiener processes,
dW₁ = η₁(t)dt and dW₂ = η₂(t)dt. The two Gaussian white noises η₁ and η₂ satisfy the
fluctuation–dissipation relations:

⟨η₁(t)η₁(t′)⟩ = ⟨η₂(t)η₂(t′)⟩ = δ(t − t′),
⟨η₁(t)η₂(t′)⟩ = ⟨η₁(t′)η₂(t)⟩ = λ δ(t − t′),   (12.2)

η₁(t) = ζ(t),
η₂(t) = λ ζ(t) + √(1 − λ²) ψ(t),   (12.4)

where ζ(t) and ψ(t) are two independent Gaussian white noises with unitary
intensity. Note that Eq. (12.2) is still satisfied.
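The decoupling of Eq. (12.4) is easy to check numerically: building Wiener increments from η₁ = ζ and η₂ = λζ + √(1 − λ²)ψ reproduces the cross-correlation λδ(t − t′) of Eq. (12.2). A minimal sketch (the value λ = 0.6 and all step sizes are arbitrary choices):

```python
import math
import random

def correlated_white_noise(lam, dt, n, seed=1):
    """Eq. (12.4): eta1 = zeta, eta2 = lam*zeta + sqrt(1-lam^2)*psi, with
    zeta and psi independent unit Gaussian white noises; returns the
    corresponding Wiener increments dW1, dW2 over steps of size dt."""
    rng = random.Random(seed)
    s = math.sqrt(dt)
    d1, d2 = [], []
    for _ in range(n):
        z, p = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        d1.append(s * z)
        d2.append(s * (lam * z + math.sqrt(1.0 - lam * lam) * p))
    return d1, d2

lam, dt, n = 0.6, 1e-3, 200000
d1, d2 = correlated_white_noise(lam, dt, n)
# Sample estimate of <dW1 dW2>/dt should approach lam, as in Eq. (12.2).
est = sum(a * b for a, b in zip(d1, d2)) / (n * dt)
assert abs(est - lam) < 0.05
```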
By substituting Eq. (12.4) into the left-hand side of Eq. (12.3) and expanding the
exponential function, one obtains

⟨exp(aW₁(t) + bW₂(t′))⟩ = ⟨1 + {a ∫₀ᵗ dt₁ ζ(t₁) + b ∫₀^t′ dt₂ [λζ(t₂) + √(1 − λ²) ψ(t₂)]}
  + (1/2!) {a ∫₀ᵗ dt₁ ζ(t₁) + b ∫₀^t′ dt₂ [λζ(t₂) + √(1 − λ²) ψ(t₂)]}²
  + … + (1/(2n)!) {a ∫₀ᵗ dt₁ ζ(t₁) + b ∫₀^t′ dt₂ [λζ(t₂) + √(1 − λ²) ψ(t₂)]}²ⁿ + …⟩.   (12.5)

Averaging term by term (the odd moments vanish and the even moments factorize into
products of pair correlations), one gets

⟨exp(aW₁(t) + bW₂(t′))⟩ = 1 + (1/2!) f(t, t′) + … + (1/(2n)!) ((2n)!/(2ⁿ n!)) f(t, t′)ⁿ + …,
  n = 1, 2, …,   (12.6)

with

f(t, t′) = a² ∫₀ᵗ ∫₀ᵗ dt₁ dt₂ ⟨ζ(t₁)ζ(t₂)⟩
  + b² ∫₀^t′ ∫₀^t′ dt₁ dt₂ [λ² ⟨ζ(t₁)ζ(t₂)⟩ + (1 − λ²) ⟨ψ(t₁)ψ(t₂)⟩]
  + 2abλ ∫₀ᵗ ∫₀^t′ dt₁ dt₂ ⟨ζ(t₁)ζ(t₂)⟩.   (12.7)

Summing the series,

⟨exp(aW₁(t) + bW₂(t′))⟩ = exp{f(t, t′)/2},   (12.8)

and using the integral formula

∫₀ᵗ ∫₀^t′ dt₁ dt₂ ⟨ζᵢ(t₁)ζᵢ(t₂)⟩ = min(t, t′),
Fig. 12.1 Λ as a function of λ and t/τ₃, from Eq. (12.13). From paper [42], © Elsevier Science
Ltd (2012)

with

Λ = exp(−2(1 − λ) t/τ₃) [1 − exp(−4λ t/τ₃)] / [1 − exp(−4 t/τ₃)].   (12.13)
Λ as a function of λ and t/τ₃ is plotted in Fig. 12.1 from Eq. (12.13). It can be
seen that the values of Λ are greatly influenced by λ and t/τ₃ when t/τ₃ < 2, and
only slightly for t/τ₃ > 5, while the curves always pass through the three points
(λ, Λ) = (−1, −1), (0, 0), (1, 1). Generally, a long time is needed for the system to
reach the stationary state, which results in t/τ₃ ≫ 5. Namely, Λ may be approximately
treated as a variable independent of t/τ₃. Moreover, the cross-correlated
statistical properties should include a cross-correlation time (independent of the self-
correlation times τ₁ and τ₂); τ₃ in Eq. (12.14) may play this role.
Consequently, Λ and τ₃ are redefined as two new variables, Λ ∈ [−1, 1] and τ₃ ≥
0, which are the cross-correlation intensity and the cross-correlation time, respectively.
In this way Eq. (12.14) can be taken as the definition of the cross-correlated statistical
properties of the noises, following the definition of cross-correlated colored noises. It
should be noted that the cross-correlation time τ₃ must be zero when the intensity Λ is 0.
We will consider the CCSW noises with the statistical properties Eqs. (12.9)–(12.11)
and (12.14) in the following section.
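Reading Eq. (12.13) as Λ = exp(−2(1 − λ)t/τ₃)[1 − exp(−4λt/τ₃)]/[1 − exp(−4t/τ₃)] (our reconstruction of the partially garbled formula, chosen so that the fixed points quoted in the text hold), the three points (λ, Λ) = (−1, −1), (0, 0), (1, 1) can be verified numerically:

```python
import math

def Lam(lam, x):
    """Our reading of Eq. (12.13): Lambda as a function of the white-noise
    cross-correlation lam and of x = t/tau3."""
    return (math.exp(-2.0 * (1.0 - lam) * x)
            * (1.0 - math.exp(-4.0 * lam * x))
            / (1.0 - math.exp(-4.0 * x)))

# The curves pass through (lam, Lambda) = (-1,-1), (0,0), (1,1) for any t/tau3.
for x in (0.5, 2.0, 5.0):
    assert abs(Lam(1.0, x) - 1.0) < 1e-9
    assert abs(Lam(-1.0, x) + 1.0) < 1e-6
    assert Lam(0.0, x) == 0.0
```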
194 W. Guo and D.-C. Mei
12.3.1 Model
dx/dt = [r + ξ₁(t)](1 − x_δ/k) x − [β + ξ₂(t)] x x_θ / (1 + x²),   (12.14)
where x is the number of tumor cells (equivalently, the tumor volume or
mass) at time t; r is the per capita birth rate in the presence of innate immunity, and
r > 0, which means weak innate immunity or a highly aggressive tumor; k (> 0) is
the largest intrinsic carrying capacity allowed by the environment; β (≥ 0) is the specific
immune coefficient; x_δ = x(t − δ) and x_θ = x(t − θ). The two constant delay times δ
and θ are used to simulate, respectively, the reaction time of the tumor cell population to
the constraints of its surrounding environment, and the time taken by both tumor antigen
identification and the tumor-stimulated proliferation of effectors (e.g., effector cells and
effector molecules).
Now, the main reasons for the introduction of the CCSW noises in the model are
presented. First, the fluctuation of a Gaussian noise (e.g., the white noise) is large and,
in certain situations, it is questionable to make a positive parameter subject to it.
In Fig. 1 of [2], a positive parameter (equal to 1.8) under a white Gaussian noise of
unitary intensity becomes negative with a large percentage (≈ 37.9%). In our model, r > 0
and β ≥ 0; after r and β are affected by the external perturbations, 2r ≥ [r + ξ₁(t)] > 0
and 2β ≥ [β + ξ₂(t)] ≥ 0 are always ensured by taking the values of A and B such that
0 ≤ A < r and 0 ≤ B ≤ β in Eqs. (12.6) and (12.7). Second, since ξ₁(t) and ξ₂(t) are
assumed to have a common origin (the external disturbance mentioned above),
the noises may be correlated [12].
12.3.2 Algorithm
The transitions between unimodal and bimodal SPDs are termed nonequilibrium
phase transitions [43, 44]. The study of dynamical systems with
cross-correlated bounded noises is complicated, and research in this field is
rare. Generally, since cross-correlated noises cannot be treated directly, it is
mandatory to develop a transformation, i.e., a decoupling scheme
[40] or a stochastically equivalent method [45]. In order to investigate the transitions in
the system, the SPD is here simulated directly from Eq. (12.14).
For simplicity, we restrict to λ ≥ 0 and let all the correlation times take the same
value (i.e., τ₁ = τ₂ = τ₃ = τ). Here, the CCSW noises are obtained by the following
transformations (similar to Eq. (12.4)):
Fig. 12.3 The SPD as a function of x in the cases (a) λ = 0.8; τ = 0.2, 0.3, 0.4 and 0.5, and
(b) λ = 0.9; τ = 0.5, 0.65, 0.85 and 0.9. The other parameter values are the same as in Fig. 12.2.
From paper [42], © Elsevier Science Ltd (2012)
ξ₁(t) = A sin(√(2/τ₁) ζ(t)),
ξ₂(t) = λ B sin(√(2/τ₂) ζ(t)) + √(1 − λ²) B sin(√(2/τ₂) ψ(t)),   (12.15)

where ζ and ψ are two independent standard Wiener processes. These transformations
do not change the statistical properties of Eqs. (12.9)–(12.12).
By substituting Eq. (12.15) into Eq. (12.1), we integrate Eq. (12.14) with the
Box–Muller algorithm for generating the Gaussian white noise and the Euler
forward procedure [46, 47]. For each value of the delay times and of the noise
parameters, the SPD is calculated as an ensemble average of independent realizations.
Every realization spans 2.5 × 10⁶ integration steps, to allow the system to reach a
stationary state. We employed as initial value x(t ≤ 0) ∈ (0, 0.1), and the integration
step was Δt = 0.001. The results are presented as follows.
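The integration scheme just described can be sketched as follows; the model form, the CCSW construction, and all parameter values in this sketch are our illustrative assumptions, not the settings used for the chapter's figures:

```python
import math
import random

def simulate(r=1.0, k=10.0, beta=3.0, delta=0.5, theta=0.5,
             A=0.2, B=0.3, tau=1.0, lam=0.5, T=50.0, dt=1e-3, seed=2):
    """Euler-forward integration of the delayed model driven by CCSW noises
    built as in Eq. (12.15); delta and theta are the two delay times."""
    rng = random.Random(seed)
    lag1, lag2 = int(delta / dt), int(theta / dt)
    hist = [0.05] * (max(lag1, lag2) + 1)   # constant pre-history x(t <= 0)
    w_z = w_p = 0.0                         # Wiener processes zeta, psi
    c = math.sqrt(2.0 / tau)
    x = hist[-1]
    for _ in range(int(T / dt)):
        w_z += rng.gauss(0.0, math.sqrt(dt))
        w_p += rng.gauss(0.0, math.sqrt(dt))
        xi1 = A * math.sin(c * w_z)
        xi2 = (lam * B * math.sin(c * w_z)
               + math.sqrt(1.0 - lam * lam) * B * math.sin(c * w_p))
        x_d, x_th = hist[-lag1 - 1], hist[-lag2 - 1]   # delayed states
        dx = ((r + xi1) * (1.0 - x_d / k) * x
              - (beta + xi2) * x * x_th / (1.0 + x * x))
        x = max(x + dx * dt, 0.0)           # tumor size stays non-negative
        hist.append(x)
        hist.pop(0)
    return x

x_end = simulate()
assert 0.0 <= x_end < 50.0   # stays within a biologically sensible range
```

In practice the SPD is built by binning the final states of many such independent realizations.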
The SPD as a function of x for different values of the correlation time τ is plotted in
Figs. 12.2 and 12.3. Figure 12.2a shows that the unimodal SPD centered at a low value
of x develops a bimodal structure, with a second maximum centered at larger values
of the tumor size x, as τ is increased. There is a critical value τ = 0.05 (denoted by
τcr1 = 0.05) near which a transition appears. Figure 12.2b reveals that the unimodal
SPD becomes bimodal with increasing τ, and there is a transition next to τ = 0.1
(denoted by τcr2 = 0.1). Likewise, in Fig. 12.3a, b the transitions from the unimodal
SPD to the bimodal SPD occur close to the two critical values τcr3 = 0.3 and τcr4 = 0.65,
Fig. 12.6 ⟨x⟩st as a function of δ and θ, for λ = 0.5 and τ = 0.5. The other parameter
values are the same as in Fig. 12.2. From paper [42], © Elsevier Science Ltd (2012)
respectively. From Figs. 12.2 and 12.3, the critical correlation time τcr increases
with rising correlation intensity λ. Namely, an increase in the degree of correlation
between the noises can suppress the transitions caused by τ; that is, the escape of the
tumor is suppressed by λ.
In Figs. 12.4 and 12.5, the SPD as a function of x is plotted for different values of the
time delays δ and θ, respectively. Figure 12.4 shows that, with increasing δ, the left
peak of the SPD becomes higher and the right peak becomes lower, until the latter
disappears at about δ = 1.8 (denoted by δcr = 1.8), i.e., a transition arises near δcr.
A similar transition also appears in [28], where the system is driven by a Gaussian white
noise. In Fig. 12.5, the bimodal SPD emerges in the vicinity of θ = 0.8 (denoted
by θcr = 0.8) as θ is increased. Namely, a transition can be induced by θ.
In Fig. 12.6, ⟨x⟩st is plotted as a function of the two time delays δ and θ. It shows
that ⟨x⟩st decreases markedly with increasing δ when θ approaches 0, and ⟨x⟩st
increases markedly with increasing θ as δ comes close to 2. Namely, large δ
promotes the transitions induced by θ, and small θ promotes the transitions
induced by δ. Now, the behavior in Figs. 12.4–12.6 is discussed. The equilibrium
phase is unstable from the viewpoint of mathematical physics [39] and lasts for the longest
time among the three phases of tumorigenesis [29]. For a tumor in the equilibrium
phase, large δ means a low adaptive capacity to the current surrounding environment,
and in this case the tumor transfers to the escape phase if the immune response is
Fig. 12.7 ⟨x⟩st as a function of τ and λ. The other parameter values are the same as in
Fig. 12.2. From paper [42], © Elsevier Science Ltd (2012)
blunted enough (see the emergence of the bimodal SPD in Fig. 12.5 and ⟨x⟩st vs θ in
Fig. 12.6 for δ ≥ 1.6). On the contrary, in the case of a rapid immune response
(small θ), the adaptive capacity at a low level leads to the suppression of the escape
(see the emergence of the unimodal SPD in Fig. 12.4 and ⟨x⟩st vs δ in Fig. 12.6 for
θ ≤ 0.5).
In Fig. 12.7, ⟨x⟩st is plotted as a function of the two noise parameters τ and λ.
⟨x⟩st first rises pronouncedly and then changes only slightly with increasing
correlation time τ, for fixed correlation intensity λ. Meanwhile, the critical value
of the correlation time τcr, i.e., the one corresponding to the significant increase of ⟨x⟩st,
increases as the degree of noise correlation λ increases from 0 (uncorrelated noises)
to 1 (the most strongly correlated noises). This also confirms the results of Figs. 12.2
and 12.3.
12.4 Conclusions
We have reported a study of the interplay between cross correlation and delays in the
sine-Wiener noise-induced transitions in a model of the tumor–immune system interplay.
The CCSW noises are defined, and it is worth noting that they are useful
for modeling dynamical systems. Moreover, although the corresponding system
exhibits rich dynamical behaviors, to the best of our knowledge analytical results
on the interplay between cross-correlated bounded noises and delays are difficult to
obtain, due to the complexity of the systems. We expect that
these numerical findings will trigger new investigations on this topic.
Acknowledgments This work was supported by the National Natural Science Foundation of
China (Grant No. 11165016) and the Program for Innovative Research Team (in Science and
Technology) in the University of Yunnan Province.
References
13.1 Introduction
In biomolecular networks, multiple locally stable equilibria allow for the presence
of multiple cellular functionalities [16]. This key role for multistability was
immediately understood by the first pioneering investigations in what is now known
as Systems Biology [7, 8].
A second key concept is that deterministic modeling of biomolecular networks
is only a quite coarse-grained approximation. Indeed, the real dynamics of biochemical
signals exhibits stochastic fluctuations due to their interplay with many unknown
intracellular and extracellular cues. For a long time, these stochastic effects were
interpreted as disturbances masking the true signals. In other words, external
stochasticity was seen as it is in communication engineering: a disturbance to be reduced
by modules working as low-pass filters [9–12].
If noises were only pure nuisances, a monostable network in the presence of noise
should exhibit unbiased fluctuations around its unique deterministic equilibrium,
so that the probability distribution of the total signal (noise plus deterministic signal)
should be unimodal. However, at the end of the seventies the Brussels school of
nonlinear statistical physics seriously challenged the above-outlined correspondence
between deterministic monostability and stochastic monomodality in the presence of
external noise [13].
Indeed, they showed that many systems that are monostable in the absence of
external stochastic noises have, in the presence of random Gaussian disturbances,
multimodal equilibrium probability densities. This counterintuitive phenomenon
was termed noise-induced transition by Horsthemke and Lefever [13], and it has
been shown to be relevant also in biomolecular networks [14].
In the meantime, experimental studies revealed another, equally important
role of stochasticity in these networks, by showing that many important transcription
factors, as well as other proteins and mRNAs, are present in cells in small numbers
of molecules [15–17]. Thus, a number of investigations have focused on this internal
stochasticity effect, termed (with a slight abuse of meaning) intrinsic noise
[18, 19]. In particular, it was theoretically shown and experimentally confirmed
that also the intrinsic noise may induce multimodality in the discrete probability
distribution of proteins [20, 21]. Note, however, that since the early eighties these
effects had been theoretically predicted in statistical and chemical physics, by
approximating the exact Chemical Master Equations with an appropriate Fokker–
Planck equation [22–24] and then searching for noise-induced transitions.
More recently, it has finally been appreciated that noise-related phenomena
may in many cases have a constructive, functional role [25, 26]. For example,
noise-induced multimodality allows a transcription network to reach states
inaccessible in the absence of noise [20, 25, 26]. Phenotype variability in cellular
populations is probably the most important macroscopic effect of intracellular noise-
induced multimodality [25].
In Systems Biology, Swain and coworkers [16] were among the first to study
the co-presence of both intrinsic and extrinsic randomness, in the context of the
13 Bounded Extrinsic Noises Affecting Biochemical Networks. . . 203
basic linear network for the production and consumption of a single protein, in
the absence of feedbacks. Important effects were shown, although nonlinear phenomena
such as multimodality were absent. The above study is also remarkable since it
stressed the role of the autocorrelation time of the external noise and, differently
from other investigations, it pointed out that modeling the external noise by means of
a Gaussian noise, either white or colored, may induce artifacts such as the temporary
negativity of a reaction kinetic parameter.
From the data analysis point of view, You and collaborators [27] and Hilfinger
and Paulsson [28] proposed interesting methodologies to infer the contributions
of extrinsic noise also in some nonlinear networks, such as a synthetic toggle
switch [27].
In [29] we investigated the co-presence of both extrinsic and intrinsic randomness
in nonlinear biomolecular networks, in the important case where the external
perturbations are not only non-Gaussian but also bounded. Indeed, by imposing
the boundedness of the random perturbations, the degree of realism of a model
is increased, since the external noises must not only preserve the positivity of the
reaction rates but must also be bounded (i.e., they must not be excessively large).
Moreover, it has also been shown in other contexts, such as oncology and statistical
physics, that: (a) bounded noises deeply affect the transitions from unimodal to
multimodal probability distributions of the state variables [30–34]; and (b) under bounded
noise the statistical outcome of a nonlinear system may depend on the initial
conditions [31, 33], whereas the response to Gaussian noises is globally attractive,
i.e., the stationary probability density is independent of the initial conditions.
In the paper [29], we first identified a suitable mathematical framework, based on
the differential Chapman–Kolmogorov equation (DCKE) [22, 35], to represent mass-
action biochemical networks perturbed by bounded (or simply left-bounded) noises.
Once the master equation was established, we proposed a combination of Gillespie's
Stochastic Simulation Algorithm (SSA) [18, 36] with a state-dependent Langevin
system, affecting the model jump rates, to simulate these systems. An important
issue was the possibility of extending, in this doubly stochastic context, the
Michaelis–Menten Quasi-Steady-State Approximation (QSSA) for enzymatic reactions
[37, 38]. In line with recent work by Gillespie and colleagues on systems that
are not affected by extrinsic noises [39], we numerically investigated the classical
Enzyme–Substrate–Product network. Our results suggested that it is possible to
apply the QSSA under the same constraints to be fulfilled in the deterministic case.
In the first part of the present work, we review our above-outlined recent
contributions to Systems Biology. In the second part, we focus on the stochastic
dynamics of a genetic toggle switch [1], which is a fundamental motif for cellular
differentiation and for other decision-related functions. In particular, we investigate
the interplay between intrinsic randomness and an extrinsic harmonic noise, i.e.,
sinusoidal perturbations that are imperfect due to a noisy phase.
204 G. Caravagna et al.
We refer to systems whose jump rates are constant in time as stochastic noise-free
systems. These are here modeled by the Chemical Master Equation (CME) and
the Stochastic Simulation Algorithm (SSA) [18, 36], thus allowing us to account for
the intrinsic stochasticity of such systems.
A well-stirred solution of molecules is considered, where the (discrete) state of
the target system is X(t) = (X₁(t), …, X_N(t)), with Xᵢ(t) counting the molecules of the ith
species at time t. A set of M chemical reactions R₁, …, R_M is represented as an N × M
stoichiometry matrix D = [ν₁ ν₂ … ν_M], where to each reaction R_j a stoichiometric
vector ν_j is associated. In ν_j the vector component νᵢ,ⱼ is the change in Xᵢ due
to one firing of R_j; thus, given X(t) = x, the firing of reaction R_j yields the new
state x + ν_j. Besides, a propensity function a_j(x) is associated with each R_j, so that
a_j(x)dt is the probability of R_j firing in state x in the infinitesimal interval [t, t + dt).
The propensity functions relate to the reaction order as follows [40]:
(0th order)  R_j : ∅ →(k) A,       a_j(X(t)) = k,
(1st order)  R_j : A →(k) B,       a_j(X(t)) = k X_A(t),
(2nd order)  R_j : A + B →(k) C,   a_j(X(t)) = k X_A(t) X_B(t),
             R_j : 2A →(k) B,      a_j(X(t)) = k X_A(t)[X_A(t) − 1]/2,

where k ≥ 0 is the reaction kinetic constant.
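The four rules above are directly executable; a minimal sketch (the function names are ours):

```python
def a_zeroth(k):            # 0th order: 0 -> A
    return k

def a_first(k, xa):         # 1st order: A -> B
    return k * xa

def a_second(k, xa, xb):    # 2nd order: A + B -> C
    return k * xa * xb

def a_dimer(k, xa):         # dimerization: 2A -> B
    return k * xa * (xa - 1) / 2

assert a_zeroth(0.5) == 0.5
assert a_first(2.0, 10) == 20.0
assert a_second(0.1, 10, 5) == 5.0
assert a_dimer(1.0, 4) == 6.0   # 4*3/2 distinct A-A pairs
```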
Noise-free systems obey the so-called CME [18, 36]

∂ₜ P[x, t | ·] = Σ_{j=1}^{M} { a_j(x − ν_j) P[x − ν_j, t | ·] − a_j(x) P[x, t | ·] }.   (13.1)
L_j(ξ(t)) ∈ [L_j^min, L_j^max] or L_j(ξ(t)) ∈ [L_j^min, +∞), with L_j^min > 0.

So, both bounded and left-bounded noises are considered. Further, unitary-mean
perturbations are considered, i.e., ⟨L_j(ξ(t))⟩ = 1, yielding ⟨a_j(x, t)⟩ = a_j(x).
In Eq. (13.2), L_j : ℝ → ℝ is a continuous function and ξ(t) ∈ ℝ is a colored
and, in general, state-dependent non-Gaussian noise, whose dynamics is described
by a (possibly multi-dimensional) Itô–Langevin system
the probability of X(t) = x and ξ(t) = ξ, given X(t₀) = x₀ and ξ(t₀) = ξ₀, i.e., the
probability of being in a certain state of the joint ℕ^N × ℝ state space.
The general DCKE (for a state z and an initial condition ·) reads as

∂ₜ P[z, t | ·] = − Σ_j ∂_{z_j} { A_j(z, t) P[z, t | ·] } + (1/2) Σ_{i,j} ∂_{z_i} ∂_{z_j} { B_{i,j}(z, t) P[z, t | ·] }
  + ∫ { W(z | h, t) P[h, t | ·] − W(h | z, t) P[z, t | ·] } dh.   (13.4)
This joint process is a particular case of the general Markov process where diffusion,
drift, and discrete finite jumps are all co-present for all state variables [22, 35].
Specifically, for the systems we consider here it is shown in [29] that the
drift vector for z is A_j = f_j(ξ, x) and the diffusion matrix is B(z, t) = gᵀg,
where gᵀ denotes matrix transposition. Also, since only finite jumps of the molecular
counts are considered, the jump and diffusion terms satisfy B_{i,j}(z, t) = 0 whenever i
or j indexes a molecular species, and W[(x, ξ) | (x, ξ′), t] = 0 for any i, j = 1, …, N
and noise values ξ ≠ ξ′ ∈ ℝ. As a consequence, Eq. (13.4) reads as
∂ₜ P[(x, ξ), t] = − Σ_j ∂_{ξ_j} { f_j(ξ, x) P[(x, ξ), t] } + (1/2) Σ_{i,j} ∂_{ξ_i} ∂_{ξ_j} { B_{i,j}(ξ, x) P[(x, ξ), t] }
  + Σ_{j=1}^{M} P[(x − ν_j, ξ), t] a_j(x − ν_j, t) − P[(x, ξ), t] Σ_{j=1}^{M} a_j(x, t).   (13.5)
Solving Eq. (13.5) is even more difficult than solving the CME; however, a
Stochastic Simulation Algorithm with Bounded Noise (SSAN) has been defined
to sample from such a distribution [29]. The SSAN merges ideas from other
SSA variants by generalizing the SSA jump equation to a time-inhomogeneous
distribution [42–46].
The key steps in the mathematical derivation of the SSAN are hereby recalled.
By defining the stochastic process counting the number of firings of R_j in [t₀, t], i.e.,
{N_j(t) | t ≥ t₀} with initial condition N_j(t₀) = 0, the evolution equation for X(t) is

dX(t) = Σ_{j=1}^{M} ν_j dN_j(t).   (13.6)
which evaluates to a_j(x)dt for SSA-based systems, yielding a time-homogeneous
Poisson process. In the case considered here this is instead a Cox process, since the
intensity itself depends on the stochastic noise [47, 48].
In [29] a unitary-mean Poisson transformation is applied to a monotonic
(increasing) function of time determining the putative time for R_j to fire in (x, t), which
is then generalized to account for the next jump of the overall system as
Σ_{j=1}^{M} ∫_t^{t+τ} a_j(x, w) dw = ln(1/r₁)   (13.8)
with r₁ ∼ U[0, 1] [49, 50]. This equation is the result of defining N_j(t) by a
sequence of unitary-mean independent exponential random variables and of picking
the smallest jump time among all reactions [29]. In evaluating this equation the term
a_j(x) is constant; thus only the integration of the noise is required which, we remark,
is a conventional Lebesgue integral, since the perturbation L_j(ξ(t)) is a colored stochastic
process. It is also important to note that a_j(x, t + τ) = a_j(x) for a noise-free reaction.
Given a system jump τ, the next reaction to fire is a random variable following

P[j | τ; x, t] = a_j(x, t + τ) / Σ_{i=1}^{M} a_i(x, t + τ).   (13.9)
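Equations (13.8) and (13.9) translate into a simple numerical procedure: accumulate the propensity integral until it reaches ln(1/r₁), then select the reaction in proportion to the propensities at the jump time. A sketch follows; the fixed-step quadrature and the constant-rate check are our simplifications, not the SSAN of [29]:

```python
import math
import random

def next_jump(prop_funcs, x, t, rng, dt=1e-3):
    """Accumulate the integral of Eq. (13.8) with a crude fixed-step
    quadrature until it reaches ln(1/r1), then select the next reaction
    with the probabilities of Eq. (13.9)."""
    target = -math.log(1.0 - rng.random())   # ln(1/r1), r1 ~ U[0,1)
    acc, tau = 0.0, 0.0
    while acc < target:
        acc += sum(a(x, t + tau) for a in prop_funcs) * dt
        tau += dt
    a_vals = [a(x, t + tau) for a in prop_funcs]
    u = rng.random() * sum(a_vals)
    j, c = 0, a_vals[0]
    while c < u:
        j += 1
        c += a_vals[j]
    return tau, j

# Toy check with constant propensities (2.0 and 1.0): the waiting time is
# then exponential with rate 3, so its mean should be close to 1/3.
rng = random.Random(3)
funcs = [lambda x, t: 2.0, lambda x, t: 1.0]
taus = [next_jump(funcs, None, 0.0, rng)[0] for _ in range(3000)]
mean_tau = sum(taus) / len(taus)
assert abs(mean_tau - 1.0 / 3.0) < 0.03
```

For constant propensities this reduces to the classical SSA; the quadrature matters only when the noise makes a_j(x, t) vary within a jump interval.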
is given by P(ζ) = 1/(π √(ε² − (ζ − 1)²)).
We consider a model where two genes G₁ and G₂, two RNAs R₁ and R₂ and two
proteins P₁ and P₂ are considered. Synthesis and degradation correspond to

G₁ → G₁ + R₁,   R₁ → R₁ + P₁,   R₁ → ∅,   P₁ → ∅,
G₂ → G₂ + R₂,   R₂ → R₂ + P₂,   R₂ → ∅,   P₂ → ∅.

Here α_R, γ_R, α_P and γ_P are the rate constants of the reactions involved (see
Table 13.1), the term [K/(K + Pᵢ)]² is the probability that the two regulatory sites are
free, and K is the association constant for protein Pᵢ. Before introducing a realistic noise,
we perform some analysis of this model.
some analysis of this model.
210 G. Caravagna et al.
Table 13.1 The bistable model of gene expression in [51]: the stoichiometry
matrix (rows in order R₁, R₂, P₁, P₂) and the propensity functions

R₁: +1 −1  0  0  0  0  0  0    a₁(t) = ζ(t) α_R [K/(K + P₂)]²,   a₂ = γ_R R₁
R₂:  0  0 +1 −1  0  0  0  0    a₃(t) = ζ(t) α_R [K/(K + P₁)]²,   a₄ = γ_R R₂
P₁:  0  0  0  0 +1 −1  0  0    a₅ = α_P R₁,   a₆ = γ_P P₁
P₂:  0  0  0  0  0  0 +1 −1    a₇ = α_P R₂,   a₈ = γ_P P₂
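A rough simulation of the network in Table 13.1 can be obtained by freezing ζ(t) over each jump, a crude approximation for slowly varying rates that is not the SSAN used in the chapter; all names and the shortened time horizon below are our choices:

```python
import math
import random

def toggle_ssa(T_end=50.0, eps=0.5, period=100.0, aR=100.0, aP=10.0,
               gR=1.0, gP=1.0, K=100.0, seed=4):
    """Gillespie-style simulation of the eight reactions of Table 13.1 with
    zeta(t) = 1 + eps*sin(2*pi*t/period) held constant over each jump.
    Rates are per minute; returns the final state (R1, P1, R2, P2)."""
    rng = random.Random(seed)
    R1, P1, R2, P2 = 10, 0, 0, 0
    t = 0.0
    # state changes (dR1, dR2, dP1, dP2), same reaction order as Table 13.1
    nu = [(1, 0, 0, 0), (-1, 0, 0, 0), (0, 1, 0, 0), (0, -1, 0, 0),
          (0, 0, 1, 0), (0, 0, -1, 0), (0, 0, 0, 1), (0, 0, 0, -1)]
    while t < T_end:
        z = 1.0 + eps * math.sin(2.0 * math.pi * t / period)
        a = [z * aR * (K / (K + P2)) ** 2, gR * R1,
             z * aR * (K / (K + P1)) ** 2, gR * R2,
             aP * R1, gP * P1, aP * R2, gP * P2]
        a0 = sum(a)
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        u, j, c = rng.random() * a0, 0, a[0]
        while c < u:
            j += 1
            c += a[j]
        dR1, dR2, dP1, dP2 = nu[j]
        R1 += dR1; R2 += dR2; P1 += dP1; P2 += dP2
    return R1, P1, R2, P2

state = toggle_ssa()
assert all(v >= 0 for v in state)   # degradations cannot fire on empty pools
```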
[Fig. 13.1, panels (a) and (b): time courses of R1, P1, R2, P2 and of the noise over 0–1,000 min]
Fig. 13.1 Toggle switch with periodic perturbation. Single simulation of model (13.12) with
ε = 0.5 in (a) and ε = 1 in (b). Parameters are α_R = 100 min⁻¹, α_P = 10 min⁻¹, γ_R = γ_P =
1 min⁻¹, K = 100 and T = 100 min, and the initial configuration x₀ is (R₁, P₁, R₂, P₂) =
(10, 0, 0, 0). RNAs, proteins and the noise are plotted. Taken from Ref. [33]: G. Caravagna, G.
Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)
[Fig. 13.2, panels (a)–(f): empirical probability densities of R2 and P2]
Fig. 13.2 Toggle switch with periodic perturbation. Empirical evaluation of P[x, t | x₀, 0] restricted
to R₂ and P₂ at t ∈ {900, 950, 1,000}, from 1,000 simulations of model (13.12) with the
parameters as in Fig. 13.1. In (a) t = 900 and ε = 0.5, in (b) t = 900 and ε = 1, in (c) t = 950 and
ε = 0.5, in (d) t = 950 and ε = 1, in (e) t = 1,000 and ε = 0.5, and in (f) t = 1,000 and ε = 1.
Taken from Ref. [33]: G. Caravagna, G. Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)
where 0 < σ ≤ 1 and W is a Wiener process. Here the simulations are performed by using
the SSAN, where the reactions in Table 13.1 are left unchanged and the propensity
functions a₁(t) and a₃(t) are modified with this new definition of ζ(t).
For the sake of comparing the simulations with those in Figs. 13.1–13.3, we used
the same initial conditions and parameters as in the previous examples.
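Our reading of the noisy-phase perturbation is ζ(t) = 1 + ε sin(2πt/T + σW(t)); under this assumption it can be sampled and its boundedness checked as follows (all parameter values are illustrative):

```python
import math
import random

def noisy_phase(eps, period, sigma, dt, n, seed=5):
    """Harmonic perturbation with noisy phase, read here as
    zeta(t) = 1 + eps*sin(2*pi*t/period + sigma*W(t)), W a Wiener process."""
    rng = random.Random(seed)
    w, t, out = 0.0, 0.0, []
    for _ in range(n):
        out.append(1.0 + eps * math.sin(2.0 * math.pi * t / period + sigma * w))
        w += rng.gauss(0.0, math.sqrt(dt))  # phase diffusion
        t += dt
    return out

z = noisy_phase(eps=0.5, period=100.0, sigma=0.3, dt=0.01, n=100000)
# The perturbation stays in [1 - eps, 1 + eps], so the modified propensities
# a1(t) and a3(t) remain positive for any eps < 1.
assert min(z) >= 0.5 and max(z) <= 1.5
```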
[Fig. 13.3, panels (a) and (b): heatmaps of the empirical probability of R2 over 900 ≤ t ≤ 1,000 min]
Fig. 13.3 Toggle switch with periodic perturbation. Empirical evaluation of P[x_{R₂}, t | x₀, 0] in
900 ≤ t ≤ 1,000. We used data collected from 1,000 simulations of model (13.12), with ε = 0.5
in (a) and ε = 1 in (b); the other parameters are as in Fig. 13.1. On the x-axis the concentration of R₂
is represented, on the y-axis minutes are given; the light gradient denotes high probability values.
Taken from Ref. [33]: G. Caravagna, G. Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)
Our simulations suggest that the scenario induced by the idealized sinusoidal perturbation is deeply affected by the presence of the noisy phase. For example, for the case ε = 0.5, in the time series shown in Fig. 13.4 we observe that the pair (R2, P2) undergoes small stochastic fluctuations around small values, whereas the pair (R1, P1) exhibits large oscillations around large values. If one increases ε up to ε = 1, then the time series shown in Fig. 13.5 have the following features: (a) (R2, P2) exhibits large oscillations; (b) (R1, P1) undergoes large oscillations for τ = 10 and τ = 100, and small oscillations (around small average values) for τ = 1,000.
The change of scenario can be fully appreciated when comparing Fig. 13.6, where the corresponding heatmaps are shown, with the homologous Fig. 13.3. Indeed, for τ = 10 and τ = 100 the characteristic roughly periodic pattern of Fig. 13.3 has disappeared. Instead, for (τ, ε) = (1,000, 0.5) it is visible, but it is coupled with an anti-phase pattern, which however might only be a transient effect, given that the autocorrelation time is here very large.
In this work we investigated the effects of joint extrinsic and intrinsic randomness in nonlinear genetic networks, under the assumption of non-Gaussian bounded external perturbations. Our applications have shown that the combination of both intrinsic and extrinsic noise-related phenomena may have a constructive functional role also when the extrinsic noise is bounded. This is in line with other research.
13 Bounded Extrinsic Noises Affecting Biochemical Networks. . . 215
Fig. 13.4 Toggle switch with Harmonic Bounded Noise. Single simulation of model (13.12) with HBN where ε = 0.5. In all cases α_R = 100 min⁻¹, α_P = 10 min⁻¹, δ_R = δ_P = 1 min⁻¹, K = 100, T = 100 min, and the initial configuration is (R1, P1, R2, P2) = (10, 0, 0, 0), as in Fig. 13.1. In (a) τ = 10, in (b) τ = 100, and in (c) τ = 1,000. RNAs and proteins are plotted
Fig. 13.5 Toggle switch with Harmonic Bounded Noise. Single simulation of model (13.12) with HBN where ε = 1. In all cases α_R = 100 min⁻¹, α_P = 10 min⁻¹, δ_R = δ_P = 1 min⁻¹, K = 100, T = 100 min, and the initial configuration is (R1, P1, R2, P2) = (10, 0, 0, 0), as in Fig. 13.1. In (a) τ = 10, in (b) τ = 100, and in (c) τ = 1,000. RNAs and proteins are plotted
Fig. 13.6 Toggle switch with Harmonic Bounded Noise. Empirical evaluation of P[x_R2, t | x0, 0] for 900 ≤ t ≤ 1,000. We used data collected from 1,000 simulations of model (13.12). In (a) ε = 0.5 and τ = 10, in (b) ε = 0.5 and τ = 100, and in (c) ε = 0.5 and τ = 1,000. In (d) ε = 1 and τ = 10, in (e) ε = 1 and τ = 100, and in (f) ε = 1 and τ = 1,000. All the parameters are as in Fig. 13.1. On the x-axis the concentration of R2 is represented; on the y-axis minutes are given
The second is the role of the stationary density of the extrinsic noise. Indeed, in other systems affected by bounded noises, one of us showed that the effects of a bounded extrinsic noise may depend on its model [31–33, 60], and not only on its amplitude and autocorrelation time. For example, the response of a system perturbed by a sine-Wiener noise may differ from that induced by the Cai–Lin noise [61]. This might imply that the same motif could exhibit many different functions depending on its location in the host organism, because the stochastic behavior of the module depends on fine details of the extrinsic noise.
References
1. Gardner, T.S., Cantor, C.R., Collins, J.J.: Nature 403, 339 (2000)
2. Markevich, N.I., Hoek, J.B., Kholodenko, B.N.: J. Cell Biol. 164, 353 (2004)
3. Wang, K., Walker, B.L., Iannaccone, S., Bhatt, D., Kennedy, P.J., Tse, W.T.: PNAS 106(16),
6638 (2009)
4. Xiong, W., Ferrell Jr., J.E.: Nature 426, 460–465 (2003)
5. Zhdanov, V.P.: Chaos Solit. Fract. 45, 577 (2012)
6. Zhdanov, V.P.: J. Phys. A: Math. Theor. 42, 065102 (2009)
7. Griffith, J.S.: J. Theor. Biol. 20, 209 (1968)
8. Simon, Z.: J. Theor. Biol. 8, 258 (1965)
9. Detwiler, P.B., Ramanathan, S., Sengupta, A., Shraiman, B.I.: Biophys. J. 79, 2801 (2000)
10. Rao, C.V., Wolf, D., Arkin, A.P.: Nature 420, 231 (2002)
11. Becskei, A., Serrano, L.: Nature 405, 590–593 (2000)
12. Thattai, M., Van Oudenaarden, A.: Biophys. J. 82, 2943–2950 (2001)
13. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry, and Biology. Springer, New York (1984)
14. Hasty, J., Pradines, J., Dolnik, M., Collins, J.J.: PNAS 97(5), 2075 (2000)
15. Becskei, A., Kaufmann, B.B., van Oudenaarden, A.: Nat. Genet. 37, 937 (2005)
16. Elowitz, M.B., Levine, A.J., Siggia, E.D., Swain, P.S.: Science 298, 1183 (2002)
17. Ghaemmaghami, S., Huh, W., Bower, K., Howson, R.W., Belle, A., Dephoure, N., O'Shea, E.K., Weissman, J.S.: Nature 425, 737 (2003)
18. Gillespie, D.T.: J. Phys. Chem. 81, 2340–2361 (1977)
19. Thattai, M., Van Oudenaarden, A.: Intrinsic noise in gene regulatory networks. PNAS 98, 8614 (2001)
20. Samoilov, M., Plyasunov, S., Arkin, A.P.: PNAS 102(7), 2310 (2005)
21. To, T.-L., Maheshri, N.: Science 327, 1142 (2010)
22. Gardiner, C.W.: Handbook of Stochastic Methods, 2nd edn. Springer, New York (1985)
23. Gillespie, D.T.: J. Phys. Chem. 72, 5363 (1980)
24. Grabert, H., Hanggi, P., Oppenheim, I.: Phys. A 117, 300 (1983)
25. Eldar, A., Elowitz, M.B.: Nature 467, 167 (2010)
26. Losick, R., Desplan, C.: Science 320, 65 (2008)
27. Hallen, M., Li, B., Tanouchi, Y., Tan, C., West, M., You, L.: PLoS Comp. Biol. 7, e1002209
(2011)
28. Hilfinger, A., Paulsson, J.: PNAS 108, 12167–12172 (2011)
29. Caravagna, G., Mauri, G., d'Onofrio, A.: PLoS ONE 8(2), e51174 (2013)
30. Bobryk, R.V., Chrzeszczyk, A.: Phys. A 358, 263–272 (2005)
31. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
32. d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82, 061901 (2010)
33. de Franciscis, S., d'Onofrio, A.: Phys. Rev. E 86, 021118 (2012)
34. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)
35. Ullah, M., Wolkenhauer, O.: Stochastic Approaches for Systems Biology. Springer, New York (2011)
36. Gillespie, D.T.: J. Comp. Phys. 22, 403 (1976)
37. Murray, J.D.: Mathematical Biology. Springer, New York (2002)
38. Segel, L.A., Slemrod, M.: SIAM Rev. 31, 44–67 (1989)
39. Sanft, K.R., Gillespie, D.T., Petzold, L.R.: IET Syst. Biol. 5, 58 (2011)
40. Gillespie, D.T., Petzold, L.R.: Numerical simulation for biochemical kinetics. In: Szallasi, S.,
Stelling, J., Periwal, V. (eds.) System Modeling in Cell Biology: From Concepts to Nuts and
Bolts. MIT Press, Boston (2006)
41. Feller, W.: Trans. Am. Math. Soc. 48, 488–515 (1940)
42. Anderson, D.F.: J. Chem. Phys. 127, 214107 (2007)
43. Alfonsi, A., Cances, E., Turinici, G., Di Ventura, B., Huisinga, W.: ESAIM Proc. 14, 1 (2005)
44. Alfonsi, A., Cances, E., Turinici, G., Di Ventura, B., Huisinga, W.: INRIA Tech. Report 5435
(2004)
45. Caravagna, G., d'Onofrio, A., Milazzo, P., Barbuti, R.: J. Theor. Biol. 265, 336 (2010)
46. Caravagna, G., d'Onofrio, A., Barbuti, R.: BMC Bioinformatics 13(Suppl 4), S8 (2012)
47. Cox, D.R.: J. Roy. Stat. Soc. 17, 129 (1955)
48. Bouzas, P.R., Ruiz-Fuentes, N., Ocana, F.M.: Comput. Stat. 22, 467 (2007)
49. Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes, vol. I:
Elementary Theory and Methods of Probability and Its Applications, 2nd edn. Springer, New
York (2003)
50. Todorovic, P.: An Introduction to Stochastic Processes and Their Applications. Springer, New
York (1992)
14.1 Introduction
Viscoelastic materials exhibit stress relaxation and creep, with the former character-
ized by a decrease in the stress with time for a fixed strain and the latter characterized
by a growth of the strain under a constant stress. The Kelvin–Voigt model, which simply consists of a spring and a dashpot connected in parallel, is often used to characterize viscoelasticity. This model can be used to form more complicated models [1, 7].
The constitutive equation of the Kelvin–Voigt model is

    σ(t) = E ε(t) + η dε(t)/dt,                                  (14.1)

where σ(t) and ε(t) are the stress and strain, respectively, E is the spring constant or modulus, and η is the Newtonian viscosity or coefficient of viscosity. For a constant applied stress σ0, the creep function is

    ε(t) = (σ0/E) (1 − e^(−t/τ)),                                (14.2)
where τ = η/E is called the retardation time. When the stress is removed, the strain recovers according to the recovery function ε_R(t). Replacing the Newtonian dashpot by a fractional element of order α yields the fractional Kelvin–Voigt model

    σ(t) = E ε(t) + η RL_0D_t^α [ε(t)],                          (14.4)
where RL_0D_t^α is the operator of the Riemann–Liouville (RL) fractional derivative of order α, given by

    RL_0D_t^α f(t) = [1/Γ(m − α)] (d^m/dt^m) ∫_0^t f(τ) (t − τ)^(m−α−1) dτ,   m − 1 ≤ α < m.   (14.5)
It can be seen that RL_0D_t^0 f(t) = f(t) and RL_0D_t^m f(t) = f^(m)(t) [18]. Specifically, for 0 < α ≤ 1, assuming f(t) is absolutely continuous for t > 0, one has (see Eq. (2.1.28) of [9])

    RL_0D_t^α f(t) = [1/Γ(1 − α)] (d/dt) ∫_0^t f(τ)/(t − τ)^α dτ
                   = f(0)/[Γ(1 − α) t^α] + [1/Γ(1 − α)] ∫_0^t f′(τ)/(t − τ)^α dτ.   (14.7)
It is interesting to note that this is actually the solution of Abel's integral equation [9]. Further assuming f(0) = 0 gives the fractional derivative in Caputo (C) form

    C_0D_t^α f(t) = [1/Γ(1 − α)] ∫_0^t f′(τ)/(t − τ)^α dτ.       (14.8)
The fractional Kelvin–Voigt model in Eq. (14.4) can be separated into two parts: the Hooke element E ε(t) and the fractional Newton (Scott-Blair) element η RL_0D_t^α [ε(t)]. When α = 0, the fractional Newton part reduces to the elastic term η ε(t), which is the case for the Hooke model. It is further observed that only when α = 0 does this model exhibit transient elasticity at t = 0.
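For 0 < α < 1 the Caputo form (14.8) can be approximated by interpolating f piecewise-linearly on a uniform grid (the so-called L1 scheme); the sketch below checks the approximation against the exact result D^α t = t^(1−α)/Γ(2 − α). Function name and step count are illustrative.

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1,
    (1/Gamma(1-alpha)) * int_0^t f'(s) (t-s)^(-alpha) ds, obtained by
    piecewise-linear interpolation of f on a uniform grid of n steps."""
    h = t / n
    c = h ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        df = f((j + 1) * h) - f(j * h)              # forward difference
        w = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
        total += w * df
    return c * total / h

# check against the exact result D^alpha t = t^(1-alpha)/Gamma(2-alpha)
alpha, t = 0.5, 2.0
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
approx = caputo_l1(lambda s: s, t, alpha)
```

For a linear function the scheme is exact up to floating-point rounding, since the piecewise-linear interpolant coincides with f.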
14 Almost-Sure Stability of Fractional Viscoelastic Systems Driven... 229
14.3 Formulation
Consider the stability of a column of uniform cross section under a dynamic axial compressive load F(t). The equation of motion is given by [27], where

    εβ = η0/(2ρA),   ω_n² = (EI/ρA)(nπ/L)⁴,   P_n = EI (nπ/L)².   (14.19)

If only the nth mode is considered, and the damping, viscoelastic effect, and the amplitude of the load are all small, and if the function F(t)/P_n is taken to be a stochastic process ξ(t), the equation of motion of a single degree-of-freedom system can be written as, by introducing a small parameter 0 < ε ≪ 1,
    q̈(t) + 2εβ q̇(t) + ω² [ 1 + ξ(t) + εμ RL_0D_t^α ] q(t) = 0,   μ = η/E,   (14.20)
where W(t) is a standard Wiener process. It is well known that a Gaussian white noise ξ(t) is a normally distributed random variable, which is not bounded and may take arbitrarily large values with small probabilities, and hence may not be a realistic model of noise in many engineering applications.
A bounded noise ξ(t) is a more realistic and versatile model of stochastic fluctuation in engineering applications and is normally represented as

    ξ(t) = ζ cos(νt + σ^(1/2) W(t) + θ),                          (14.22)

where ζ is the noise amplitude, σ^(1/2) is the noise intensity, W(t) is the standard Wiener process, and θ is a random variable uniformly distributed in the interval [0, 2π]. The inclusion of the random phase angle θ makes the bounded noise ξ(t) a stationary process.
Equation (14.22) may be written as

    ξ(t) = ζ cos Z(t),   dZ(t) = ν dt + σ^(1/2) ∘ dW(t),          (14.23)

where the initial condition of Z(t) is Z(0) = θ. The small circle ∘ denotes that the noise term is interpreted in the sense of Stratonovich. This process is bounded between −ζ and +ζ for all time t and hence is a bounded stochastic process. The auto-correlation function of ξ(t) is given by

    R(τ) = E[ ξ(t) ξ(t + τ) ] = (ζ²/2) cos(ντ) e^(−σ|τ|/2),      (14.24)
and the spectral density function of ξ(t) is

    S(ω) = ∫_{−∞}^{+∞} R(τ) e^(−iωτ) dτ
         = (ζ²σ/2) (ω² + ν² + σ²/4) / { [(ω + ν)² + σ²/4] [(ω − ν)² + σ²/4] }.   (14.25)
When the noise intensity σ^(1/2) is small, the bounded noise can be used to model a narrow-band process about the frequency ν. In the limit as σ approaches zero, the bounded noise reduces to a deterministic sinusoidal function. On the other hand, in the limit as σ approaches infinity, the bounded noise becomes a white noise of constant spectral density. However, since the mean-square value is fixed at ζ²/2, this constant spectral density level reduces to zero in the limit.
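These properties are easy to verify by direct simulation. The sketch below generates a sample path of ξ(t) = ζ cos(νt + σ^(1/2)W(t) + θ) by Euler stepping of the Wiener phase (all parameter values are illustrative) and estimates the mean square, which should stay near ζ²/2 regardless of σ.

```python
import math
import random

def bounded_noise_path(zeta=1.0, nu=1.0, sigma_half=0.5, dt=0.01,
                       n=200000, seed=3):
    """Sample path of xi(t) = zeta*cos(nu*t + sigma^(1/2)*W(t) + theta),
    with the Wiener phase advanced by Gaussian increments."""
    random.seed(seed)
    theta = random.uniform(0.0, 2.0 * math.pi)
    w, path = 0.0, []
    for k in range(n):
        w += sigma_half * random.gauss(0.0, 1.0) * math.sqrt(dt)
        path.append(zeta * math.cos(nu * k * dt + w + theta))
    return path

xi = bounded_noise_path()
ms = sum(v * v for v in xi) / len(xi)    # should be close to zeta^2/2
```

The path never leaves [−ζ, ζ], in contrast to a Gaussian noise of the same mean square.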
In the investigation of stochastic systems, one is generally most interested in the almost-sure sample behavior of the response process. The largest Lyapunov exponent is one of the most important characteristic numbers in the modern theory of stochastic dynamical systems; it is defined by

    λ = lim_{t→∞} (1/t) log [ q̇²(t) + ω² q²(t) ]^(1/2).          (14.26)
In this paper, the method of stochastic averaging is used to obtain the Lyapunov
exponents of fractional viscoelastic systems and then the stability property is
studied.
    ȧ(t) = −ε [ 2β a(t) sin²Φ(t) + (ω/2) ξ(t) a(t) sin 2Φ(t) + ωμ U_ss ],
    Φ̇(t) = ω − ε [ β sin 2Φ(t) + ω ξ(t) cos²Φ(t) + (ωμ/a(t)) U_cs ],   (14.29)

where

    U_ss = [sin Φ(t)/Γ(1 − α)] ∫_0^t a(s) sin Φ(s) (t − s)^(−α) ds,
    U_cs = [cos Φ(t)/Γ(1 − α)] ∫_0^t a(s) sin Φ(s) (t − s)^(−α) ds,
234 J. Deng et al.
and, for ease of presentation, the fractional derivative in Eq. (14.8) is rewritten as

    RL_0D_t^α [ f(s); g(t) ] = [g(t)/Γ(1 − α)] ∫_0^t f′(s)/(t − s)^α ds.   (14.30)
The bounded noise can be written as, by assuming that the noise intensity σ^(1/2) is small, of the order of the small parameter ε^(1/2),

    ξ(t) = ζ cos[νt + ψ(t)],   ψ(t) = σ^(1/2) W(t) + θ.           (14.31)
Equations (14.32) are exactly equivalent to (14.20) and cannot be solved exactly. It is fortunate, however, that the right-hand side is small because of the presence of the small parameter ε. This means that both a and Φ change slowly. Therefore one can expect to obtain reasonably accurate results by averaging the response over one period. This may be done by applying the averaging operator given by

    M_t{ · } = lim_{T→∞} (1/T) ∫_t^{t+T} ( · ) dτ.

When applying the averaging operator, the integration is performed over the explicitly appearing t only.
The averaging method of Larionov [11] can be applied to obtain the averaged equations as follows, without distinction between the averaged and the original non-averaged variables a and φ:

    ȧ = −ε [ β a + ωμ M_t{U_ss} − (ζω/4) a sin 2φ ],
    φ̇(t) = ν/2 − ω + ε [ (ζω/4) cos 2φ + (ωμ/a) M_t{U_cs} ].     (14.33)
In carrying out the averaging, use has been made of

    M_t(cos 2Φ) = M_t(sin 2Φ) = 0,
    M_t{ cos[νt + ψ(t)] sin 2Φ } = −(1/2) sin 2φ,
    M_t{ cos[νt + ψ(t)] cos²Φ } = (1/4) cos 2φ,                   (14.34)

where φ = [νt + ψ(t)]/2 − Φ denotes the shifted phase.
    M_t{U_ss} = [a/Γ(1 − α)] lim_{T→∞} (1/T) ∫_{t=0}^T ∫_{τ=0}^t sin Φ(t) sin Φ(t − τ) τ^(−α) dτ dt
              = [a/(2Γ(1 − α))] ∫_0^∞ τ^(−α) cos(ντ/2) dτ = (a/2) H^c(α, ν/2),   (14.35)

    M_t{U_cs} = −(a/2) H^s(α, ν/2),                               (14.36)
where

    H^c(α, ν/2) = [1/Γ(1 − α)] ∫_0^∞ τ^(−α) cos(ντ/2) dτ = (ν/2)^(α−1) sin(απ/2),
    H^s(α, ν/2) = [1/Γ(1 − α)] ∫_0^∞ τ^(−α) sin(ντ/2) dτ = (ν/2)^(α−1) cos(απ/2).   (14.37)
Upon the transformation ρ = log a and φ = [νt + ψ(t)]/2 − Φ, and using dψ(t) = σ^(1/2) dW(t), Eq. (14.38) results in two Itô stochastic differential equations

    dρ(t) = −ε [ β + (ωμ/2) H^c(α, ν/2) − (ζω/4) sin 2φ(t) ] dt,   (14.40)
    dφ(t) = [ ν/2 − ω − ε(ωμ/2) H^s(α, ν/2) + ε(ζω/4) cos 2φ(t) ] dt + (1/2) σ^(1/2) dW(t).   (14.41)
Substituting Eq. (14.27) into (14.26) yields the Lyapunov exponent

    λ = lim_{t→∞} (1/t) log [ q̇²(t) + ω² q²(t) ]^(1/2) = lim_{t→∞} ρ(t)/t.   (14.42)
The stochastic process φ(t) defined by Eq. (14.41) can be shown to be ergodic, in which case one can write

    lim_{t→∞} (1/t) ∫_0^t sin 2φ(t) dt = E[ sin 2φ(t) ],   w.p.1,   (14.45)

so that

    λ = −ε [ β + (ωμ/2) H^c(α, ν/2) ] + (εζω/4) E[ sin 2φ(t) ].   (14.46)
The remaining task is to evaluate E[sin 2φ(t)] in order to obtain λ. For this purpose, the Fokker–Planck equation governing the stationary probability density function p(φ) is set up:

    (1/2) (σ^(1/2)/2)² d²p(φ)/dφ² − (d/dφ) { [ ν/2 − ω − ε(ωμ/2) H^s(α, ν/2) + ε(ζω/4) cos 2φ ] p(φ) } = 0.   (14.47)
where

    f(φ) = 2δφ + r sin 2φ,   δ = 4Δ/σ,   r = εζω/σ,              (14.49)

with Δ = ν/2 − ω − ε(ωμ/2) H^s(α, ν/2) the averaged detuning, and the normalization constant C is given by
where

    F_I(δ, r) = (1/2) (d/dr) [ log I_{iδ}(r) + log I_{−iδ}(r) ],

which can be written as, by making use of the properties of the Bessel functions,

    F_I(δ, r) = (1/2) [ I_{1+iδ}(r)/I_{iδ}(r) + I_{1−iδ}(r)/I_{−iδ}(r) ].

The Lyapunov exponent then becomes

    λ = −ε [ β + (ωμ/2) H^c(α, ν/2) ] + (εζω/4) F_I(δ, r),        (14.52)

and the stability boundary λ = 0 is determined by

    F_I(δ, r) = (4/(ζω)) [ β + (ωμ/2) H^c(α, ν/2) ].              (14.53)
Depending on the relations among the parameters δ, r, and unity, various asymptotic expansions of the Bessel functions involved in F_I(δ, r) can be employed to simplify Eq. (14.52). For example, when the noise intensity σ^(1/2) ≪ 1 is so small that δ ≪ 1 and r > δ, one can obtain

    λ = −ε [ β + (ωμ/2) H^c(α, ν/2) ] + (εζω/4) (1 − 4δ²/r²)^(1/2) − σ/8.   (14.54)
In the vicinity of δ = 0, Eq. (14.54) reduces to

    λ = −ε [ β + (ωμ/2) H^c(α, ν/2) ] + εζω/4 − σ/8.              (14.56)
Discretizing the integral on a uniform grid t_j = jh, with h = t_n/n, leads to

    RL_0D_{t_n}^α q(t) = [1/Γ(1 − α)] { q(0)/(nh)^α
        + [1/((1 − α) h^α)] Σ_{j=0}^{n−1} [ q((n − j)h) − q((n − j − 1)h) ] [ (j + 1)^(1−α) − j^(1−α) ] }
      = γ [ (1 − α) q(0) n^(−α) + Σ_{j=0}^{n−1} a_j ],            (14.60)

where

    γ = 1/[h^α Γ(2 − α)],   a_j = [ q((n − j)h) − q((n − j − 1)h) ] [ (j + 1)^(1−α) − j^(1−α) ].   (14.61)
In a quadrature form, Eq. (14.60) can be written as

    RL_0D_{t_n}^α q(t) = h^(−α) Σ_{j=0}^n w_j q((n − j)h),        (14.62)

where

    w_0 = 1/Γ(2 − α),   w_n = [ (n − 1)^(1−α) − n^(1−α) + (1 − α) n^(−α) ] / Γ(2 − α),
    w_j = [ (j + 1)^(1−α) − 2 j^(1−α) + (j − 1)^(1−α) ] / Γ(2 − α),   1 ≤ j ≤ n − 1.   (14.63)
Letting

    q1(t) = q(t),   q2(t) = q̇(t),
    q3(t) = νt + σ^(1/2) W(t) + θ,   Q(t) = RL_0D_t^α [q(t)],     (14.64)

the equation of motion (14.20) can be written as a three-dimensional system

    q̇1(t) = q2,   q̇2(t) = −2εβ q2 − ω² [ (1 + ζ cos q3) q1 + εμ Q ],

whose Euler discretization with time step Δt reads

    q1^(k+1) = q1^k + q2^k Δt,
    q2^(k+1) = q2^k − { 2εβ q2^k + ω² [ (1 + ζ cos q3^k) q1^k + εμ Q^k ] } Δt,
    q3^(k+1) = q3^k + ν Δt + σ^(1/2) ΔW^k,                        (14.66)

with Q^k evaluated from the quadrature (14.62), Q^k = h^(−α) Σ_{j=0}^k w_j q1^(k−j).
After the discretization, a time series of the response variable q(t) can be obtained for given initial conditions. It is clear that the fractional equation of motion (14.20) depends on all historical data of q(t).
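The quadrature weights (14.63) have a useful built-in check: applied to the constant function q ≡ 1 they telescope to the exact Riemann–Liouville derivative t^(−α)/Γ(1 − α), and applied to q(t) = t they reproduce t^(1−α)/Γ(2 − α) exactly. A sketch (function names are illustrative):

```python
import math

def l1_weights(n, alpha):
    """Weights w_j of Eq. (14.63); w_j multiplies q((n - j)h), so the
    n-dependent weight w_n carries the initial value q(0)."""
    g = math.gamma(2.0 - alpha)
    w = [0.0] * (n + 1)
    w[0] = 1.0 / g
    for j in range(1, n):
        w[j] = ((j + 1) ** (1.0 - alpha) - 2.0 * j ** (1.0 - alpha)
                + (j - 1) ** (1.0 - alpha)) / g
    w[n] = ((n - 1) ** (1.0 - alpha) - n ** (1.0 - alpha)
            + (1.0 - alpha) * n ** (-alpha)) / g
    return w

def rl_derivative(q, h, alpha):
    """Eq. (14.62): h^(-alpha) * sum_j w_j q((n-j)h) at t_n = (len(q)-1)*h."""
    n = len(q) - 1
    w = l1_weights(n, alpha)
    return sum(w[j] * q[n - j] for j in range(n + 1)) / h ** alpha
```

For q ≡ 1 on [0, 1] the result is 1/Γ(1 − α), and for q(t) = t it is 1/Γ(2 − α), both up to floating-point rounding.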
Consider two special cases first. In the equation of motion (14.20), suppose μ = 0 and σ = 0; it then becomes the damped Mathieu equation. If further β = 0 is assumed, the equation of motion reduces to the undamped Mathieu equation.
From Eq. (14.55), the boundary for the case of μ = 0, σ = 0, β = 0 is

    ν/(2ω) = 1 ± ζ/4,                                             (14.67)

which is the same as the first-order approximation of the boundary for the undamped Mathieu equation obtained in Eq. (2.4.11) of [27]. However, if damping is considered, the equation of motion in Eq. (14.20) becomes the damped Mathieu equation. Substituting Eq. (14.39) into (14.55) leads to the stability boundaries

    [1 − ν/(2ω)]² = ζ²/16 − ε²β²/ω²,                              (14.68)

which is similar to the first-order approximation of the boundary for the damped Mathieu equation [27] in the vicinity of ν = 2ω. This is due to the fact that when the intensity σ approaches 0, the bounded noise becomes a sinusoidal function.
Consider now the effect of bounded noise on system stability. From Eq. (14.56), it is found that, by introducing noise (σ ≠ 0) in the system, the stability of the viscoelastic system is improved in the vicinity of δ = 0, because the term containing σ is negative. This result is also confirmed by Fig. 14.3, where, in the resonant region, with the increase of the noise intensity σ^(1/2), the unstable area of the system dwindles and the system becomes more stable. One probable explanation is that, from the power spectral density function of the bounded noise, the larger the value of σ, the wider the frequency band of the power spectrum of the bounded noise, as shown in Fig. 14.2. When σ approaches infinity, the bounded noise becomes a white noise.
As a result, the power of the noise is not concentrated in the neighborhood of the central frequency ν, which reduces the effect of the primary parametric resonance.
The effect of the noise amplitude ζ on the Lyapunov exponents is shown in Fig. 14.4. The results for two noise intensities, σ^(1/2) = 0.8 and σ^(1/2) = 0.2, are compared for various values of ζ. It is seen that, in the resonant region, increasing the noise amplitude destabilizes the system. The maximum resonant point is not exactly at ν = 2ω, but in the neighborhood of ν = 2ω. This may be partly due to the viscoelasticity and partly due to the noise.
In the numerical simulation of the Lyapunov exponents, the embedding dimension is m = 50, the reconstruction delay is J = 30, the number of data points is N = 20,000, and the time step is Δt = 0.01, which yields the total time period T = NΔt = 200. Typical results are shown in Fig. 14.5 along with the approximate analytical results. It is found that the approximate analytical result in Eq. (14.52) agrees with the numerical results very well.
Finally, consider the effect of the fractional order and damping on system stability. The fractional order α of the system has a stabilizing effect, which is illustrated in Fig. 14.6. This is due to the fact that when α changes from 0 to 1, the property of the material changes from elastic to viscous, as shown in Fig. 14.1. The same stabilizing effect of damping on stability is shown in Fig. 14.7.
14.7 Conclusions
Acknowledgments The research for this paper was supported, in part, by the Natural Sciences
and Engineering Research Council of Canada.
References
15.1 Introduction
Stochastic processes, random fields, and other random functions are often used
to model phenomena that occur randomly in nature. Properties of real, physical
systems, whether deterministic or random, always take values in bounded sets.
For example, material properties and time-varying inputs to, and outputs from, a
physical system cannot be infinitely large. Time series of financial, geological, and
other physical systems do not exhibit arbitrarily large jumps. Other examples from
nature include experimental measurements of wind forces on structures [19, 29],
ocean wave elevation [24], soil particle size [2], highway/railway elevation [20, 25],
and Euler angles of atomic lattice orientation (see [4] and [16], Sect. 8.6.2.2).
While each of these quantities is known to be bounded, the values for the
bounds themselves are often unknown. Gaussian models, which have unbounded
support and therefore cannot accurately represent real, physical phenomena, are
often used in practice. The Gaussian model can be an adequate choice for many
applications, but this is not always the case. For example, engineering systems
designed to conform with the Gaussian assumption may be needlessly over-
conservative. Herein, we limit the discussion to non-Gaussian stochastic models
that take values in a bounded set, where the boundary itself may or may not be
known. In the case of the latter, the boundary must be estimated, together with other
model parameters, from the available information.
Let X denote a random function describing a particular physical quantity, and
suppose that information on X is limited to one or more samples of this function,
as well as some features of it, e.g., its second moment properties and/or support.
A ranking procedure is used for selecting the optimal model for X from a finite
collection of model candidates, i.e., models that are consistent with all available
information and, therefore, cannot be neglected. The model candidates we consider
in the report are translation random functions, that is, memoryless mappings of
Gaussian random functions. Our objectives are to: (a) find a probabilistic model
for X that is optimal in some sense and (b) illustrate the proposed method for model
selection by example. Applications include optimal models for stationary stochastic
processes taking values on a bounded interval, where the bounds may or may not
be known, and optimal models for homogeneous random fields used to represent
material property variability within an aerospace system.
It is shown that the solution of the model selection problem for random functions
X with a bounded support differs in a significant way from that of functions with
unbounded support. For example, the performance of the optimal model for X
depends strongly on the accuracy of the estimated range of this function. Predictions
of various properties of X can be inaccurate even if its range is only slightly in error.
Satisfactory estimates for the range require much larger samples of X than those
needed to estimate, for example, the parameters of the correlation function or the
marginal distribution of X.
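The slow convergence of range estimates can be illustrated with Uniform(0, 1) samples, for which the sample maximum falls short of the true upper bound by 1/(m + 1) on average, a bias of order 1/m; the sample sizes and seed below are arbitrary.

```python
import random

def range_bias(m, trials=2000, seed=5):
    """Average shortfall of the sample maximum below the true upper
    bound b = 1 for samples of size m from Uniform(0, 1); the exact
    expected shortfall is 1/(m + 1)."""
    rng = random.Random(seed)
    return sum(1.0 - max(rng.random() for _ in range(m))
               for _ in range(trials)) / trials

b50, b500 = range_bias(50), range_bias(500)   # bias shrinks like 1/m
```

The estimated range always lies strictly inside the true support, so any plug-in estimate of [a, b] is biased inward.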
In Sect. 15.2 we review essentials of translation random functions, including how
to calibrate these models to available data. Two general methods for model selection
are briefly summarized in Sect. 15.3, then applied to a series of example problems
in Sect. 15.4.
15 Model Selection for Random Functions with Bounded Range 249
15.2.1 Generalities
and

    φ₂(x, y; ρ) = [1/(2π (1 − ρ²)^(1/2))] e^(−(x² + y² − 2ρxy)/(2(1 − ρ²))).   (15.4)
    ξ_ij(τ) = ( E[X_i(v + τ) X_j(v)] − μ_i μ_j ) / σ_ij²,        (15.5)

where E[X_i(v + τ) X_j(v)] is given by Eq. (15.3), μ_i = E[X_i], and σ_ij² = E[X_i(v) X_j(v)] − μ_i μ_j. By Eq. (15.5), ξ_ij(τ) takes values in [−1, 1] and, by Eq. (15.3), depends on ρ_ij(τ). Closed-form expressions for ξ_ij in terms of ρ_ij are difficult to obtain, but it can be shown that (see [15], Sect. 3.1.1): (a) ρ_ij(τ) = 0 and ρ_ij(τ) = 1 imply ξ_ij(τ) = 0 and ξ_ij(τ) = 1, respectively; (b) ξ_ij is an increasing function of ρ_ij; (c) ξ_ij(τ) and ρ_ij(τ) satisfy |ξ_ij(τ)| ≤ |ρ_ij(τ)|, τ ∈ D; and (d) ξ_ij is bounded from below by ξ*_ij ≥ −1, where
Random functions with covariance functions smaller than ξ*_ij cannot be represented by translation models. Further, even if all covariance functions for X are in the range [ξ*_ij, 1], it does not follow that the image of X in the Gaussian space has a proper covariance function, since {ρ_ii} may not be positive definite (see [15], Sect. 3.1.1). Hence arbitrary combinations of marginal CDFs {F_i} and covariance functions {ξ_ij} are not permissible. If, however, we postulate {ρ_ij}, the resulting {ξ_ij} are always proper covariance functions.
The definition of the translation random function in Eq. (15.1) holds for distributions {F_i} with probability mass in bounded intervals, intervals bounded to the left/right, or the entire real line. If all components of X(v) take values in bounded intervals, the function is said to have a bounded range. There is conceptually no difference between the treatment of translation functions with and without bounded range.
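As a concrete sketch of a bounded-range translation process, the uniform marginal (the beta case q = r = 1 discussed below) has the closed-form inverse F^(−1)(u) = a + (b − a)u, so X = F^(−1)(Φ(G)) can be generated directly from a Gaussian AR(1) image G; the correlation and sample size here are illustrative.

```python
import math
import random

def translation_uniform_path(a=0.0, b=1.0, rho=0.9, n=5000, seed=7):
    """Translation process X = F^{-1}(Phi(G)) with Uniform(a, b) marginal
    (beta with q = r = 1) and a stationary Gaussian AR(1) image G with
    lag-one correlation rho."""
    rng = random.Random(seed)
    g = rng.gauss(0.0, 1.0)
    path = []
    for _ in range(n):
        g = rho * g + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u = 0.5 * (1.0 + math.erf(g / math.sqrt(2.0)))   # Phi(g)
        path.append(a + (b - a) * u)                     # F^{-1}(u)
    return path

x = translation_uniform_path()
```

By construction every value lies in [a, b], while the memory of the Gaussian image is inherited by X.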
15.2.2 Calibration
As demonstrated by Sect. 15.2.1, the probability law of the translation model defined
by Eq. (15.1) is completely defined by marginal CDFs {Fi } and the covariance
functions {i j } of its Gaussian image. Motivated by the discussion above, we do not
specify the covariance functions of X directly, but rather the covariance of G so as to
guarantee X has a proper covariance function. The calibration of the marginal CDF
and covariance function are discussed in Sects. 15.2.2.1 and 15.2.2.2, respectively.
For clarity, we will assume for the remainder of the discussion that: (a) X(v) = X(v), v ∈ D, is a scalar-valued random function; (b) D ⊆ R, so that X(v) = X(v), −∞ < v < ∞; and (c) X is an ergodic process, so that model calibration can be performed using a single sample, denoted by x = (x1, x2, …, xm)′, where x_k = x(v_k) and Δv = v_{k+1} − v_k, k = 1, …, m − 1, is assumed constant. The generalization of the results of this section to the case of vector-valued random functions with D ⊆ R^d, d > 1, is straightforward.
Let X be a translation random function with marginal CDF F that depends on a set of parameters θ. We denote this dependence by F(x; θ); the corresponding marginal PDF of X is f(x; θ) = dF(x; θ)/dx. Calibration of the marginal probability law for X to the available data x = (x1, x2, …, xm)′ requires two steps: (a) choose the functional form for F and (b) calibrate θ, the associated parameters of F.
The objective of step (a) is to select a marginal distribution function F that
is sufficiently flexible to capture any desired behavior observed in the data x and is
consistent with the known physics. For example, in climate modeling, if X models
precipitation rate, the distribution function must have support on the positive real
line with positive skewness; the lognormal distribution satisfies these constraints
and is often used to model precipitation rate [28]. Herein we consider the class of
beta translation models to represent random phenomena with bounded range. Hence,
the mapping h defined by Eq. (15.1) is such that the marginal distributions of X are those of a beta random variable, meaning that for each fixed v ∈ D, the random variable X(v) is equal in distribution to a beta random variable, i.e.,

    X(v) =_d a + (b − a) Y,   v ∈ D,                              (15.7)
where Y is a standard beta random variable taking values in [0, 1]. The probability density function (PDF) of Y is [3]

    f(y; q, r) = [1/B(q, r)] y^(q−1) (1 − y)^(r−1),   0 ≤ y ≤ 1,   (15.8)

where q, r > 0 are deterministic shaping parameters, and B(q, r) = Γ(q)Γ(r)/Γ(q + r) and Γ(·) denote the beta and gamma functions, respectively (see [1], Sects. 6.1 and 6.2).
252 R.V. Field, Jr. and M. Grigoriu
The corresponding marginal CDF of X is

    F(x; θ) = P(X(v) ≤ x) = P( Y ≤ (x − a)/(b − a) )
            = [1/B(q, r)] ∫_0^y v^(q−1) (1 − v)^(r−1) dv = I_y(q, r),   (15.9)

for x ∈ [a, b], where y = (x − a)/(b − a) and I_y(·, ·) denotes the incomplete beta function ratio (see [21], Sect. 25.1). By Eq. (15.9), the marginal probability law of X is completely defined by the parameters θ = (a, b, q, r)′. A wide variety of symmetric (q = r) and asymmetric (q ≠ r) distributions is possible; the flexibility of the PDF defined by Eq. (15.8) makes the beta distribution a very useful model for representing random phenomena with bounded range.
Suppose first that the parameters a and b defining the range of X are known. Maximum likelihood estimators q̂ and r̂ for the parameters q and r are readily available (see [21], Sect. 25.4). If the range of X is unknown, the identical estimators can be used for a collection of trial values for a and b. For example, consider

    a_i = a_1 − (i − 1) δ,   b_i = b_1 + (i − 1) δ,   i = 1, 2, …,   (15.10)

for some fixed increment δ > 0.
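For a fixed trial interval [a, b], the shape parameters can also be estimated by the method of moments, a simple alternative (shown here only as an illustration) to the maximum-likelihood estimators cited above:

```python
import random

def beta_moments_fit(x, a=0.0, b=1.0):
    """Method-of-moments estimates of the beta shape parameters (q, r)
    for data x on a known interval [a, b]."""
    y = [(v - a) / (b - a) for v in x]         # map data to [0, 1]
    m = sum(y) / len(y)                        # sample mean
    s2 = sum((v - m) ** 2 for v in y) / (len(y) - 1)   # sample variance
    c = m * (1.0 - m) / s2 - 1.0               # common factor q + r
    return m * c, (1.0 - m) * c                # (q_hat, r_hat)

random.seed(11)
sample = [random.betavariate(2.0, 5.0) for _ in range(20000)]
q_hat, r_hat = beta_moments_fit(sample)
```

On a large synthetic Beta(2, 5) sample the estimates recover the true shape parameters closely.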
Suppose that: (a) the marginal distribution of the translation model X defined by Eq. (15.1) is known, so that its Gaussian image is G(v) = Φ^(−1)(F(X(v); θ)); and (b) ρ(τ; ψ) = E[G(v) G(v + τ)], the covariance function of the Gaussian image of X, has known functional form but unknown parameter vector ψ. We next provide two methods to estimate values for ψ.
For method #1, we choose the vector ψ that minimizes the error

    e₁(ψ) = ∫_0^τ̄ [ ρ̂(τ) − ρ(τ; ψ) ]² dτ,                        (15.11)

where ρ̂(τ) denotes an estimate of the covariance function of G (see [5], Sect. 11.4) obtained from the vector g = (g1, …, gm)′ with elements
    g_k = Φ^(−1)( F(x_k; θ) ),   k = 1, …, m.                     (15.12)
By Eq. (15.14), Σ(ψ) is positive definite and invertible. The likelihood that g was drawn from a zero-mean Gaussian vector with covariance matrix Σ(ψ), for a fixed ψ, is given by (see [30], Chap. 8)

    ℓ(g | ρ(·; ψ)) = [ (2π)^m det(Σ(ψ)) ]^(−1/2) exp{ −(1/2) g′ Σ(ψ)^(−1) g },   (15.15)
where det(Σ(ψ)) > 0 denotes the determinant of the matrix Σ(ψ). For method #2, we choose the ψ that minimizes

    e₂(ψ) = −2 ln ℓ(g | ρ(·; ψ)) = m ln 2π + Σ_{k=1}^m ln λ_k(ψ) + g′ Σ(ψ)^(−1) g,   (15.16)

where λ_k(ψ) are the eigenvalues of Σ(ψ).
where ℓ(g_i | ρ_j(·; ψ_{i,j})), defined by Eq. (15.15), is a measure of the likelihood that sample g_i was drawn from a zero-mean Gaussian process with covariance function ρ_j(·; ψ_{i,j}), and

    c^(−1) = Σ_{i=1}^n Σ_{j=1}^n ℓ(g_i | ρ_j(·; ψ_{i,j}))         (15.19)

is a scaling factor. We can interpret p_{i,j} as the probability that candidate model X_{i,j} ∈ C is the best available model for X since, by Eqs. (15.18) and (15.19), each p_{i,j} ≥ 0 and Σ_{i,j} p_{i,j} = 1.
Our objective is to rank the members of C and select the candidate model for X with the highest rank; the winning model is referred to as the optimal model and is denoted by X* ∈ C. We present two procedures for ranking the candidate models in Sects. 15.3.1 and 15.3.2. Both methods make use of the model probabilities {p_{i,j}} defined by Eq. (15.18) to assign rankings.
The optimal model under the classical method for model selection depends on the
available information on X, as well as the collection of candidate models considered
in the analysis, C . It has been demonstrated that estimates for pi, j can be unstable
when the available data is limited [12].
where

ui,j = E[U(Xi,j, C)] = Σ_{k=1}^{n} Σ_{s=1}^{n′} U(Xi,j, Xk,s) pk,s    (15.22)

denotes the expected utility of candidate model Xi,j, and pk,s is defined by
Eq. (15.18). Hence, by Eq. (15.22), the winning model achieves a good fit to the
available data while being most appropriate for the intended purpose. The utility,
U, is sometimes referred to as the opportunity loss (see [27], p. 60), so that the
solution to Eq. (15.21) agrees with intuition, i.e., X* ∈ C minimizes the expected
loss. Utility is the traditional term used in the literature, so we will use it herein.
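The expected-utility ranking of Eq. (15.22) and the minimization in Eq. (15.21) reduce to a matrix-vector product followed by an argmin. A sketch with entirely hypothetical utilities and model probabilities:

```python
import numpy as np

def expected_utilities(U, p):
    # Eq. (15.22): u[i] = sum_k U(X_i, X_k) * p_k over a flattened
    # candidate list, written as a matrix-vector product.
    return U @ p

# Hypothetical 3-candidate example: U[i, k] is the loss incurred by
# adopting model i when model k is in fact the best available one,
# and p holds the model probabilities of Eq. (15.18).
U = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0],
              [4.0, 1.0, 0.0]])
p = np.array([0.2, 0.5, 0.3])
u = expected_utilities(U, p)
best = int(np.argmin(u))  # Eq. (15.21): minimize the expected loss
```

Here the middle candidate wins: it is both probable and cheap to adopt when either neighbor is actually best.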
256 R.V. Field, Jr. and M. Grigoriu
The optimal model under the decision-theoretic method depends on the available
information on X and the collection of candidate models, C; unlike the classical
method, X* also depends on U, the utility function. Hence, we expect X* to be
different when different utility functions are used, i.e., when the model purpose
changes. We note that there can be significant uncertainty in the definition of
the utility function when the consequences of unsatisfactory system behavior are
not well understood; the decision-theoretic method for model selection may be
inappropriate in this case [12].
15.4 Applications
Fig. 15.1 Available data, x(t), for stochastic process model selection. Taken from [13] © Elsevier Science Ltd (2009)
Table 15.1 Calibrated model parameters for stochastic process with known
covariance, assuming δ = 1/10 and m = 200

Candidate model, Xi ∈ C    ai          bi         qi        ri
X1                         0           1          2.9261    3.8930
X2                         0.11592     0.81877    0.7785    0.9071
X3                         0.08077     0.85391    1.6641    1.9960
X4                         0.04563     0.88906    2.1051    2.4996
X5                         0.01049     0.92420    2.5687    3.0193
X6                         −0.02466    0.95934    3.0627    3.5662
where Fi is the distribution of a beta random variable on interval [ai, bi] with shape
parameters qi, ri, for i = 1, …, n. Five intervals are considered for the range of X
using the trial values defined by Eq. (15.10) in Sect. 15.2.2.1, assuming δ = 1/10
and m = 200. Added to the beginning of this collection of intervals is the correct
range, [a, b] = [0, 1], so that a total of n = 6 candidate models are considered. The values
for all model parameters are listed in Table 15.1. Candidate model X1 has the correct
range; we therefore refer to X1 as the correct model for X. The ranges of candidate
models X2 , . . . , X6 depend on the available data and form a monotone increasing
sequence of intervals.
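A beta translation process of this kind can be sampled by mapping a correlated Gaussian sample through Φ and then through the inverse beta CDF. A sketch with an assumed exponential correlation function and illustrative parameter values (not the calibrated ones of Table 15.1):

```python
import numpy as np
from scipy.stats import norm, beta

def beta_translation_sample(t, rho, a, b, q, r, rng):
    # One sample path of X(t) = F^{-1}(Phi(G(t))): G is a zero-mean,
    # unit-variance Gaussian process with correlation function rho, and
    # F is the beta distribution on [a, b] with shape parameters q, r.
    R = rho(np.abs(t[:, None] - t[None, :]))  # correlation matrix
    g = rng.multivariate_normal(np.zeros(len(t)), R)
    u = norm.cdf(g)                           # uniform image of G
    return a + (b - a) * beta.ppf(u, q, r)    # values land in [a, b]

# Illustrative correlation function and parameters.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 100)
x = beta_translation_sample(t, lambda d: np.exp(-d), 0.0, 1.0, 2.9, 3.9, rng)
```

Because the memoryless transformation is monotone, the sample inherits the dependence of the underlying Gaussian process while its marginal is exactly the beta distribution on [a, b].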
The likelihoods and model probabilities, defined by Eqs. (15.15) and (15.18),
respectively, are illustrated in Fig. 15.2 for each candidate model in C . Note that
Fig. 15.2 Log-likelihoods (a) and model probabilities (b) for each candidate model in C as a function of sample length, m. Taken from [13] © Elsevier Science Ltd (2009)
the natural log of the likelihood is shown in Fig. 15.2a for clarity. Both results are
shown as a function of the length m of the available sample. For example, m =
100 and m = 500 correspond to the first 100Δt = 5 s and the first 500Δt = 25 s,
respectively, of x(t) illustrated in Fig. 15.1. Three important observations can be
made. First, for short samples (m < 100), the estimates for the range of X are highly
inaccurate and can change dramatically when the sample size increases. As a result,
the optimal model for short samples, i.e., the model with the greatest probability, can
be any Xi ∈ C. Second, the log-likelihood of candidate model X2 is very small when
compared to the log-likelihoods of the other candidate models for X. The values
computed are out of the range of Fig. 15.2a; the corresponding model probability,
p2 , is near zero for all values for m. We therefore conclude that candidate model X2
defined by range [a2 , b2 ] = [min x(t), max x(t)] is a poor choice for all values of m
considered. Third, as the sample length increases, the model with the correct range,
X1 , becomes optimal.
Results are very sensitive to our estimates for the range of X. Overly large or
small values for δ defined by Eq. (15.10) can deliver unsatisfactory results. Consider
Fig. 15.3, which shows the probability p1 of model X1 ∈ C as a function of sample
length, m, and interval size δ. Recall that the range of model X1 is correct, i.e.,
[a1, b1] = [a, b]. For large values of δ, the correct model for X has a very low
probability of being selected since p1 is near zero. Small values of δ can also be
problematic since they can result in very inaccurate estimates for the range of X.
In this case, the image of x(t), denoted by g(t) and defined by Eq. (15.12), can
be highly non-Gaussian and a poor model for X will result. To illustrate, consider
Fig. 15.4a, which shows the image of x(t) assuming the range of X is given by
[min x(t) − ν/1,000, max x(t) + ν/1,000], where ν = (max x(t) − min x(t))/2; this
corresponds to δ = 1/1,000 as defined by Eq. (15.10). A normalized histogram
of g(t) is shown in Fig. 15.4b together with the distribution of a N(0, 1) random
variable. The sample coefficient of kurtosis of g(t) is 4.8. It is clear from
Fig. 15.4a, b that with δ = 1/1,000, image g(t) is far from Gaussian.
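This diagnostic can be reproduced in a few lines: map the data through the assumed beta CDF and the standard normal quantile function, then inspect the kurtosis of the image. The data and shape parameters below are synthetic stand-ins for x(t):

```python
import numpy as np
from scipy.stats import norm, beta, kurtosis

def gaussian_image(x, a, b, q, r):
    # Image in the spirit of Eq. (15.12): g = Phi^{-1}(F(x)), with F the
    # beta CDF on the assumed range [a, b] and shape parameters q, r.
    u = beta.cdf((x - a) / (b - a), q, r)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)  # guard against infinite quantiles
    return norm.ppf(u)

# Synthetic check: with the correct range and shape parameters, the
# image is Gaussian, so its coefficient of kurtosis should be near 3.
rng = np.random.default_rng(2)
x = rng.beta(3.0, 4.0, size=2000)  # true range [0, 1]
g = gaussian_image(x, 0.0, 1.0, 3.0, 4.0)
k = kurtosis(g, fisher=False)      # Pearson convention: 3 for a Gaussian
```

A badly mis-specified range distorts the CDF mapping and shows up as excess kurtosis of g(t), exactly the symptom reported above for δ = 1/1,000.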
Fig. 15.3 Probability p1 of model X1 ∈ C as a function of sample length m and interval size parameter δ. Contours of p1 are shown in panel (b). Taken from [13] © Elsevier Science Ltd (2009)
Fig. 15.4 The image of x(t), panel (a), assuming range [min x(t) − ν/1,000, max x(t) + ν/1,000], and the corresponding normalized histogram of g(t), panel (b). Taken from [13] © Elsevier Science Ltd (2009)
Fig. 15.6 Log-likelihoods (a) and model probabilities (b) for each candidate model in C as a function of sample length m. Taken from [13] © Elsevier Science Ltd (2009)
The log-likelihoods and model probabilities, defined by Eqs. (15.15) and (15.18),
respectively, are illustrated in Fig. 15.6 for each candidate model in C . For short
samples (m < 250), the length of the available sample is of the same order as the
estimated correlation length of X. In this case, highly inaccurate estimates for θj
can occur, and the optimal model, i.e., the model with the greatest probability, can
be any Xj ∈ C. It is therefore critical in this case to consider covariance models that
are sufficiently flexible to describe a broad range of dependence structures. As m
grows large, the length of the sample is much longer than the estimated correlation
length of X, and accurate estimates for θj, j = 1, …, 4, are possible. The model
with the correct covariance function, i.e., model X1, becomes optimal in this case.
There is no requirement that the correct covariance function be a member of C ; it is
included in the example to demonstrate that, assuming a large enough sample size,
the correct model will be selected if available.
Fig. 15.7 Model probabilities for each candidate model in C obtained by a sample of length: (a) m = 200, and (b) m = 1,000. The probability of the optimal model is shaded. Taken from [13] © Elsevier Science Ltd (2009)
We next consider the general case where the parameters defining the range and shape
of the marginal distribution of X, as well as the covariance function of its Gaussian
image, are unknown. The candidate models for this case are defined by Eq. (15.17)
with n = 5 and n
= 4; we therefore consider n n
= 20 candidate models for X.
Parameters = 0.1 and = 1, defined by Eqs. (15.10) and (15.13), respectively,
are used for calculations. Figure 15.7a,b illustrate the model probabilities defined
by Eq. (15.18) assuming a sample of length m = 200 and m = 1, 000, respectively.
As indicated by the shaded bars, models X5,1 and X3,1 have the greatest probability
for m = 200 and m = 1, 000, respectively, and are therefore optimal.
The model probabilities illustrated in Fig. 15.7 define the optimal model for X
without regard to any specific purpose. Suppose now that we are interested in models
for X that provide accurate estimates of the following properties:

W = max_{0 ≤ t ≤ τ} X(t)   and   Z = (1/τ) ∫₀^τ X(u) du,    (15.25)
where fW |Xi, j and fZ|Xi, j denote the PDFs of random variables W and Z, respectively,
with X replaced by Xi, j .
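Given Monte Carlo paths of a candidate model, W and Z of Eq. (15.25) are estimated per path by a max and a rectangle-rule time average. The paths below are synthetic placeholders, not samples of any Xi,j:

```python
import numpy as np

def extremes_and_averages(paths):
    # Per-path Monte Carlo estimates of W = max_t X(t) and of the time
    # average Z of Eq. (15.25), for an (n_paths, n_steps) array.
    W = paths.max(axis=1)
    Z = paths.mean(axis=1)  # rectangle-rule (1/tau) * integral of X
    return W, Z

# Synthetic placeholder paths in [0, 1]; 250 paths as in the chapter.
rng = np.random.default_rng(3)
paths = rng.beta(2.9, 3.9, size=(250, 200))
W, Z = extremes_and_averages(paths)
```

Histograms of the resulting W and Z samples then serve as the estimates of fW|Xi,j and fZ|Xi,j used to compare candidates.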
Fig. 15.8 Performance of each candidate model in C for sample size m = 200 (panels (a) and (c)) and m = 1,000 (panels (b) and (d)). Taken from [13] © Elsevier Science Ltd (2009)
Values for W and Z are illustrated in Fig. 15.8 for short (m = 200) and long
(m = 1, 000) samples. Results from 250 independent Monte Carlo samples of each
Xi, j C were used to estimate PDFs fW |Xi, j and fZ|Xi, j . Sampling was also used
to estimate densities fW and fZ ; an approximation for fW can be obtained by the
crossing theory for translation processes [14]. Four important observations can be
made. First, the model with the best performance under either measure defined
by Eq. (15.26) is not necessarily optimal as defined by the model probabilities
illustrated in Fig. 15.7. Hence, the use of the model probabilities defined by
Eq. (15.18) to rank the candidate models in C does not necessarily yield the most
accurate models for the extremes and/or time-average of X; this is especially true
when available data is limited. Second, metric W is sensitive to estimates for the
range of X for m = 200 and m = 1, 000, meaning that accurate estimates for [a, b]
are essential to achieve accurate estimates of the extreme of X. Third, metric Z
is sensitive to estimates for the covariance function for short samples. Fourth, the
sensitivity of W and Z decreases with increasing sample size, m. For example,
with m = 1, 000 any model in C can be used to provide accurate representations
for the time average of X. These observations provide motivation for the decision-
theoretic method for model selection presented in Sect. 15.3.2 and used in the
following example.
where Y (t) and (t) denote the vertical displacement and in-plane rotation of the
center of the internal electronics component, respectively. By Eq. (15.27), W and
Z are random variables that correspond to the maximum vertical acceleration and
Fig. 15.10 Measurements of foam density at 8 locations taken from five nominally-identical specimens. Taken from [13] © Elsevier Science Ltd (2009)
maximum in-plane rotation of the internal component due to the applied load z(t).
Herein, we assume that the survivability of the internal component illustrated in
Fig. 15.9 directly depends on output properties W and/or Z, i.e., system failure
occurs if one or both of these properties exceed known thresholds.
Let D ⊂ R² denote the domain of the epoxy foam illustrated in Fig. 15.9, and let
v = (v1, v2)ᵀ be a vector in D. We assume the following random field model for the
density of the epoxy foam,
Fig. 15.11 Two samples of candidate random field model X1 ∈ C. Taken from [13] © Elsevier Science Ltd (2009)
where each Xi (v) is a beta translation process defined by Eq. (15.28) with marginal
CDF Fi equal to the distribution of a beta random variable on interval [ai , bi ] with
shape parameters qi , ri . For this study, we consider n = 10 candidate models for
X, let [a1, b1] = [min_{k,l} {xk | sl}, max_{k,l} {xk | sl}], and follow the procedure discussed in
Sect. 15.2.2.1 to provide trial values for the range of X with δ = 1/2. However,
the available data are extremely limited, so that estimates for the shape parameters
provided by standard maximum likelihood estimators can be extremely unreliable.
We therefore set qi = ri = 1, i = 1, …, n, to reflect this. Two independent samples
of candidate random field X1(v) ∈ C are illustrated in Fig. 15.11.
As discussed in Sect. 15.3.1, the model probabilities defined by Eq. (15.18)
provide one means to rank the members of C. Let pi|sl denote the probability
associated with candidate model Xi ∈ C when calibrated to the data provided by
specimen sl; pi|sl is the solution to Eq. (15.18) with data gi replaced by gi|sl
and ρj = ρ defined by Eq. (15.29). Values for pi|sl are illustrated in Fig. 15.12,
demonstrating that, because of the limited data set, all candidate models have nearly
identical ranking, regardless of the specimen we choose. Assuming each specimen
to be equally likely, pi = (1/5) Σ_{l=1}^{5} pi|sl ≈ 1/10 is the unconditional probability that
candidate model Xi ∈ C is the best available model for X in the collection. Hence,
the classical method for model selection discussed in Sect. 15.3.1 cannot distinguish
among the candidate models for X and, therefore, cannot provide an optimal model
for foam density in this case.
Fig. 15.12 Conditional model probabilities for each candidate model in C. Taken from [13] © Elsevier Science Ltd (2009)
The output properties of interest, namely the maximum vertical acceleration and
maximum in-plane rotation of the internal component defined by Eq. (15.27), are
sensitive to the model we use for X. This sensitivity can be observed in Fig. 15.13,
where histograms of 250 samples of W defined by Eq. (15.27) are illustrated in
Fig. 15.13a, b assuming the random field X is represented by candidate models X1 ∈
C and X10 ∈ C, respectively. Similar histograms of 250 independent samples of Z
are illustrated in Fig. 15.13c,d. In general, as the assumed range for X increases, so
does the range of outputs W and Z. The finite element code Salinas [26], which has
the capability to accept as input realizations of the foam density, was used for all
calculations.
Recall that internal component survivability depends on its maximum accel-
eration and rotation during the shock event. The results illustrated in Fig. 15.13
demonstrate that it is therefore critical for this application to achieve estimates for
the range of X that are optimal in some sense. Let

γW(Xi) = P(W ≤ w | X = Xi),
γZ(Xi) = P(Z ≤ z | X = Xi),
γW,Z(Xi) = P(W ≤ w ∩ Z ≤ z | X = Xi)    (15.31)

denote three metrics of system performance. Metrics γW(Xi) and γZ(Xi) correspond
to the probabilities that the component responses defined by Eq. (15.27) indepen-
dently do not exceed thresholds w and z, assuming candidate model Xi ∈ C for the
foam density; metric γW,Z(Xi) is the joint probability that both outputs do not exceed
Fig. 15.13 Sensitivity of internal component response to model for foam density: histograms of 250 samples of (a) W|(X = X1), (b) W|(X = X10), (c) Z|(X = X1), and (d) Z|(X = X10). Taken from [13] © Elsevier Science Ltd (2009)
their respective thresholds. Our objective is to select a model for random field X such
that we achieve accurate but conservative estimates for the three metrics defined by
Eq. (15.31).
In Sect. 15.4.1, we applied the classical method for model selection to choose
optimal models for a stochastic process under limited information. For this applica-
tion, we instead apply the decision-theoretic method for model selection discussed
in Sect. 15.3.2, which is useful when considering high risk systems under limited
information, where a fair understanding of the consequences of unsatisfactory
system behavior is available. These consequences are quantified via the following
utility function
&
1 [ (Xi ) (X j )]2 if (Xi ) (X j ),
U(Xi , X j ) = (15.32)
2 [ (Xi ) (X j )]2 if (Xi ) > (X j ),
where (Xi ) is one of the metrics defined by Eq. (15.31). For example, if we assume
internal component survival depends only on its acceleration response, = W is
appropriate. For the general case where survival depends on both acceleration and
rotation response, we use = W,Z . By Eqs. (15.31) and (15.32), non-conservative
predictions of component survival are penalized, and overly conservative predic-
tions are also subject to penalty. For 2
1 , non-conservative predictions of
component survival are heavily penalized with respect to conservative predictions;
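A sketch of this asymmetric utility combined with the expected-utility ranking of Eq. (15.22); the survival probabilities, model probabilities, and penalty weights below are hypothetical:

```python
import numpy as np

def utility(gamma_i, gamma_j, alpha1=1.0, alpha2=100.0):
    # Asymmetric loss in the spirit of Eq. (15.32): with alpha2 >> alpha1,
    # non-conservative predictions (gamma_i > gamma_j) are penalized heavily.
    d2 = (gamma_i - gamma_j) ** 2
    return np.where(gamma_i <= gamma_j, alpha1 * d2, alpha2 * d2)

# Hypothetical survival probabilities and model probabilities.
gamma = np.array([0.90, 0.95, 0.99])
p = np.array([0.3, 0.4, 0.3])
U = utility(gamma[:, None], gamma[None, :])  # U[i, j] = U(X_i, X_j)
u = U @ p                                    # expected utility, Eq. (15.22)
best = int(np.argmin(u))                     # Eq. (15.21)
```

With these numbers the candidate reporting the lowest survival probability wins, reflecting the heavy penalty placed on non-conservative predictions.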
Fig. 15.14 Model selection for foam density based on metric γW: (a) log of expected utility, u, as a function of acceleration threshold w, and (b) optimal model, X*, as a function of w. Taken from [13] © Elsevier Science Ltd (2009)
Fig. 15.15 Model selection for foam density based on metric γZ: (a) log of expected utility, u, as a function of rotation angle threshold z, and (b) optimal model, X*, as a function of z. Taken from [13] © Elsevier Science Ltd (2009)
15.5 Conclusions
Methods were developed for finding optimal models for random functions under
limited information. The available information consists of: (a) one or more samples
of the function and (b) knowledge that the function takes values in a bounded set
whose boundary may or may not be known. In the latter case, the boundary
of the set must be estimated from the available samples. Numerical examples were
presented to illustrate the utility of the proposed approach for model selection,
including optimal continuous time stochastic processes for structural reliability,
and optimal random fields for representing material properties for applications in
mechanical engineering.
The class of beta translation processes, a particular type of non-Gaussian stochas-
tic process or random field defined by a memoryless transformation of a Gaussian
process or field with specified second-moment properties, was demonstrated to
be a very useful and flexible model for representing physical quantities that take
values in bounded intervals. In practice, the range of possible values of these
quantities can be unknown and therefore must be estimated, together with other
model parameters, from the available information. This information consisted of
one or more measurements of the quantity under consideration, as well as some
knowledge about its features and/or purpose.
It was shown that the solution of the model selection problem for random
functions with a bounded support differed significantly from that of functions with
unbounded support. These differences depended on the intended purpose for the
model. For example, the performance of the optimal model depended strongly on the
accuracy of the estimated range of this function, particularly when the extremes of
the random function are of interest. The use of the sample minimum and maximum
for the range was clearly inadequate in this case; overly wide estimates for the range
were also problematic. However, when accurate time-averages of the process were
needed, for example, to quantify damage accumulation within a structural system,
the range became less important.
References
1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover Publications, New
York, NY (1972)
2. Andrews, D.F., Herzberg, A.M.: Data: A Collection of Problems and Data from Many Fields
for the Student and Research Worker. Springer, New York, NY (1985)
3. Ang, A., Tang, W.: Probability Concepts in Engineering Planning and Design: Vol. 1 - Basic
Principles. Wiley, New York, NY (1975)
4. Arwade, S.R., Grigoriu, M.: J. Eng. Mech. 130(9), 997–1005 (2004)
5. Bendat, J.S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures, 2nd edn.
Wiley, New York, NY (1986)
6. Chernoff, H., Moses, L.E.: Elementary Decision Theory. Dover Publications, New York, NY
(1959)
7. Field, Jr., R.V.: J. Sound Vib. 311(3–5), 1371–1390 (2008)
8. Field, Jr., R.V., Constantine, P., Boslough, M.: Statistical surrogate models for prediction of
high-consequence climate change. Int. J. Uncertainty Quant. 3(4), 341–355 (2013)
9. Field, Jr., R.V., Epp, D.S.: J. Sensor Actuator A Phys. 134(1), 109–118 (2007)
10. Field, Jr., R.V., Grigoriu, M.: Probabilist. Eng. Mech. 21(4), 305–316 (2006)
11. Field, Jr., R.V., Grigoriu, M.: J. Sound Vib. 290(3–5), 991–1014 (2006)
12. Field, Jr., R.V., Grigoriu, M.: J. Eng. Mech. 133(7), 780–791 (2007)
13. Field, Jr., R.V., Grigoriu, M.: Probabilist. Eng. Mech. 24(3), 331–342 (2009)
14. Grigoriu, M.: J. Eng. Mech. 110(4), 610–620 (1984)
15. Grigoriu, M.: Applied Non-Gaussian Processes. PTR Prentice-Hall, Englewood Cliffs, NJ
(1995)
16. Grigoriu, M.: Stochastic Calculus: Applications in Science and Engineering. Birkhäuser,
Boston, MA (2002)
17. Grigoriu, M., Garboczi, E., Kafali, C.: Powder Tech. 166(3), 123–138 (2006)
18. Grigoriu, M., Veneziano, D., Cornell, C.A.: J. Eng. Mech. 105(4), 585–596 (1979)
19. Gurley, K., Kareem, A.: Meccanica 33(3), 309–317 (1998)
20. Iyengar, R.N., Jaiswal, O.R.: Probabilist. Eng. Mech. 8(3–4), 281–287 (1993)
21. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, Vol. 2, 2nd
edn. Wiley, New York, NY (1995)
22. Keeping, E.S.: Introduction to Statistical Inference. Dover Publications, New York, NY (1995)
23. Nour, A., Slimani, A., Laouami, N., Afra, H.: Soil Dynam. Earthquake Eng. 23(5), 331–348
(2003)
24. Ochi, M.K.: Probabilist. Eng. Mech. 1(1), 28–39 (1986)
25. Perea, R.W., Kohn, S.D.: Road profiler data analysis and correlation, Research Report 92-30,
The Pennsylvania Department of Transportation and the Federal Highway Administration
(1994)
26. Reese, G., Bhardwaj, M., Segalman, D., Alvin, K., Driessen, B.: Salinas: User's notes.
Technical Report SAND99-2801, Sandia National Laboratories (1999)
27. Robert, C.P.: The Bayesian Choice, 2nd edn. Springer Texts in Statistics. Springer, New York
(2001)
28. Sauvageot, H.: J. Appl. Meteorol. 33(11), 1255–1262 (1994)
29. Stathopoulos, T.: J. Struct. Div. 106(ST5), 973–990 (1980)
30. Zellner, A.: An Introduction to Bayesian Inference in Econometrics. Wiley, New York, NY
(1971)
Chapter 16
From Model-Based to Data-Driven Filter Design
Abstract This paper investigates the filter design problem for linear time-invariant
dynamic systems when no mathematical model is available, but a set of initial
experiments can be performed in which the variable to be estimated is also measured.
Two-step and direct approaches are considered within both a stochastic and a
deterministic framework, and optimal or suboptimal solutions are reviewed.
16.1 Introduction
This paper examines different approaches for designing a filter that, operating on the
measured output of a linear time-invariant (LTI) dynamic system, gives a (possibly
optimal in some sense) estimate of some variable of interest. In particular, a discrete-
time, finite-dimensional, LTI dynamic system S is considered, for example described
in state-space form as:
M. Milanese
Modelway srl, Via Livorno 60, Torino, I-10144, Italy
e-mail: mario.milanese@modelway.it; mario.milanese@polito.it
F. Ruiz
Pontificia Universidad Javeriana, Jefe Seccion de Control Automatico, Departamento de
Electronica, Carrera 7 No. 40-62, Bogota D.C., Colombia
e-mail: ruizf@javeriana.edu.co
M. Taragna ()
Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli
Abruzzi 24, I-10129, Torino, Italy
e-mail: michele.taragna@polito.it
where for a given time instant t ∈ N: xt ∈ X ⊆ R^nx is the unknown system state;
yt ∈ Y ⊆ R^ny is the known system output, measured by noisy sensors; zt ∈ Z ⊆ R^nz
is the variable to be estimated; wt ∈ R^nw is an unknown multivariate signal that
collects all the process disturbances and measurement noises affecting the system;
A, B, C1, C2 and D are constant matrices of suitable finite dimensions.
Such an estimation problem has been extensively investigated in the literature
over the last five decades, since it plays a crucial role in control systems and signal
processing, and optimal solutions have been derived under different assumptions
on noise and optimality criteria. In the beginning, a probabilistic description of
disturbances and noises has been adopted and a stochastic approach has been
followed, leading to the standard Kalman filtering where the estimation error
variance is minimized, see, e.g., [1, 7, 12, 14]. Later, assuming that the noise and
the variable to be estimated belong to normed spaces, the subject of worst-case or
Set Membership filtering has been treated and three well-established approaches
have been developed, aiming to minimize the worst-case gain from the noise signal
to the estimation error, measured in p and q -norm, respectively: the H filtering,
in the case p = q = 2, see, e.g., [5, 911, 23, 28, 35]; the generalized H2 filtering, in
the case p = 2 and q = , see, e.g., [8, 33]; the 1 filtering, in the case p = q = ,
see, e.g., [22, 3032].
The previously mentioned methodologies relied initially on the exact knowledge
of the process S under consideration and later were extended to uncertain systems,
thus leading to the so-called robust filtering techniques. These works substantially
follow a model-based approach, assuming systems with state-space descriptions,
possibly affected by norm-bounded or polytopic uncertainties in the system matrices
or uncertainties described by integral quadratic constraints, see, e.g., [6, 34] and the
references therein.
However, in most practical situations, the system S is not completely known and
a data-driven approach to the filter design problem is usually obtained by adopting
a two-step procedure:
1. An approximate model of the process S is identified from prior information
(physical laws,. . . ), making use of a sufficiently informative noisy dataset;
2. On the basis of the identified model, a filter is designed whose output is an
estimate of the variable of interest.
Note that, except for peculiar cases (i.e., C2 actually known), the first step typically
requires measurements y and z̃ = z + v collected during an initial experiment
of finite length N, where v is an additive noise on z.
This procedure is in general far from being optimal, because only an approximate
model can be identified from measured data and a filter which is optimal for the
identified model may display a very large estimation error when applied to the
actual system. Evaluating how this approximation source affects the filter estimation
accuracy is a largely open problem. Note that robust filtering does not provide at
present an efficient solution to the problem. Indeed, the design of a robust filter
is based on the knowledge of an uncertainty model, e.g., a nominal model plus a
description of the parametric uncertainty. However, identifying reliable uncertainty
models from experimental data is still an open problem.
To overcome all these issues in such general situations, an alternative data-
driven approach has been proposed in [15, 16, 19, 20, 24–26], where the initial data
y and z̃ needed in step 1 of the two-step procedure are used for the direct design
of the filter, thus avoiding the model identification. Indeed, the desired solution
of the filtering problem is a causal filter mapping the measurements yτ, τ ≤ t, into
an output estimate ẑt of zt that enjoys some optimality property of the estimation
error zt − ẑt. Thus, the idea is to directly design a filter from the available data,
via identification of a filter that, using yt as input, gives an output ẑt which
minimizes the desired criterion for evaluating the estimation error zt − ẑt. Such a
filter is indicated as a Direct Virtual Sensor (DVS) and allows one to overcome critical
problems such as the model uncertainty. In [15, 24], the advantages of such a
direct design approach with respect to the two-step procedure have been put in
evidence within a parametric-statistical framework, assuming stochastic noises, a
parametric filter structure and the minimization of the estimation error variance as
optimality criterion, and using the Prediction Error (PE) method [13] to design
the DVS. It has been proven that even in the most favorable situations, e.g., if
the system S is stable and no modeling errors occur, the filter designed through
a two-step procedure performs no better than the DVS. Moreover, in the case of
no modeling errors, the DVS is optimal even if S is unstable, while this is not
guaranteed by the two-step filter. More importantly, in the presence of modeling
errors, the DVS, although not optimal, is the minimum variance estimator among
the selected filter class. A similar result is not assured by the two-step design,
whose performance deterioration caused by modeling errors may be significantly
larger. In [19, 20], the direct design approach has been investigated within a linear
Set Membership framework, assuming norm-bounded disturbances and noises. For
classes of filters with exponentially decaying impulse response, approximating sets
that guarantee to contain all the solutions to the optimal filtering problem are
determined, considering experimental data only. A linear almost-optimal DVS is
designed, i.e., with guaranteed estimation errors not greater than twice the minimum
achievable ones. The previously listed advantages of the direct design approach
over the two-step procedure are still preserved in this case, since the two-step filter
design does not guarantee similar optimality properties, due to the discrepancies
between the actual process and the identified model. A complete design procedure
is developed, allowing the user to tune the filter parameters, in order to achieve
the desired estimation performances in terms of worst-case error. In [25, 26],
the direct approach has been developed within a Set Membership framework for
LPV (linear parameter-varying) systems. In [16], the direct design approach has
been investigated in a nonlinear Set Membership setting, considering as optimality
criterion the minimization of the worst-case estimation error. Under some prior
assumptions, directly designed filters in nonlinear regression form are derived that
not only give bounded estimation errors, but are also almost-optimal. Some practical
DVS applications in the automotive field can be found in [2, 4, 1719, 27].
In this section, a statistical framework is considered and the two-step and the direct
approaches to the data-driven filtering problem are described and compared.
Basic Assumptions:
• The matrices A, B, C1, C2 and D defining the system S are not known.
• The couple (A, C1) is observable.
• A finite dataset {yt, z̃t = zt + vt, t = 0, 1, …, N − 1} is available.
• The noises wt and vt are unmeasured stochastic variables.
• Let Ē rt ≐ lim_{N→∞} (1/N) Σ_{t=0}^{N−1} E rt, where E is the mean value (or expectation)
operator, and it is assumed that the limit exists whenever the symbol Ē is used.
Under these assumptions, the filter design problem can be formulated as follows.
Statistical Filtering Problem: Design a causal filter that, operating on yτ, τ ≤ t,
gives an estimate ẑt of zt having minimum estimation error variance Ē ‖zt − ẑt‖²
for any t.
The two-step design consists in model identification from data and filter design
from the identified model. In the model identification step, a parametric model
structure

M(θM) : θM ∈ ΘM

is selected, defining the model set M ≐ {M(θM) : θM ∈ ΘM}, where (yt, z̃t) are
considered as the outputs of the autonomous model M(θM). The Prediction Error
(PE) method, see, e.g., [13], is used to identify M̂, obtained as

M̂ = M(θ̂M),
θ̂M = arg min_{θM ∈ ΘM} JN(θM),
JN(θM) = (1/(2N)) Σ_{t=0}^{N−1} ‖et(θM)‖²,

where et(θM) = (yt, z̃t) − (yt^M, zt^M) is the prediction error of the model M(θM),
(yt^M, zt^M) being the prediction given by M(θM), and ‖·‖ the ℓ2 (Euclidean) norm.
In the filter design step, a (steady-state) minimum variance filter

K̂ ≐ K(θ̂M)

is designed to estimate zt on the basis of the identified model M̂ = M(θ̂M). The filter
K̂ gives as output an estimate zt^K̂ of zt, using measurements yτ, τ ≤ t, thus providing a
Model-based Virtual Sensor (MVS). Note that the filter structure cannot be chosen
in the two-step procedure, since it depends on the structure of the identified model.
The alternative approach to the data-driven filtering problem is based on the
direct identification of the filter from data. In such an approach, a linear parametric
structure (e.g., ARX, OE, ARMAX, state-space)
V (V ) : V V
is selected for the filter to be designed, where V RnV and nV is the number of
parameters of the filter structure. This filter structure defines the following filter set:
.
V = {V (V ) : V V }
where yt is considered as the input of the filter V (V ) and zt as its output. Thus, V
is obtained by means of the PE method as
V = V (V )
V = arg min JN (V )
V V
N1
JN (V ) = 2N1
et (V )
2
t=0
where $e_t(\theta_V) = z_t - z_t^V$ is the estimation error of the filter $V(\theta_V)$, which has input $y_t$ and output $z_t^V$. The filter $\hat V = V(\hat\theta_V)$ can be used as a virtual sensor to generate an estimate $\hat z_t^V$ of $z_t$ from measurements $y^\tau$, $\tau \le t$. Thus, $\hat V$ is a Direct Virtual Sensor (DVS), designed directly from data without identifying a model of the system $S$; differently from the two-step procedure, its structure can be freely chosen.
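A minimal numerical sketch of the DVS idea for a scalar FIR filter structure follows. The function name `dvs_fir` and the data are illustrative assumptions; numpy's least squares stands in for the PE minimization of $J_N(\theta_V)$:

```python
import numpy as np

def dvs_fir(y, z, m):
    """Identify a Direct Virtual Sensor as a scalar FIR filter of order m.

    Least squares on the (y_t, z_t) data plays the role of the PE criterion
    J_N = (1/2N) * sum ||e_t||^2 with e_t = z_t - z_t^V.
    """
    N = len(y)
    # Regressor matrix: row t holds y_t, y_{t-1}, ..., y_{t-m} (zeros before t = 0)
    Phi = np.zeros((N, m + 1))
    for k in range(m + 1):
        Phi[k:, k] = y[:N - k]
    h, *_ = np.linalg.lstsq(Phi, z, rcond=None)
    return h, Phi @ h          # impulse response and the estimates z_t^V
```

Note that no model of the system generating $(y_t, z_t)$ is ever built: the filter coefficients are fitted directly on the joint data.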
Result 1 [15, 24]. The following results hold with probability (w.p.) 1 as $N \to \infty$:
(i) $\hat V = \arg\min_{V(\theta_V) \in \mathcal{V}} E\|z_t - z_t^V\|^2$.
(ii) If $\hat K \in \mathcal{V}$, then $E\|z_t - z_t^{\hat V}\|^2 \le E\|z_t - z_t^{\hat K}\|^2$.
(iii) If $S = \mathcal{M}(\theta_M^o) \in \mathcal{M}$ and $K(\theta_M^o) \in \mathcal{V}$, then $\hat V$ is a minimum variance filter among all linear causal filters mapping $y^\tau \to \hat z_t$, $\tau \le t$.
In the deterministic setting, the aim is to minimize the worst-case gain from $w$ to the estimation error, measured in some $\ell_q$-norm. To this purpose, let us recall the definition of $\ell_p$-norm for a one-sided discrete-time signal $s = \{s_0, s_1, \ldots\}$, $s_t \in \mathbb{R}^{n_s}$ and $p \in \mathbb{N}$:

$$\|s\|_p \doteq \left(\sum_{t=0}^{\infty}\sum_{i=1}^{n_s} |s_t^i|^p\right)^{1/p}, \quad 1 \le p < \infty; \qquad \|s\|_\infty \doteq \max_{t \ge 0}\ \max_{i=1,\ldots,n_s} |s_t^i|$$

The noise $w$ is assumed to be bounded as $\|w\|_p \le \varepsilon$. It has to be pointed out that, without loss of generality, $\|w\|_p \le 1$ can be assumed if the matrices $B$ and $D$ of the dynamic system $S$ are properly scaled. For this reason, $\varepsilon = 1$ will be considered in the sequel.
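The $\ell_p$-norm above can be computed directly for finite-length signals; a short sketch (the function name `lp_norm` is illustrative):

```python
import numpy as np

def lp_norm(s, p):
    """l_p norm of a one-sided, finite-length signal s with samples s_t in R^ns.

    s is given as an array of shape (N, ns); p is an integer >= 1 or np.inf.
    """
    s = np.asarray(s, dtype=float)
    if p == np.inf:
        # largest component magnitude over all times t and components i
        return float(np.max(np.abs(s)))
    return float(np.sum(np.abs(s) ** p) ** (1.0 / p))
```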
In order to allow the user to suitably design the filter, the following subsets $\mathcal{K} \subseteq \mathcal{H}$ of filters with bounded and exponentially decaying impulse response are considered:

$$\mathcal{K}(L,\rho,\tau) = \left\{F \in \mathcal{H} :\ \|h_t^F\|_2 \le L\ \forall t \in [0,\tau],\quad \|h_t^F\|_2 \le L\rho^{\,t-\tau}\ \forall t > \tau,\ t \in \mathbb{N}\right\}$$

$$\mathcal{K}^m(L,\rho,\tau) = \left\{F \in \mathcal{K}(L,\rho,\tau) :\ h_t^F = 0\ \forall t > m\right\} \subseteq \mathcal{K}(L,\rho,\tau)$$

280 M. Milanese et al.

where the triplet $(L,\rho,\tau)$ is a design parameter, with $L > 0$, $0 < \rho < 1$, $\tau \in \mathbb{N}$, the order $m \in \mathbb{N}$ is such that $m \ge \tau$, and $h^F = \{h_0^F, h_1^F, \ldots\}$ is the filter impulse response, with $h_t^F \in \mathbb{R}^{n_y}$. These sets represent a filter design choice, allowing the user to bound the effects of the fast dynamics of the filter, occurring in the first instants of the impulse response, and to impose an exponentially decaying bound on the slow dynamics due to the dominating poles.
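Membership in these classes is a simple pointwise check on the impulse response; a sketch (the function name `in_filter_class` is illustrative):

```python
import numpy as np

def in_filter_class(h, L, rho, tau, m=None, tol=1e-12):
    """Check whether an impulse response h belongs to K(L, rho, tau):
    ||h_t||_2 <= L for t = 0..tau, and ||h_t||_2 <= L * rho**(t - tau) for t > tau.
    If m is given, also enforce the FIR condition h_t = 0 for t > m (class K^m)."""
    h = np.asarray(h, dtype=float).reshape(len(h), -1)   # shape (T, ny)
    for t, ht in enumerate(h):
        n = np.linalg.norm(ht)
        bound = L if t <= tau else L * rho ** (t - tau)
        if n > bound + tol:
            return False
        if m is not None and t > m and n > tol:
            return False
    return True
```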
Within the above context, the following filtering problem can be defined.

Optimal Worst-Case Filtering Problem: Given scalars $L > 0$, $0 < \rho < 1$ and integers $\tau$, $p$ and $q$, design an optimal filter $F_o \in \mathcal{K}(L,\rho,\tau)$ such that the estimate

$$\hat z^{F_o} = F_o(y)$$

achieves a finite worst-case gain $\gamma^o$ when applied to the data $\tilde Y$.
The worst-case gain $\gamma^o$ is unknown, since the system matrices are not known. In order to choose a suitable value of $\gamma$, a hypothesis validation problem is initially solved, where one asks if, for a given filter class $\mathcal{K}(L,\rho,\tau)$ and finite data length $N$, the assumption on $\gamma$ leads to a non-empty FFS. However, the only test that can actually be performed is whether such an assumption is invalidated by the available data, checking if no filter consistent with the overall information exists. This leads to the following definition.
Definition 1. Let the dataset $(\tilde Y, \tilde Z)$, the scalars $L$, $\rho$, $\gamma$ and the integers $\tau$, $p$, $q$ be given. The prior assumption on $\gamma$ is considered validated if $\mathrm{FFS} \ne \emptyset$.
The fact that the prior assumption is validated by the present dataset $(\tilde Y, \tilde Z)$ does not exclude that it may be invalidated by future data. Indeed, values of $\gamma$ much lower than the true $\gamma^o$ may be validated if the actual disturbance realization occurred during the initial experiment is far from the worst-case one. The following result is a validation test that allows one to determine an estimate of $\gamma^o$.
Result 2 [20]. Let the dataset $(\tilde Y, \tilde Z)$, the scalars $L$, $\rho$ and the integers $\tau$, $p$, $q$, $m$ be given, with $m \ge \tau$. Let $\hat\gamma$ be the solution to the optimization problem:

$$\hat\gamma = \min_{F \in \mathcal{K}^m(L,\rho,\tau)} \left\|\tilde Z - T_y H^F\right\|_q \qquad (16.1)$$

Then:
(i) the prior assumption on $\gamma$ is validated for any $\gamma \ge \hat\gamma + \delta$, with $\delta = n_y L\,\dfrac{\rho^{\,m+1}}{1-\rho}\,\|\tilde Y\|_q$;
(ii) the prior assumption on $\gamma$ is invalidated for any $\gamma < \hat\gamma$.
Note that the gap between the two conditions (i) and (ii) can be made as small as desired by increasing $m$, and becomes negligible when $n_y L\,\frac{\rho^{\,m+1}}{1-\rho}\,\|\tilde Y\|_q \ll \hat\gamma$. Indeed, no gap exists just for $m = N - 1$.
Result 2 can be used for choosing the filter class $\mathcal{K}^m(L,\rho,\tau)$. In fact, if the gap between the conditions (i) and (ii) is negligible, the function

$$\hat\gamma(L,\rho,\tau) = \min_{F \in \mathcal{K}^m(L,\rho,\tau)} \left\|\tilde Z - T_y H^F\right\|_q$$

gives, for each triplet $(L,\rho,\tau)$, a reliable estimate of the smallest worst-case gain achievable with that design choice.
The quantity $r(\mathrm{FFS})$ is the so-called radius of information and represents the smallest worst-case filtering error that can be guaranteed on the basis of the overall information and the design choice.
It is well known that any central filter $F_C$, defined as the Chebyshev center of the FFS, is an optimal filter for any $\ell_q$-norm, see, e.g., [29]. However, methods for finding the Chebyshev center of the FFS either are unknown or, when known, are computationally demanding. This motivated the interest in deriving algorithms having lower complexity, at the expense of some degradation in the accuracy of the designed filter. A good compromise is provided by the following family of filters.
Definition 2. A filter $F_I$ is interpolatory if $F_I \in \mathrm{FFS}$.

Any interpolatory filter is consistent with the overall information. An important well-known property of these filters is that $E(F_I) \le 2\,r(\mathrm{FFS})$ for any $\ell_q$-norm, see, e.g., [29]. Due to such a property, these filters are called 2-optimal or almost-optimal.
Let us then consider the finite impulse response (FIR) filter $\hat F$ whose coefficients are given by the following algorithm:

$$H^{\hat F} = \arg\min_{H^F \in \mathbb{R}^{(m+1)n_y}} \left\|\tilde Z - T_y H^F\right\|_q \qquad (16.2)$$

such that

$$|h_{F,i}^t| \le L, \quad t = 0,\ldots,\tau;\ i = 1,\ldots,n_y$$
$$|h_{F,i}^t| \le L\rho^{\,t-\tau}, \quad t = \tau+1,\ldots,m;\ i = 1,\ldots,n_y$$

where $h_{F,i}^t \in \mathbb{R}$ denotes the $i$-th row element of $h_t^F$. Note that $\hat F$ is the filter class element that provides $\hat\gamma$ as solution to the optimization problem (16.1). The following result shows the properties of $\hat F$ that hold for any $\ell_p$ and $\ell_q$-norms.
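For $q = \infty$ and scalar signals ($n_y = 1$), problem (16.2) is a linear program: the peak residual becomes an auxiliary variable and the impulse-response constraints become box bounds. A sketch under these assumptions (the function name `design_wc_fir` and the data are illustrative), using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def design_wc_fir(y, z, m, L, rho, tau):
    """Solve a scalar instance of problem (16.2) with q = infinity:
    minimize ||Z - Ty @ H||_inf over FIR coefficients H in K^m(L, rho, tau).
    LP variables: H_0..H_m and the peak residual g; returns (H, g)."""
    N = len(y)
    Ty = np.zeros((N, m + 1))            # Toeplitz matrix of past measurements
    for k in range(m + 1):
        Ty[k:, k] = y[:N - k]
    # |z_t - (Ty H)_t| <= g   ->   Ty H - g <= z   and   -Ty H - g <= -z
    A_ub = np.block([[Ty, -np.ones((N, 1))],
                     [-Ty, -np.ones((N, 1))]])
    b_ub = np.concatenate([z, -z])
    c = np.zeros(m + 2)
    c[-1] = 1.0                           # minimize the peak residual g
    bounds = [(-b, b) for b in
              [L if t <= tau else L * rho ** (t - tau) for t in range(m + 1)]]
    bounds.append((0.0, None))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1]
```

The returned peak residual is the value $\hat\gamma$ of (16.1) for this class, and the coefficient vector defines the FIR filter $\hat F$.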
Result 3 [20].
(i) If $\gamma \ge \hat\gamma + \delta$, then the filter $\hat F$ is interpolatory and almost-optimal.
(ii) If in addition the system $S$ is asymptotically stable, then the estimate $\hat z^{\hat F} = \hat F(y)$ guarantees

$$\sup_{\|w\|_p = 1} \|z - \hat z^{\hat F}\|_q \le \hat\gamma + \delta + E(\hat F)\,\|S^y\|_{q,p} \le \hat\gamma + \delta + 2\,r(\mathrm{FFS})\,\|S^y\|_{q,p}$$
16.4 Conclusions
This paper investigates the problem of filter design for LTI dynamic systems, both in the stochastic setting, where the aim is the minimization of the estimation error variance, and in the deterministic setting, where the aim is the minimization of the worst-case gain from the process and measurement noises to the estimation error, measured in $\ell_p$ and $\ell_q$-norms, respectively.
Most of the existing literature focuses on problems that can be denoted as filter design from known systems, indicating that the filter is designed assuming knowledge of the equations describing the system generating the signals to be filtered. In this paper, a more general filtering problem is considered, denoted as filter design from data, where the system is not known but a set of measured data is available. Clearly, a solution to this problem can be obtained by identifying a model from measurements, whose equations are then used by any of the available methods for filter design from known systems. However, this two-step procedure is in general not optimal. Indeed, finding optimal solutions to the filter design from data problem appears not to be an easy task, but this paper reviews methodologies that allow one to design, directly from measurements, filters which are shown to be optimal (in the stochastic framework) or almost-optimal (in the deterministic framework), i.e., with a worst-case filtering error not greater than twice the minimal one. Moreover, results are given for the evaluation of the resulting worst-case filtering error, while such an evaluation appears to be a largely open problem when a two-step design procedure is adopted.
References
1. Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ (1979)
2. Borodani, P.: Virtual sensors: an original approach for dynamic variables estimation in
automotive control systems. In: Proc. of the 9th International Symposium on Advanced Vehicle
Control (AVEC), Kobe, Japan (2008)
3. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge
(2004)
4. Canale, M., Fagiano, L., Ruiz, F., Signorile, M.: A study on the use of virtual sensors in vehicle
control. In: Proc. of the 47th IEEE Conference on Decision and Control and European Control
Conference, Cancun, Mexico (2008)
5. Colaneri, P., Ferrante, A.: IEEE Trans. Automat. Contr. 47(12), 2108 (2002)
6. Duan, Z., Zhang, J., Zhang, C., Mosca, E.: Automatica 42(11), 1919 (2006)
7. Gelb, A.: Applied Optimal Estimation. MIT Press, Cambridge, MA (1974)
8. Grigoriadis, K.M., Watson Jr., J.T.: IEEE Trans. Aero. Electron. Syst. 33(4), 1326 (1997)
9. Grimble, M.J., El Sayed, A.: IEEE Trans. Acoust. Speech Signal Process. 38(7), 1092 (1990)
10. Hassibi, B., Sayed, A.H., Kailath, T.: IEEE Trans. Automat. Contr. 41(1), 18 (1996)
11. Hassibi, B., Sayed, A.H., Kailath, T.: IEEE Trans. Signal Process. 44(2), 267 (1996)
12. Jazwinski, A.H.: Stochastic Processes and Filtering Theory, Mathematics in Science and
Engineering, vol. 64. Academic Press, New York (1970)
13. Ljung, L.: System Identification: Theory for the User, 2nd edn. Prentice Hall PTR, Upper
Saddle River, NJ (1999)
14. Maybeck, P.S.: Stochastic Models, Estimation, and Control, Mathematics in Science and
Engineering, vol. 141. Academic Press, New York (1979)
15. Milanese, M., Novara, C., Hsu, K., Poolla, K.: Filter design from data: direct vs. two-step
approaches. In: Proc. of the American Control Conference, Minneapolis, MN, 4466 (2006)
16. Milanese, M., Novara, C., Hsu, K., Poolla, K.: Automatica 45(10), 2350 (2009)
17. Milanese, M., Regruto, D., Fortina, A.: Direct virtual sensor (DVS) design in vehicle sideslip
angle estimation. In: Proc. of the American Control Conference, New York, 3654 (2007)
18. Milanese, M., Ruiz, F., Taragna, M.: Linear virtual sensors for vertical dynamics of vehicles
with controlled suspensions. In: Proc. of the 9th European Control Conference ECC2007, Kos,
Greece, 1257 (2007)
19. Milanese, M., Ruiz, F., Taragna, M.: Virtual sensors for linear dynamic systems: structure
and identification. In: 3rd International IEEE Scientific Conference on Physics and Control
(PhysCon 2007), Potsdam, Germany (2007)
20. Milanese, M., Ruiz, F., Taragna, M.: Automatica 46(11), 1773 (2010)
21. Milanese, M., Vicino, A.: Automatica 27(6), 997 (1991)
22. Nagpal, K., Abedor, J., Poolla, K.: IEEE Trans. Automat. Contr. 41(1), 43 (1996)
23. Nagpal, K.M., Khargonekar, P.P.: IEEE Trans. Automat. Contr. 36, 152 (1991)
24. Novara, C., Milanese, M., Bitar, E., Poolla, K.: Int. J. Robust Nonlinear Contr. 22(16), 1853
(2012)
25. Novara, C., Ruiz, F., Milanese, M.: Direct design of optimal filters from data. In: Proc. of the
17th IFAC Triennial World Congress, Seoul, Korea, 462 (2008)
26. Ruiz, F., Novara, C., Milanese, M.: Syst. Contr. Lett. 59(1), 1 (2010)
27. Ruiz, F., Taragna, M., Milanese, M.: Direct data-driven filter design for automotive controlled
suspensions. In: Proc. of the 10th European Control Conference ECC2009 Budapest, Hungary,
4416 (2009)
28. Shaked, U., Theodor, Y.: $H_\infty$-optimal estimation: a tutorial. In: Proc. of the IEEE Conference on Decision and Control, vol. 2, 2278 (1992)
29. Traub, J.F., Wasilkowski, G.W., Wozniakowski, H.: Information-Based Complexity. Academic
Press, New York (1988)
30. Vincent, T., Abedor, J., Nagpal, K., Khargonekar, P.P.: Discrete-time estimators with guaran-
teed peak-to-peak performance. In: Proc. of the 13th IFAC Triennial World Congress, vol. J,
San Francisco, CA, 43 (1996)
31. Voulgaris, P.G.: Automatica 31(3), 489 (1995)
32. Voulgaris, P.G.: IEEE Trans. Automat. Contr. 41(9), 1392 (1995)
33. Watson, Jr., J.T., Grigoriadis, K.M.: Optimal unbiased filtering via linear matrix inequalities.
In: Proc. of the American Control Conference, vol. 5, 2825 (1997)
34. Xie, L., Lu, L., Zhang, D., Zhang, H.: Automatica 40(5), 873 (2004)
35. Yaesh, I., Shaked, U.: IEEE Trans. Automat. Contr. 36(11), 1264 (1991)