
SYNCHRONY AND PATTERN

IN NATURE

A project report for partial fulfillment of requirements for degree of


Integrated Master of Science
in
Physics

Submitted by

Abinash Chakraborty
410PH5003

Supervised by
Dr. Biplab Ganguli
Department of Physics and Astronomy
National Institute of Technology Rourkela

Abstract
The phenomenon of synchronization in chaotic systems has been investigated. We have taken
distinct chaotic systems and established regions of synchronization. Following this, we have
tried to put the phenomenon to some use. Our primary aim has been to study the phenomenon
from as many interdisciplinary aspects as possible. The interdisciplinary nature of the subject
is exploited to the fullest: this project touches on concepts from information theory, neural
networks, mathematical physics, communication theory, etc. Since the problems are distinct,
each part of the project has a detailed description of its own. We have worked on a network of
coupled Mathieu oscillators and, in the latter part, on denoising a chaotic communication system.

Certificate
This is to certify that the work done in the thesis entitled "Synchrony and Pattern in Nature",
submitted by Abinash Chakraborty towards partial fulfillment of the requirements for the award of
the Master of Science in Physics degree by National Institute of Technology Rourkela, is a record of
work done by him under my supervision. The results have been partly reproduced and then extended.

Dr. Biplab Ganguli


Department of Physics and Astronomy
National Institute of Technology Rourkela
India

Acknowledgment
This thesis is an assortment of work done over a period of two years. Over the span of four
semesters, my focus has shifted many times; however, the broader topic has remained the same.
This would not have been possible without my supervisor's flexibility. Dr. Biplab Ganguli not
only remained tolerant of my slow progress in the first phase of the project, but was also
flexible in accepting a complete change of topics after the first year.
Deconstruction and digestion of existing literature have been major parts of my work throughout
these semesters. It would have been terribly easy to get lost in the plethora of papers and
technical jargon, were it not for Mr. Satyabrata Satpathy. His experience in the field and
familiarity with the literature have helped me immensely.
There have been days, and in some instances weeks, when progress in the project was stunted.
I would not have gotten through those periods of stilted growth without moral support from my
parents and loved ones. I hope that this thesis is free of errors and finds some use in the
future. I have tried to keep typos to a minimum; however, they have a nasty habit of hiding in
plain sight.

Abinash Chakraborty
410ph5003
National Institute of Technology Rourkela
India

Contents

1 Introduction
2 Baseline Concepts

I Synchronization in Network of Oscillators 11
3 Introduction to Part I 12
4 The Sub-Systems 13
  4.1 Parametric Excitation 13
  4.2 Mathieu Equation 14
  4.3 Characterizing The Dynamics 14
    4.3.1 Lyapunov Exponents 14
    4.3.2 Poincaré Map 15
    4.3.3 Power Spectral Density 16
  4.4 Conclusion 17
5 Synchronization In N Coupled Mathieu Oscillators 19
  5.1 The System 19
  5.2 Information Flow and Synchronization 21
  5.3 The Information 21
6 Results 23
  6.1 Case 1 23
  6.2 Case 2 23
  6.3 Suppression Of Chaos 24
7 MATLAB Codes 27
  7.1 Code for Finding Lyapunov Exponents 27
  7.2 Code for Simulating a system consisting of N Mathieu Oscillators 28
  7.3 Code for Finding the Poincaré Section 29
  7.4 The equation of Surface 30
  7.5 For finding joint PMF 30
  7.6 Code for Finding Mutual Information 31
8 Summary to Part I 33

II Denoising chaotic communication channel 34
9 Introduction to Part II 35
10 Communication Using Chaos 36
  10.1 Chaotic Masking 36
    10.1.1 The Algorithm 36
11 Implementation of Chaotic Masking 38
  11.1 Sending Audio Signals 38
12 Removing Additive Noise 41
  12.1 The Scheme 41
  12.2 Minimization Function 42
13 Denoising Using Adaptive Filters 44
  13.1 Artificial Neural Networks 44
  13.2 Results 45
    13.2.1 Additive Noise 45
    13.2.2 Multiplicative Noise 46
14 Summary to Part II 48

III Appendix - Simulink Models 50

Chapter 1
Introduction
This thesis is the result of work done over 4 semesters, spread over two academic sessions. The
work broadly relates to the subject of dynamical systems and their control. The philosophy of
this thesis and the key outline of the work are highlighted here.
Dynamical systems encompass several diverse topics. This thesis almost exclusively deals with
the study of synchronization in chaotic systems. At the start of the thesis, my supervisor
and I made a crucial decision regarding the choice of problems to work on. Instead of taking one
non-trivial problem and trying to solve it in two years, we divided the work in these 4 semesters
into studying multiple not-so-trivial systems. The guiding principle has been to take dynamical
systems and find regimes or settings in which they show chaos. After identification of
chaos, the next step has been to control the chaos [53] and put it to some use. Controlling
chaos is an interesting and fertile field of study.

Structure
For conceptual purposes, the thesis is divided into two parts:
1 Synchronization in a network of oscillators
2 Denoising a chaotic communication channel
Each part denotes a major change in ideas or problems.
The first part deals with a system of coupled Mathieu oscillators. The objective was to find
the value of coupling for which the oscillators synchronize. To do this, aspects of information
theory have been used.
The second part is about a chaotic communication channel. The primary purpose was to
eliminate noise from a communication channel that works on chaos synchronization. The
phenomenon of chaos can be used to mask information. The work in this part of the thesis has
been the implementation and verification of work done earlier [71].
The two parts are independent works of their own. However, they draw heavily from a
common vein of concepts. These concepts have been outlined in a chapter named Baseline
Concepts, so there is no redundant inclusion of literature in the parts.

Methodology
The biological anthropologist Loren Eiseley used to say that there were two kinds of scientists:
big-bone hunters and small-bone hunters. Extension of this to research projects would mean
to either do bottom-up science or top-down science. In the anthropological sense, this work is
among the smallest of bones that are there for collection.
Each of the chapters starts from a place where a lot of work has already been done. The whole
synchronization community is moving towards setting up a unified framework which would
explain all the different types of synchronization as variations of a general type. While reading
for the project, I have come across papers which venture into far and wide concepts, like
multiscale time series analysis, information-theoretic analogies, neural networks, etc., to
establish a connection between all the different sorts of synchronization which are seen.
This work has been an application of known theoretical principles. At the end of it, I have
learned immensely about synchronization and controlling dynamical systems. The choice of
problems has been guided largely by finding areas where I could learn from diverse fields.
A major part of my work has been to solve systems of ODEs and simulate processes in
Simulink. The solvers used for the differential equations are those that come built-in with
MATLAB. All the plots and comparisons have been done in institute-licensed MATLAB.
The illustrations have been done with Inkscape.


Chapter 2
Baseline Concepts
The broadest description that I can give for my project is a study on dynamical systems.
Consequently, there is a set of underlying concepts that recur throughout the problems dealt
with in this thesis. Putting these baseline concepts in one chapter helps us refrain from
digressions in the following chapters. The first section is about dynamical systems and chaos.
Following this, the idea of synchronization is explained. The chapter ends with the rather
special topic of chaotic masking, which is used as the model for the problem on denoising a
chaotic communication system.

Dynamical Systems
The quintessential question is: how does a system evolve? There are two keywords in that
question: "evolve" and "system". A system, for the purpose of this thesis, is a system of
ordinary differential equations (ODEs). Evolution of the system then means to solve these
ODEs. A dynamical system (DS) can be defined more abstractly, which we will do shortly.
However, thinking of it as a system of ODEs will not create any problems in most situations.

Definition
A DS can be defined abstractly as [34]:
Let X be a metric space. Let ∅ ≠ I ⊆ ℝ, such that
  0 ∈ I
  t, s ∈ I ⇒ t + s = s + t ∈ I
  t, s, r ∈ I ⇒ (t + s) + r = t + (s + r)
So, I is an additive semigroup of real numbers. In physical systems, this I turns out to be
time. Hence, we have used t as a representative element of the set.
We define a dynamical system as a mapping φ : X × I → X. φ has the following properties:
(a) φ(x, 0) = x for all x ∈ X
(b) φ(φ(x, t), s) = φ(x, t + s) for all x ∈ X and t, s ∈ I
Chaos
Chaos has some antecedents in the works of Poincaré; however, it became hugely more
popular after Lorenz published his 1963 paper titled "Deterministic Nonperiodic Flow". There
are many ways to characterize chaos; however, we will focus strongly on Lyapunov exponents.

Lyapunov Exponents
We define the LEs of a system as follows [68]:
Suppose we start with a discrete dynamical system
  x_{n+1} = f(x_n)
The LE is then defined as
  λ(x_0) = lim_{N→∞} lim_{ε→0} (1/N) log |f^N(x_0 + ε) − f^N(x_0)| / |ε|
         = lim_{N→∞} (1/N) log |df^N(x_0)/dx_0|
In other words, two trajectories in phase space with a separation of |δx_0| at time t = 0 will
have a separation |δx_t| at time t, given by
  |δx_t| = e^{λt} |δx_0|
Since the rate of divergence of trajectories can be different in different directions in phase
space, there are as many LEs as the dimensionality of the phase space. LEs quantify how
sensitive the system is to initial conditions: they measure the growth rates of generic
perturbations. The interest is usually in the maximal Lyapunov exponent (MLE). If the phase
space is compact, a positive MLE means that long-term prediction of the system is impossible.
The system is then said to be sensitively dependent on initial conditions.
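For a concrete illustration (my choice of system, not one used in the thesis), the time-average form of the definition above can be evaluated for the logistic map f(x) = 4x(1 − x), whose MLE is known to be ln 2:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def logistic_deriv(x):
    return 4.0 * (1.0 - 2.0 * x)

def mle(x0=0.2, transient=1000, n=200000):
    """Estimate the maximal Lyapunov exponent as the orbit average of log|f'(x)|."""
    x = x0
    for _ in range(transient):        # let the orbit settle onto the attractor
        x = logistic(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(logistic_deriv(x)))
        x = logistic(x)
    return total / n

lam = mle()   # approaches ln 2 ≈ 0.693 for the fully chaotic logistic map
```

The chain rule turns log |df^N/dx_0| into the sum of log |f'(x_n)| along the orbit, which is what the loop accumulates.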
A few properties of Lyapunov exponents are as follows [63]:
1. The LEs are dynamical invariants, i.e. they do not depend on the metric used or on the
choice of variables for the system. So, they can characterise the dynamics of a system.
2. A positive MLE implies chaos only when the phase space is bounded.
3. The sum of all Lyapunov exponents gives a measure of the contraction of volume in the
whole of phase space. For a conservative system the sum of the Lyapunov exponents is zero,
whereas for a dissipative system it is negative.
4. For a bounded system, if the flow doesn't end at a point, then at least one of the Lyapunov
exponents is zero.
5. Pesin's formula relates the LEs to the Kolmogorov-Sinai entropy. The sum of the positive
Lyapunov exponents gives an upper bound for the KS entropy, i.e.
  H_KS ≤ Σ_{j : λ_j > 0} λ_j
6. The multiplicative inverse of the MLE is called the Lyapunov time. It defines the
characteristic e-folding time, i.e. the time taken for the trajectory to diverge by a factor
of e. It gives a time for which predictions hold value. For chaotic orbits it is finite, and
for a periodic orbit, expectedly, it is infinite.
Having a positive MLE makes errors, which are small to begin with, explode over time.
Take for example the case of predicting events in the solar system. The assumption that
the Earth revolves around the Sun in a perfect circle is certainly inaccurate. However, the
inaccuracy does not have any effect on the timescale over which we make predictions for our
solar system. Imagine, however, a system with an MLE of, say, 2, in which time is measured
on the scale of seconds. The Lyapunov time is then 0.5 s, so any prediction much beyond
0.5 s would not be accurate.
A chaotic system, despite its random-like behavior, is not a random system. The underlying
dynamics of a chaotic DS are completely deterministic, with no random variables in the
dynamical equations. And yet, the trajectory followed by a phase point of a chaotic DS looks
like that of a stochastic system.
In statistical mechanics, there are 3N degrees of freedom. With an enormously large N, the
degrees of freedom are practically infinite. The underlying dynamics, i.e. quantum dynamics,
is deterministic. The statistical nature of such a system originates from ignoring degrees
of freedom. Lorenz's paper discovered chaos in as few as 3 dimensions, and that is what makes
chaos interesting: one does not need a complicated system to display chaos. The systems in
this thesis have 3 dimensions.

On Signals and Oscillators


The term signal is used interchangeably with trajectories in the phase space. The reason for
doing this is obvious: the signal that we refer to is a continuous sequence of numbers. In that
sense it might be considered equivalent to a continuous analogue signal. However, our
treatment is like that of a digital signal. The formal route to convert the analogue signal into
a digital one is via the symbolic dynamics of the phase space.
By doing so we would be able to place each signal in particular regions of the phase space
and assign 1s or 0s when the signal enters such and such region. However, we do not do this
either. Our approach is kept simple by assuming that this can be done.
Since the results are numerical, there is always a resolution below which our signals take
discrete values. This, however, is not the true digital nature of the signal.
In the first section of this chapter, we defined a DS. For the purposes of this thesis, we
can assume that a system is a system of ODEs. But why have these been called oscillators in
so many places? The reason lies in the confinement of the phase space.
A positive MLE is an indicator of chaos only if the phase space is finite. For example, if one
solves the equation
  dy/dt = 2y
the solution is of course
  y(t) = y_0 exp(2t)
Two trajectories starting with very closely placed y_0's separate at an exponential rate. This
is clearly not chaos, because the equation is linear, with a well-behaved solution. The problem
is the phase space: it is unbounded. So, the definitions that we have given so far are used for
bounded phase spaces only.
An oscillator is thus a system of ODEs which has a confined phase space.
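The point about the linear equation can be checked directly: the separation of two nearby solutions grows exactly as e^{2t}, even though nothing chaotic is happening (an illustrative sketch; the numbers are arbitrary):

```python
import math

lam = 2.0                    # dy/dt = 2*y  =>  y(t) = y0 * exp(2*t)
y0_a, y0_b = 1.0, 1.0 + 1e-9
t = 3.0
sep0 = abs(y0_b - y0_a)
sep_t = abs(y0_b * math.exp(lam * t) - y0_a * math.exp(lam * t))
# the separation grows exactly as exp(lam * t), yet the motion is not chaotic:
# the phase space is unbounded, so the "stretching" is never folded back
```

The missing ingredient is the folding that a confined phase space forces on the stretched trajectories; stretching alone is not chaos.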

Power Spectrum for a Chaotic System


One of the recurring themes of the problems solved in this thesis is the use of the
broadband nature of chaotic signals.
Periodic signals have sharp peaks at the frequencies at which they oscillate. However, for
chaotic oscillators, the power spectrum is diffuse: there are no sharp peaks at all. For a
random signal, for example, the power spectrum looks as if it has been enveloped by a
Gaussian function.

The explanation for this is fairly simple. By the Central Limit Theorem, sums of many
independent contributions tend to a normal distribution in the N → ∞ limit. And the power
spectrum of a signal is obtained from the Fourier transform (FT) of the signal. Since the
Gaussian function is invariant under the FT, we have the Gaussian envelope.
The trajectory of a chaotic system is random-like; in the long run, it settles into a noise-like
signal. Hence, the power spectrum of a chaotic signal does not have sharp peaks. This is
interesting because the absence of sharp peaks is generally associated with the presence of
noise. The difference between this broadband nature and the one found in generic noise is the
underlying dynamics. While we have no idea how noise is generated, chaotic signals are
generated by deterministic processes, with explicit expressions for the differential equations [41].
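The contrast between a sharp-line spectrum and a broadband one can be demonstrated with a small discrete Fourier transform sketch (illustrative Python with a plain O(n²) DFT to stay self-contained; the signal choices are mine, not from the thesis):

```python
import cmath, math, random

def power_spectrum(sig):
    """Plain DFT power spectrum over bins 1 .. n/2 (DC bin excluded)."""
    n = len(sig)
    return [abs(sum(sig[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) ** 2
            for k in range(1, n // 2 + 1)]

def peak_power_fraction(sig):
    """Fraction of total power carried by the single strongest frequency bin."""
    p = power_spectrum(sig)
    return max(p) / sum(p)

n, fs = 400, 100.0
periodic = [math.sin(2 * math.pi * 5.0 * j / fs) for j in range(n)]  # 20 whole cycles
random.seed(1)
noisy = [random.gauss(0.0, 1.0) for _ in range(n)]
# the sine concentrates essentially all its power in a single bin,
# while the noise spreads its power across the whole band
```

A chaotic signal would behave like the second case: power smeared over a broad band despite deterministic dynamics.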

Synchronization and Chaos


In the previous sections we outlined what a dynamical system is. We discussed the phenomenon
of chaos and described one of the ways to quantitatively characterize it. Beyond all these,
the phenomenon that is the crux of this work is synchronization in chaotic systems.
Definition: Synchronization means agreement or correlation in time of different processes, i.e.
systems. The definition rather refers to the process in which two or more DS are coupled, or
are somehow forced, to achieve a synchronous behavior.
The first systems in which synchronization was observed were periodic systems. It started
in the 17th century, when Huygens studied the behavior of weakly coupled pendulum clocks.
Later, synchronization was identified in a variety of systems from various disciplines. The
emergence of synchronized behavior in periodic systems isn't much of a surprise. The discovery
of synchronization [57] in chaotic systems is, however, very counter-intuitive.
Synchronization in dynamical systems is intimately related to the study of dynamical order.
Synchronization is possible in a pair of DS, or in a cluster of dynamical systems, where
identical or nearly identical elements show a stable correlation in time. There are multiple
ways in which synchronization can come about in dynamical systems. It can be induced by
external agents, or it can emerge due to the interactions of various individual elements.
The evolution of chaotic systems is very sensitive to the initial conditions. So, even if you
start two identical systems (viz. systems with the same parameter settings and the same
dynamical equations) from very similar initial conditions, they will have very different
evolutions in time. So, chaotic systems are in many ways opposite to the idea of
synchronization. And yet, there is synchronization in chaotic systems.
In the context of dynamical systems, chaos synchronization is a process wherein two chaotic
systems adjust a given property of their motion to a common behavior, owing to coupling or
forcing. The degree to which two or more dynamical systems agree can vary from being
completely identical to a locking of phases. The processes by which synchronization between
two systems can be achieved are divided into two broad categories: unidirectional coupling
and bidirectional coupling.
In the first case, synchronization is achieved by creating a larger system comprising
subsystems that conceptualize a drive-response configuration. One of the systems evolves
freely, and the other is driven by the freely evolving system. The response system is thus
slaved to the dynamics of the driver system. This type of configuration results in external
synchronization. In the later part of the thesis we shall deal with chaos communication, where
the synchronization follows a driver-response model.
In bidirectional coupling, both of the subsystems of the larger system influence each other.
Bidirectional coupling is usually more stable and hence more prevalent in Nature than
unidirectional coupling. Bidirectional coupling leads to rhythm adjustments and the emergence
of patterns in many physiological systems. The bidirectional coupling adjusts the systems to a
stable manifold [80].
Now that we have discussed the differences in synchronization based on the type of coupling,
the next step is to describe the differences in synchronization based on the types of
subsystems involved. The subsystems can be identical, or they can be different from each
other. Since all the work in this thesis is on identical synchronization, we highlight the
different types of synchronization in identical systems.

Synchronization in Identical Systems


Synchronization in identical systems comes about through the equality of dynamical variables
over time. This type of synchronization is often called complete synchronization (CS) or
identical synchronization. We shall describe in detail the Pecora and Carroll (PC) scheme.
PC Configuration
We start with a general dynamical system whose evolution in time is given by the following
equation
  ż = F(z)    (2.1)
Here, z = (z_1, z_2, . . . , z_n) is the state vector of the n-dimensional dynamical system.
F is a vector function that encapsulates the dynamical equations of the system. Since F is
supposed to describe a physical dynamical system, it maps n-dimensional real space to
n-dimensional real space.
The PC scheme of synchronization consists of considering systems which are decomposable into
the following form
  u̇ = f(u, v)    (2.2)
  v̇ = g(u, v)    (2.3)
  ẇ = h(u, w)    (2.4)
The first two equations are the driver of the system while the third one is the response. The
vectors u, v and w have dimensions such that
  dim(u) + dim(v) + dim(w) = n
The driving signals u and v are chaotic in the PC configuration. Synchronization is defined
as the identity between the trajectories of the response system w and of one replica w′, which
has identical dynamics (ẇ′ = h(u, w′)). Complete synchronization (CS) implies that the
response system is asymptotically stable. To check for stability one looks at the error
dynamics. The error is defined as
  e(t) = ‖w − w′‖    (2.5)
The synchronization is stable if all the Lyapunov exponents of the response system under the
action of the driver are negative [58].
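This stability condition can be illustrated with the classic x-driven Lorenz decomposition, a standard PC example (an illustrative Python sketch with a simple Euler integrator, not the thesis's MATLAB code):

```python
# Drive: full Lorenz system. Response: a copy of the (y, z) subsystem
# driven by the drive's x signal (a standard PC decomposition).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 30000

x, y, z = 1.0, 1.0, 1.0          # drive state
yr, zr = -5.0, 5.0               # response replica, started far away

err0 = abs(y - yr) + abs(z - zr)
for _ in range(steps):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    dyr = x * (rho - zr) - yr    # same equations, driven by the same x
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr
err = abs(y - yr) + abs(z - zr)
# err ends up many orders of magnitude below err0, because all conditional
# Lyapunov exponents of the x-driven (y, z) subsystem are negative
```

The error dynamics here are linear with a matrix whose trace is negative and determinant positive for any x, so the replica collapses onto the response regardless of the chaotic drive.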


Part I
Synchronization in Network of
Oscillators


Chapter 3
Introduction to Part I
Networks of oscillators are commonplace in the sciences. Before we go on to work on the
problem of coupled oscillators, the significance of the word oscillator needs an illustrated
explanation. Unless we understand the ubiquity and scope of the term, it is difficult to
justify dedicating the entirety of a semester to it.
The term oscillator has been used in its broadest sense here. It is a dynamical system which
has a confined phase space. Confinement of the phase space is essential to define chaos for
the system. Without a confined phase space, even a linear differential equation can give
an exponentially diverging solution. Hence, oscillators are interesting systems in which to
study the phenomenon.
The other point to make here is the relation to periodicity. The simplest specimen of an
oscillator is the simple harmonic oscillator. It is, of course, periodic. Qualifying an
oscillator as chaotic might seem an oxymoron at first. However, chaotic systems have time
periods too: it takes an infinite amount of time for a chaotic oscillator to return to a
previous state.
The Mathieu oscillator is part of the broader class of parametric oscillators. Parametric
oscillators are driven harmonic oscillators in which the driving frequency and the amplitude
of the driving force are variable factors. Mathieu oscillators have their origins in the
Mathieu functions. Mathieu functions are solutions to the following Mathieu differential
equation
  d²y/dx² + [a − 2q cos(2x)] y = 0    (3.1)
The Mathieu equation is a special case of Hill's equation
  d²y/dx² + f(x) y = 0    (3.2)
where f(x) is a periodic function.


The Mathieu oscillator is a physical approximation to many oscillatory systems. Our objectives
were the following:
(1) Identify regions of chaos in a single Mathieu oscillator
(2) Mutually couple the Mathieu oscillators and find the coupling strength for which the
oscillators synchronize
The first objective was fairly easy. We have used Lyapunov exponents to quantify chaos; the
variable parameter in the Mathieu oscillator is the a in eqn. (4.2). The second objective
was more involved. We have used mutual information to identify regions of synchronization.
The idea is that when there is synchronization, the flow of information between two
oscillators is maximal. Hence, we have to find the coupling strength for which the mutual
information shows peaks. The idea is explained in greater detail in the chapter on the study
of coupled oscillators.

Chapter 4
The Sub-Systems
In the first section, we are going to illustrate the Mathieu oscillator in greater detail.
Following this, we will investigate for what values of the parameters of the system we see
chaos. Then we will confirm those values and go on to the next chapter to investigate the
coupling of N such oscillators.

4.1 Parametric Excitation

The usage of the word oscillator has been ubiquitous right from the beginning. As has been
mentioned earlier, the reason for this is the abundance of physical systems which can be
modeled using differential equations that have periodic or oscillatory solutions. As far as
this report is concerned, an oscillator is a differential equation whose solution is bounded.
Being periodic is not a necessity; as a matter of fact, we shall be looking for systems which
are aperiodic.
The Mathieu oscillator can be thought of as an extension of the simple pendulum:
  ẍ = −ω₀² sin x    (4.1)
In parametric excitation we vary a parameter. As far as eqn. (4.1) is concerned, ω₀ is the
parameter. We say so because, while the structure of the equation determines the behavior,
the natural frequency is what makes one simple harmonic oscillator differ from another. We
shall parametrically excite this natural frequency, i.e. change this parameter as a function
of time. So, we get the following:
  ẍ = −ω₀² κ(t) sin(x)
Depending on κ(t) we have different types of oscillators. We choose
  κ(t) = 1 + a cos(ωt)
So, a is the amplitude of excitation while ω is the frequency of excitation. Why have we
chosen this form? Well, the Mathieu equation has the canonical form [46]:
  ÿ + (b − q cos(t)) y = 0
Why have we chosen the Mathieu equation? We shall briefly mention the places where Mathieu
equations are used. But continuing forward, let's complete the equation which we will study
by adding in dissipation. So, the equation which shall now be referred to as the equation of
the Mathieu oscillator is:
  ẍ + 2βẋ + ω₀² (1 + a cos(ωt)) sin(x) = 0    (4.2)

4.2 Mathieu Equation

The scope of the project is to establish phenomena in physical systems by analysing simplistic
mathematical models. Eqn. (4.2) can be an approximation to many physical systems. Mathieu
equations are highly likely to come up in applications, because periodic variation of
parameters is a frequent phenomenon. The results derived from the study of this equation,
thus, are highly likely to be useful.
Eqn. (4.2) describes a non-autonomous system. However, we can apply the mathematical
tools for an autonomous system by defining a variable z = ωt, i.e. a phase. So, eqn. (4.2)
can be written as a system of three variables:
  ẋ = y    (4.3)
  ẏ = −2βy − ω₀² (1 + a cos(z)) sin(x)    (4.4)
  ż = ω    (4.5)
Physically, the system is still non-autonomous, but mathematically we can go about treating
z as just another variable. We have 3 dynamical variables, so by the Poincaré–Bendixson
theorem chaos is possible. The parameter space of the system is
  (β, ω₀, a, ω)
Out of these 4, only a and ω control the excitation. All the analysis will be done with the
following values of the parameters:
  ω₀ = 1, β = 0.5, ω = 2
The above choice is motivated by making the equations look less daunting. So, now comes the
question of whether the so-called Mathieu oscillator is chaotic or not. It should be
mentioned here that testing for chaos can be a bit tricky. The system is certainly
complicated, with the presence of sine and cosine functions and their product. But these
functions are all well behaved, so most of the results will probably hold in their simplest
forms.
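The system (4.3)-(4.5) is easy to integrate numerically; the following is an illustrative Python sketch (the thesis's actual codes are in MATLAB) using a plain Euler step with the parameter values above, checking that dissipation keeps the velocity bounded:

```python
import math

# parameter values used in the thesis: omega0 = 1, beta = 0.5, omega = 2, a = 3.5
beta, w0, w, a = 0.5, 1.0, 2.0, 3.5
dt, steps = 0.001, 200000

x, y, z = 0.0, 0.2, 2.0          # initial point used for the chaotic runs
max_speed = 0.0
for _ in range(steps):
    dx = y
    dy = -2.0 * beta * y - w0 ** 2 * (1.0 + a * math.cos(z)) * math.sin(x)
    dz = w
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    max_speed = max(max_speed, abs(y))
# the dissipative term -2*beta*y keeps the velocity bounded, which is what
# qualifies the system as an "oscillator" in the sense used in this thesis
```

A fixed-step Euler scheme is crude compared to MATLAB's built-in solvers, but it suffices to demonstrate boundedness of the velocity.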

4.3 Characterizing The Dynamics

We shall use two techniques to conclude whether there is chaos. The first one is the obvious
one: Lyapunov exponents. After this, we shall find the Poincaré map.
Positive Lyapunov exponents will indicate sensitive dependence on initial conditions (SIC).
While some take this as the definition of chaos, we shall consider an extra condition, i.e.
dense filling of the phase space. This ergodicity will contrast with the fact that we have
deterministic dynamics, while the behavior turns out to be similar to a random process. Dense
filling in the Poincaré map will also be an indication of chaos.
Simply solving the equations, we get an attractor. We see that eventually the phase space gets
densely filled up. See the transition from fig. (4.1) to fig. (4.2).

4.3.1 Lyapunov Exponents

What Lyapunov exponents are and how they are calculated has been discussed at some length
in the previous chapter. Using the code, which is based on the algorithm given in [77], we
have plotted a graph of the maximal LE against a. The values of a for which we have a
positive maximal LE are those for which we have SIC.

Figure 4.1: Trajectory starting at [0, 0.2, 2] with a = 3.5. Time of simulation is 200 units.

Figure 4.2: Trajectory starting at [0, 0.2, 2] with a = 3.5. Time of simulation is 50000 units.

However, since we have seen that the trajectory remains bounded in a region of phase space,
a positive LE strongly indicates chaos. The graph in fig. (4.3) shows the variation. There
is an intermittency between chaotic and non-chaotic regimes; this is what is called
intermittent chaos. So, fig. (4.1) is in the chaotic region, whereas for a = 2.5 we get a
limit cycle, as shown in fig. (4.4).

4.3.2 Poincaré Map

In a Poincaré map we take a section through the attractor and observe the intersections of
the trajectories with this section.
Chaotic Regime
We shall take the section at x = 1, with a taken to be 3.5. The result is shown in fig. (4.5).
Observe how there is a dense filling of the Poincaré map. That is another indication that
we have chaos.
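The mechanics of taking such a section can be sketched as follows (illustrative Python, not the thesis's MATLAB code in chapter 7): detect one-sided crossings of the section plane in a sampled trajectory and interpolate. Tested on a circular orbit, where the section must collapse to a single point:

```python
import math

def poincare_section(xs, ys, c=0.0):
    """Collect y-values where the sampled trajectory crosses x = c
    in the decreasing-x direction, using linear interpolation."""
    pts = []
    for i in range(len(xs) - 1):
        if xs[i] > c >= xs[i + 1]:                     # one-sided crossing
            frac = (xs[i] - c) / (xs[i] - xs[i + 1])
            pts.append(ys[i] + frac * (ys[i + 1] - ys[i]))
    return pts

# demo on a circular orbit x = cos t, y = sin t over 10 periods: every
# crossing of x = 0 with x decreasing happens at y = 1 (a single point)
ts = [0.001 * k for k in range(int(10 * 2 * math.pi / 0.001))]
pts = poincare_section([math.cos(t) for t in ts], [math.sin(t) for t in ts])
```

A limit cycle gives a section of isolated points like this; a chaotic attractor gives a densely filled set.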

Figure 4.3: The initial point for all the runs is [0, 0.2, 2]. Each of the LEs is taken after
200 units of run.

Figure 4.4: The initial point for the run is [2, 0.2, 2]. Run for 2000 units of time.
Limit Cycle Regime
We had confidently concluded that we had a limit cycle because the Poincaré map for a = 2.5
indicates so in fig. (4.6): the points converge onto a point.

4.3.3 Power Spectral Density

The final thing we point to is the PSD plot in the chaotic regime. For a = 3.5, the normalized PSD is shown in fig. (4.7). The broad spectrum of the PSD is another indicator of aperiodicity, and indeed the motion is aperiodic.
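A broadband PSD of this kind can be computed directly from a sampled x(t) with an FFT periodogram. The sketch below is illustrative only (RK4 integration stands in for whatever solver was actually used) and mirrors the normalization of fig. (4.7).

```python
import numpy as np

def mathieu_x_series(n=16384, h=0.02, a=3.5, lam=0.5, omega=2.0):
    # Integrate the single oscillator with RK4 and record x(t)
    # after discarding a transient.
    def f(s):
        x, y, z = s
        return np.array([y, -2.0*lam*y - (1.0 + a*np.cos(z))*np.sin(x), omega])
    def step(s):
        k1 = f(s); k2 = f(s + 0.5*h*k1); k3 = f(s + 0.5*h*k2); k4 = f(s + h*k3)
        return s + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    s = np.array([0.0, 0.2, 2.0])
    for _ in range(5000):            # discard the transient
        s = step(s)
    xs = np.empty(n)
    for i in range(n):
        s = step(s)
        xs[i] = s[0]
    return xs

xs = mathieu_x_series()
xs = xs - xs.mean()                       # remove the DC component
spec = np.abs(np.fft.rfft(xs))**2         # periodogram
psd = spec/spec.max()                     # normalized PSD, as in fig. (4.7)
freqs = np.fft.rfftfreq(len(xs), d=0.02)  # frequency axis in cycles/unit time
```

For a chaotic signal the resulting spectrum is spread over a broad band of frequencies rather than concentrated in a few sharp lines.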


Figure 4.5: The initial point for the run is [0,0.2,2]. Run for 50000 units of time. Poincaré section taken at x=1. 1698 intersections.

Figure 4.6: The initial point for the run is [2,0.2,2]. Run for 2000 units of time. Poincaré section taken at x=0.5.

4.4 Conclusion

We conclude the chapter by establishing that eqn. (4.2) is chaotic for a = 3.5. We pick that value because the rest of our calculations will use it. Since our focus is on the study of synchronization in a network of such oscillators, we will not look into the variation of the regimes with respect to the other parameters. In the next chapter, we work through the study of synchronization in a network of N coupled Mathieu oscillators. Our objective will be to find the values of the coupling for which we have synchronization, or maximum correlation between the x variables of the oscillators.


Figure 4.7: Power Spectral Density for x.


Chapter 5
Synchronization in N Coupled Mathieu Oscillators
Now that we have established the properties of a single Mathieu oscillator, it is time to do something non-trivial. We shall couple N Mathieu oscillators and study their behavior. There are two broad categories: even N and odd N. Most of the chapter is directed towards investigating N = 6, because the code was generalized only later, and there has not yet been an opportunity to work through the even and odd cases separately.
The aim of the project is to find whether synchronized states exist for the oscillators. We shall use the concept of mutual information to establish the synchronized states.

5.1 The System

In the last chapter, we discussed the sub-system in detail; this time we look into a network put together from such sub-systems. We shall set up the problem in an abstract way and then move on to the problem at hand.
In the language of network theory, each of the oscillators is a node. The equation of the isolated i-th node is given by,

ẋ_i = F(x_i)

x_i is an m-dimensional column vector consisting of the dynamical variables of the node. In our problem, i runs from 1 to N, m = 3 and

x_i = [x_i, y_i, z_i]^T
Now that we have established the individual nodes, it is time to couple them. The way they are coupled is conveyed via a coupling matrix, which we shall call G_ij. In concrete form, the network on which we have been working has the following equations,

ẋ_i = y_i    (5.1)
ẏ_i = −2λy_i − ω₀²(1 + a cos z_i) sin x_i + k(x_{i+1} − 2x_i + x_{i−1})    (5.2)
ż_i = ω    (5.3)

The coupling term is present only in the ẏ_i equation of each oscillator, and i = 1, 2, . . . , 6. This sort of coupling is called nearest-neighbour dissipative coupling; it is the simplest of the linear couplings. Notice that if and when there is synchronization, the coupling term vanishes. Synchronization in this case is defined by the existence of a trajectory x_s such that

Figure 5.1: An array of 6 Coupled Mathieu Oscillators


x_i = x_s for all i. Our objective is to find the values of k for which x_s is a stable trajectory. More on that later.
To express the coupling abstractly, three things are introduced:
- k is the strength of the coupling
- G_ij is the coupling matrix (describing the type of coupling)
- an arbitrary function H(x_i): R^m → R^m indicating where the coupling enters.
So, the coupled network dynamics can be described by the following equation,

ẋ_i = F(x_i) + k Σ_j G_ij H(x_j)    (5.4)

Here, if we have N = 6,

G=

2 1
0
0
0
1
1 2 1
0
0
0
0
1 2 1
0
0
0
0
1 2 1
0
0
0
0
1 2 1
1
0
0
0
1 2

Also, the vector function

H(x_i) = [0, x_i, 0]^T

and,

F(x_i) = [y_i, −2λy_i − ω₀²(1 + a cos z_i) sin x_i, ω]^T    (5.5)
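For general N (N ≥ 3) the coupling matrix is simply a circulant ring Laplacian and can be generated programmatically. A small illustrative numpy sketch:

```python
import numpy as np

def ring_laplacian(N):
    # Nearest-neighbour coupling matrix G for a ring of N >= 3 nodes:
    # -2 on the diagonal, 1 for each of the two neighbours (periodic ends).
    G = -2*np.eye(N, dtype=int)
    for i in range(N):
        G[i, (i + 1) % N] = 1
        G[i, (i - 1) % N] = 1
    return G

G = ring_laplacian(6)   # reproduces the 6x6 matrix above
# Every row sums to zero, so the coupling term vanishes on the
# synchronized trajectory x_i = x_s for all i.
```

The zero row sums are exactly the property that makes the synchronized state a solution of the coupled equations.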

5.2 Information Flow and Synchronization

Eqn. (5.4) gives the equation for the system. The important parameter is the coupling strength k. If we make the coupling too strong, the result is not interesting, since we would essentially be forcing the oscillators to behave as one. For different values of k we can simulate the system and see if there is synchronization. We are looking for complete synchronization (CS). Information theory has been a guiding tool across many disciplines; establishing that information is physical and can be treated as an entity was a seminal contribution of Shannon [70]. In this section we use a related concept, mutual information, to search for the critical value of the coupling constant at which synchronization sets in.

5.3 The Information

Information is the average number of yes-or-no questions which one needs to ask in order to know the next element in the time series of a random variable (rv). Suppose X(t) is a stochastic variable; we are interested in the time evolution of the rv. The outcome of an observation governed by stochastic dynamics is unexpected, i.e. it surprises us: it gives us information. Shannon gave a definition of a quantity which connects this surprise with randomness. That is what we call the Shannon entropy, or simply the entropy H:

H_X = −Σ_i p_i log_b p_i

Here, X ∈ {x_1, x_2, . . . , x_N} and p_i := P(X = x_i). b is the base in which the information is expressed.


For communication systems b = 2, because it relates directly to the ON and OFF states of a
circuit. We shall use b = e because this is the standard for the theoretical analysis.
Our concern is not a communication channel but analysing physical systems and physical processes. Every process is a communication channel. A physical system is a channel which
communicates its past to its future [15]. The channel is where all the dynamics that are specific
to the system occur.
Mutual Information
Mutual information is a quantity which measures the reduction in the uncertainty of one rv given knowledge of another rv. The definition of mutual information, or trans-information, is as follows,

M(X, Y) = H_X + H_Y − H(X, Y)    (5.6)

This can also be written as,

M(X, Y) = Σ_{x∈X, y∈Y} p(x, y) log [ p(x, y) / (p(x) p(y)) ]

So, what do we intend to do with this definition? For N oscillators, we have 3N variables. Suppose we are looking for the value of k at which the positions of the oscillators, i.e. the x_i, are synchronized. We can use mutual information to ascertain that value: we define the state of synchronization as that in which the mutual information between x_i and x_{i+1} is maximum.
Applying this to the Problem at Hand
The first thing to be settled in applying this to the problem at hand is a definition of the probability mass function. We can easily solve the whole system of equations, i.e. the 3N equations, for some time T with a particular coupling strength k. Then we choose the positions of two oscillators, x_i and x_{i+1} for a particular i, and find the mutual information M(X_i, X_{i+1}). There is a change of outlook here, in the way we look at the two variables: we now see x_i and x_{i+1} as instances of an rv, and we come up with the pmf by the method of binning.
MATLAB has a built-in function for finding the pmf of one-dimensional data, i.e. of one rv. hist2.m, given at the end, is a simple extension of the concept to find the joint distribution of two random variables. So, we write a script which solves the set of 3N equations for different values of k, and then look for the peaks in the resulting graph. These peaks correspond to the values of the coupling for which the flow of information between x_i and x_{i+1} is highest, and that is all we need. It should be noted that the mutual information peaks give us the values of k for which there is a relationship between the two variables; it need not be linear. The next section describes the results of this approach.
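The MATLAB routines hist2 and my_mutualInformation are listed in Chapter 7. The same binning estimator can be sketched in a few lines of Python (illustrative only, with numpy's histogram2d standing in for hist2):

```python
import numpy as np

def mutual_information(u, v, nbins=None):
    # Binning estimator of M(U, V) = H(U) + H(V) - H(U, V), in nats.
    if nbins is None:
        nbins = int(round(np.sqrt(len(u)/10)))   # same default as hist2.m
    joint, _, _ = np.histogram2d(u, v, bins=nbins)
    pxy = joint/joint.sum()                      # joint pmf by binning
    px = pxy.sum(axis=1)                         # marginal pmfs
    py = pxy.sum(axis=0)
    def H(p):
        p = p[p > 0]                             # skip empty bins (0 log 0 = 0)
        return -np.sum(p*np.log(p))
    return H(px) + H(py) - H(pxy.ravel())

# Perfectly related series give a large M; independent noise gives M near 0.
t = np.linspace(0.0, 100.0, 5000)
m_sync = mutual_information(np.sin(t), np.sin(t))
rng = np.random.default_rng(0)
m_indep = mutual_information(rng.normal(size=5000), rng.normal(size=5000))
```

In the actual study, the two inputs are the sampled time series x_i(t) and x_{i+1}(t) for each coupling value k, and the peaks of M against k mark the candidate synchronized states.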


Chapter 6
Results
The results have been very encouraging. Using this method, we have been able to find the values of k for which there is synchronization between the variables. Not only that: we also have suppression of chaos whenever there is synchronization. The results are best described with a graph. Before the graphs are discussed, the following should be noted:
- The value of a is 3.5.
- The graph is qualitatively sensitive to the initial conditions. The peaks become dips for different initial points. There is of course a safe region, but for particular values of the initial conditions we can establish synchronization and control of chaos for a fairly small value of k. Small here means considerably less than 1, i.e. around 0.2 or so.
Let us look at some specific cases.
Let us look at some specific cases.

6.1 Case 1

The graph of mutual information (M) vs. k is shown in fig. (6.1). Notice the peaks. The first one is at k = 0.2424. Then there is a region of k between 0.4 and 0.65 for which M is quite high. The highest mutual information is at k = 0.5758. A noticeable feature is the dip in mutual information at k = 0.3131.
Fig. (6.2) shows the graphs between the different oscillators at k = 0.5758. The transient states have been removed from these graphs: the system is run for 1000 units of time, and only the last 600 of the 9125 points are plotted. Notice the perfect synchronization between x_1, x_2, x_4, x_5. x_3 and x_6 have died out. Interestingly, they are on opposite sides if one considers the oscillators placed at the vertices of a regular hexagon.

6.2 Case 2

A coupling of 0.5758 is pretty strong. From the graph in fig. (6.1), the first prominent peak is at k = 0.2424. The results of a similar simulation are shown in fig. (6.3). The synchronization relations among the variables remain the same, but the relationship of x_1 with x_3 and x_6 has now become non-trivial. What we see here is a form of generalized synchronization: the mutual information function picks out any sort of relationship between two variables, not necessarily linear. To demonstrate how accurate this method is, let us take k = 0.3131. All the plots are shown in fig. (6.4). Notice how poor the relationships are for the same length of run. So, using this method, we are able to find the values of k for which there is synchronization.


Figure 6.1: For a=3.5 and X0 =[0, 0.3, 0, 0, 0.16, 0, 0, 0.3, 0,1, 0.33, 0, 0, 0.03, 0, 0.2, 0.1, 0]

Figure 6.2: Relation between positions of oscillators at k = 0.5758.

6.3 Suppression of Chaos

So, we have found the values of k for which there is synchronization. But why did we do this? The reason is the suppression of chaos. These Mathieu oscillators are chaotic at a = 3.5, but now look at the phase space plots shown in fig. (6.5): the oscillators have the dynamics of a non-chaotic system. They have a limit cycle.


Figure 6.3: Relation between positions of oscillators at k = 0.2424.

Figure 6.4: Relation between positions of oscillators at k = 0.3131.


Figure 6.5: Phase relationships between the oscillators at k = 0.2424


Chapter 7
MATLAB Codes
Following are some of the important MATLAB codes.

7.1 Code for Finding Lyapunov Exponents

% Input
%   n       number of ODEs
%   f       the system in the form x' = f(x)  [a function handle]
%   solver  solver used for integrating the ODEs  [a function handle]
%   t0      starting time
%   step    time step for the Gram-Schmidt renormalization procedure
%   tf      end time
%   x0      initial point
%   h       step for the solver
% Output
%   tout    the times at which the Lyapunov exponents are found
%   L       Lyapunov exponents
% Example: [t,L]=lyapunovM(18,@n_cmo,@ode45,0,.1,200,x0)
% One can say that the Lyapunov exponents are found if you see the
% spectrum converging.
function [tout,L]=lyapunovM(n,f,solver,t0,step,tf,x0)
n1=n;
n2=n1*(n1+1);
n_i=round((tf-t0)/step);   % number of times the main loop iterates
y=zeros(n2,1); cum=zeros(n1,1); y0=y;
gsc=cum; zn=cum;           % to be used for Gram-Schmidt orthonormalization
lp=zeros(1,n1);
y(1:n)=x0(:);
for i=1:n1
    y((n1+1)*i)=1.0;       % identity matrix of perturbation vectors
end
t=t0;
for ITERLYAP=1:n_i
    [T,Y]=feval(solver,f,[t t+step],y);
    t=t+step;
    y=Y(size(Y,1),:);
    for i=1:n1
        for j=1:n1
            y0(n1*i+j)=y(n1*j+i);
        end
    end
    % Starting the Gram-Schmidt orthonormalization: first vector
    zn(1)=0.0;
    for j=1:n1
        zn(1)=zn(1)+y0(n1*j+1)^2;
    end
    zn(1)=sqrt(zn(1));
    for j=1:n1
        y0(n1*j+1)=y0(n1*j+1)/zn(1);
    end
    % Core algorithm: orthonormalize the remaining vectors
    for j=2:n1
        for k=1:(j-1)
            gsc(k)=0.0;
            for l=1:n1
                gsc(k)=gsc(k)+y0(n1*l+j)*y0(n1*l+k);
            end
        end
        for k=1:n1
            for l=1:(j-1)
                y0(n1*k+j)=y0(n1*k+j)-gsc(l)*y0(n1*k+l);
            end
        end
        zn(j)=0.0;
        for k=1:n1
            zn(j)=zn(j)+y0(n1*k+j)^2;
        end
        zn(j)=sqrt(zn(j));
        for k=1:n1
            y0(n1*k+j)=y0(n1*k+j)/zn(j);
        end
    end
    % accumulate the log norms and normalize the exponents by elapsed time
    for k=1:n1
        cum(k)=cum(k)+log(zn(k));
    end
    for k=1:n1
        lp(k)=cum(k)/(t-t0);
    end
    if ITERLYAP==1
        L=lp;
        tout=t;
    else
        L=[L;lp];
        tout=[tout;t];
    end
    for i=1:n1
        for j=1:n1
            y(n1*j+i)=y0(n1*i+j);
        end
    end
end
end

7.2 Code for Simulating a System of N Coupled Mathieu Oscillators

function f=n_cmo(t,X)
N=6;        % N is the number of oscillators
lambda=.5;
omega=2;
a=3.5;
%global k;  % uncomment this if one wants to use this for mutual information calculation
k=0.2424;
n_v=3*N;
f=zeros(n_v,1);
% boundary oscillators (the coupling wraps around the ring)
f(1)=X(2);
f(2)=-2*lambda*X(2)-(1+a*cos(X(3)))*sin(X(1))+k*(X(4)-2*X(1)+X(n_v-2));
f(3)=omega;
f(n_v-2)=X(n_v-1);
f(n_v-1)=-2*lambda*X(n_v-1)-(1+a*cos(X(n_v)))*sin(X(n_v-2))+k*(X(1)-2*X(n_v-2)+X(n_v-5));
f(n_v)=omega;
% interior oscillators (vectorized)
f(6:3:n_v-3)=omega;
f(4:3:n_v-5)=X(5:3:n_v-4);
f(5:3:n_v-4)=-2*lambda*X(5:3:n_v-4)-(1+a*cos(X(6:3:n_v-3))).*sin(X(4:3:n_v-5))+k*(X(7:3:n_v-2)-2*X(4:3:n_v-5)+X(1:3:n_v-8));
end

7.3 Code for Finding the Poincaré Section

% This finds the Poincare map for a 3D phase space. The section is always a
% plane.
% Every vector should be a row vector.
% count is the number of intersections
% pts is the vector of intersection points
% xout, yout and zout are the points in the phase space
% S is the function handle for the plane
% p0 is a point on the plane and n is the vector normal to the plane, e.g.
% [1 0 0] is i-cap if the plane is parallel to the yz plane
% plt is a vector. Suppose the section is x=2; then plt=[2 3], i.e. the
% coordinates of the plane parallel to it
function [count,pts]=poincareSec(xout,yout,zout,S,n,p0,plt)
close all;
hold on;
count=0;
% Round the numbers to 4 digits
xout=roundArray(xout,4);
yout=roundArray(yout,4);
zout=roundArray(zout,4);
pts=[];
for j=1:length(xout)-1
    if(S([xout(j) yout(j) zout(j)])<0 && S([xout(j+1) yout(j+1) zout(j+1)])>0)
        % Have a line drawn between the two points. This is not the
        % worst of interpolations if you have many points.
        x1=[xout(j) yout(j) zout(j)]; x2=[xout(j+1) yout(j+1) zout(j+1)];
        l=x2-x1;
        l0=x2;
        d=dot((p0-l0),n)/dot(l,n);
        p=d*l+l0;
        plot(p(plt(1)),p(plt(2)),'bx');
        pts=[pts; p(plt(1)), p(plt(2))];
        count=count+1; hold on;
        %pause(.5); % keep this if you would like to see a movie
    end
end
hold off;
end

function ret=roundArray(x,digs)
ret=round(x*(10^digs))/(10^digs);
end

7.4 The Equation of the Surface

% X = [x coordinate, y coordinate, z coordinate]
function sect=S(X)
X=roundArray(X,4);
sect=X(1)-0.2;
end

7.5 Code for Finding the Joint PMF

% function h = hist2(v1,v2,[nbins])
% Computes the joint histogram of two arrays, ignoring the tails of the
% two marginal distributions.
% Returns a 2D array (nbins X nbins).
% Default nbins: sqrt(length(v1)/10)
% Error if the two arrays are of unequal sizes
function [h,binWidths,bins] = hist2(v1,v2,nbins)
if any(size(v1) ~= size(v2))
    error('Array sizes must be equal');
end
% Convert input arrays to column vectors
v1 = v1(:);
v2 = v2(:);
% Default nbins
if ~exist('nbins','var')
    nbins = round(sqrt(length(v1)/10));
end
% Initialize result array
h = zeros(nbins);
% Choose range, ignoring the tails of the two marginal distributions
histThresh = length(v1)/1000;
[cnt, val] = hist(v1,100);
goodVals = find(cnt>histThresh);
minVal1 = val(min(goodVals));
maxVal1 = val(max(goodVals));
[cnt, val] = hist(v2,100);
goodVals = find(cnt>histThresh);
minVal2 = val(min(goodVals));
maxVal2 = val(max(goodVals));
% Compute binwidths and bins
binWidth1 = (maxVal1 - minVal1)/nbins;
bins1 = [minVal1:binWidth1:maxVal1]';
binWidth2 = (maxVal2 - minVal2)/nbins;
bins2 = [minVal2:binWidth2:maxVal2]';
binWidths = [binWidth1 binWidth2];
bins = [bins1, bins2];
% Bin them
v1bin = round((v1 - minVal1)/binWidth1);
v2bin = round((v2 - minVal2)/binWidth2);
% Count them
for id = 1:length(v1)
    i = v1bin(id);
    j = v2bin(id);
    if ((1<=i) && (i<=nbins) && (1<=j) && (j<=nbins))
        h(i,j) = h(i,j) + 1;
    end
end
end

7.6 Code for Finding Mutual Information

% Computes the mutual information between two vectors. Uses hist2 to
% compute the joint histogram (which ignores the tails of the two marginal
% distributions). Mutual information is:
%   I(a,b) = H(a) + H(b) - H(a,b)
% where
%   H(a) = -sum P(a) log[P(a)]
% Normalized mutual information is:
%   [H(a) + H(b)] / H(a,b)
% Default nbins: sqrt(length(v1)/10)
function [Iab,Pab,Pa,Pb] = my_mutualInformation(a,b,normalize,nbins)
if ~exist('normalize','var')
    normalize = 0;
end
% Default nbins
if ~exist('nbins','var')
    nbins = round(sqrt(length(a)/10));
end
% Joint histogram
abHist = hist2(a,b,nbins);
% Marginal histograms
aHist = sum(abHist,1);
bHist = sum(abHist,2);
% Probabilities
N = sum(aHist);
Pa = aHist/N;
Pb = bHist/N;
Pab = abHist/N;
% Disable divide-by-0 and log-of-0 warnings
warning('off');
Ha = -(Pa .* log(Pa));
id = isfinite(Ha);
Ha = sum(Ha(id));
Hb = -(Pb .* log(Pb));
id = isfinite(Hb);
Hb = sum(Hb(id));
Hab = -(Pab .* log(Pab));
id = isfinite(Hab);
Hab = sum(Hab(id));
warning('on');
% normalized or plain mutual information
if normalize
    Iab = (Ha + Hb) / (2*Hab);
else
    Iab = Ha + Hb - Hab;
end
end


Chapter 8
Summary of Part I
We started by considering a single Mathieu oscillator. First we found the regions of chaos by studying the Lyapunov exponents. Calculating Lyapunov exponents is cumbersome and non-trivial because of the continual need to orthogonalize the chosen directions. The MATLAB code is based on the algorithm outlined in the paper by Wolf et al. [77]. We find the chaotic regions by finding the values of a in eqn. (4.2) for which the maximal Lyapunov exponent is positive. Following this, we coupled chaotic Mathieu oscillators in a ring. The chosen number of oscillators is 6, which is an arbitrary choice; the codes illustrated in the last chapter of this part are generalized for N oscillators.
The next objective was to find stable states of synchronization. This was done by defining a mutual information function between the oscillators. The binning method is used for sampling the data and defining a probability density. Once we have this probability density, finding the mutual information function is relatively easy.
The reliability of the probability density function is a major concern when using an information-theoretic approach to this problem. To confirm the results, explicit solutions of the differential equations are carried out. These simulations are done using the ode45 function. my_mutualInformation computes the mutual information; it relies on hist2, the subroutine that computes the joint histogram of two time series. Synchronization has been observed for the suggested values of the coupling strength.
Most of the literature uses mutual information in an abstract form. In this part, we have used the concept in a concrete form, and the results have been very positive. With this, we conclude Part I of the thesis.


Part II
Denoising chaotic communication
channel


Chapter 9
Introduction to Part II
Though it might not be obvious on a first reading, this part has intimate connections with the first one. The first part was based on information to a large extent: the main result there depended on the definition and interpretation of mutual information, and information theory was primarily developed for quantitative calculations relating to communication. The content of this part is an exploration of chaos-based communication systems.
The problem we shall be dealing with is the elimination of noise from this type of communication system. Even with synchrony being one of the keywords of the overall thesis title, the problem in the last part did not involve a direct encounter with the phenomenon per se; it was about the consequences of two chaotic systems synchronizing¹. The problem in this part is based on a similar situation and on how the synchronization condition can be exploited to good effect.
The first chapter is on chaotic communication systems. It illustrates how one can use chaos and synchronization to make a communication system which has encryption built into it. While there are several such methods, we have favored chaotic masking for the simplicity of its application. Then there is a chapter on the implementation of chaotic masking. Following this is a chapter on removing additive noise using the downhill simplex method, which is entirely based on Sharma and Ott [71]. A major part of the effort was dedicated to understanding why the authors were doing what they were doing; digressions into probability theory, multivariate optimization, wavelet analysis and artificial neural networks consumed quite a bit of time.
The last chapter consists of results from neural networks used to eliminate additive noise from the communication channel. Very little literature on neural networks has been included in the thesis, to keep the digressions from the topic of synchronization to a minimum.
While doing all this, the number of new things to learn was overwhelming. Most of them do not show up in the final thesis, because they were needed only to fine-tune one block in a Simulink model consisting of 15 blocks.
That brings me to the methodology of solving the problem. Like any problem, there are two parts to it: formulation and solution. The formulation is mostly theoretical, with some heuristic considerations; the solution is numerical. This thesis, accordingly, consists largely of numerical results.

¹The phrases chaotic oscillators and chaotic systems are used interchangeably.


Chapter 10
Communication Using Chaos
Producing stochastic-process-like behavior from deterministic dynamics is perhaps one of the most fascinating characteristics of chaotic systems. This gave rise to increasing interest in implementing secure communication systems with chaotic systems as the basis. There are very interesting relationships between traditional cryptographic systems and chaotic systems; in his classic paper, Shannon alluded to this relationship [69]:
Good mixing transformations are often formed by repeated products of two simple non-commuting operations. Hopf has shown, for example, that pastry dough can be mixed by such a sequence of operations. The dough is first rolled out into a thin slab, then folded over, then rolled, and then folded again, etc.
There are some straightforward analogies between cryptographic systems and chaotic systems. The statistical diffusion and confusion which are quintessential to encryption are built naturally into a chaotic system by virtue of its ergodicity. And while traditional cryptographic systems rely on algorithmic complexity to protect data, chaos produces complicated structures in the dynamics themselves.
Since there is a variety of definitions of chaos, there are many ways in which chaos can be used for secure communication. Broadly, these can be divided into analog and digital chaos-based cryptosystems. For the purposes of this project, we will talk only about analog systems.
The analog chaos-based cryptosystems are based primarily on chaos synchronization [57]. Digital chaos-based systems, on the other hand, are algorithms based on chaotic dynamics that are used to encrypt plain text to create a cipher.

10.1 Chaotic Masking

The method implemented in this work is chaotic masking. It is the simplest of the communication models; since our focus is on eliminating noise rather than on the way the message is transmitted, we have chosen this model. The method is simple and surprisingly robust, at least for our purpose, i.e. developing an algorithm for eliminating noise in a channel.

10.1.1 The Algorithm

We start with a transmitter and send the encrypted signal via a channel (which usually has noise). Following this, we decrypt the signal at the receiver. The transmitter T and receiver R are chaotic systems.
This type of cryptographic implementation is what is termed symmetric. The key that has to be shared between T and R consists of the parameters of these chaotic systems and the initial conditions. Chaotic masking is a synchronization-based method, meaning that its encryption/decryption efficiency depends crucially on the synchronization between T and R. This implementation has a driver-response setup, with T being the driver.
Suppose that x_1 is a trajectory of T, and m(t) is the signal that needs to be sent out. If we did not have the specific task of sending m(t), then to synchronize T and R we would have used x_1 from T to drive R. Then,

x_1(t) ≈ x_2(t), ∀t    (10.1)

The trick of chaotic masking is to use s(t) = x_1(t) + m(t) instead of x_1(t) for driving R. R and T will synchronize only if m(t) does not distort the error dynamics of the driver and response. It is usually not a problem to change the amplitude of the incoming signals, because most high-fidelity modern communication channels are phase modulated. At the receiver's end, R is driven using s(t), and then x_2(t) is subtracted from s(t) to get m̂(t). If the synchronization is complete, the approximation m̂(t) ≈ m(t) is remarkably accurate.
The stepwise description is as follows:
1. Evolve T.
2. Add a properly scaled m(t) to x_1(t). The resultant signal is called s(t).
3. Use s(t) to drive R. Note that T and R are identical chaotic systems. If the signal being sent is phase modulated, one can use non-identical systems, or identical ones with different parameters, to do the same.
4. Once R is synchronized, x_2(t) is subtracted from s(t) to recover m̂(t) ≈ m(t).
Steps 2 and 4 are encryption and decryption (or addition and subtraction), respectively. The information that has to be shared between T and R is the parameters and the initial conditions: if these are different for T and R, the two will not synchronize satisfactorily, hence giving a bad m̂(t).


Chapter 11
Implementation of Chaotic Masking
There are six solvers, because there are six differential equations to solve. The initial conditions for the pairs solver1 and solver4, solver6 and solver4, and solver3 and solver5 should be the same. Blocks such as x1_out contain the differential equations being simulated.
The block y_sig_in takes a vector (an audio signal) and encrypts it by continually adding it to one of the outputs, x1_out, of the chaotic systems. At the receiver's end, x2_out is subtracted from encrpt to get back the sent signal.

11.1 Sending Audio Signals

To test the implementation, a standard FLAC audio sample (fig. (11.1)) has been used.

Figure 11.1: Waveform for the signal to be sent

The output signal (fig. (11.2)) does not look much like the input, but there is a discernible pattern, and it sounds exactly like the input. However, the situation with added noise is disastrous. The synchronization graph should be a straight line, which is what it is in the case of no noise (fig. (11.4)). With additive noise, however, the synchronization is completely distorted (fig. (11.5)). This is the situation which the next chapter addresses.


Figure 11.2: Waveform for decrypted signal

Figure 11.3: Waveform for encrypted signal received at the Receiver

Figure 11.4: Synchronization Graph without Noise


Figure 11.5: Synchronization Graph with Noise


Chapter 12
Removing Additive Noise
There was no noise in the channel presented in the previous chapter: when encrpt was fed into the receiver part of the circuit, it arrived from the source without any change. Additive white Gaussian noise (AWGN) distorts the synchronization process beyond recovery. Hence, for systems that depend on chaotic masking, denoising has to be done before the received signal is passed through the synchronizing receiver.
There are many types of noise, and the most common way to model the random processes in nature is AWGN, which is the subject of this chapter. The problem we look at is a chaotic communication system affected by AWGN, and how it can be denoised [71].

12.1 The Scheme

We intend to send x(t) from one end to the other. This x(t) is equivalent to the s(t) in the chaotic masking scheme described earlier. At the end of the communication channel, x(t) is supposed to be passed through a synchronizing receiver, i.e. the receiver part of our Simulink model. However, in the communication channel a noise n(t) gets added,

n(t) ~ N(0, σ²)    (12.1)

Ideally, after receiving x(t), the synchronizing receiver should give an output y(t) which matches x(t),

x(t) ≈ y(t)    (12.2)

However, because of the presence of n(t), we will not have this relationship. The problem is to find an estimate of n(t), subtract it from the received signal x(t) + n(t), and then pass the result through the synchronizing receiver to obtain a trajectory y(t).
It should be noted that x and y are trajectories of chaotic systems. For chaotic masking, these systems are identical and start from similar initial conditions. If the noise were absent, or equivalently, if we had a correct estimate n̂(t) of n(t), then

y(t) = x(t) + n(t) − n̂(t)    (12.3)

Eq. (12.3) is an important constraint on the estimate of n(t), because if it holds, then y(t) passes through the chaotic system at the receiver as just another trajectory. There will be synchronization, though not as perfect as in the scheme developed in the last chapter; it can, however, still be worked out.
Our task of denoising the channel is thus reduced to finding an n̂(t) which makes y(t) a trajectory of the chaotic system. So, how do we find such a y? To get some traction on the problem, let us try to find the norm of n̂(t).
Vector spaces of random variables are not very different from the usual vector spaces. The norm of an rv is defined once the probability space is set up: a random experiment is modeled by a probability space (Ω, F, P), where P is the probability measure.
A vector space V of rvs consists of all real-valued rvs defined on E = (Ω, F, P). We define the k-norm of an rv X ∈ V, for k ≥ 1,

‖X‖_k = (E[|X|^k])^(1/k)    (12.4)

where E[·] is the expectation value operator.


Following [71], we calculate the 2-norm of n̂(t),

‖n̂‖₂² = E[n̂²]    (12.5)
      = E[n²] + E[(x − y)²] + 2E[x − y]E[n]    (12.6)

We have used E[n²] = σ² and E[n] = 0, so eqn. (12.6) becomes

‖n̂‖₂² = σ² + E[(x − y)²]    (12.7)

Note that to get the right estimate of the noise, y and x should synchronize; that is the ideal case. We may never have it, but we can choose n̂ so as to reduce the difference between x and y. Since ‖n̂‖₂² is minimum when E[(x − y)²] is minimum, we reach the conclusion that our initial aim, making y a trajectory that the synchronizing receiver can work with, will be met with sufficient accuracy if we satisfy eqn. (12.3) while minimizing ‖n̂‖₂².

12.2 Minimization Function

We have a multivariable unconstrained minimization problem here. We shall take a sampling of x and x + n, meaning that we will have a time series {x(i) + n(i)}, i = 1, …, N. The function which we will minimize is

\[
T(\hat{n}) = \sum_{i=1}^{N} \Big[ \big(y(i) + \hat{n}(i) - x(i) - n(i)\big)^2 + \lambda\,\hat{n}^2(i) \Big]
\]

λ is the regularization parameter. It has been added so that we end up with a solution which minimizes the norm too. The first part of T is there to get the right estimate of n, so that the relationship is correct, i.e. n̂ nullifies n. The second part is there to ensure that synchronization is possible, i.e. that y and x are trajectories of the chaotic systems of the transmitter and receiver that are close enough to unmask the signal that was initially sent.
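Because T decouples across samples, setting ∂T/∂n̂(i) = 0 gives a closed-form per-sample stationary point, n̂(i) = (x(i) + n(i) − y(i))/(1 + λ). Of course, x(i) and n(i) are not separately known at the receiver, so this is only a sanity check on the shape of T, not a usable denoiser. The Python sketch below (with placeholder arrays standing in for the signals) verifies that this stationary point beats nearby perturbations:

```python
import random

random.seed(1)
lam = 0.1  # the regularization parameter lambda; illustrative value only

# Placeholder samples standing in for the time series entering T(n_hat).
N = 50
x = [random.uniform(-1, 1) for _ in range(N)]
n = [random.gauss(0, 0.2) for _ in range(N)]
y = [random.uniform(-1, 1) for _ in range(N)]

def T(n_hat):
    """The objective from the text: squared residual plus lam * n_hat^2."""
    return sum((y[i] + n_hat[i] - x[i] - n[i]) ** 2 + lam * n_hat[i] ** 2
               for i in range(N))

# Per-sample stationary point: dT/dn_hat(i) = 0  =>  n_hat = (x + n - y)/(1 + lam)
n_hat_star = [(x[i] + n[i] - y[i]) / (1 + lam) for i in range(N)]

# T is strictly convex, so the closed form beats any perturbation of itself.
perturbed = [v + random.gauss(0, 0.05) for v in n_hat_star]
print(T(n_hat_star) <= T(perturbed))  # True
```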
Minimization here is over N variables. That's a lot of variables, especially since the number of samples should be high if one wants an accurate reproduction of the signal. The problem of minimization becomes more and more difficult with increasing N, which is why we have to reduce the number of variables over which the minimization is done.
We start with M variables and then use a sliding window, similar to the one used in wavelet-type noise elimination methods, to carry the minimization over all N variables. The function we use for minimizing M variables is

\[
\begin{aligned}
T_j\big(\hat{n}(j), \hat{n}(j+1), \ldots, \hat{n}(j+M-1)\big) ={}& \sum_{i=j-K}^{j-1} e^{-\lambda(j-i)} \big(y(i) + \hat{n}(i) - x(i) - n(i)\big)^2 \\
&+ \sum_{i=j}^{j+M-1} \Big\{ \big[y(i) + \hat{n}(i) - x(i) - n(i)\big]^2 + \lambda\,\hat{n}^2(i) \Big\} \\
&+ \sum_{i=j+M}^{j+M+K-1} e^{-\lambda(i-j-M+1)} \big(y(i) + \hat{n}(i) - x(i) - n(i)\big)^2
\end{aligned}
\]

We increment j from 1 to N − M + 1. The exponentials at the beginning and end of the expression are included to ensure that the information contained in the variables other than the M currently under consideration is still taken into account in some capacity. There is a decay in their contributions as we move away from the M variables being considered. K is chosen to be approximately equal to 1/λ. We start with a large λ and go on to reduce it in every iteration.
In order to minimize the function, we use the Downhill Simplex Method [5]. It is better to use the DS method because it doesn't involve calculating derivatives. The method is also stable, and is relatively fast for smaller N.
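To make the method concrete, here is a deliberately minimal Python sketch of the downhill simplex (Nelder-Mead) iteration: reflection, expansion, contraction and shrink, with no derivatives anywhere. This is not the routine used in the project, which relied on a library implementation; the toy quadratic merely stands in for Tj:

```python
def downhill_simplex(f, x0, step=0.5, iters=500):
    """Minimal Nelder-Mead (downhill simplex); illustrative sketch only."""
    dim = len(x0)
    # Initial simplex: x0 plus one vertex displaced along each coordinate.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == k else 0.0) for j in range(dim)]
        for k in range(dim)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Centroid of all vertices except the worst.
        cen = [sum(v[j] for v in simplex[:-1]) / dim for j in range(dim)]
        refl = [2 * cen[j] - worst[j] for j in range(dim)]          # reflection
        if f(refl) < f(best):
            exp = [3 * cen[j] - 2 * worst[j] for j in range(dim)]   # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            con = [(cen[j] + worst[j]) / 2 for j in range(dim)]     # contraction
            if f(con) < f(worst):
                simplex[-1] = con
            else:                                                    # shrink toward best
                simplex = [best] + [
                    [(v[j] + best[j]) / 2 for j in range(dim)] for v in simplex[1:]
                ]
    return min(simplex, key=f)

# Toy stand-in for T_j: a smooth bowl with its minimum at (1, -2, 3).
target = [1.0, -2.0, 3.0]
f = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target))
sol = downhill_simplex(f, [0.0, 0.0, 0.0])
print([round(s, 3) for s in sol])  # close to [1.0, -2.0, 3.0]
```

Only function evaluations are needed, which is exactly why the method suits an objective like Tj whose gradient is awkward to obtain.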
A note should be made on the number of samples chosen from the signal. Usually N is large; here, however, we should take N to be less than 100, because otherwise our minimization scheme will produce wrong results.
The numerical results have been very positive [71]. So, instead of reproducing the results by the exact same method outlined in that paper, we will use a neural network filter to denoise the channel.


Chapter 13
Denoising Using Adaptive Filters
The previous chapter outlined an analytically heavy method [71]. While we outlined the basic method expressed in the paper, we skipped the numerical results pertaining to it. What we shall do here, though, is outline the results from an entirely different method: adaptive neural networks.
While the literature on Artificial Neural Networks (ANNs) is vast, we shall focus on a small part of it and give only a very brief description. We shall start right in the middle.

13.1 Artificial Neural Networks

ANNs [54] [48] [79] form a family of statistical learning algorithms which estimate a function by updating their structure based on the needs of the task to be accomplished. ANNs are suitable for situations in which the function to be evaluated depends on a large number of inputs. This is the reason why ANNs were chosen for the purpose of filtering out the noise. There are numerous other filtering algorithms; however, most of them are designed for linear problems, for which the error function has a single minimum. For our problem, the error function, i.e. the function which evaluates the error between the noise-afflicted signal and the original signal, is nonlinear and need not have a single minimum.
There are three parameters that distinctively define ANNs:
- The topology of the interconnections between layers of neurons
- The learning algorithm, which consists of updating the weights
- The activation function that converts the input into the output
The networks consist of interconnections of neurons. As values are passed from one neuron to another, there is a way to distinguish the different paths that the message takes. That distinction is modeled into the algorithms by updating the weights of the different paths based on the needs of the network. The updating algorithm determines how the weights get changed; this is what is called the learning algorithm. Finally, the activation function decides whether the information should be passed on to the next layer of neurons or not.
A neural filter is an ANN which is designed and trained to sift through a combination of noise and useful signal. The idea is to operate with experimental or simulated (as in our case) data [1] to adjust the weights so that the least mean square error between sample input data and the corresponding target data is minimized. See fig. (13.1) for a schematic representation of the process. Neural filtering rests on two fundamental theorems which are outlined in detail in [43]. Adaptive neural networks have linear transfer functions, i.e. there is no hard limit to the values that they can output. The learning algorithm used by these neural networks is the Widrow-Hoff learning rule, or the LMS rule, for minimizing the error. We have used MATLAB's Neural Network toolbox to train an adaptive filter network.

Figure 13.1: Neural networks overview
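In compact form, the Widrow-Hoff (LMS) rule reads: for a tapped-delay input window x, compute the output y = wᵀx and the error e = d − y against the target d, then update w ← w + μ e x. The sketch below is an illustrative stand-alone Python version, not the MATLAB toolbox code we actually used, and a noisy sinusoid stands in for the chaotic carrier:

```python
import math
import random

def lms_filter(noisy, target, taps=8, mu=0.05):
    """Widrow-Hoff (LMS) rule: adapt FIR weights to map the noisy signal
    onto the clean target, reducing the mean square error over time."""
    w = [0.0] * taps
    out = []
    for i in range(len(noisy)):
        # Tapped delay line: the last `taps` noisy samples (zero-padded).
        window = [noisy[i - k] if i - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, window))        # filter output
        e = target[i] - y                                    # instantaneous error
        w = [wk + mu * e * xk for wk, xk in zip(w, window)]  # weight update
        out.append(y)
    return out, w

# Demo: a clean sinusoid corrupted by additive Gaussian noise.
random.seed(2)
clean = [math.sin(0.1 * i) for i in range(4000)]
noisy = [c + random.gauss(0.0, 0.4) for c in clean]

filtered, _ = lms_filter(noisy, clean)

tail = 1000  # skip the adaptation transient when scoring
mse_before = sum((a - c) ** 2 for a, c in zip(noisy, clean)) / len(clean)
mse_after = sum((a - c) ** 2 for a, c in zip(filtered[tail:], clean[tail:])) / (len(clean) - tail)
print(f"MSE before: {mse_before:.3f}, after adaptation: {mse_after:.3f}")
```

The step size μ trades convergence speed against steady-state excess error, which is the same trade-off the toolbox exposes through its learning-rate parameter.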

13.2 Results

We will start by delineating our results for the additive noise. We have simulated the whole communication channel in Simulink, and have trained the neural networks with increasing amounts of data. The results are outlined in the graphs of synchronization after noise elimination.

13.2.1 Additive Noise

Figures (13.3), (13.4) and (13.5) show how the noise gets eliminated with continued training; the three graphs correspond to increasing training accuracy. The straighter the line is, the more synchronized the receiver and the transmitter are.
From the block diagram, we can see that encrpt pure gets affected by the additive white Gaussian noise to give encrpt noisy. For the neural network, we have used the Neural Network Time Series toolbox. We treated this as a nonlinear autoregressive problem with external input (NARX). This formulation is more accurate than the nonlinear input-output one.
Before the neural network time series toolbox is used, one must make sure that the Simulink model for the additive noise is run, and that the outputs encrpt noisy and encrpt pure are prepared for neural network training using num2cell. Though one can get a lot of data from the simulation, only the last 500 data points are used for training.
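The same preparation step can be mimicked outside MATLAB. In the Python sketch below, the names encrpt_pure and encrpt_noisy are placeholders for the Simulink outputs; we keep only the last 500 samples and build lagged input/target pairs of the kind a NARX-style time-series network trains on (simplified here to input lags only):

```python
def prepare_training_data(noisy, pure, keep=500, lags=4):
    """Keep the last `keep` samples and build (lagged-window, target) pairs,
    mimicking the tapped-delay inputs of a NARX-style time-series network."""
    noisy, pure = noisy[-keep:], pure[-keep:]
    inputs, targets = [], []
    for i in range(lags, len(noisy)):
        inputs.append(noisy[i - lags:i])  # window of past noisy samples
        targets.append(pure[i])           # clean sample to predict
    return inputs, targets

# Placeholder signals standing in for the Simulink outputs.
encrpt_pure = [0.01 * i for i in range(2000)]
encrpt_noisy = [v + ((-1) ** i) * 0.05 for i, v in enumerate(encrpt_pure)]

X, t = prepare_training_data(encrpt_noisy, encrpt_pure)
print(len(X), len(X[0]), len(t))  # 496 4 496
```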
Once the network is created and trained, we use it to predict the output, i.e. to predict encrpt pure from encrpt noisy. Once we make that prediction, we can pass the predicted encrypted signal through the receiver rather than the noisy one. The results for various lengths of neural network training are shown below. Fig. (13.2) shows the situation in which no filter has been applied.


Figure 13.2: Synchronization Graph with Additive Noise

Figure 13.3: Synchronization Graph with Denoising (5 epochs of Training)

Figure 13.4: Synchronization Graph with Denoising (8 epochs of Training)

13.2.2 Multiplicative Noise

The results for multiplicative noise are not conclusive. While the network seems to have eliminated the noise in the first phase of the denoising process, the synchronization breaks in the later phases. The results for speckle noise are thus inconclusive.
The reason for this may be the nonlinearity of the form in which the noise manifests itself in the signal. While additive noise is still a linear transformation, multiplicative noise isn't.

Figure 13.5: Synchronization Graph with Denoising (11 epochs of Training)


Chapter 14
Summary of Part II
This is the end of Part II. We started by looking into the ways in which chaos can be used to communicate securely. We set up a communication system as a Simulink model, and then studied it. As a sample, we transmitted an audio file from one end to the other. The next phase of Part II was to introduce noise and see how things work out.
We intended to work on two types of noise:
- Additive noise
- Speckle (multiplicative) noise
At first, we illustrated the subtle points of the algorithm given by Sharma and Ott [71]. Then, instead of simply implementing the algorithm, we started working on a neural network model. Using the neural network toolbox, we built a network and generated the corresponding script. After training it for 11 epochs, we found that synchronization is restored.
The multiplicative noise part was rather difficult because of the nature of the noise addition. The neural network approach did not work, and hence, at the conclusion of this part, we have not been able to eliminate multiplicative noise with any efficacy.


Conclusion to Synchrony and Pattern in Nature
In this project thesis, we have dealt with two problems. The results of each have been outlined separately in the corresponding parts. The first problem was to identify synchronization in a network of Mathieu oscillators. We successfully completed that by considering the flow of information between the oscillators. The second problem was to denoise a chaotic communication channel. This was done using adaptive neural filters. The results of this part were not conclusive enough to make a deduction.
We have been able to make a case for the importance of the phenomenon of synchronization. We have studied in detail the relationship between synchronization and chaos. We have tried to understand how useful synchronization can be, and how one can use other phenomena in conjunction with it. Throughout the project, our primary aim has been to study the phenomenon and investigate it from novel directions. To a large extent, we have been able to do this. Synchronization is a heavily interdisciplinary subject, which affords one many problems to work on. The number of problems here has been limited to two, only because of lack of time. One can certainly pick up from here and go on to work on a number of related problems.


Part III
Appendix - Simulink Models


Bibliography
[1] H. D. I. Abarbanel, Reggie Brown, John J. Sidorowich, and Lev Sh. Tsimring. The analysis of observed chaotic data in physical systems. Rev. Mod. Phys., 65:1331–1392, Oct 1993.
[2] J. A. Acebrón, L. L. Bonilla, C. Pérez, F. Ritort, and R. Spigler. The Kuramoto model: A simple paradigm for synchronization phenomena. Rev. Mod. Phys., 77, January 2005.
[3] A. Ahlborn and U. Parlitz. Control and synchronization of spatiotemporal chaos. Phys. Rev. E, 77:016201, Jan 2008.
[4] K. T. Alligood, T. D. Sauer, and J. A. Yorke. Chaos: An Introduction to Dynamical Systems. Springer-Verlag, 1996.
[5] M. Avriel. Nonlinear Programming: Analysis and Methods. Courier Dover Publications, 2003.
[6] Murilo S. Baptista and Luis López. Information transfer in chaos-based communication. Phys. Rev. E, 65:055201, May 2002.
[7] D. Belato, H. I. Weber, J. M. Balthazar, and D. T. Mook. Chaotic vibrations of a nonideal electro-mechanical system. International Journal of Solids and Structures, 38:1699, 2001.
[8] C. M. Bender and S. A. Orszag. Advanced Mathematical Methods for Scientists and Engineers. Springer, 1999.
[9] G. Benettin, L. Galgani, A. Giorgilli, and J. M. Strelcyn. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems: A method for computing all of them. Meccanica, 15:9–30, 1980.
[10] S. Boccaletti, A. Farini, and F. T. Arecchi. Adaptive synchronization of chaos for secure communication. Phys. Rev. E, 55:4979–4981, May 1997.
[11] S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. S. Zhou. The synchronization of chaotic systems. Physics Reports, 2002.
[12] S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. S. Zhou. The synchronization of chaotic systems. Physics Reports, 2002.
[13] Edward Bullard. The stability of a homopolar dynamo. Mathematical Proceedings of the Cambridge Philosophical Society, 51:744–760, 1955.
[14] M. J. Crooks, D. B. Litvin, P. W. Matthews, R. Macaulay, and J. Shaw. One-piece Faraday generator: A paradoxical experiment from 1851. Am. J. Phys., 46:729, 1978.
[15] J. P. Crutchfield. Between order and chaos. Nature Physics, 2011.

[16] J. Djuric. Suggestions for an experiment with a unipolar generator and its bearing on the Earth's magnetic field. J. Phys. D: Appl. Phys., 9:2623, 1976.
[17] J. P. Eckmann and D. Ruelle. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys., 57:617–656, Jul 1985.
[18] L. W. Sheppard et al. Characterizing an ensemble of interacting oscillators: The mean-field variability index. Phys. Rev. E, 87, 2013.
[19] D. Gabor. Theory of communication. J. IEE (London), 93 (III):429–457, 1946.
[20] R. Gencay and W. D. Dechert. An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system. Physica D, 59:142–157, 1992.
[21] J. M. Gonzalez-Miranda. Synchronization and Control of Chaos. Imperial College Press, 2004.
[22] P. Grassberger and Itamar Procaccia. Estimation of the Kolmogorov entropy from a chaotic signal. Physical Review A, 1983.
[23] H. Haken. Synergetics: An Introduction. Springer-Verlag, 1977.
[24] Scott Hayes, Celso Grebogi, and Edward Ott. Communicating with chaos. Phys. Rev. Lett., 70:3031–3034, May 1993.
[25] Scott Hayes, Celso Grebogi, Edward Ott, and Andrea Mark. Experimental control of chaos for communication. Phys. Rev. Lett., 73:1781–1784, Sep 1994.
[26] R. Hide. The nonlinear differential equations governing a hierarchy of self-exciting coupled Faraday-disk homopolar dynamos. Physics of the Earth and Planetary Interiors, 103(3–4):281–291, 1997.
[27] E. M. Izhikevich and B. Ermentrout. Phase model. Scholarpedia, 3(10):1487, 2008.
[28] A. W. Jayawardena, Pengcheng Xu, and W. K. Li. Modified correlation entropy estimation for a noisy chaotic time series. Chaos: An Interdisciplinary Journal of Nonlinear Science, 20(2):023104, 2010.
[29] J. Kaplan and J. Yorke. Chaotic behavior of multidimensional difference equations. Lecture Notes in Mathematics, Vol. 730, Springer, 1979.
[30] Johannes Kestler, Evi Kopelowitz, Ido Kanter, and Wolfgang Kinzel. Patterns of chaos synchronization. Phys. Rev. E, 77:046209, Apr 2008.
[31] S. Kichenassamy and R. A. Krikorian. Note on Maxwell's equations in relativistically rotating frames. Journal of Mathematical Physics, 35(11):5726–5733, 1994.
[32] L. Kocarev and S. Lian. Chaos-Based Cryptography: Theory, Applications and Algorithms. Springer-Verlag, 2011.
[33] L. Kocarev and U. Parlitz. General approach for chaotic synchronization with applications to communication. Phys. Rev. Lett., 74:5028–5031, Jun 1995.
[34] W. Krabs and S. Pickl. Dynamical Systems: Stability, Controllability and Chaotic Behavior. Springer, 2010.

[35] Y. Kuramoto. Chemical Oscillations, Waves and Turbulence. Springer, NY, 1984.
[36] C. Laroche, R. Labbé, F. Pétrélis, and S. Fauve. Chaotic motors. Am. J. Phys., 80:113, 2012.
[37] F. Ledrappier. Some relations between dimension and Lyapounov exponents. Communications in Mathematical Physics, 81(2):229–238, 1981.
[38] N. Leprovost, B. Dubrulle, and F. Plunian. Intermittency in the homopolar dynamo.
[39] H. Leung, H. Yu, and K. Murali. Ergodic chaos-based communication schemes. Phys. Rev. E, 66:036203, Sep 2002.
[40] Larry S. Liebovitch and Tibor Toth. A fast algorithm to determine fractal dimensions by box counting. Physics Letters A, 141(8–9):386–390, 1989.
[41] Fan-Yi Lin, Yuh-Kwei Chao, and Tsung-Chieh Wu. Effective bandwidths of broadband chaotic signals. IEEE Journal of Quantum Electronics, 48(8):1010–1014, Aug 2012.
[42] D. Lind. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press, 1996.
[43] J. Ting-Ho Lo. Neural filtering. Scholarpedia, 8(4):7868, 2009.
[44] E. N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 1963.
[45] Amos Maritan and Jayanth R. Banavar. Chaos, noise, and synchronization. Phys. Rev. Lett., 72:1451–1454, Mar 1994.
[46] N. W. McLachlan. Theory and Application of Mathieu Functions. Oxford University Press, 1951.
[47] A. Medio and M. Lines. Nonlinear Dynamics: A Primer. Cambridge University Press, 2003.
[48] K. Mehrotra, C. K. Mohan, and S. Ranka. Elements of Artificial Neural Networks. A Bradford Book. MIT Press, 1997.
[49] A. S. Mikhailov. Foundations of Synergetics I: Distributed Active Systems. Springer-Verlag, 1990.
[50] A. S. Mikhailov and A. Y. Loskutov. Foundations of Synergetics II: Chaos and Noise. Springer, 1996.
[51] Shapour Mohammadi. Fractaldim: Matlab function to compute fractal dimension. Statistical Software Components, Boston College Department of Economics, July 2009.
[52] A. Nussbaum. Faraday's law paradoxes. Phys. Educ., 7:231, 1972.
[53] Edward Ott, Celso Grebogi, and James A. Yorke. Controlling chaos. Phys. Rev. Lett., 64:2837, Jun 1990.
[54] D. W. Patterson. Artificial Neural Networks: Theory and Applications. Prentice-Hall Series in Advanced Communications. Prentice Hall, 1996.
[55] L. Pecora and T. Carroll. Master stability functions for synchronized coupled systems. Phys. Rev. Lett., March 1998.

[56] L. Pecora, T. Carroll, G. Johnson, and D. Mar. Fundamentals of synchronization in chaotic systems, concepts and applications. Chaos, 1997.
[57] Louis M. Pecora and Thomas L. Carroll. Synchronization in chaotic systems. Phys. Rev. Lett., 64:821–824, Feb 1990.
[58] Louis M. Pecora and Thomas L. Carroll. Driving systems with chaotic signals. Phys. Rev. A, 44:2374–2383, Aug 1991.
[59] A. Pikovsky. Synchronization in a population of globally coupled chaotic oscillators. Europhys. Lett., 34:165–170, 1996.
[60] A. Pikovsky, M. Rosenblum, and J. Kurths. Synchronization in a population of globally coupled chaotic oscillators. Europhysics Letters, 1996.
[61] A. Pikovsky, M. Rosenblum, and J. Kurths. Phase synchronization of chaotic oscillators by external driving. Physica D, 104:219–238, 1997.
[62] A. Pikovsky, M. Rosenblum, and J. Kurths. Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge, 2001.
[63] A. Politi. Lyapunov exponent. Scholarpedia, 8(3):2722, 2013.
[64] Hai-Peng Ren, Murilo S. Baptista, and Celso Grebogi. Wireless communication with chaos. Phys. Rev. Lett., 110:184101, Apr 2013.
[65] Epaminondas Rosa, Jr., Scott Hayes, and Celso Grebogi. Noise filtering in communication with chaos. Phys. Rev. Lett., 78:1247–1250, Feb 1997.
[66] M. Rosenblum and A. Pikovsky. Self-organized quasiperiodicity in oscillator ensembles with global nonlinear coupling. Physical Review Letters, 2007.
[67] D. Schieber. A model of unipolar induction. Archiv für Elektrotechnik, 69(2):121–127, 1986.
[68] H. G. Schuster and W. Just. Deterministic Chaos. Wiley-VCH, 2005.
[69] C. Shannon. Communication theory of secrecy systems. Bell Sys. Tech. J., 28:656–715, 1949.
[70] C. Shannon and W. Weaver. The Mathematical Theory of Communication. 1949.
[71] N. Sharma and E. Ott. Synchronization-based noise reduction method for communication with chaotic systems. December 1998.
[72] S. Strogatz. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D, 143, 2000.
[73] Steven H. Strogatz. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Sarat Book Distributors, 1994.
[74] T. Rikitake. Oscillations of a system of disk dynamos. Mathematical Proceedings of the Cambridge Philosophical Society, 54:89–105, 1958.
[75] Thomas Valone. The one-piece Faraday generator: Research results. In The Homopolar Handbook: A Definitive Guide to Faraday Disk and N-machine Technologies.
[76] A. Winfree. The Geometry of Biological Time. Springer, 2001.
[77] Alan Wolf, Jack B. Swift, Harry L. Swinney, and John A. Vastano. Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena, 16(3):285–317, 1985.
[78] T. Yalcinkaya and Y. C. Lai. Phase synchronization of chaos. Phys. Rev. Lett., pages 3885–3888, 1997.
[79] B. Yegnanarayana. Artificial Neural Networks. PHI Learning, 2009.
[80] Yongguang Yu and Suochun Zhang. The synchronization of linearly bidirectional coupled chaotic systems. Chaos, Solitons and Fractals, 22(1):189–197, 2004.
[81] Meng Zhan, Gang Hu, and Junzhong Yang. Synchronization of chaos in coupled systems. Phys. Rev. E, 62:2963–2966, Aug 2000.
