


A Modeling Framework for Engineered

Complex Adaptive Systems
Moeed Haghnevis and Ronald G. Askin

Abstract—The objective of this paper is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in a certain engineered complex adaptive system. A conceptual framework is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The proposed modeling approach allows examining complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. Electrical power demand is used to illustrate the applicability of the modeling approach. We describe and use the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build our framework. The framework allows focus on the critical factors of an engineered system, but also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems without complex modeling. This paper adapts concepts of complex systems science to management science and system-of-systems engineering.

Index Terms—Complex adaptive systems (CASs), decentralization, emergence, engineered complexity, evolution, system of systems.

I. Introduction

TRADITIONALLY, we analyze a system by reductionism.

In other words, we study behaviors of large systems
by decomposing the system into components, analyzing the
components, and then inferring system behavior by aggregation of component behaviors. However, this bottom-up method
of describing systems often fails to analyze complex levels
and fully describe behavior. Holism reveals that the sum of
components is less than the whole system [1]. This idea
becomes important in studies of complex systems.
Complex systems have been widely studied; however, there
is not yet a comprehensive and widely accepted mathematical
model for engineered systems. Defense Research and Development Canada-Valcartier, Valcartier, QC, Canada, distributed
four comprehensive reports dedicated to the study of complex
systems. The first document provides 471 references and 713 related Internet addresses in a list of projects, organizations, journals, and conferences [2]. The second provides different formulations and measures of complexity [3]. Their glossary defines 335 related keywords [4]. An overview of theoretical concepts of complexity theory is presented in the fourth

Manuscript received October 29, 2010; revised June 28, 2011; accepted
January 12, 2012. Date of publication April 18, 2012; date of current version
August 21, 2012.
The authors are with the School of Computing, Informatics, and Decision
Systems Engineering, Arizona State University, Tempe, AZ 85287 USA
(e-mail: moeed.haghnevis@asu.edu; ron.askin@asu.edu).
Digital Object Identifier 10.1109/JSYST.2012.2190696

document [5]. Magee and Weck [6] classified complex systems

and presented several examples for each group. While these
surveys show the extent of prior research, they also indicate
the lack of a comprehensive engineering model and motivate
us to consider engineered complex adaptive systems (ECASs).
Current research (mentioned in the surveys) usually considers natural systems (biological, physical, and chemical
systems) where the emergence and evolutionary behaviors can
be studied by thermodynamic laws, biological rules, and their
intrinsic dynamics that are innate parts of these systems. However, in engineered systems, decision makers or system designers develop or define rules and procedures to engineer the outcomes and control the possibilities as needed. In ECASs, objectives are artificially defined and interoperabilities between
components can be manipulated to achieve desired goals;
however, objectives and interoperabilities of natural systems
are naturally embedded. These facts motivate us to propose a
new framework for modeling this class of complex adaptive
systems (CASs). Our framework does not design CASs; it enables us to control, or at least predict, the evolving behavior of an ECAS.
This paper considers the hallmarks of ECASs as emergence, evolution, and adaptation. We define emergence as the
capability of components of a system to do something or
present a new behavior in interaction and dependent to other
components that they are unable to do or present individually.
Also, we define evolution as a process of change and agility for
the whole system. Adaptation is the ability of systems to learn
and adjust to a new environment to promote their survival.
Similar definitions can be found in [4]. We will explain how
to study these hallmarks in our framework for ECASs in detail.
Study of CASs is challenging because of abstract theoretical
concepts, no applicable complete framework, and difficulty in
understanding emergence [1]. The main barrier to analyzing
ECASs by traditional methods stems from the theory of
complex systems that focuses on emergence at the lower level
and evolution at the upper system level whereas engineering
focuses on purposes and outcomes. Some research has considered complex system science in engineering environments.
Scope and Scale [7] studied properties of the structure of
complex systems and interdependence of components. The
complexity profile [8] helps measure the amount of information needed to describe each level of detail. These methods are
not mature enough to analyze and predict ECASs completely.
While electricity consumption profiles will be utilized for
illustration and validation, we will discuss how this framework
could likewise be applied in other ECASs, such as traffic and

© 2012 IEEE


crowd behaviors, wholesale marketing, health care systems,

urban design, robotics and AI, supply chain management,
modern defense sectors, and other meta-systems. The Electric Power Research Institute, Palo Alto, CA, estimates 26%
growth of electricity consumption by 2030 in the U.S. (1.7%
annually from 1996 to 2006) [9]. Electric power grids are
ECASs with high economic impact driven by the maximum
consumption rate and uniformity of aggregate regional demand. Applying our integrated model allows reduction of
disuniformity in electricity consumption. Economic incentives
motivate local consumers to adjust behavior to limit maximum
system usage.
One of the most engineered and mathematically modeled complex systems is complex networks. Previous studies
quantify dynamics of small-world networks [10] and model
evolutionary structure of population and components in social
networks [11]. For example, structural properties of the power
grids of Southern California [12] and New York [13] have
been analyzed. We will apply some of the concepts of complex
network science at the last step of our framework.
In this paper, we focus on human decision making. Humans
can adjust their structural artifacts and actions to respond to the
challenges and opportunities of their environment. This ability
usually increases complexity. Three approaches developed to mimic human decision behaviors are classified in [14]. Most
of the research on human networks assumes some kind of
hierarchy in the system. These studies are useful in organizational systems that have different levels of authority, such as
military and education systems that have leaders and followers.
However, complexities in heterarchical systems (in which components share the same authority) have not been studied.
The remainder of this paper is organized as follows. Section II presents our framework. Hallmarks and theoretical concepts of complexity are considered in building this framework.
Other sections are mapped to the profiles of the framework.
Sections III and IV detail the mathematical mechanisms
of features and relationships of components (step 1 of the
framework). These lead to analyzing the interoperabilities that
induce emergence in Section V (step 2). Evolution of traits
as the process of system adaptation and their response to the
changes is covered in Section VI (steps 3, 4). Various examples
demonstrate the validity of our method in each section.
II. Framework for Engineered Complex
Adaptive Systems
Couture and Charpentier [1] and Mostashari and Sussman [15] each presented frameworks to study complex systems.
Prokopenko et al. [16] depicted complex system science
concepts. Also, Sheard and Mostashari [17] visualized characteristics of complex systems. Frameworks for ECASs are still
incomplete and fragmented. In this paper, we propose a more
detailed framework for ECASs (Fig. 1). The framework can
help us focus on critical factors that change the states of an
ECAS, and enables us to synthetically employ engineering and
mathematical models to analyze and measure complexity in an
adaptive system without complex modeling. Four profiles of
ECASs and their characteristics are presented at the component and system levels to show the behavior of the three hallmarks.

Fig. 1. Framework for engineered complex adaptive systems.

In our proposed approach, a preparatory step identifies
adaptive complexity in an engineered system. This step is
necessary to make sure we do not spend unnecessary resources
to analyze a normal system as a complex system. To identify
a complex engineered system, we check [18] the following.
1) System structure:
a) the system displays no, or incomplete, central organization (prescriptive hierarchically controlled systems are assumed not to be complex systems);
b) behavioral interactions among components at lower levels are revealed by observing behavior of the system at a higher level.
2) Analysis of system behavior:
a) analyzing components fails to explain higher level behavior;
b) a reductionist approach does not satisfactorily describe the whole system.
Total electricity consumption grows every year, affecting the
topology of power grids. Some researchers believe this huge
growth supports the idea of transformation from a centralized
network to a less centralized one (from producer-controlled
to consumer-interactive). This decentralization results in complexity in this system by decreasing central organization.
Moreover, the interaction of physics with the design of the
transmission links increases its complexity as do the diversity
of people, their interdependences, and their willingness to
cooperate. Time dependence of the network [19], scale-free
or single-scale feature of these networks (their node degree
distribution follows a power-law or Gaussian distribution in the long run) [20], and human decisions based on other consumers
all justify considering the electric power grid as an ECAS.
These factors have placed the U.S. power grid beyond the
capability of mathematical modeling to date [13].
To take advantage of the fundamental theories of complex systems, we study and analyze complex systems based


on the framework in Fig. 1. Systems are composed of

components. Components possess individual features and interoperable behaviors. Systems then have traits and learning
behaviors. Together, these form the system profile comprised
of the following aspects (we define the state of each profile in the sections noted below).
1) Features (components readjust themselves continuously): Here, dissection of features leads to decomposability (e.g., number of each component type and
patterns of individual behaviors) and willingness (e.g.,
fitness rate of each component and behavioral/decision
rules). The environment of the system may also affect
component actions. A measurable property of this profile
is self-information (entropy) of components. Entropy
is increased with the diversity of components and is
decreased with their compatibility. Sections III and IV
mathematically model and analyze the dissection of
features and show how self-organization appears.
2) Interoperabilities (components update their interdependences): In this profile, emergence as the hallmark of interoperability shows what components can do in interaction with, and dependence on, other components that they would not do individually. Components have exchangeability and synchronization. Autonomy increases and dependence decreases the interrelationship of components. This profile helps us to infer the behavior of the components. Section V models this profile.
3) Traits (system tries to improve its efficiency and effectiveness): In this profile, systems may evolve. The
whole system applies its resilience and agility to perform more effectively and efficiently. Categories
of trait structures or behaviors will be considered here.
The threshold for changing the nature or perceived
characteristic of the system is the measurable property
of this profile. It is discussed in Section VI.
4) Learning (system has flexibility to perform in unforeseen
situations): After evolving, the system must adapt to the
new situation. Systems need to be adaptive to survive;
otherwise, they may collapse in dynamic conditions.
Flexibility and robustness allow systems to adapt and
show the performance of the system. In some studies, adaptation is one kind of evolution while other
researchers delineate a difference between evolution and
adaptation (modeled in Section VI).
We define complexity of a system with the measurable properties of the profiles: entropy (E), interoperabilities (I), and evolution thresholds (θ). E measures diversity versus compatibility of component features (Sections III, IV). The I values define sensitivity (autonomy versus dependence) to other related components and their effects (Section V). The θ values are milestones for changes and adjustments in the system performance that can differentiate trait categories (Section VI). In addition, a system may have a goal. In our case, this is to minimize disuniformity of electricity demand, D, to be formally defined later in this paper.
The framework starts with dissection of features. First, we
study dynamics of components similar to noncomplex systems


(Sections III-A, III-B). Then, we define a new measure to

depict the relationships (Section III-C). These relationships
are the initial source of emergence and are defined based on
ECAS goals (in natural CASs, unlike ECASs, this measure is embedded in the system and must be found by analyzing the system behavior).
Then, we focus on the emergence phenomena of ECASs as
the core concept of complex adaptive behaviors and the source
of dynamic evolution. We present a comprehensive section on
dissection of features and propose four detailed theorems to
show controllability and predictability of the framework at the
emergence level of a system. Then, we generalize the theorems
in the comprehensive theory of mechanisms of components for
ECASs (Section IV).
To distinguish an ECAS from a pure multiagent system
(MAS), we define interoperability as the behavioral changes
that are caused by interactions (Section V). In MASs components have relationships; however, in CASs the interactions
and behaviors evolve. Interoperability shows how components
cooperate/compete based on other components and interactions to evolve and adapt to new environments (see new
measures in Section V). While either would suffice, we use the
term interoperability instead of interaction to indicate information sharing and beneficial behavior coordination. Finally, the
framework shows the adaptability and learning behavior of a
system in Section VI.
III. Dissection of Features
Various studies apply the concept of information theory to
study system complexities. The key point is that the required
length to describe a system is related to its complexity [21].
Yu and Efstathiou defined a complexity measure based on
entropy and a quantitative method to evaluate the performance
of manufacturing networks [22]. These studies applied the
concept of entropy in their research; however, they did not
discuss other hallmarks of CASs. Here, we start with the same
idea and then extend it to the other hallmarks.
A. Exponential Fitness
Consider a system of components with n different patterns of behavior. For example, there may be n daily electricity usage profiles for the different classes of consumers. If the population of pattern i (X_i, i = 1, ..., n) changes exponentially with fitness rate b_i, then
$$\frac{dX_i}{dt} = b_i X_i, \quad \text{or in discrete time,} \quad X_i(t+1) = b_i X_i(t) + X_i(t). \tag{1}$$
To increase the readability of the formulation in the following sections, all t's are suppressed from the expressions except when necessary to compare different times. The probabilities of the patterns can be measured by the percentage of each pattern
$$P_i = \frac{X_i}{\sum_j X_j}. \tag{2}$$
We obtain the growth equation for the percentage of each group
$$\frac{dP_i}{dt} = \frac{b_i X_i \sum_j X_j - X_i \sum_j b_j X_j}{\big(\sum_j X_j\big)^2} = P_i\Big(b_i - \sum_j b_j P_j\Big). \tag{3}$$


In the long run, we may assume small periods of t as continuous intervals. In continuous time, the exponential function (4) replaces (1)
$$X_i = \alpha_i e^{b_i t}, \quad \text{so} \quad \frac{dX_i}{dt} = \alpha_i b_i e^{b_i t} \tag{4}$$
and the growth equation for the percentage of each group again reduces to
$$\frac{dP_i}{dt} = P_i\Big(b_i - \sum_j b_j P_j\Big) \tag{5}$$
where P_i = α_i e^{b_i t} / Σ_j α_j e^{b_j t}. To find self-information of components, we can measure the entropy of the population by
$$E = -\sum_i P_i \log_2 P_i. \tag{6}$$
So the growth of entropy is
$$\frac{dE}{dt} = -\sum_i \frac{dP_i}{dt}\Big(\frac{1}{\ln 2} + \log_2 P_i\Big). \tag{7}$$
From (3) and (7), since Σ_i dP_i/dt = 0,
$$\frac{dE}{dt} = \sum_i b_i P_i\Big(\sum_j P_j \log_2 P_j - \log_2 P_i\Big). \tag{8}$$
B. Logistic Fitness

If population X_i has limit L_i, its growth follows a logistic function, and (1) changes to
$$\frac{dX_i}{dt} = b_i X_i\Big(1 - \frac{X_i}{L_i}\Big). \tag{9}$$
The growth equation for the percentage of each group is then
$$\frac{dP_i}{dt} = b_i P_i\Big(1 - \frac{X_i}{L_i}\Big) - P_i\sum_j b_j P_j\Big(1 - \frac{X_j}{L_j}\Big). \tag{10}$$
Define the growth potential φ_i = 1 − X_i/L_i. Thus, (3) becomes
$$\frac{dP_i}{dt} = P_i\Big(b_i\varphi_i - \sum_j b_j\varphi_j P_j\Big). \tag{11}$$
From (11) and (7), (8) can be rewritten as follows:
$$\frac{dE}{dt} = \sum_i \varphi_i b_i P_i\Big(\sum_j P_j \log_2 P_j - \log_2 P_i\Big). \tag{12}$$
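The logistic counterpart can be verified the same way. In this sketch (our illustration; the initial populations `X0`, limits `Lim`, and rates `b` are assumed values), the closed-form logistic solution is differentiated numerically and compared against (12):

```python
import math

X0 = [10.0, 40.0, 25.0]      # assumed initial populations
Lim = [200.0, 120.0, 150.0]  # assumed logistic limits L_i
b = [0.2, 0.1, 0.3]          # assumed fitness rates

def logistic_X(x0, L, bi, t):
    # closed-form solution of dX/dt = b X (1 - X/L)
    return L / (1.0 + (L - x0) / x0 * math.exp(-bi * t))

def probs(t):
    x = [logistic_X(x0, L, bi, t) for x0, L, bi in zip(X0, Lim, b)]
    s = sum(x)
    return [xi / s for xi in x]

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p)

def dE_dt_formula(t):
    # Eq. (12): dE/dt = sum_i phi_i b_i P_i (sum_j P_j log2 P_j - log2 P_i)
    x = [logistic_X(x0, L, bi, t) for x0, L, bi in zip(X0, Lim, b)]
    p = probs(t)
    phi = [1.0 - xi / L for xi, L in zip(x, Lim)]
    plogp = sum(pj * math.log2(pj) for pj in p)
    return sum(f * bi * pi * (plogp - math.log2(pi))
               for f, bi, pi in zip(phi, b, p))

t, h = 3.0, 1e-5
numeric = (entropy(probs(t + h)) - entropy(probs(t - h))) / (2 * h)
print(abs(numeric - dE_dt_formula(t)))  # small: the two sides of (12) agree
```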
Growth of entropy shows how the population changes in time by the exponential or logistic function (entropy is self-information). However, it is not sufficient for interpreting the combination of components, as any combination of three components with probabilities 0.3, 0.3, and 0.4 leads to the same entropy. In addition, engineered systems have a defined goal that is not captured by the entropy (we call its measure disuniformity).
C. Disuniformity
Let C_i^t(w) be the average consumption of electricity at time w for pattern i in period t. The disuniformity of pattern i in period t is
$$D_i(t) = \int_0^{w_0} \big(C_i^t(w) - \bar{C}_i^t\big)^2\, dw \tag{13}$$
where C̄_i^t = (1/w_0) ∫_0^{w_0} C_i^t(w) dw and w is a cyclic time in period t.
For example, if we want to show patterns of consumption in each quarterly season for the next 20 years, w_0 covers the 24 h of consumption each day while t (t = 1, ..., 80) indexes each season. The pattern of consumption in the first season, C_i^1(w), may differ from that in the second one, C_i^2(w). We will illustrate how disuniformity can be extended to other ECASs. At first glance, the disuniformity of an individual component, (13), looks similar to variance. We do not use this term because the consumption is not treated as a random variable. Furthermore, it is customary to refer to the variance as the range/noise of consumption at a specific time w.
The control objective is to minimize the disuniformity (consumers cooperate to have uniform aggregate consumption at each time). Thus, we seek to minimize
$$D = \int_0^{w_0} \left(\frac{\sum_{i\in S} C_i(w) X_i}{\sum_{i\in S} X_i} - \frac{1}{w_0}\int_0^{w_0} \frac{\sum_{i\in S} C_i(w) X_i}{\sum_{i\in S} X_i}\, dw\right)^2 dw \tag{14}$$

where population S is a connected graph of components to

show their interactions. These interactions are the source of
interoperabilities in Section V. Note that we suppress the t's in our formulas to increase readability; however, D, C, and X are functions of t.
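Definitions (13) and (14) can be discretized directly. Below is a minimal sketch we added (function names and the sample 24-h profiles are assumed, not from the paper) that approximates the integrals with a Riemann sum over sampled w:

```python
import math

def disuniformity(profile, dw):
    # Eq. (13): integral over the cycle of (C(w) - mean)^2 dw, via a Riemann sum
    mean = sum(profile) / len(profile)
    return sum((c - mean) ** 2 for c in profile) * dw

def aggregate_disuniformity(profiles, X, dw):
    # Eq. (14): disuniformity of the population-weighted aggregate profile
    total = sum(X)
    agg = [sum(Xi * p[k] for Xi, p in zip(X, profiles)) / total
           for k in range(len(profiles[0]))]
    return disuniformity(agg, dw)

# assumed 24-h profiles sampled hourly: a peaked pattern and a flat one
dw = 1.0
peaked = [10 + 5 * math.sin(2 * math.pi * k / 24) for k in range(24)]
flat = [10.0] * 24
print(aggregate_disuniformity([flat, flat], [30, 70], dw))  # 0.0: uniform aggregate
# mixing flat consumers into a peaked population lowers aggregate disuniformity
print(aggregate_disuniformity([peaked, flat], [30, 70], dw) < disuniformity(peaked, dw))
```

The second comparison illustrates the control objective: shifting population share toward flatter patterns reduces D.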
Generally, we define disuniformity as a normalized measure
of difference between the current state of components and the
goal state. Disuniformity could be reduced by incentives that
change one or more profiles or rearrange class probabilities
(source of self-organization).
Here, we use disuniformity to show how the system behaves
as an ECAS (we will show how it causes dependences between
behaviors later). Concepts from information theory are adapted
to describe complexity, self-organization and emergence in
the context of our ECASs [16]. Controlling disuniformity
is a source of self-organization in ECASs (see Section IV).
Shalizi [23] and Shalizi et al. [24] defined a quantitative measure of self-organization for discrete random fields (e.g., cellular automata). We reinterpret these concepts to apply them in ECASs
that may have continuous states and, unlike natural physical
systems, may not have a natural embedded energy dynamic or
self-directing law. Self-organization and adaptive agents are
analyzed by [25]. We will extend these concepts to all hallmarks of ECASs. Bashkirov [26] described self-organization
in a complex system by using Renyi and Gibbs-Shannon
entropy. These studies are applicable in natural and physical
systems. For example, a biological application, gene-gene
and gene-environment interactions, is identified by interaction
information and generalization of mutual information in [27].

IV. Entropy Versus Disuniformity, Source of Self-Organization
In this section, we connect the concepts of entropy and disuniformity for component patterns. We prove lemmas for a system with two components that interact in a basic dominance scenario. Then, we generalize our lemmas to more complicated structures of patterns and behaviors for the n-component case. These theorems allow us to control and predict the behaviors of features and their relationships, and enable us to study emergence by modeling interoperability in the next section.
Definition I:
1) Dominance: behavior i dominates behavior j (i ⪰ j) if D_i ≤ D_j.
2) Strict positive dominance: behavior i strictly positively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| ≤ |C_j(w) − C̄_j| for all w, and sgn(C_i(w) − C̄_i) = sgn(C_j(w) − C̄_j) for all w.
3) Positive dominance: behavior i positively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| > |C_j(w) − C̄_j| for some w, and sgn(C_i(w) − C̄_i) = sgn(C_j(w) − C̄_j) for all w.
4) Strict negative dominance: behavior i strictly negatively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| ≤ |C_j(w) − C̄_j| for all w, and sgn(C_i(w) − C̄_i) = −sgn(C_j(w) − C̄_j) for all w.
5) Negative dominance: behavior i negatively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| > |C_j(w) − C̄_j| for some w, and sgn(C_i(w) − C̄_i) = −sgn(C_j(w) − C̄_j) for all w.
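On sampled profiles these definitions can be checked mechanically. The sketch below is our illustration (function names are assumptions); deviations are taken from each pattern's own cyclic mean, as in (13), and a small tolerance guards the sign and magnitude tests against floating-point noise.

```python
import math

def sgn(x, eps=1e-12):
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

def classify_dominance(Ci, Cj):
    # deviations of each sampled profile from its own cyclic mean
    di = [c - sum(Ci) / len(Ci) for c in Ci]
    dj = [c - sum(Cj) / len(Cj) for c in Cj]
    Di = sum(d * d for d in di)   # disuniformity (13), up to the dw factor
    Dj = sum(d * d for d in dj)
    if Di >= Dj:
        return "no dominance of i over j"
    same = all(sgn(a) == sgn(b) for a, b in zip(di, dj))
    opposite = all(sgn(a) == -sgn(b) for a, b in zip(di, dj))
    small = all(abs(a) <= abs(b) + 1e-12 for a, b in zip(di, dj))
    if same:
        return "strict positive" if small else "positive"
    if opposite:
        return "strict negative" if small else "negative"
    return "dominance only"

w = [2 * math.pi * k / 24 for k in range(24)]
base = [math.sin(x) for x in w]
print(classify_dominance([0.5 * s for s in base], base))   # strict positive
print(classify_dominance([-0.5 * s for s in base], base))  # strict negative
```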
Note that "E is increasing in time" (E↑) means E(t + 1) > E(t) and (E↓) means E(t + 1) < E(t). We use the same notation for (D↑) and (D↓). Here, P_i refers to P_i(t).
Lemma I: Given two different patterns of behavior (i and j) in the population, where i strictly positively dominates j:
I.1) P_i < P_j (X_i < X_j) and b_i > b_j iff E is increasing in time (E↑) and D decreases in time (D↓);
I.2) P_i > P_j (X_i > X_j) and b_i > b_j iff E is decreasing in time (E↓) and D decreases in time (D↓);
I.3) P_i < P_j (X_i < X_j) and b_i < b_j iff E is decreasing in time (E↓) and D increases in time (D↑);
I.4) P_i > P_j (X_i > X_j) and b_i < b_j iff E is increasing in time (E↑) and D increases in time (D↑).


Proof (Sufficiency of Lemma I): We are given n = 2, P_i(t) + P_j(t) = 1, and P_i(t) < P_j(t); so P_i(t) < 1/2 and P_j(t) > 1/2. Also, b_i > b_j results in
$$P_i(t+1) = \frac{b_i X_i(t) + X_i(t)}{b_i X_i(t) + X_i(t) + b_j X_j(t) + X_j(t)} > \frac{X_i(t)}{X_i(t) + X_j(t)} = P_i(t)$$
thus P_i(t) < P_i(t+1) and, similarly, P_j(t) > P_j(t+1). So the probabilities are closer to a uniform distribution (P_i is closer to P_j) at t + 1.
Recall that the uniform distribution of the X_i's (frequencies of patterns) gives the maximum entropy of the system (see [28] for proof). Suppose P_i = 1/n is the uniform probability mass function for X_i, i = 1, ..., n; then the maximum entropy of the system is log_2 n.
From this recall, max(E) = 1 when n = 2 and P_i(t) = P_j(t) = 1/2; hence, E is an increasing function of time t, i.e., E(t + 1) > E(t), while P_i(t) < P_j(t).
Furthermore, because i strictly positively dominates j for all time intervals w, and has the same sign as j, increasing the portion of i decreases disuniformity in (14), because here
$$|C_i(w) - \bar{C}_i| \le |C_j(w) - \bar{C}_j|, \qquad \frac{X_i(t)}{X_j(t)} < \frac{X_i(t+1)}{X_j(t+1)}$$
and D_i < D_j; therefore, D(t + 1) < D(t), i.e., D decreases.

It is easy to show that in Lemma I.2) E is a decreasing function of t, and the same argument applies for I.3) and I.4).
Necessity of Lemma I (Proof by Contradiction): Suppose E increases and D decreases but one or both conditions of Lemma I.1) do not hold. In this case, the necessary conditions for one of I.2), I.3), or I.4) hold. For example, if b_i > b_j but X_i > X_j instead of X_i < X_j, this is Lemma I.2) and E decreases, which contradicts our assumption of I.1). Note that we do not consider b_i = b_j or P_i = P_j, because these are neutral cases and have no effect. So all four combinations of the b's and P's are covered by this lemma.
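Lemma I.1) can be illustrated numerically. The sketch below is our construction (the profiles, populations, and rates are assumed values): pattern i's deviation is 0.5·s(w) and pattern j's is 1.0·s(w), same sign everywhere, so i strictly positively dominates j; starting with P_i < P_j and b_i > b_j, E should rise while D falls.

```python
import math

def entropy2(p):
    # binary entropy of the two-pattern population
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

w = [2 * math.pi * k / 24 for k in range(24)]
shape = [math.sin(x) for x in w]

def aggregate_D(Xi, Xj):
    # disuniformity (14) of the weighted aggregate, up to the dw factor:
    # aggregate deviation per w is ((0.5 Xi + 1.0 Xj)/(Xi+Xj)) * s(w)
    coef = (0.5 * Xi + 1.0 * Xj) / (Xi + Xj)
    return sum((coef * s) ** 2 for s in shape)

Xi, Xj, bi, bj = 10.0, 40.0, 0.2, 0.05  # P_i < P_j and b_i > b_j
E_hist, D_hist = [], []
for _ in range(8):                       # P_i stays below 1/2 over these steps
    E_hist.append(entropy2(Xi / (Xi + Xj)))
    D_hist.append(aggregate_D(Xi, Xj))
    Xi, Xj = Xi * (1 + bi), Xj * (1 + bj)

print(all(a < b for a, b in zip(E_hist, E_hist[1:])))  # E increasing
print(all(a > b for a, b in zip(D_hist, D_hist[1:])))  # D decreasing
```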
Corollary I: When the conditions of Lemma I hold and t → ∞:
I.1) in exponential growth, D_i is a lower bound for D and E ∈ (0, 1) when D decreases [Lemma I.1), I.2)]; also, D_j is an upper bound for D and E ∈ (0, 1) when D increases [Lemma I.3), I.4)];
I.2) consider logistic growth, where f, f′, g, and g′ are functions of the logistic limits L_i; then max{D_i, f(L_i)} is a lower bound for D and E ∈ (0, g(L_i)) when D decreases [Lemma I.1), I.2)]; also, min{D_j, f′(L_j)} is an upper bound for D and E ∈ (0, g′(L_j)) when D increases [Lemma I.3), I.4)].
Proof: In Corollary I.1), D decreases when the proportion X_i/X_j increases (due to the dominance condition), so min(D) = D_i when all components are i (X_i/X_j → ∞ and E → 0). And D increases when the proportion X_i/X_j decreases, so max(D) = D_j when all components are j (X_i/X_j → 0 and E → 0). However, max(E) = log_2 n and n = 2, so max(E) = 1 and E is bounded.
When the fitness follows a logistic function [Corollary I.2)], we have limits for the number of i's and j's: X_i/X_j < ∞ if X_j ≠ 0 and X_i/X_j > 0 if X_i ≠ 0. So min(D) is a function of the limit of i when X_i/X_j increases, and max(D) is a function of the limit of j when X_i/X_j decreases. Clearly, min(D) = D_i when X_j = 0 and max(D) = D_j when X_i = 0. Using the same argument, we can find the range of E, which is a function of the limits.
Theorem I: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0 ∀k ∈ S, and i strictly positively dominates j for i ∈ S′ and j ∈ S − S′:
I.1) E < −log_2 P_i (Σ_{i∈S} P_i log_2 P_i > log_2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D decreases in time (D↓);
I.2) E > −log_2 P_i (Σ_{i∈S} P_i log_2 P_i < log_2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D decreases in time (D↓);
I.3) E < −log_2 P_i (Σ_{i∈S} P_i log_2 P_i > log_2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D increases in time (D↑);
I.4) E > −log_2 P_i (Σ_{i∈S} P_i log_2 P_i < log_2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D increases in time (D↑).
Proof (Sufficiency of Theorem I): This theorem generalizes Lemma I to n components. As in Lemma I, the entropy of the system increases when the probability distribution of the components moves closer to the uniform distribution. This happens, for exponential growth in (8) or for logistic growth in (12), when Σ_{i∈S} P_i log_2 P_i = log_2 P_i. To reach this point, E increases if there is a larger fitness rate for components whose probability is less than under the uniform distribution. In general, larger fitness rates increase the entropy if −log_2 P_i > E [Theorem I.1)] for the cases where we cannot reach the uniform distribution, or if we want to compare some components that all have smaller or larger probabilities than uniform.
As in Lemma I, increasing the number of dominant components decreases the total disuniformity (14). The same argument proves Theorem I.2), I.3), and I.4). We can also prove the necessity of Theorem I by contradiction.
Corollary II: When the conditions of Theorem I hold and t → ∞:
II.1) Corollary I.1) can be generalized to n components in Theorem I with E ∈ (0, log_2 n);
II.2) Corollary I.2) can be generalized to n components in Theorem I with different f, f′, g, and g′ functions.
Note that b_k > 0 ∀k ∈ S means all X_i's are growing over time; however, some P_i's may decrease.
Lemma II: Given two different patterns of behavior (i and j) in the population, where i positively dominates j, Lemma I.1), I.2), I.3), and I.4) and Corollary I.1) and I.2) are valid.
Proof: This is a generalization of Lemma I to the positive dominance case. This case allows j to dominate i in some time intervals w; however, the proof is still valid because D is total disuniformity.
Theorem II: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0 ∀k ∈ S, and i positively dominates j for i ∈ S′ and j ∈ S − S′, Theorem I.1), I.2), I.3), and I.4) and Corollary II.1) and II.2) are valid.
Proof: This theorem is a generalization of Lemma II to n components. We can use the same argument that generalized Lemma I to Theorem I to generalize Lemma II to Theorem II.
Example 1 (Features in Fig. 1): Assume there are 100 components in a complex system that follow only three patterns i, j, and k. At time t = 1, 15% of components follow pattern i, 65% follow j, and 20% follow k. Let b_i = 0.2, b_j = 0.1, and b_k = 0.3. Fig. 2(a) shows the patterns of electricity consumption over 24 h. The objective is to simulate and analyze the complex system for the next 20 years (80 seasons).
At t = 1 the system follows Theorem II.1):
P_i(1) = 0.15, P_j(1) = 0.65, P_k(1) = 0.2, E(1) = 1.28, D(1) = 65.08.
At t = 9 we have max(E) [the system follows Theorem II.2)]:
P_i(9) = 0.18, P_j(9) = 0.38, P_k(9) = 0.44, E(9) = 1.49, D(9) = 59.08.
At t = 19 disuniformity starts increasing again:
P_i(19) = 0.13, P_j(19) = 0.12, P_k(19) = 0.75, E(19) = 1.07, D(19) = 57.45 (D(18) = 57.36).
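The probabilities and entropies of Example 1 can be reproduced with the discrete update X_i(t+1) = (1 + b_i)X_i(t) from (1). This is a sketch we added; the disuniformity values require the consumption patterns of Fig. 2(a), so only P and E are checked here.

```python
import math

def entropy(p):
    # Eq. (6)
    return -sum(pi * math.log2(pi) for pi in p)

P = {1: [0.15, 0.65, 0.20]}      # shares of patterns i, j, k at t = 1
b = [0.2, 0.1, 0.3]              # fitness rates from Example 1
X = list(P[1])
for t in range(2, 20):
    X = [xi * (1 + bi) for xi, bi in zip(X, b)]
    s = sum(X)
    P[t] = [xi / s for xi in X]

print([round(p, 2) for p in P[9]])    # [0.18, 0.38, 0.44] as in the text
print([round(p, 2) for p in P[19]])   # [0.13, 0.12, 0.75] as in the text
print(round(entropy(P[1]), 2))        # 1.28
print(round(entropy(P[9]), 2))        # 1.49
print(round(entropy(P[19]), 2))       # 1.07
```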

Fig. 2. Example for Theorem II. (a) Patterns. (b) Fitness. (c) D versus E.
Fig. 2(b) shows the probability changes and Fig. 2(c)

presents the behavior of components and simulates entropy and
disuniformity of the system for 80 seasons. Fig. 2(c) shows
the three different possible areas for Theorem II.
Lemma III: Given two different patterns of behavior (i and j) in the population, where i strictly negatively dominates j:
III.1) P_i < P_j (X_i < X_j) and b_i > b_j iff E is increasing in time (E↑) and D decreases in time (D↓) until D = 0 (∫ X_i (C_i(w) − C̄_i) dw = ∫ X_j (C_j(w) − C̄_j) dw), afterward D increases in time (D↑);
III.2) P_i > P_j (X_i > X_j) and b_i > b_j iff E is decreasing in time (E↓) and D decreases in time (D↓) until D = 0 (∫ X_i (C_i(w) − C̄_i) dw = ∫ X_j (C_j(w) − C̄_j) dw), afterward D increases in time (D↑);
III.3) P_i < P_j (X_i < X_j) and b_i < b_j iff E is decreasing in time (E↓) and D increases in time (D↑);
III.4) P_i > P_j (X_i > X_j) and b_i < b_j iff E is increasing in time (E↑) and D increases in time (D↑).
Proof: To prove this lemma, we consider opposite sgn(C_i(w) − C̄_i) and sgn(C_j(w) − C̄_j) for all w. So the total disuniformity decreases until it reaches 0 and increases after that [because of the power of 2 in (14)]. D = 0 when the weighted disuniformity of all components i equals the weighted disuniformity of all components j. When the total disuniformity increases [Lemma III.3), III.4)] we do not need to consider any minimum point, because the function is non-decreasing.

Corollary III: When the conditions of Lemma III hold and t → ∞:
III.1) in exponential growth, there exists ε ≥ 0 such that D → ε (ε is a lower bound for D) and E ∈ (0, 1) when D decreases [Lemma III.1), III.2)]; also, D_j is an upper bound for D and E ∈ (0, 1) when D increases [Lemma III.3), III.4)];
III.2) in logistic growth, max{0, f(L_i)} is a lower bound for D and E ∈ (0, g(L_i)) when D decreases [Lemma III.1), III.2)]; also, min{D_j, f′(L_j)} is an upper bound for D and E ∈ (0, g′(L_j)) when D increases [Lemma III.3), III.4)].
Proof: The proof is similar to Corollary I; however, for a specific w = w_0 where ∫ X_i (C_i(w_0) − C̄_i) dw_0 ≈ ∫ X_j (C_j(w_0) − C̄_j) dw_0, we have D ≈ 0. This point may happen before all components become similar to i's, so min(D) = 0 where E ≠ 0, and E = 0 where D ≠ 0.
Theorem III: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0, ∀k ∈ S, and i ≻ j for i ∈ S′ and j ∈ S − S′:
III.1) E < −log2 P_i (i.e., Σ_{i∈S} P_i log2 P_i > log2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D decreases in time (D↓) until D = 0 (∫X_i(C_i(w) − C̄_i)dw = ∫X_j(C_j(w) − C̄_j)dw), afterward D increases in time (D↑);
III.2) E > −log2 P_i (i.e., Σ_{i∈S} P_i log2 P_i < log2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D decreases in time (D↓) until D = 0 (∫X_i(C_i(w) − C̄_i)dw = ∫X_j(C_j(w) − C̄_j)dw), afterward D increases in time (D↑);
III.3) E < −log2 P_i (i.e., Σ_{i∈S} P_i log2 P_i > log2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D increases in time (D↑);
III.4) E > −log2 P_i (i.e., Σ_{i∈S} P_i log2 P_i < log2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D increases in time (D↑).
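The directional claims in Theorem III can be checked numerically for two patterns. The sketch below is illustrative only: it assumes exponential growth X_i(t) = X_i(0)e^{b_i t} and uses made-up initial values and fitness rates; E is the entropy −Σ P_i log2 P_i.

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def populations(t, x0, b):
    """Assumed exponential growth X_i(t) = X_i(0) * exp(b_i * t)."""
    return [x * math.exp(r * t) for x, r in zip(x0, b)]

x0, b = [10.0, 90.0], [0.30, 0.05]   # pattern i grows faster: b_i > b_j
E = []
for t in range(12):
    X = populations(t, x0, b)
    P = [x / sum(X) for x in X]
    E.append((P[0], entropy(P)))

# While E < -log2(P_i), entropy increases (case III.1);
# once E > -log2(P_i), entropy decreases (case III.2).
for (p, e), (p2, e2) in zip(E, E[1:]):
    if e < -math.log2(p) and e2 < -math.log2(p2):
        assert e2 > e    # increasing branch
    if e > -math.log2(p) and e2 > -math.log2(p2):
        assert e2 < e    # decreasing branch
```

With two patterns the switch happens as the dominant pattern's share P_i crosses 2^{−E}, which matches the iff conditions of cases III.1) and III.2).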
Corollary IV: When the conditions of Theorem III hold and t → ∞:
IV.1) Corollary III.1) can be generalized to the n components in Theorem III with E ∈ (0, log2 n);
IV.2) Corollary III.2) can be generalized to the n components in Theorem III with different f, f′, g, and g′.
Lemma IV: Given two different patterns of behavior (i and j) in the population and i ≺ j, Lemmas III.1)–III.4) and Corollaries III.1) and III.2) apply.
Theorem IV: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0, ∀k ∈ S, and i ≺ j for i ∈ S′ and j ∈ S − S′, Theorems III.1)–III.4) and Corollaries IV.1) and IV.2) apply.
Example 2 (Features in Fig. 1): Modify Example 1 to three components with negative dominance, Theorem IV [Fig. 3(a)]. Fig. 3(b) shows the behavior of the complex system. Fig. 3(c) shows the different possible cases of Theorem IV for the scenario of Fig. 3(b).
Summary: We can summarize the results of Theorems I, II, III, and IV in Table I and conclude Theorem V as a general theorem to control decomposability and willingness of components of a complex system in all dominance cases.

Fig. 3. Example for Theorem IV. (a) Pattern. (b) Fitness. (c) D versus E.
Theorem V (Mechanisms of Components): If i ≻ j, i.e., is dominate js, the disuniformity of the system is decreasing in time if the entropy increases in time when −log2 P_i > E, or the entropy decreases in time when −log2 P_i < E, while ∫X_i(C_i(w) − C̄_i)dw < ∫X_j(C_j(w) − C̄_j)dw for both cases.
We can apply this theorem to control, or at least predict, the complex behaviors in large ECASs. Here, we provide incentives that motivate the components to decrease the disuniformity by adjusting their patterns (this adjustment changes the fitness rates b_i dynamically). This heterarchical rearrangement, driven by external changes to the environment but without central organization, is a source of self-organization in components. As an illustration, assume n patterns of consumption in a system. When n is large (e.g., the patterns of consumers in a large metropolitan area), it is impossible to control and predict all behaviors and their relationships. We can instead focus on a few groups (pattern i where −log2 P_i > E) and increase the entropy by motivating other consumers to adjust to this pattern (migrate to this pattern or increase its fitness portion). This phenomenon makes the fitness rates nonlinear, complex, and dynamic, i.e., b_i = K(R(D); E). Here, K is a function of R(D) and the population of other patterns (i.e., E). R(D) represents the motivations based on D (e.g., rewards that consumers receive for cooperating to reduce the disuniformity). These changes in the b_i make the X_i dependent on each other. To predict the behaviors at each time, we can map the system conditions (dominance, entropy, and fitness rates) to an appropriate theorem.

TABLE I
Summary of Emergence*

  Condition         b_i > b_j            b_i < b_j
  −log2 P_i > E     E↑, D↓ then D↑       E↓, D↑
  −log2 P_i < E     E↓, D↓ then D↑       E↑, D↑

*Note: "D↓ then D↑" means ∫X_i(C_i(w) − C̄_i)dw > ∫X_j(C_j(w) − C̄_j)dw changes to ∫X_i(C_i(w) − C̄_i)dw < ∫X_j(C_j(w) − C̄_j)dw in time, or vice versa.

In the next section, we will show how we can control the interoperability between patterns by using a third pattern (catalyst), i.e., indirectly utilize Theorem V to decrease the disuniformity.
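Table I is effectively a lookup from (entropy condition, fitness comparison) to the predicted trends of E and D. A minimal sketch, with function and label names of my own choosing (not the paper's):

```python
def predicted_trends(info_i_exceeds_E, bi_greater_bj):
    """Map the dominance cases of Theorem III to (entropy trend, disuniformity trend).

    info_i_exceeds_E: True if -log2(P_i) > E for the dominant pattern i.
    bi_greater_bj:    True if the dominant pattern's fitness rate is larger.
    'down-up' means D decreases until it reaches 0, then increases.
    """
    if info_i_exceeds_E and bi_greater_bj:       # case III.1
        return ("up", "down-up")
    if not info_i_exceeds_E and bi_greater_bj:   # case III.2
        return ("down", "down-up")
    if info_i_exceeds_E and not bi_greater_bj:   # case III.3
        return ("down", "up")
    return ("up", "up")                          # case III.4

assert predicted_trends(True, True) == ("up", "down-up")
assert predicted_trends(False, False) == ("up", "up")
```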

V. Emergence as the Effect of Interoperability

In this step of the framework, we study the engineering concept of emergence in ECASs. Bar-Yam [29] conceptually and mathematically showed the possibility of defining a notion of emergence and described four concepts of emergence. A conceptual classification of emergence was proposed by Halley and Winkler [30]. Prokopenko et al. [16] interpreted the concepts of emergence and self-organization through information theory and compared them in CASs. We borrow some concepts of information theory to analyze and predict the emergent behaviors of ECASs and show the applicability of Theorem V.
Emergence cannot be defined by the properties and relationships of the lower component level [23]. Assume there is an interaction between patterns i and j at their current level. Then (6) becomes
E(i, j) = −Σ_{mi} Σ_{mj} P(mi, mj) log2 P(mi, mj)        (18)

where P(mi, mj) is the joint probability of finding pattern i and pattern j simultaneously in states mi and mj. The interaction information (mutual information) of i and j

I(i; j) = Σ_{mi} Σ_{mj} P(mi, mj) log2 [P(mi, mj)/(P(mi)P(mj))]        (19)

measures the interoperability between i and j, which is the amount of information that i and j share and by which they reduce each other's uncertainty, where P(mi) is the marginal probability for state mi. We can obtain (see [28])

E = E(i, j) = E(i) + E(j) − I(i; j)        (20)

where E(i) = I(i; i) is the self-information of i.
From (20), when I(i; j) increases (I↑), E decreases (E↓). For the case of only two groups of patterns in the system, the mutual information is a positive number with a maximum of one, 0 ≤ I ≤ 1 [from (19)]. E is minimal when i and j are identical, I = 1 (one group follows the other one), and E is at its maximum when i and j are independent, I = 0 (groups are completely autonomic). We can use this property to control the entropy in Lemmas I–IV.
The generalization of (20) to three-pattern cases is

E = E(i, j, k) = −[E(i) + E(j) + E(k)] + I(i; j; k) + E(i, j) + E(i, k) + E(k, j)        (21)

where the interoperability I can be negative and

I(i; j; k) = I(i; j) − I(i; j|k).        (22)

A positive I means k supports and increases the interoperability between i and j. However, a negative I shows that k inhibits and decreases the interoperability.
Definition II:
1) Catalyst: Pattern k is a positive catalyst for other patterns in the system if k supports their interoperability and a negative catalyst if it inhibits their interoperability.
It is possible to generalize (21) and (22) to n patterns [27]

E(σ) = Σ_{τ⊂σ} (−1)^(|σ|−|τ|−1) E(τ) + I(σ),  σ = {i_m | m = 1, ..., n}

I(i_1; ...; i_n) = I(i_1; ...; i_{n−1}) − I(i_1; ...; i_{n−1}|i_n).

Generally, for multiple catalysts (k catalysts)

I(i_1; ...; i_n) = I(i_1; ...; i_{n−k}) − I(i_1; ...; i_{n−k}|i_{n−k+1}; ...; i_n).

In Theorem V, instead of increasing or decreasing the entropy, we can change the interoperability. We add catalyst(s) to control (inhibit or support) the interoperability.
Definition III:
1) Catalyst-associated interoperability (CAI)

CAI = I(σ|k) − I(σ).

2) Effect of catalyst (EOC)

EOC = E(t)/E′(t)

where E′(t) and E(t) are the entropy at time t after and before applying the catalyst(s), respectively.
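Equations (18)–(20) can be exercised directly on a joint probability table. The sketch below uses an illustrative 3x3 joint distribution (not the paper's Table II, whose entries are not reproduced here):

```python
import math

def joint_entropy(P):
    """E(i, j) = -sum P(mi, mj) log2 P(mi, mj) over all cells -- eq. (18)."""
    return -sum(p * math.log2(p) for row in P for p in row if p > 0)

def mutual_information(P):
    """I(i; j) = sum P(mi, mj) log2 [P(mi, mj) / (P(mi) P(mj))] -- eq. (19)."""
    Pi = [sum(row) for row in P]        # marginal distribution of i (rows)
    Pj = [sum(col) for col in zip(*P)]  # marginal distribution of j (columns)
    return sum(p * math.log2(p / (Pi[a] * Pj[b]))
               for a, row in enumerate(P)
               for b, p in enumerate(row) if p > 0)

# Illustrative joint distribution for patterns i and j.
P = [[0.20, 0.05, 0.05],
     [0.05, 0.20, 0.05],
     [0.05, 0.05, 0.30]]

Ei = joint_entropy([[p] for p in (0.30, 0.30, 0.40)])  # marginal entropy E(i)
Ej = joint_entropy([[p] for p in (0.30, 0.30, 0.40)])  # marginal entropy E(j)
Eij, Iij = joint_entropy(P), mutual_information(P)

# Identity (20): E(i, j) = E(i) + E(j) - I(i; j)
assert abs(Eij - (Ei + Ej - Iij)) < 1e-9
```

The same functions applied to a conditional table P(mi, mj|k) give the quantities needed for CAI.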
Example 3 (Interoperability in Fig. 1): Assume Table II gives the joint probabilities for i and j in Example 1, where i can be 0.2, 0.4, or 0.6 and j can be 0.1, 0.15, or 0.2 of the total consumers. The populations of other patterns and their effects are negligible.

TABLE II
Prior Probabilities for k = 0: P(mi, mj)
[table entries not recovered]

TABLE III
Posterior Probabilities for k > 0: P(mi, mj|k)
[table entries not recovered]

From (18), (19), and (20), E(i) = 1.56, E(j) = 1.54, E(i, j) = 2.90, and I(i; j) = 0.20. If adding catalyst k updates Table II to Table III (users k affect the interrelationships between is and js), then E(i) = 1.56, E(j) = 1.53, E(i, j) = 2.73, and I(i; j) = 0.36. So we increase the entropy by increasing the interoperability, which decreases the disuniformity in Example 1

CAI = 0.36 − 0.20 = 0.16
EOC = 2.90/2.73 = 1.06.
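The arithmetic in Example 3 can be checked from the quoted summary values alone; treating EOC as the ratio of entropies before and after the catalyst is an assumption consistent with the reported value 1.06:

```python
I_before, I_after = 0.20, 0.36   # I(i; j) without and with catalyst k
E_before, E_after = 2.90, 2.73   # E(i, j) without and with catalyst k

CAI = I_after - I_before         # catalyst-associated interoperability
EOC = E_before / E_after         # effect of catalyst (assumed ratio form)

assert abs(CAI - 0.16) < 1e-9
assert round(EOC, 2) == 1.06
```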
We can use the concept of EOC to select an appropriate catalyst. For example, assume n patterns of consumption in a social population, where i1 and i2 hold the majority of the population and thus have the largest effect on the disuniformity of consumption. We plan to decrease the disuniformity with a limited amount of resources (e.g., rewards to give to cooperative consumers). Instead of distributing the reward across a large group (say i1) to encourage cooperation with the other group, which is not very effective (because the portion of each individual is too low), we can reward a small catalyst group (say i3) to improve the interoperability between i1 and i2. This idea is similar to finding and investing in hubs in a social network (based on the power law, the number of components with higher connectivity decreases exponentially [12]). The next step is to show how these emergence phenomena cause evolution in the system.

VI. Evolution Because of Updates in the Traits

Here, we analyze the evolution process. Then, in the last step of the framework, we depict adaptation and learning in the system. Measures for the complexity threshold parameter of physical complex systems were developed in previous studies [31]. Erdős and Rényi [32] studied the probability threshold function and evolution in random graphs. We borrow the concept of a threshold [32].
Let M_α(t), α = 1, ..., α_0, be the number of components in patterns that possess trait α at time t. Here, θ(α, t) is a binary variable that shows whether the system possesses trait α at time t

θ(α, t) = 1 if M_α(t)/Σ_i X_i(t) ≥ ε_α, and θ(α, t) = 0 if M_α(t)/Σ_i X_i(t) < ε_α

where ε_α is the threshold for trait α.
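The binary trait indicator can be sketched in a few lines; the trait counts and population total below are illustrative, while the thresholds are those used in Example 4:

```python
def trait_indicator(M_alpha, total_population, epsilon_alpha):
    """theta(alpha, t): 1 if M_alpha(t) / sum_i X_i(t) >= epsilon_alpha, else 0."""
    return 1 if M_alpha / total_population >= epsilon_alpha else 0

eps = [0.2, 0.4, 0.3]     # thresholds for the three traits (as in Example 4)
M = [50, 90, 40]          # illustrative trait counts at some time t
total = 200               # illustrative total population sum_i X_i(t)
Theta = [trait_indicator(m, total, e) for m, e in zip(M, eps)]
assert Theta == [1, 1, 0]  # 0.25 >= 0.2, 0.45 >= 0.4, 0.20 < 0.3
```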

Fig. 4. Complicated example. (a) Nonstationary adaptation. (b) Fast adaptation.

Let Θ(t) = (θ(α, t); α = 1, ..., α_0) be a vector of 0s and 1s, whose αth position is 1 if θ(α, t) = 1. Let Φ(t) be a predefined finite set of Θs at time t. Based on this definition, the system evolves when, for t′ > t, Θ(t) ∈ Φ & Θ(t′) ∉ Φ (or Θ(t) ∉ Φ & Θ(t′) ∈ Φ).
Definition IV:
1) Stagnation: Systems are stagnant when they are not evolvable, i.e., Θ(t) ∈ Φ (or ∉ Φ) ∀t.
Example 4 (Traits in Fig. 1): Assume ε_i = 0.2, ε_j = 0.4, ε_k = 0.3, and Φ = {[0 1 1], [1 1 1]} in Example 1. Evaluating M_α(t)/Σ_i X_i(t) against these thresholds, with Σ_i X_i(4) = 156, Σ_i X_i(5) = 183, and Σ_i X_i(9) = 367 (e.g., M_k(9) = 163 and 163/367 ≥ 0.3), gives

Θ(4) = [0 1 0],  Θ(5) = [0 1 1],  Θ(9) = [0 0 1].

So the system evolves when t = 5 and t = 9. If we assume the system evolves only when it possesses all traits (i.e., Φ = {[1 1 1]}), this system is stagnant.
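Detecting evolution events, i.e., times at which membership of Θ(t) in Φ flips between consecutive checks, is a short computation; the values below follow Example 4:

```python
Phi = {(0, 1, 1), (1, 1, 1)}                          # predefined trait-vector set
Theta = {4: (0, 1, 0), 5: (0, 1, 1), 9: (0, 0, 1)}    # observed trait vectors

def evolution_times(theta_by_t, phi):
    """Times at which membership of Theta in Phi differs from the previous check."""
    times = sorted(theta_by_t)
    return [t2 for t1, t2 in zip(times, times[1:])
            if (theta_by_t[t1] in phi) != (theta_by_t[t2] in phi)]

assert evolution_times(Theta, Phi) == [5, 9]   # the system evolves at t = 5 and t = 9
```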
In this example, the system is adjusted by two evolutions. This adaptation can be nonstationary. Fig. 4(a) simulates a case where components adjust their behaviors several times to improve their objectives. Here, i, j, and k compete to earn more rewards by reducing the disuniformity. However, to reduce the disuniformity they must cooperate by adjusting their behaviors (changing the fitness rates b_i). Adding a learning procedure (do not adjust to previously tried states) eliminates the nonstationary evolution and produces faster adaptation in Fig. 4(b).
To extend this framework, the concept of dissection of features can be carried over to other ECASs easily. Entropy of components is a general concept for all systems, and disuniformity can be interpreted differently in different ECASs. For example, reducing demand fluctuations in wholesale marketing, resource allocation in supply chain management, and synergism of commands to reduce the distances to a target in AI or defense sectors are other types of disuniformity. Decision makers may assign different objectives to ECASs based on their requirements; they are not limited to disuniformity. However, any ECAS of the class being addressed needs at least one minimizing/maximizing measure, beyond component entropy, in order to study dissection of features. The other hallmarks (evolution and adaptation) are derived from the emergence concept (dissection of features and the interactions), and their mathematical calculation is not limited to electricity usage.
VII. Conclusion and Future Work
In this paper, we presented a framework that helped us employ engineering and mathematical models to analyze certain
ECASs. We can apply this framework to study and predict the
hallmarks of complex heterarchical engineered systems. The
proposed method was used to engineer emergence of human
decisions in an ECAS, evolution of the behaviors, and its
adaptation to new environments. We illustrated how we can
extend the concept of our measures to other ECASs.
We employed information theory in our mathematical
model. All possible dominance cases in complex systems were
defined and four theorems were presented to calibrate the
current situation and predict future behaviors of each case.
Theorem V (mechanisms of components) can be employed to
study self-organization in ECASs.
Catalyst-associated interoperability and stagnation of the system are new concepts that can help us measure or scale emergence and evolution behaviors without complex modeling. Researchers may control the interoperability of components with CAI. They can also measure the evolvability or stagnation of a complex system with a threshold function.
Varying fitness rates over time, b_i(t), may lead to a new formulation in future research. We can consider statistical or dynamical functions for fitness rates. Agent-based modeling and simulation can support and extend the mathematical basis of this research for investigating real cases.
Acknowledgment

The authors would like to thank Prof. D. Armbruster, School of Mathematical and Statistical Sciences, Arizona State University, Tempe, for his constructive comments that improved the quality of this paper.
References

[1] M. Couture and R. Charpentier, "Elements of a framework for studying complex systems," in Proc. 12th Int. Command Control Res. Technol. Symp., Jun. 2007, pp. 1–17.
[2] M. Couture, "Complexity and chaos: State-of-the-art; list of works, experts, organizations, projects, journals, conferences and tools," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-450, 2006.
[3] M. Couture, "Complexity and chaos: State-of-the-art; formulations and measures of complexity," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-451, 2006.
[4] M. Couture, "Complexity and chaos: State-of-the-art; glossary," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-452, 2006.
[5] M. Couture, "Complexity and chaos: State-of-the-art; overview of theoretical concepts," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-453, 2007.
[6] C. L. Magee and O. L. de Weck, "Complex system classification," in Proc. 14th Annu. Int. Symp. INCOSE, Jun. 2004, pp. 1–18.
[7] Y. Bar-Yam, "Multiscale complexity/entropy," Adv. Complex Syst., vol. 7, no. 1, pp. 47–63, 2004.
[8] Y. Bar-Yam. (2000). Complexity Rising: From Human Beings to Human Civilization, a Complexity Profile [Online]. Available: http://necsi.org/
[9] N. Parks, "Energy efficiency and the smart grid," Environ. Sci. Technol., vol. 43, no. 9, pp. 2999–3000, May 2009.
[10] D. J. Watts and S. H. Strogatz, "Collective dynamics of small-world networks," Nature, vol. 393, pp. 440–442, Jun. 1998.
[11] C. Avin and D. Dayan-Rosenman, "Evolutionary reputation games on social networks," Complex Syst., vol. 17, no. 3, pp. 259–277.
[12] L. A. N. Amaral, A. Scala, M. Barthelemy, and H. E. Stanley, "Classes of small-world networks," Proc. Nat. Acad. Sci., vol. 97, no. 21, pp. 11 149–11 152, Oct. 2000.
[13] S. H. Strogatz, "Exploring complex networks," Nature, vol. 410, pp. 268–276, Mar. 2001.
[14] S. Lee, Y. Son, and J. Jin, "Decision field theory extensions for behavior modeling in dynamic environment using Bayesian belief network," Inform. Sci., vol. 178, no. 10, pp. 2297–2314, 2008.
[15] A. Mostashari and J. M. Sussman, "A framework for analysis, design and management of complex large-scale interconnected open sociotechnological systems," Int. J. Decision Support Syst. Technol., vol. 1, no. 2, pp. 53–68, 2009.
[16] M. Prokopenko, F. Boschietti, and A. J. Ryan, "An information-theoretic primer on complexity, self-organization, and emergence," Complexity, vol. 15, no. 1, pp. 11–28, 2009.
[17] S. Sheard and A. Mostashari, "Principles of complex systems for systems engineering," Syst. Eng., vol. 12, no. 4, pp. 295–311.
[18] J. Ottino, "Engineering complex systems," Nature, vol. 427, p. 399, Jan.
[19] D. Braha and Y. Bar-Yam, "The statistical mechanics of complex product development: Empirical and analytical results," Manage. Sci., vol. 53, no. 7, pp. 1127–1145, Jul. 2007.
[20] B. Shargel, H. Sayama, I. Epstein, and Y. Bar-Yam, "Optimization of robustness and connectivity in complex networks," Phys. Rev. Lett., vol. 90, no. 6, pp. 068701-1–068701-4, 2003.
[21] K. Kaneko and I. Tsuda, Complex Systems: Chaos and Beyond: A Constructive Approach With Applications in Life Sciences. Berlin, Germany: Springer, 2001.
[22] S. B. Yu and J. Efstathiou, "An introduction to network complexity," in Proc. Manuf. Complexity Netw. Conf., Apr. 2002, pp. 1–10.
[23] C. R. Shalizi, "Causal architecture, complexity and self-organization in time series and cellular automata," Ph.D. dissertation, Center Study Complex Syst., Univ. Michigan, Ann Arbor, May 2001.
[24] C. R. Shalizi, K. L. Shalizi, and R. Haslinger, "Quantifying self-organization with optimal predictors," Phys. Rev. Lett., vol. 93, no. 14, pp. 118701-1–118701-4, 2004.
[25] S. E. Page, "Self organization and coordination," Comput. Econ., vol. 18, no. 1, pp. 25–48, Aug. 2001.
[26] A. G. Bashkirov, "Renyi entropy as a statistical entropy for complex systems," Theor. Math. Phys., vol. 149, no. 2, pp. 1559–1573.
[27] P. Chanda, L. Sucheston, A. Zhang, D. Brazeau, J. L. Freudenheim, C. Ambrosone, and M. Ramanathan, "Ambience: A novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes," Genetics, vol. 180, no. 2, pp. 1191–1210, 2008.
[28] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley-Interscience, 2006.
[29] Y. Bar-Yam, "A mathematical theory of strong emergence using multiscale variety," Complexity, vol. 9, no. 4, pp. 15–24, 2004.
[30] J. D. Halley and D. A. Winkler, "Classification of emergence and its relation to self-organization," Complexity, vol. 13, no. 5, pp. 10–15, 2008.
[31] C. Langton, "Computation at the edge of chaos: Phase transitions and emergent computation," Physica D, vol. 42, nos. 1–3, pp. 12–37, 1990.
[32] P. Erdős and A. Rényi, "On the evolution of random graphs," Publ. Math. Inst. Hungarian Acad. Sci., vol. 5, pp. 17–61, 1960.


Moeed Haghnevis received the B.Sc. and M.Sc. degrees in industrial and systems engineering from the Amirkabir University of Technology, Tehran, Iran, and the University of Tehran, Tehran, respectively. He is currently pursuing the Ph.D. degree in industrial engineering with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe.
Before pursuing the Ph.D. degree, he served as an Adjunct Instructor with two universities. He has published a book and several papers. His current research interests include engineered complex systems, agent-based modeling, and simulation.


Ronald G. Askin received the Ph.D. degree from the Georgia Institute of Technology, Atlanta.
He is currently the Director of the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe. He has 30 years of experience in systems modeling and analysis.
Dr. Askin is a Fellow of the IIE. He has received the National Science Foundation Presidential Young Investigator Award, the Shingo Prize for Excellence in Manufacturing Research, the IIE Joint Publishers Book of the Year Award, and the IIE Transactions Development and Applications Award.