
A Modeling Framework for Engineered Complex Adaptive Systems

Moeed Haghnevis and Ronald G. Askin

Abstract—The objective of this paper is to develop an integrated method to study emergent behavior and the consequences of evolution and adaptation in a certain engineered complex adaptive system. A conceptual framework is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The proposed modeling approach allows examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. Electrical power demand is used to illustrate the applicability of the modeling approach. We describe and use the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build our framework. The framework allows focus on the critical factors of an engineered system, but also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems without complex modeling. This paper adopts concepts of complex systems science into management science and system-of-systems engineering.

Index Terms—Complex adaptive systems (CASs), decentralization, emergence, engineered complexity, evolution, system of systems.

I. Introduction

In other words, we study the behavior of large systems by decomposing the system into components, analyzing the components, and then inferring system behavior by aggregation of component behaviors. However, this bottom-up method of describing systems often fails to analyze complex levels and fully describe behavior. Holism reveals that the sum of the components is less than the whole system [1]. This idea becomes important in studies of complex systems.

Complex systems have been widely studied; however, there is not yet a comprehensive and widely accepted mathematical model for engineered systems. Defense Research and Development Canada-Valcartier, Valcartier, QC, Canada, distributed four comprehensive reports dedicated to the study of complex systems. The first document provides 471 references and 713 related Internet addresses in a list of projects, organizations, journals, and conferences [2]. The second provides different formulations and measures of complexity [3]. Their glossary defines 335 related keywords [4]. An overview of the theoretical concepts of complexity theory is presented in the fourth report, which also presents several examples for each group.

Manuscript received October 29, 2010; revised June 28, 2011; accepted January 12, 2012. Date of publication April 18, 2012; date of current version August 21, 2012. The authors are with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: moeed.haghnevis@asu.edu; ron.askin@asu.edu). Digital Object Identifier 10.1109/JSYST.2012.2190696

While these

surveys show the extent of prior research, they also indicate the lack of a comprehensive engineering model and motivate us to consider engineered complex adaptive systems (ECASs).

Current research (mentioned in the surveys) usually considers natural systems (biological, physical, and chemical systems), where emergence and evolutionary behaviors can be studied by thermodynamic laws, biological rules, and the intrinsic dynamics that are innate parts of these systems. However, in engineered systems, decision makers or system designers develop or define rules and procedures to engineer the outcomes and control the possibilities as needed. In ECASs, objectives are artificially defined and the interoperabilities between

components can be manipulated to achieve desired goals; however, the objectives and interoperabilities of natural systems are naturally embedded. These facts motivate us to propose a new framework for modeling this class of complex adaptive systems (CASs). Our framework does not design CASs; it enables us to control, or at least predict, the mutating issues of ECASs.

This paper considers the hallmarks of ECASs to be emergence, evolution, and adaptation. We define emergence as the capability of the components of a system to do something or present a new behavior, in interaction with and dependence on other components, that they are unable to do or present individually. Also, we define evolution as a process of change and agility for the whole system. Adaptation is the ability of systems to learn and adjust to a new environment to promote their survival. Similar definitions can be found in [4]. We will explain in detail how to study these hallmarks in our framework for ECASs.

The study of CASs is challenging because of abstract theoretical concepts, the lack of an applicable complete framework, and difficulty in understanding emergence [1]. The main barrier to analyzing ECASs by traditional methods stems from the fact that the theory of complex systems focuses on emergence at the lower level and evolution at the upper system level, whereas engineering focuses on purposes and outcomes. Some research has considered complex system science in engineering environments. The scope and scale approach [7] studied properties of the structure of complex systems and the interdependence of components. The complexity profile [8] helps measure the amount of information needed to describe each level of detail. These methods are not mature enough to analyze and predict ECASs completely.

While electricity consumption profiles will be utilized for illustration and validation, we will discuss how this framework could likewise be applied to other ECASs, such as traffic and urban design, robotics and AI, supply chain management, modern defense sectors, and other meta-systems. The Electric Power Research Institute, Palo Alto, CA, estimates 26% growth of electricity consumption by 2030 in the U.S. (1.7% annually from 1996 to 2006) [9]. Electric power grids are ECASs with high economic impact, driven by the maximum consumption rate and the uniformity of aggregate regional demand. Applying our integrated model allows reduction of disuniformity in electricity consumption. Economic incentives motivate local consumers to adjust behavior to limit maximum system usage.

Complex networks are among the most engineered and mathematically modeled complex systems. Previous studies quantify the dynamics of small-world networks [10] and model the evolutionary structure of population and components in social networks [11]. For example, the structural properties of the power grids of Southern California [12] and New York [13] have been analyzed. We will apply some of the concepts of complex network science in the last step of our framework.

In this paper, we focus on human decision making. Humans can adjust their structural artifacts and actions to respond to the challenges and opportunities of their environment. This ability usually increases complexity. Three developed approaches to mimic human decision behaviors are classified in [14]. Most of the research on human networks assumes some kind of hierarchy in the system. These studies are useful in organizational systems that have different levels of authority, such as military and education systems that have leaders and followers. However, complexities in heterarchical systems (where components share the same authority) have not been studied.

The remainder of this paper is organized as follows. Section II presents our framework. Hallmarks and theoretical concepts of complexity are considered in building this framework. The other sections are mapped to the profiles of the framework. Sections III and IV detail the mathematical mechanisms of the features and relationships of components (step 1 of the framework). These lead to analyzing the interoperabilities that induce emergence in Section V (step 2). The evolution of traits as the process of system adaptation, and their response to changes, is covered in Section VI (steps 3 and 4). Various examples demonstrate the validity of our method in each section.

II. Framework for Engineered Complex Adaptive Systems

Couture and Charpentier [1] and Mostashari and Sussman [15] presented frameworks to study complex systems. Prokopenko et al. [16] depicted complex system science concepts. Also, Sheard and Mostashari [17] visualized characteristics of complex systems. Frameworks for ECASs are still incomplete and fragmented. In this paper, we propose a more detailed framework for ECASs (Fig. 1). The framework can help us focus on the critical factors that change the states of an ECAS, and it enables us to synthetically employ engineering and mathematical models to analyze and measure complexity in an adaptive system without complex modeling. The four profiles of ECASs and their characteristics are presented in Fig. 1.

In our proposed approach, a preparatory step identifies adaptive complexity in an engineered system. This step is necessary to make sure we do not spend unnecessary resources analyzing a normal system as a complex system. To identify a complex engineered system, we check [18] the following.

1) System structure:

a) the system displays no, or incomplete, central organization (prescriptive hierarchically controlled systems are assumed not to be complex systems);

b) behavioral interactions among components at lower levels are revealed by observing the behavior of the system at a higher level.

2) Analysis of system behavior:

a) analyzing components fails to explain higher level behavior;

b) a reductionist approach does not satisfactorily describe the whole system.

Total electricity consumption grows every year, affecting the topology of power grids. Some researchers believe this huge growth supports the idea of a transformation from a centralized network to a less centralized one (from producer-controlled to consumer-interactive). This decentralization results in complexity in the system by decreasing central organization. Moreover, the interaction of physics with the design of the transmission links increases its complexity, as do the diversity of people, their interdependences, and their willingness to cooperate. The time dependence of the network [19], the scale-free or single-scale feature of these networks (their node degree distribution follows a power-law or Gaussian distribution in the long run) [20], and human decisions based on other consumers all justify considering the electric power grid as an ECAS. These factors have placed the U.S. power grid beyond the capability of mathematical modeling to date [13].

To take advantage of the fundamental theories of complex systems, we study and analyze complex systems based on their components. Components possess individual features and interoperable behaviors. Systems then have traits and learning behaviors. Together, these form the system profile comprised of the following aspects (we define the state of each profile in parentheses).

1) Features (components readjust themselves continuously): Here, dissection of features leads to decomposability (e.g., the number of each component type and the patterns of individual behaviors) and willingness (e.g., the fitness rate of each component and behavioral/decision rules). The environment of the system may also affect component actions. A measurable property of this profile is the self-information (entropy) of components. Entropy increases with the diversity of components and decreases with their compatibility. Sections III and IV mathematically model and analyze the dissection of features and show how self-organization appears.

2) Interoperabilities (components update their interdependences): In this profile, emergence as the hallmark of interoperability shows what components can do, in interaction with and dependence on other components, that they would not do individually. Components have exchangeability and synchronization. Autonomy increases, and dependence decreases, the interrelationship of components. This profile helps us to infer the behavior of the components. Section V models this profile.

3) Traits (the system tries to improve its efficiency and effectiveness): In this profile, systems may evolve. The whole system applies its resilience and agility to perform more effectively and efficiently. Categories of trait structures or behaviors are considered here. The threshold for changing the nature or perceived characteristic of the system is the measurable property of this profile. It is discussed in Section VI.

4) Learning (the system has flexibility to perform in unforeseen situations): After evolving, the system must adapt to the new situation. Systems need to be adaptive to survive; otherwise, they may collapse in dynamic conditions. Flexibility and robustness allow systems to adapt and show the performance of the system. In some studies, adaptation is one kind of evolution, while other researchers delineate a difference between evolution and adaptation (modeled in Section VI).

We define the complexity of a system with the measurable properties of the profiles: entropy (E), interoperabilities (I), and evolution thresholds (denoted here by τ). E measures diversity versus compatibility of component features (Sections III and IV). The I's define sensitivity (autonomy versus dependence) to other related components and their effects (Section V). The τ's are milestones for changes and adjustments in system performance that can differentiate trait categories (Section VI). In addition, a system may have a goal. In our case, this is to minimize the disuniformity of electricity demand, D, to be formally defined later in this paper.

The framework starts with the dissection of features. First, we study the dynamics of components, similar to noncomplex systems, and then depict the relationships (Section III-C). These relationships are the initial source of emergence and are defined based on ECAS goals (in natural CASs, unlike ECASs, this measure is embedded in the system and must be found by analyzing the system behavior).

Then, we focus on the emergence phenomena of ECASs as the core concept of complex adaptive behaviors and the source of dynamic evolution. We present a comprehensive section on the dissection of features and propose four detailed theorems to show the controllability and predictability of the framework at the emergence level of a system. Then, we generalize the theorems into a comprehensive theory of the mechanisms of components for ECASs (Section IV).

To distinguish an ECAS from a pure multiagent system (MAS), we define interoperability as the behavioral changes that are caused by interactions (Section V). In MASs, components have relationships; however, in CASs the interactions and behaviors evolve. Interoperability shows how components cooperate/compete, based on other components and interactions, to evolve and adapt to new environments (see the new measures in Section V). While either would suffice, we use the term interoperability instead of interaction to indicate information sharing and beneficial behavior coordination. Finally, the framework shows the adaptability and learning behavior of a system in Section VI.

III. Dissection of Features

Various studies apply the concept of information theory to study system complexities. The key point is that the length required to describe a system is related to its complexity [21]. Yu and Efstathiou defined a complexity measure based on entropy and a quantitative method to evaluate the performance of manufacturing networks [22]. These studies applied the concept of entropy in their research; however, they did not discuss the other hallmarks of CASs. Here, we start with the same idea and then extend it to the other hallmarks.

A. Exponential Fitness

Consider a system of components with n different patterns of behavior. For example, there may be n daily electricity usage profiles for the different classes of consumers. Suppose the population of pattern i (X_i, i = 1, ..., n) changes exponentially with fitness rate b_i:

\frac{\partial X_i}{\partial t} = b_i X_i \qquad (1)

or, in discrete time, X_i(t+1) = b_i X_i(t) + X_i(t). To increase the readability of the formulation, in the following sections all t's are suppressed from the expressions except when necessary to compare different times. The probabilities of the patterns can be measured by the percentage of each pattern:

P_i = \frac{X_i}{\sum_i X_i}. \qquad (2)

We obtain the growth equation for the percentage of each group:

\frac{dP_i}{dt} = \frac{b_i X_i \sum_i X_i - X_i \sum_i b_i X_i}{\left( \sum_i X_i \right)^2} = b_i P_i - P_i \sum_i b_i P_i. \qquad (3)
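The discrete form of (1)–(3) can be sketched numerically. The following is a minimal illustration (the population sizes and fitness rates are hypothetical, not from the paper), showing that the share of the fitter pattern grows and that the entropy of (6) peaks at the uniform distribution:

```python
import math

def simulate_shares(X, b, steps):
    """Discrete-time exponential fitness: X_i(t+1) = (1 + b_i) X_i(t).
    Returns the pattern probabilities P_i = X_i / sum(X) after `steps` updates."""
    X = list(X)
    for _ in range(steps):
        X = [x * (1.0 + bi) for x, bi in zip(X, b)]
    total = sum(X)
    return [x / total for x in X]

def entropy(P):
    """Self-information E = -sum_i P_i log2 P_i, as in (6)."""
    return -sum(p * math.log2(p) for p in P if p > 0)

# Two patterns: the one with the larger fitness rate grows its share.
P0 = simulate_shares([30.0, 70.0], [0.3, 0.1], 0)
P5 = simulate_shares([30.0, 70.0], [0.3, 0.1], 5)
print(P5[0] > P0[0])        # True: the fitter pattern's share increases
print(entropy([0.5, 0.5]))  # 1.0: maximum entropy for n = 2 is log2(2)
```

The same loop with a carrying-capacity factor (1 − X_i/L_i) in the update would give the logistic case of Section III-B.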

Growth can also be treated over continuous intervals. In continuous time, the exponential function (4) replaces (1):

X_i = \alpha_i e^{\beta_i t} \quad \text{or} \quad \frac{dX_i}{dt} = \alpha_i \beta_i e^{\beta_i t} \qquad (4)

\frac{dP_i}{dt} = \beta_i P_i - P_i \sum_i \beta_i P_i \qquad (5)

where P_i = \alpha_i e^{\beta_i t} / \sum_i \alpha_i e^{\beta_i t}. To find the self-information of the components,

E = -\sum_i P_i \log_2 P_i. \qquad (6)

So the growth of entropy is

\frac{dE}{dt} = -\sum_i \left[ \frac{dP_i}{dt} \left( \frac{1}{\ln 2} + \log_2 P_i \right) \right]. \qquad (7)

From (3) and (7),

\frac{dE}{dt} = \sum_i b_i P_i \left( \sum_i P_i \log_2 P_i - \log_2 P_i \right). \qquad (8)

B. Logistic Fitness

If the population of pattern i follows a logistic function, (1) will change to

\frac{dX_i}{dt} = b_i X_i \left( 1 - \frac{X_i}{L_i} \right). \qquad (9)

Accordingly,

\frac{dP_i}{dt} = b_i P_i \left( 1 - \frac{X_i}{L_i} \right) - P_i \left[ \sum_i b_i P_i \left( 1 - \frac{X_i}{L_i} \right) \right]. \qquad (10)

Let \eta_i = 1 - \frac{X_i}{L_i}; then

\frac{dP_i}{dt} = P_i \left( b_i \eta_i - \sum_i b_i \eta_i P_i \right). \qquad (11)

From (11) and (7), (8) can be rewritten as follows:

\frac{dE}{dt} = \sum_i \eta_i b_i P_i \left( \sum_i P_i \log_2 P_i - \log_2 P_i \right). \qquad (12)

The growth of entropy shows how the population changes in time under the exponential or logistic function (entropy is self-information). However, it is not sufficient for interpreting the combination of components, as any combination of three components with probabilities 0.3, 0.3, and 0.4 leads to the same entropy. In addition, engineered systems have a defined goal that is not captured by the entropy (we call it disuniformity).

C. Disuniformity

Let C_i^t(w) be the average consumption of electricity at time w for pattern i in period t. The disuniformity of pattern i in period t is

D_i(t) = \int_0^{w_0} \left( C_i^t(w) - \bar{C}_i^t \right)^2 dw \qquad (13)

where \bar{C}_i^t = \int_0^{w_0} C_i^t(w) \, dw / w_0 and w is a cyclic time in period t.

For example, if we want to show the patterns of consumption in each quarterly season for the next 20 years, w_0 covers the 24 h of consumption each day while t (t = 1, ..., 80) indexes each season. The pattern of consumption in the first season, C_i^1(w), may differ from that of the second one, C_i^2(w). We will illustrate how disuniformity can be extended to other ECASs. At first glance, the disuniformity of an individual component, (13), looks similar to variance. We do not use this term because the consumption is not considered a random variable. Furthermore, it is customary to refer to the variance as the range/noise of consumption at a specific time w.

The control objective is to minimize the disuniformity (consumers cooperate to have uniform aggregate consumption at each time). Thus, we seek to minimize

D = \int_0^{w_0} \left( \frac{\sum_{i \in S} C_i(w) X_i}{\sum_{i \in S} X_i} - \frac{\int_0^{w_0} \sum_{i \in S} C_i(w) X_i \, dw}{w_0 \sum_{i \in S} X_i} \right)^2 dw \qquad (14)

where the weighted terms \sum_{i \in S} C_i(w) X_i show their interactions. These interactions are the source of the interoperabilities in Section V. Note that we remove the t's in our formulas to increase readability; however, D, C, and X are functions of t.
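Definitions (13) and (14) can be approximated on a discrete grid of cyclic times (e.g., 24 hourly readings). A minimal sketch follows; the profiles are illustrative, not taken from the paper:

```python
def pattern_disuniformity(C, dw=1.0):
    """D_i from (13), approximated by a Riemann sum over a discrete cycle:
    integral of (C(w) - Cbar)^2 dw, with Cbar the mean over the cycle."""
    cbar = sum(C) / len(C)
    return sum((c - cbar) ** 2 for c in C) * dw

def total_disuniformity(profiles, X, dw=1.0):
    """D from (14): disuniformity of the population-weighted aggregate profile."""
    n_w = len(profiles[0])
    total_pop = sum(X)
    agg = [sum(C[w] * x for C, x in zip(profiles, X)) / total_pop
           for w in range(n_w)]
    return pattern_disuniformity(agg, dw)

flat = [5.0] * 24                 # perfectly uniform 24-h profile
peaky = [2.0] * 12 + [8.0] * 12   # valley then peak: nonzero disuniformity
print(pattern_disuniformity(flat))                           # 0.0
print(total_disuniformity([flat, peaky], [100, 0]) == 0.0)   # True
```

A population holding only the flat pattern attains the goal state D = 0, while any share of the peaky pattern raises D, which is what the incentives described above are meant to reduce.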

Generally, we define disuniformity as a normalized measure of the difference between the current state of the components and the goal state. Disuniformity can be reduced by incentives that change one or more profiles or rearrange class probabilities (a source of self-organization).

Here, we use disuniformity to show how the system behaves as an ECAS (we will show how it causes dependences between behaviors later). Concepts from information theory are adapted to describe complexity, self-organization, and emergence in the context of our ECASs [16]. Controlling disuniformity is a source of self-organization in ECASs (see Section IV). Shalizi [23] and Shalizi et al. [24] defined a quantification of self-organization for discrete random fields (e.g., cellular automata). We reinterpret these concepts to apply them in ECASs, which may have continuous states and, unlike natural physical systems, may not have a naturally embedded energy dynamic or self-directing law. Self-organization and adaptive agents are analyzed in [25]. We will extend these concepts to all hallmarks of ECASs. Bashkirov [26] described self-organization in a complex system by using Renyi and Gibbs-Shannon entropy. These studies are applicable in natural and physical systems. For example, a biological application, gene-gene and gene-environment interactions, is identified by interaction information and a generalization of mutual information in [27].

IV. Self-Organization

In this section, we connect the concepts of entropy and disuniformity for component patterns. We prove lemmas for a system with two components that interact in a basic dominance scenario. Then, we generalize our lemmas to more complicated scenarios. These theorems allow us to control and predict the behaviors of features and their relationships, and they enable us to study emergence by modeling interoperability in the next section.

Definition I:

1) Dominance: behavior i dominates behavior j if D_i ≤ D_j.

2) Strict positive dominance: behavior i strictly positively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| ≤ |C_j(w) − C̄_j| for all w, and sgn(C_i(w) − C̄_i) = sgn(C_j(w) − C̄_j) for all w.

3) Positive dominance: behavior i positively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| > |C_j(w) − C̄_j| for some w, and sgn(C_i(w) − C̄_i) = sgn(C_j(w) − C̄_j) for all w.

4) Strict negative dominance: behavior i strictly negatively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| ≤ |C_j(w) − C̄_j| for all w, and sgn(C_i(w) − C̄_i) = −sgn(C_j(w) − C̄_j) for all w.

5) Negative dominance: behavior i negatively dominates behavior j if D_i < D_j, |C_i(w) − C̄_i| > |C_j(w) − C̄_j| for some w, and sgn(C_i(w) − C̄_i) = −sgn(C_j(w) − C̄_j) for all w.

Note that "E is increasing in time" (E↑) means E(t + 1) > E(t), and (E↓) means E(t + 1) < E(t). We use the same definitions for (D↑) and (D↓). Here, P_i refers to P_i(t).
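The dominance relations of Definition I can be checked mechanically on discrete profiles. A small sketch follows (hypothetical profiles on a 4-point grid; zero deviations are treated as sign-compatible, a simplification of the sgn condition):

```python
def deviations(C):
    """Deviations C(w) - Cbar of a discrete consumption profile."""
    cbar = sum(C) / len(C)
    return [c - cbar for c in C]

def strictly_positively_dominates(Ci, Cj):
    """Definition I.2): D_i < D_j, |dev_i(w)| <= |dev_j(w)| for all w,
    and matching deviation signs for all w."""
    di, dj = deviations(Ci), deviations(Cj)
    Di = sum(d * d for d in di)          # discrete analog of (13)
    Dj = sum(d * d for d in dj)
    same_sign = all(a * b >= 0 for a, b in zip(di, dj))
    smaller = all(abs(a) <= abs(b) for a, b in zip(di, dj))
    return Di < Dj and smaller and same_sign

# Pattern i is a damped copy of pattern j: same shape, smaller swings.
Cj = [2.0, 4.0, 8.0, 6.0]
Ci = [4.0, 5.0, 7.0, 6.0]
print(strictly_positively_dominates(Ci, Cj))  # True
print(strictly_positively_dominates(Cj, Ci))  # False
```

Flipping the sign test to `a * b <= 0` gives the corresponding check for strict negative dominance (Definition I.4).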

Lemma I: Given two different patterns of behavior (i and j) in the population, where i strictly positively dominates j:

I.1) P_i < P_j (X_i < X_j) and b_i > b_j iff E is increasing in time (E↑) and D decreases in time (D↓);

I.2) P_i > P_j (X_i > X_j) and b_i > b_j iff E is decreasing in time (E↓) and D decreases in time (D↓);

I.3) P_i < P_j (X_i < X_j) and b_i < b_j iff E is decreasing in time (E↓) and D increases in time (D↑);

I.4) P_i > P_j (X_i > X_j) and b_i < b_j iff E is increasing in time (E↑) and D increases in time (D↑).

Proof (Sufficiency of Lemma I): In I.1), P_i(t) + P_j(t) = 1 and P_i(t) < P_j(t); so P_i(t) < 1/2 and P_j(t) > 1/2. Also, b_i > b_j results in

\frac{X_i(t)}{X_i(t) + X_j(t)} < \frac{b_i X_i(t) + X_i(t)}{b_i X_i(t) + X_i(t) + b_j X_j(t) + X_j(t)} \qquad (15)

so the probabilities are closer to a uniform distribution (P_i is closer to P_j) at t + 1.

Recall that the uniform distribution of the X_i's (frequencies of patterns) gives the maximum entropy of the system (see [28] for proof). Suppose P_i = 1/n is the uniform probability mass function for X_i, i = 1, ..., n; then the maximum entropy of the system is log_2 n. From this recall, max(E) = 1 when n = 2 and P_i(t) = P_j(t) = 1/2; hence, E is an increasing function of time t, i.e., E(t + 1) > E(t), while P_i(t) < P_j(t).

Furthermore, because i strictly positively dominates j, for all time intervals w

|C_i(w) - \bar{C}_i| \le |C_j(w) - \bar{C}_j| \qquad (16)

with similar signs, so increasing the proportion X_i/X_j reduces the total disuniformity. From (15),

\frac{X_i(t)}{X_j(t)} < \frac{X_i(t+1)}{X_j(t+1)} \qquad (17)

and hence D decreases in time. It is easy to show that in Lemma I.2) E is a decreasing function of t, and the same argument applies for I.3) and I.4).

Necessity of Lemma I (Proof by Contradiction): Suppose E increases and D decreases but one or both conditions of Lemma I.1) do not hold. In this case, the necessary conditions for one of I.2), I.3), or I.4) hold. For example, if b_i > b_j but X_i > X_j instead of X_i < X_j, this is Lemma I.2) and E decreases, which contradicts our assumption in I.1). Note that we do not consider b_i = b_j or P_i = P_j, because they are neutral cases and do not have any effect. So all four combinations of the b's and P's are generated in this lemma.
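Lemma I.1) can be illustrated with one discrete fitness step and hypothetical numbers (the populations and rates below are not from the paper): the minority pattern has the larger fitness rate, so its share moves toward 1/2 and the entropy rises, matching (15) and (17).

```python
import math

def entropy(P):
    """E = -sum_i P_i log2 P_i, as in (6)."""
    return -sum(p * math.log2(p) for p in P if p > 0)

# Lemma I.1) setup: P_i < P_j and b_i > b_j.
Xi, Xj = 30.0, 70.0
bi, bj = 0.3, 0.1
E_now = entropy([Xi / (Xi + Xj), Xj / (Xi + Xj)])

# One discrete step of (1): X(t+1) = (1 + b) X(t).
Xi2, Xj2 = Xi * (1 + bi), Xj * (1 + bj)
E_next = entropy([Xi2 / (Xi2 + Xj2), Xj2 / (Xi2 + Xj2)])

print(E_next > E_now)        # True: shares move toward 1/2, entropy rises
print(Xi2 / Xj2 > Xi / Xj)   # True: the ratio X_i/X_j grows, as in (17)
```

Under strict positive dominance of i over j, the growing ratio X_i/X_j is exactly what drives D downward in the lemma.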

Corollary I: When the conditions of Lemma I hold and t → ∞:

I.1) in exponential growth, D_i is a lower bound for D and E ∈ (0, 1) when D decreases [Lemma I.1), I.2)]; also, D_j is an upper bound for D and E ∈ (0, 1) when D increases [Lemma I.3), I.4)];

I.2) consider logistic growth where f, f′, g, and g′ are functions of the logistic limits L_i; then max{D_i, f(L_i)} is a lower bound for D and E ∈ (0, g(L_i)) when D decreases [Lemma I.1), I.2)]; also, min{D_j, f′(L_j)} is an upper bound for D and E ∈ (0, g′(L_j)) when D increases [Lemma I.3), I.4)].

Proof: In Corollary I.1), D decreases when the proportion X_i/X_j increases (due to the dominance condition), so min(D) = D_i when all components are i (X_i/X_j → ∞ and E = 0). And D increases when the proportion X_i/X_j decreases, so max(D) = D_j when all components are j (X_i/X_j → 0 and E = 0). However, max(E) = log_2 n and n = 2, so max(E) = 1 and E is nonnegative.

When the fitness follows a logistic function [Corollary I.2)], we have limits on the numbers of i's and j's: X_i/X_j < ∞ if X_j ≠ 0 and X_i/X_j > 0 if X_i ≠ 0. So min(D) is a function of the limit of i when X_i/X_j increases, and max(D) is a function of the limit of j when X_i/X_j decreases. Clearly, min(D) = D_i when X_j = 0 and max(D) = D_j when X_i = 0. Using the same argument we can find the range of E, which is a function of the limits.

Theorem I: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0 ∀k ∈ S, and i strictly positively dominating j for i ∈ S′ and j ∈ S − S′:

I.1) E < −log_2 P_i (i.e., Σ_{i∈S} P_i log_2 P_i > log_2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D decreases in time (D↓);

I.2) E > −log_2 P_i (Σ_{i∈S} P_i log_2 P_i < log_2 P_i) and b_i > b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D decreases in time (D↓);

I.3) E < −log_2 P_i (Σ_{i∈S} P_i log_2 P_i > log_2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D increases in time (D↑);

I.4) E > −log_2 P_i (Σ_{i∈S} P_i log_2 P_i < log_2 P_i) and b_i < b_j for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D increases in time (D↑).

Proof (Sufficiency of Theorem I): This theorem generalizes Lemma I to n components. As in Lemma I, the entropy of the system increases as the probabilities of the components get closer to the uniform distribution. This happens when, for exponential growth in (8) or for logistic growth in (12), Σ_{i∈S} P_i log_2 P_i = log_2 P_i. To reach this point, E increases if there is a larger fitness rate for the components whose probabilities are less than uniform. In general, larger fitness rates increase the entropy if −log_2 P_i > E [Theorem I.1)] for the cases where we cannot reach the uniform distribution, or if we want to compare some components where all have smaller or larger probabilities than uniform.

As in Lemma I, increasing the number of dominant components decreases the total disuniformity (14). The same argument proves Theorem I.2), I.3), and I.4). We can also prove the necessity of Theorem I by contradiction.

Corollary II: When the conditions of Theorem I hold and t → ∞:

II.1) Corollary I.1) can be generalized to n components in Theorem I with E ∈ (0, log_2 n);

II.2) Corollary I.2) can be generalized to n components in Theorem I with different f, f′, g, and g′ functions.

Note that b_k > 0 ∀k ∈ S means all X_i's are growing over time; however, some P_i's may decrease.

Lemma II: Given two different patterns of behavior (i and j) in the population, where i positively dominates j: Lemma I.1), I.2), I.3), and I.4) and Corollary I.1) and I.2) remain valid.

Proof: This is a generalization of Lemma I to the positive dominance case. This case allows j to dominate i in some time intervals w; however, the proof is still valid because D is the total disuniformity.

Theorem II: Given n different patterns of behavior (i = 1, ..., n) in population S, b_k ≥ 0 ∀k ∈ S, and i positively dominating j for i ∈ S′ and j ∈ S − S′: Theorem I.1), I.2), I.3), and I.4) and Corollary II.1) and II.2) remain valid.

Proof: This theorem is a generalization of Lemma II to n components. We can use the same argument that we used to generalize Lemma I to Theorem I in order to generalize Lemma II to Theorem II.

Example 1 (Features in Fig. 1): Assume there are 100 components in a complex system which follow only three patterns, i, j, and k. At time t = 1, 15% of the components follow pattern i, 65% follow j, and 20% follow k. Let b_i = 0.2, b_j = 0.1, and b_k = 0.3. Fig. 2(a) shows the patterns of electricity consumption over 24 h. The objective is to simulate and analyze the complex system for the next 20 years (80 seasons).

At t = 1 the system follows Theorem II.1):
P_i(t = 1) = 0.15, P_j(t = 1) = 0.65, P_k(t = 1) = 0.2,
E(t = 1) = 1.28, D(t = 1) = 65.08.

At t = 9 we have max(P_i) [the system then follows Theorem II.2)]:
P_i(t = 9) = 0.18, P_j(t = 9) = 0.38, P_k(t = 9) = 0.44,
E(t = 9) = 1.49, D(t = 9) = 59.08.

At t = 19 disuniformity starts increasing again:
P_i(t = 19) = 0.13, P_j(t = 19) = 0.12, P_k(t = 19) = 0.75,
E(t = 19) = 1.07, D(t = 19) = 57.45 (D(t = 18) = 57.36).

Fig. 2. Example for Theorem II. (a) Patterns. (b) Fitness. (c) D versus E.

Fig. 2(b) presents the behavior of the components and simulates the entropy and disuniformity of the system for 80 seasons. Fig. 2(c) shows the three different possible areas for Theorem II.
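The probabilities and entropies of Example 1 can be reproduced with a discrete per-season fitness update (an assumption consistent with the reported numbers); the D values additionally require the consumption patterns of Fig. 2(a), which are not tabulated here, so only P and E are checked:

```python
import math

def shares(X0, b, t):
    """Pattern shares after t - 1 discrete fitness steps X(t+1) = (1 + b) X(t),
    starting from the t = 1 populations X0."""
    X = [x * (1.0 + bi) ** (t - 1) for x, bi in zip(X0, b)]
    total = sum(X)
    return [x / total for x in X]

def entropy(P):
    """E = -sum_i P_i log2 P_i, as in (6)."""
    return -sum(p * math.log2(p) for p in P if p > 0)

X0 = [15.0, 65.0, 20.0]   # 100 components: 15% i, 65% j, 20% k at t = 1
b = [0.2, 0.1, 0.3]       # b_i, b_j, b_k

P1 = shares(X0, b, 1)
print(round(entropy(P1), 2))        # 1.28, matching E(t = 1)

P9 = shares(X0, b, 9)
print([round(p, 2) for p in P9])    # [0.18, 0.38, 0.44]
print(round(entropy(P9), 2))        # 1.49, matching E(t = 9)
```

The run also shows P_i peaking near t = 9 before pattern k, with the largest fitness rate, takes over, as reported at t = 19.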

Lemma III: Given two different patterns of behavior (i and j) in the population, where i strictly negatively dominates j:

III.1) P_i < P_j (X_i < X_j) and b_i > b_j iff E is increasing in time (E↑) and D decreases in time (D↓) until D = 0 (where X_i(C_i(w) − C̄_i) = −X_j(C_j(w) − C̄_j) for all w); afterward D increases in time (D↑);

III.2) P_i > P_j (X_i > X_j) and b_i > b_j iff E is decreasing in time (E↓) and D decreases in time (D↓) until D = 0 (where X_i(C_i(w) − C̄_i) = −X_j(C_j(w) − C̄_j) for all w); afterward D increases in time (D↑);

III.3) P_i < P_j (X_i < X_j) and b_i < b_j iff E is decreasing in time (E↓) and D increases in time (D↑);

III.4) P_i > P_j (X_i > X_j) and b_i < b_j iff E is increasing in time (E↑) and D increases in time (D↑).

Proof: To prove this lemma, we must account for the opposite signs of C_i(w) − C̄_i and C_j(w) − C̄_j for all w. The total disuniformity therefore decreases until it reaches 0 and increases after that [because of the power of 2 in (14)]. D = 0 when the weighted disuniformity of all components i equals the weighted disuniformity of all components j. When the total disuniformity increases [Lemma III.3), III.4)] we do not need to consider any minimum point, because the function is nondecreasing.

Corollary III: When the conditions of Lemma III hold and t → ∞:

III.1) in exponential growth, 0 is a lower bound for D (for any ε > 0 we eventually have D < ε) and E ∈ (0, 1) when D decreases [Lemma III.1), III.2)]; also, D_j is an upper bound for D and E ∈ (0, 1) when D increases [Lemma III.3), III.4)];

III.2) in logistic growth, max{0, f(L_i)} is a lower bound for D and E ∈ (0, g(L_i)) when D decreases [Lemma III.1), III.2)]; also, min{D_j, f′(L_j)} is an upper bound for D and E ∈ (0, g′(L_j)) when D increases [Lemma III.3), III.4)].

Proof: The proof is similar to that of Corollary I; however, at the point where X_i(C_i(w) − C̄_i) = −X_j(C_j(w) − C̄_j) for all w, we have D = 0. This point may occur before all components become similar to i's, so min(D) = 0 where E ≠ 0, and E ≠ 0 where D = 0.

Theorem III: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0 ∀k ∈ S, and i ≻ j for i ∈ S′ and j ∈ S − S′:

III.1) E < −log2 Pi (Σ_{i∈S} Pi log2 Pi > log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D decreases in time (D↓) until D = 0 (Σ∫Xi(Ci(w) − C̄i)dw = Σ∫Xj(Cj(w) − C̄j)dw); afterward D increases in time (D↑);

III.2) E > −log2 Pi (Σ_{i∈S} Pi log2 Pi < log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D decreases in time (D↓) until D = 0 (Σ∫Xi(Ci(w) − C̄i)dw = Σ∫Xj(Cj(w) − C̄j)dw); afterward D increases in time (D↑);

III.3) E < −log2 Pi (Σ_{i∈S} Pi log2 Pi > log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E↓) and D increases in time (D↑);

III.4) E > −log2 Pi (Σ_{i∈S} Pi log2 Pi < log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E↑) and D increases in time (D↑).

Corollary IV: When the conditions of Theorem III hold and t → ∞:

IV.1) Corollary III.1) can be generalized to the n components in Theorem III with E ∈ (0, log2 n);

IV.2) Corollary III.2) can be generalized to the n components in Theorem III with different f, f′, g, and g′ functions.

Lemma IV: Given two different patterns of behavior (i and j) in the population and i ≺ j, Lemma III.1), III.2), III.3), and III.4) and Corollary III.1) and III.2) apply.

Theorem IV: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0 ∀k ∈ S, and i ≺ j for i ∈ S′ and j ∈ S − S′, Theorem III.1), III.2), III.3), and III.4) and Corollary IV.1) and IV.2) apply.

Example 2: (Features on Fig. 1) Modify Example 1 to three components with negative dominance, Theorem IV [Fig. 3(a)]. Fig. 3(b) shows the behavior of the complex system. Fig. 3(c) shows the different possible cases of Theorem IV for the scenario of Fig. 3(b), respectively.

Fig. 3. Example for Theorem IV. (a) Pattern. (b) Fitness. (c) D versus E.

Summary: We can summarize the results of Theorems I, II, III, and IV in Table I and conclude Theorem V as a general theorem of a complex system in all dominance cases.

Theorem V (Mechanisms of Components): If i ≻ j, i.e., the i's dominate the j's, the disuniformity of the system is decreasing in time if the entropy increases in time when −log2 Pi > E, or if the entropy decreases in time when −log2 Pi < E, while ∫Xi(Ci(w) − C̄i)dw < ∫Xj(Cj(w) − C̄j)dw for both conditions.
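The switching condition in Theorem V compares the surprisal −log2 Pi of a pattern with the system entropy E. A minimal sketch (with made-up pattern proportions, assumed only for illustration) classifies each pattern into its regime:

```python
import math

def entropy(P):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in P if p > 0)

# Hypothetical pattern proportions (our assumption, not from the paper)
P = [0.5, 0.3, 0.2]
E = entropy(P)
for i, p in enumerate(P):
    surprisal = -math.log2(p)  # -log2 P_i
    # Theorem V: D can decrease via increasing entropy (surprisal > E)
    # or via decreasing entropy (surprisal < E)
    regime = "entropy-increasing" if surprisal > E else "entropy-decreasing"
    print(f"pattern {i}: -log2 P_i = {surprisal:.3f}, E = {E:.3f}, {regime}")
```

A pattern held by the majority has low surprisal (−log2 Pi < E), so disuniformity reduction there must come from decreasing entropy; a minority pattern has high surprisal, so the entropy-increasing route applies.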

We can apply this theorem to control, or at least predict, the complex behaviors in large ECASs. Here, we provide incentives that motivate the components to decrease the disuniformity by adjusting their patterns (this adjustment changes the fitness rates bi dynamically). This heterarchical rearrangement, with external changes to the environment but without central organization, is a source of self-organizing in components. As an illustration, assume n patterns of consumption in a system. When n is large (e.g., patterns of consumers in a large metropolitan area), it is impossible to control and predict all behaviors and their relationships. We can instead focus on a few groups (pattern i where −log2 Pi > E) and increase the entropy by motivating other consumers to adjust to this pattern (migrate to this pattern or increase its fitness portion). This phenomenon makes the fitness rates nonlinear, complex, and dynamic, i.e., bi = K(R(D); E). Here, K is a function of R(D) and the population of other patterns (i.e., E). R(D) represents the motivations based on D (e.g., rewards that consumers receive by cooperating to reduce the disuniformity). These changes in the bi make the Xi dependent on each other. To predict the behaviors at each time, we can map the system conditions (dominance, entropy, and fitness rates) onto Table I.

HAGHNEVIS AND ASKIN: MODELING FRAMEWORK FOR ENGINEERED COMPLEX ADAPTIVE SYSTEMS


TABLE I
Summary of Emergence

(The cases below hold for both i ≻ j and i ≺ j.)

                    bi > bj                           bi < bj
−log2 Pi > E    E↑; D↓ until D = 0, then D↑*      E↓; D↑
−log2 Pi < E    E↓; D↓ until D = 0, then D↑*      E↑; D↑

*Note: * marks the point where ∫Xi(Ci(w) − C̄i)dw > ∫Xj(Cj(w) − C̄j)dw changes to ∫Xi(Ci(w) − C̄i)dw < ∫Xj(Cj(w) − C̄j)dw in time, or vice versa.

Next, we show how we can control the interoperability between patterns by using a third pattern (catalyst), i.e., indirectly utilize Theorem V to decrease the disuniformity.

In this step of the framework, we study the engineering concept of emergence in ECASs. Bar-Yam [29] conceptually and mathematically showed the possibility of defining a notion of emergence and described four concepts of emergence. A conceptual classification for emergence was proposed by Halley and Winkler [30]. Prokopenko et al. [16] interpreted the concepts of emergence and self-organization through information theory and compared them in CASs. We borrow some concepts of information theory to analyze and predict emergent behaviors of ECASs and to show the applicability of Theorem V.

Emergence cannot be defined by the properties and relationships of the lower component level [23]. Assume there is an interaction between pattern i and pattern j at their current level. Then (6) becomes

E(i, j) = −Σ_{mi=1}^{Mi} Σ_{mj=1}^{Mj} P_{mi,mj} log2 P_{mi,mj}   (18)

where P_{mi,mj} is the joint probability of pattern i and pattern j being in states mi and mj. The interaction information (mutual information) of i and j,

I(i; j) = Σ_{mi=1}^{Mi} Σ_{mj=1}^{Mj} P_{mi,mj} log2 [P_{mi,mj} / (P_{mi} P_{mj})]   (19)

is the amount of information that i and j share, reducing the uncertainty of each other, where P_{mi} is the marginal probability for state mi. From these we can obtain (see [28]) the decomposition (20), in which E(i, j) replaces the entropy in Lemmas I–IV.

The generalization of (20) to the three-pattern case is

E = E(i, j, k) = −[E(i) + E(j) + E(k)] + I(i; j; k) + E(i, j) + E(i, k) + E(k, j)   (21)

I(i; j; k) = I(i; j) − I(i; j|k).   (22)

Positive I means k supports and increases the interoperability between i and j. However, negative I shows that k inhibits and decreases the interoperability.
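A small sketch (with a hypothetical joint distribution of our own choosing) computes I(i; j; k) via (22) and confirms that the three-pattern decomposition (21) holds:

```python
import itertools
import math

def H(p):
    """Joint entropy (bits) of a dict {outcome tuple: probability}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(p, axes):
    """Marginal distribution over the given axes of the outcome tuples."""
    out = {}
    for k, v in p.items():
        key = tuple(k[a] for a in axes)
        out[key] = out.get(key, 0.0) + v
    return out

# Hypothetical joint distribution over binary patterns (i, j, k) -- our assumption
p = {(a, b, c): 1 / 8 for a, b, c in itertools.product((0, 1), repeat=3)}
p[(0, 0, 0)] += 0.05; p[(1, 1, 1)] += 0.05
p[(0, 1, 0)] -= 0.05; p[(1, 0, 1)] -= 0.05

Hij = H(marginal(p, (0, 1)))
Hik = H(marginal(p, (0, 2)))
Hjk = H(marginal(p, (1, 2)))
Hi, Hj, Hk = (H(marginal(p, (a,))) for a in (0, 1, 2))
Hijk = H(p)

I_ij   = Hi + Hj - Hij            # I(i;j)
I_ij_k = Hik + Hjk - Hk - Hijk    # I(i;j|k)
I_ijk  = I_ij - I_ij_k            # Eq. (22)

# Eq. (21): E(i,j,k) = -[E(i)+E(j)+E(k)] + I(i;j;k) + E(i,j)+E(i,k)+E(j,k)
assert abs(Hijk - (-(Hi + Hj + Hk) + I_ijk + Hij + Hik + Hjk)) < 1e-9
```

The final assertion is an algebraic identity of joint entropies, so it holds for any joint distribution; the sign of I_ijk then tells whether k supports or inhibits the interoperability of i and j.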

Definition II:

1) Catalyst: Pattern k is a positive catalyst for other patterns in the system if k supports their interoperability, and a negative catalyst if it inhibits their interoperability.

It is possible to generalize (21) and (22) to n patterns [27]

E(Ω) = Σ_{Φ⊂Ω, Φ≠∅} (−1)^{|Ω|−|Φ|−1} E(Φ) + (−1)^{|Ω|−1} I(Ω),   Ω = {im | m = 1, ..., n}   (23)

I(i1; ...; in) = I(i1; ...; i(n−1)) − I(i1; ...; i(n−1) | in)   (24)

I(i1; ...; in) = I(i1; ...; i(n−k)) − I(i1; ...; i(n−k) | i(n−k+1); ...; in).   (25)

In Theorem V, instead of increasing or decreasing the entropy, we can change the interoperability. We add catalyst(s) to control (inhibit or support) the interoperability.

Definition III:

1) Catalyst-associate interoperability (CAI):

CAI = I(Ω|k) − I(Ω).   (26)

E = E(i, j) = E(i) + E(j) − I(i; j).   (20)

From (20), when I(i; j) increases (I↑), E decreases (E↓). For the case of only two groups of patterns in the system, the mutual information is a positive number with a maximum of one, 0 ≤ I ≤ 1 [from (19)]. E is minimal when i and j are identical, I = 1 (one group follows the other one), and E is at its maximum when i and j are independent, I = 0 (the groups are unrelated).

2) Emergence of catalyst (EOC):

EOC = (E′(t) − E(t)) / CAI   (27)

where E′(t) and E(t) are the entropy at time t after and before applying the catalyst(s), respectively.

Example 3: (Interoperability in Fig. 1) Assume Table II gives the joint probabilities for i and j in Example 1, where i can be 0.2, 0.4, or 0.6 and j can be 0.1, 0.15, or 0.2 of the total consumers. The population of other patterns and their effects are negligible.


TABLE II
Prior Probabilities for k = 0

P(mi, mj)     mi = 0.20   mi = 0.40   mi = 0.60
mj = 0.10        0.20        0.15        0.05
mj = 0.15        0.05        0.15        0.05
mj = 0.20        0.02        0.15        0.18

TABLE III
Posterior Probabilities for k > 0

P(mi, mj|k)   mi = 0.20   mi = 0.40   mi = 0.60
mj = 0.10        0.23        0.15        0.02
mj = 0.15        0.03        0.19        0.03
mj = 0.20        0.02        0.13        0.20

E(i, j) = 2.90, I(i; j) = 0.20. If adding catalyst k updates Table II to Table III (users k affect the interrelationships between the i's and j's), then E(i) = 1.56, E(j) = 1.53, E(i, j) = 2.73, and I(i; j) = 0.36. So we decrease the entropy by increasing the interoperability, which decreases the disuniformity in Example 1:

CAI = 0.36 − 0.20 = 0.16

EOC = (2.73 − 2.90) / 0.16 = −1.06.
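The figures in Example 3 can be reproduced directly from Tables II and III. The sketch below recomputes E(i, j), I(i; j), CAI, and EOC, rounding to two decimals as in the text:

```python
import math

def joint_entropy(P):
    """E(i, j) = -sum p log2 p over a joint probability matrix."""
    return -sum(p * math.log2(p) for row in P for p in row if p > 0)

def mutual_info(P):
    """I(i; j) from Eq. (19); columns are states of i, rows states of j."""
    Pi = [sum(col) for col in zip(*P)]   # marginals of i (columns)
    Pj = [sum(row) for row in P]         # marginals of j (rows)
    I = 0.0
    for r, row in enumerate(P):
        for c, p in enumerate(row):
            if p > 0:
                I += p * math.log2(p / (Pi[c] * Pj[r]))
    return I

# Table II (prior, k = 0): rows mj = 0.10, 0.15, 0.20; columns mi = 0.2, 0.4, 0.6
prior = [[0.20, 0.15, 0.05],
         [0.05, 0.15, 0.05],
         [0.02, 0.15, 0.18]]
# Table III (posterior, k > 0)
posterior = [[0.23, 0.15, 0.02],
             [0.03, 0.19, 0.03],
             [0.02, 0.13, 0.20]]

E1, I1 = joint_entropy(prior), mutual_info(prior)          # ~2.90, ~0.20
E2, I2 = joint_entropy(posterior), mutual_info(posterior)  # ~2.73, ~0.36

CAI = round(round(I2, 2) - round(I1, 2), 2)                # Eq. (26): 0.16
EOC = (round(E2, 2) - round(E1, 2)) / CAI                  # Eq. (27): -1.06
print(round(E1, 2), round(I1, 2), round(E2, 2), round(I2, 2), CAI, round(EOC, 2))
```

Adding the catalyst raises the mutual information and lowers the joint entropy, giving the negative EOC computed in the example.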

We can use the concept of EOC to select an appropriate catalyst. For example, assume n patterns of consumption in a social population, where i1 and i2 hold the majority of the population and thus have the largest effect on the disuniformity of the consumption. We plan to decrease the disuniformity with a limited amount of resources (e.g., some rewards to give to cooperative consumers). Instead of distributing the reward within a large group (say i1) to make it cooperate with the other group, which is not very effective (because the portion of each individual is too low), we can reward a small group of catalysts (say i3) to improve the interoperability between i1 and i2. This idea is similar to finding and investing in hubs in a social network (based on the power law, the number of components with more relationships decreases exponentially [12]). The next step is to show how these emergence phenomena cause evolution in the system.

Here, we analyze the evolution process. Then, in the last step of the framework, we depict the adaptation and learning in the system. Some measures were developed for the complexity threshold parameter of physical complex systems in previous studies [31]. Erdős and Rényi [32] studied the probability threshold function and evolution in random graphs. We borrow the concept of threshold from [32].

Let Mα(t), α = 1, ..., α0, be the number of components in patterns which have trait α at time t. Here, δ(α, t) is a binary variable that shows whether the system possesses trait α at time t

δ(α, t) = 1 if Mα(t)/Σi Xi(t) ≥ θα;  δ(α, t) = 0 if Mα(t)/Σi Xi(t) < θα   (28)

where θα is a predefined threshold for possessing trait α and triggering adaptation.

Let Δ(t) = (δ(α, t); α = 1, ..., α0) be a vector of 0s and 1s, where its αth position is 1 if δ(α, t) = 1. Let Λ(t) be a predefined finite set of Δs at time t. Based on the definition, the system evolves when ∃t′ > t, Δ(t) ∈ Λ & Δ(t′) ∉ Λ (or Δ(t) ∉ Λ & Δ(t′) ∈ Λ).

Definition IV:

1) Stagnation: Systems are stagnant when they are not evolvable, i.e., Δ(t) ∈ Λ (or Δ(t) ∉ Λ) ∀t.

Example 4: (Traits in Fig. 1) Assume θi = 0.2, θj = 0.4, θk = 0.3, and Λ = {[0 1 1], [1 1 1]} in Example 1:

t = 4: Mi(t)/Σi Xi(t) = 26/156 < 0.2, Mj(t)/Σi Xi(t) = 87/156 ≥ 0.4, Mk(t)/Σi Xi(t) = 44/156 < 0.3, so Δ(4) = [0 1 0];

t = 5: Mi(t)/Σi Xi(t) = 31/183 < 0.2, Mj(t)/Σi Xi(t) = 95/183 ≥ 0.4, Mk(t)/Σi Xi(t) = 57/183 ≥ 0.3, so Δ(5) = [0 1 1];

t = 9: Mi(t)/Σi Xi(t) = 55/367 < 0.2, Mj(t)/Σi Xi(t) = 139/367 < 0.4, Mk(t)/Σi Xi(t) ≥ 0.3, so Δ(9) = [0 0 1].

If the system evolves only when it possesses all traits (i.e., Λ = {[1 1 1]}), this system is stagnant.
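The trait vectors of Example 4 follow mechanically from (28). A minimal sketch for the t = 4 and t = 5 snapshots (t = 9 is omitted because Mk(9) is not listed in the text):

```python
# Thresholds (theta_i, theta_j, theta_k) and trait-vector set from Example 4
thresholds = [0.2, 0.4, 0.3]
Lambda = {(0, 1, 1), (1, 1, 1)}   # predefined finite set of Deltas

def trait_vector(M, total, thetas):
    """delta(alpha, t) = 1 iff M_alpha(t) / sum_i X_i(t) >= theta_alpha (Eq. 28)."""
    return tuple(int(m / total >= th) for m, th in zip(M, thetas))

# (M_i, M_j, M_k) and total population at each snapshot, from Example 4
snapshots = {4: ([26, 87, 44], 156), 5: ([31, 95, 57], 183)}
states = {t: trait_vector(M, tot, thresholds) for t, (M, tot) in snapshots.items()}

assert states[4] == (0, 1, 0)     # Delta(4) = [0 1 0], not in Lambda
assert states[5] == (0, 1, 1)     # Delta(5) = [0 1 1], in Lambda
# The system evolves between t = 4 and t = 5: Delta crosses into Lambda
assert (states[4] in Lambda) != (states[5] in Lambda)
```

The final assertion is exactly the evolution condition: the trait vector leaves or enters the predefined set Λ between the two times.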

In this example, the system is adjusted by two evolutions. This adapting situation can be nonstationary. Fig. 4(a) simulates a case where components adjust their behaviors several times to increase their objectives. Here, i, j, and k compete to get more rewards by reducing the disuniformity. However, to reduce the disuniformity they should cooperate by adjusting their behaviors (changing the fitness rates bi). Adding a learning procedure (do not adjust to previously tried states) removes the nonstationary evolution and causes faster adaptation in Fig. 4(b).

To extend this framework, the concept of dissection of features can extend to other ECASs easily. Entropy of components is a general concept for all systems, and disuniformity can be interpreted in different ECASs: for example, reducing demand fluctuations in wholesale marketing, resource allocation in supply chain management, and synergism of commands to reduce the distances to a target in AI. Decision makers may assign different objectives to ECASs based on their requirements and are not limited to disuniformity. However, any ECAS of the class being addressed needs at least one minimizing/maximizing measure to study dissection of features other than component entropy. The other hallmarks (evolution and adaptation) are derived from the emergence concept (dissection of features and the interactions), and their mathematical calculation is not limited to electricity usage.

VII. Conclusion and Future Work

In this paper, we presented a framework that helped us employ engineering and mathematical models to analyze certain ECASs. We can apply this framework to study and predict the hallmarks of complex heterarchical engineered systems. The proposed method was used to engineer the emergence of human decisions in an ECAS, the evolution of the behaviors, and its adaptation to new environments. We illustrated how the concepts behind our measures can be extended to other ECASs.

We employed information theory in our mathematical model. All possible dominance cases in complex systems were defined, and four theorems were presented to calibrate the current situation and predict the future behaviors of each case. Theorem V (mechanisms of components) can be employed to study self-organization in ECASs.

Catalyst-associate interoperability and stagnation of the system are new concepts that can help us measure or scale the emergence and evolution behaviors without complex modeling. Researchers may control the interoperability of components with CAI. Also, they can measure the evolvability or stagnation of a complex system by a threshold function.

Varying the fitness rates over time, bi(t), may lead to a new formulation in future research. We can consider statistical or dynamical functions for the fitness rates. Agent-based modeling and simulation can support and extend the mathematical basis of this research for investigating real cases.

Acknowledgment

The authors would like to thank Prof. D. Armbruster, School of Mathematical and Statistical Sciences, Arizona State University, Tempe, for his constructive comments that improved the quality of this paper.

References

[1] M. Couture and R. Charpentier, "Elements of a framework for studying complex systems," in Proc. 12th Int. Command Control Res. Technol. Symp., Jun. 2007, pp. 1-17.
[2] M. Couture, "Complexity and chaos: State-of-the-art; list of works, experts, organizations, projects, journals, conferences and tools," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-450, 2006.
[3] M. Couture, "Complexity and chaos: State-of-the-art; formulations and measures of complexity," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-451, 2006.
[4] M. Couture, "Complexity and chaos: State-of-the-art; glossary," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-452, 2006.
[5] M. Couture, "Complexity and chaos: State-of-the-art; overview of theoretical concepts," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-453, 2007.
[6] C. L. Magee and O. L. de Weck, "Complex system classification," in Proc. 14th Annu. Int. Symp. INCOSE, Jun. 2004, pp. 1-18.
[7] Y. Bar-Yam, "Multiscale complexity/entropy," Adv. Complex Syst., vol. 7, no. 1, pp. 47-63, 2004.
[8] Y. Bar-Yam. (2000). Complexity Rising: From Human Beings to Human Civilization, a Complexity Profile [Online]. Available: http://necsi.org/Civilization.html
[9] N. Parks, "Energy efficiency and the smart grid," Environ. Sci. Technol., vol. 43, no. 9, pp. 2999-3000, May 2009.
[10] D. J. Watts and S. H. Strogatz, "Collective dynamics of small-world networks," Nature, vol. 393, pp. 440-442, Jun. 1998.
[11] C. Avin and D. Dayan-Rosenman, "Evolutionary reputation games on social networks," Complex Syst., vol. 17, no. 3, pp. 259-277, 2007.
[12] L. A. N. Amaral, A. Scala, M. Barthelemy, and H. E. Stanley, "Classes of small-world networks," Proc. Nat. Acad. Sci., vol. 97, no. 21, pp. 11149-11152, Oct. 2000.
[13] S. H. Strogatz, "Exploring complex networks," Nature, vol. 410, pp. 268-276, Mar. 2001.
[14] S. Lee, Y. Son, and J. Jin, "Decision field theory extensions for behavior modeling in dynamic environment using Bayesian belief network," Inform. Sci., vol. 178, no. 10, pp. 2297-2314, 2008.
[15] A. Mostashari and J. M. Sussman, "A framework for analysis, design and management of complex large-scale interconnected open sociotechnological systems," Int. J. Decision Support Syst. Technol., vol. 1, no. 2, pp. 53-68, 2009.
[16] M. Prokopenko, F. Boschietti, and A. J. Ryan, "An information-theoretic primer on complexity, self-organization, and emergence," Complexity, vol. 15, no. 1, pp. 11-28, 2009.
[17] S. Sheard and A. Mostashari, "Principles of complex systems for systems engineering," Syst. Eng., vol. 12, no. 4, pp. 295-311, 2009.
[18] J. Ottino, "Engineering complex systems," Nature, vol. 427, p. 399, Jan. 2004.
[19] D. Braha and Y. Bar-Yam, "The statistical mechanics of complex product development: Empirical and analytical results," Manage. Sci., vol. 53, no. 7, pp. 1127-1145, Jul. 2007.
[20] B. Shargel, H. Sayama, I. Epstein, and Y. Bar-Yam, "Optimization of robustness and connectivity in complex networks," Phys. Rev. Lett., vol. 90, no. 6, pp. 068701-1-068701-4, 2003.
[21] K. Kaneko and I. Tsuda, Complex Systems: Chaos and Beyond: A Constructive Approach With Applications in Life Sciences. Berlin, Germany: Springer, 2001.
[22] S. B. Yu and J. Efstathiou, "An introduction to network complexity," in Proc. Manuf. Complexity Netw. Conf., Apr. 2002, pp. 1-10.
[23] C. R. Shalizi, "Causal architecture, complexity and self-organization in time series and cellular automata," Ph.D. dissertation, Center Study Complex Syst., Univ. Michigan, Ann Arbor, May 2001.
[24] C. R. Shalizi, K. L. Shalizi, and R. Haslinger, "Quantifying self-organization with optimal predictors," Phys. Rev. Lett., vol. 93, no. 14, pp. 118701-1-118701-4, 2004.
[25] S. E. Page, "Self organization and coordination," Comput. Econ., vol. 18, no. 1, pp. 25-48, Aug. 2001.
[26] A. G. Bashkirov, "Renyi entropy as a statistical entropy for complex systems," Theor. Math. Phys., vol. 149, no. 2, pp. 1559-1573, 2006.
[27] P. Chanda, L. Sucheston, A. Zhang, D. Brazeau, J. L. Freudenheim, C. Ambrosone, and M. Ramanathan, "Ambience: A novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes," Genetics, vol. 180, no. 2, pp. 1191-1210, 2008.
[28] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley-Interscience, 2006.
[29] Y. Bar-Yam, "A mathematical theory of strong emergence using multiscale variety," Complexity, vol. 9, no. 4, pp. 15-24, 2004.
[30] J. D. Halley and D. A. Winkler, "Classification of emergence and its relation to self-organization," Complexity, vol. 13, no. 5, pp. 10-15, 2008.
[31] C. Langton, "Computation at the edge of chaos: Phase transitions and emergent computation," Physica D, vol. 42, nos. 1-3, pp. 12-37, 1990.
[32] P. Erdős and A. Rényi, "On the evolution of random graphs," Publ. Math. Inst. Hungarian Acad. Sci., vol. 5, pp. 17-61, 1960.


Moeed Haghnevis received degrees in industrial and systems engineering from the Amirkabir University of Technology, Tehran, Iran, and the University of Tehran, Tehran, respectively. He is currently pursuing the Ph.D. degree in industrial engineering with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe.

Before pursuing the Ph.D. degree, he served as an Adjunct Instructor with two universities. He has published a book and several papers. His current research interests include engineered complex systems, agent-based modeling, and simulation.

Ronald G. Askin holds a degree from the Georgia Institute of Technology, Atlanta.

He is currently the Director of the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe. He has 30 years of experience in systems modeling and analysis.

Dr. Askin is a Fellow of the IIE. He has received the National Science Foundation Presidential Young Investigator Award, the Shingo Prize for Excellence in Manufacturing Research, the IIE Joint Publishers Book of the Year Award, and the IIE Transactions Development and Applications Award.
