
INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films
the text directly from the original or copy submitted. Thus, some thesis and
dissertation copies are in typewriter face, while others may be from any type of
computer printer.

The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality illustrations
and photographs, print bleedthrough, substandard margins, and improper
alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript
and there are missing pages, these will be noted. Also, if unauthorized
copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and continuing
from left to right in equal sections with small overlaps.

Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6" x 9" black and white
photographic prints are available for any photographs or illustrations appearing
in this copy for an additional charge. Contact UMI directly to order.

Bell & Howell Information and Learning
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
800-521-0600


A STUDY OF THE IMPACT OF
ORGANIZATIONAL DESIGN
ON ORGANIZATIONAL LEARNING AND PERFORMANCE

BY
RONALD E. VYHMEISTER
B.Th., Universidad Adventista del Plata, 1982
M.B.A., Andrews University, 1985

THESIS

Submitted as partial fulfillment of the requirements
for the degree of Doctor of Philosophy in Business Administration
in the Graduate College of the
University of Illinois at Chicago, 2000

Chicago, Illinois


UMI Number: 9978639

UMI Microform 9978639
Copyright 2000 by Bell & Howell Information and Learning Company.
All rights reserved. This microform edition is protected against
unauthorized copying under Title 17, United States Code.

Bell & Howell Information and Learning Company
300 North Zeeb Road
P.O. Box 1346
Ann Arbor, MI 48106-1346


THE UNIVERSITY OF ILLINOIS AT CHICAGO
Graduate College
CERTIFICATE OF APPROVAL

I hereby recommend that the thesis prepared under my supervision by
RONALD E. VYHMEISTER
entitled A STUDY OF THE IMPACT OF ORGANIZATIONAL DESIGN ON
ORGANIZATIONAL LEARNING AND PERFORMANCE
be accepted in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY

[Signature]
Adviser (Chairperson of Defense Committee)

I concur with this recommendation

[Signature]
Department Head/Chair

Recommendation concurred in:

[Signatures]
Members of Thesis or Dissertation Defense Committee

University of Illinois at Chicago


To my wife Shawna
without whose support this would have been impossible,
and to my sons Alex and Erik
who were willing to give up time with dad
so this could become a reality

To my parents
Werner and Nancy Vyhmeister
who always have been there throughout the process


ACKNOWLEDGEMENTS
I wish to thank my advisor, Dr. Aris Ouksel, for his guidance and assistance during this
process. Without his guidance, this dissertation would never have been accomplished. I also wish
to thank Ann Rosi, from the College of Business Administration, for her assistance with all the
administrative processes that enabled me to complete this.
I also wish to thank those who have assisted in the editing process, especially Nancy
Vyhmeister, Channah Naiman, Ken Mihavics and Ross Pettit.
Finally, a special appreciation to Heidi and Christian Prohaska, without whose support and
hospitality it would have been impossible to complete this project.


TABLE OF CONTENTS

LIST OF TABLES ................................................................... viii
LIST OF FIGURES .................................................................... ix
Chapter I: Introduction ............................................................. 1
Chapter II: A Brief History of Organizational Design ............................... 7
    Organizational Design Focusing on the Organization Itself ...................... 9
    Organizational Design Focusing on the Organization Members .................... 11
    Organizational Design Focusing on Response and Communication .................. 14
    Current Implementations of Organizational Design ............................... 15
Chapter III: Organizational Learning: What Is It? How Can We Study It? ............ 21
    Organizational Learning ........................................................ 21
    A Model for Studying the Impact of Organizational Structure on Organizational Learning ... 24
        Description of the Ouksel-Mihavics-Carley Model ............................ 24
        Example of the Model ....................................................... 32
        Past Studies Using the Model ............................................... 36
        Past Applications of the OMC Model ......................................... 37
Chapter IV: Research Methodology ................................................... 43
    Model Enhancements ............................................................. 44
        Feedback Assumptions ....................................................... 44
        Hierarchical Structure ..................................................... 50
        Appropriate Feedback ....................................................... 52
    Data Collection ................................................................ 52
Chapter V: Model Robustness ........................................................ 57
Chapter VII: Model Applications .................................................... 74
Chapter VIII: Conclusions and Further Research ..................................... 78
Cited Literature ................................................................... 80

TABLE OF CONTENTS (Continued)

Appendix 1 ......................................................................... 86
    stub.c ......................................................................... 86
    Simulate.c ..................................................................... 88
    Startgen.c ..................................................................... 96
    Startloc.c .................................................................... 105
Appendix 2 ........................................................................ 115
Appendix 3 ........................................................................ 117
    Regression Results for dependent BEGINLRN .................................... 117
    Regression Results for STABLENUM ............................................. 117
    Regression Results for StableVal ............................................. 118
    Regression Results for PCTLearn .............................................. 118


LIST OF TABLES

Table I - Definitions of Terms Relating to Organizational Redesign ................ 18
Table II - Sample Decision-Making Input Patterns .................................. 33
Table III - Agent Memory Map ...................................................... 33
Table IV - Sample Agent and Organizational Decisions .............................. 34
Table VI - Memory Map after Decision with Localized Feedback ...................... 35
Table VII - Possible Organizational Structures .................................... 54
Table VIII - Analytical Results: Maximum Percentage of Correct Decisions .......... 58
Table IX - Comparison to Past Results ............................................. 59
Table XI - Impact of Evidence Overlap on Organizational Performance under Dispersed or Uniform Weights ... 60
Table X - Organizational Performance Using Dispersed and Uniform Weights .......... 61
Table XII - Organizational Performance under Clustered Weights .................... 62
Table XIII - Impact of Overlap on Clustered Majority Teams ........................ 63
Table XIV - Variables Used ........................................................ 67
Table XV - Calculated Parameters Based on Original Parameters ..................... 68
Table XVI - Variables Calculated from the Simulation Results ...................... 69
Table XVII - Summary of Results ................................................... 71


LIST OF FIGURES

Figure 1 ........................................................................... 29
Figure 2 ........................................................................... 30
Figure 3 ........................................................................... 31
Figure 4 ........................................................................... 32
Figure 5 ........................................................................... 58
Figure 6 ........................................................................... 64
Figure 7 ........................................................................... 65
Figure 8 ........................................................................... 66


Summary
Since the turn of this century, many researchers have sought to understand what factors
characterized successful organizations. Taylor (1911), Fayol (1916) and Weber (1947) were among
the first, focusing on tasks, procedures and structures. They were followed by Mayo (1933),
Maslow (1954), and Herzberg (1959, 1966), who focused on the members of the organization. In the last
30 years research has focused on quality (Deming, in Walton, 1986), decentralization (Drucker, 1969, 1974),
and flexibility and rapid reaction (Peters and Waterman, 1982; Peters and Austin, 1985; Peters, 1992).
In recent years, information technology has transformed how businesses operate.
Reengineering, downsizing, rightsizing, and flattening are terms which have been used to describe
the transformation of organizations in their search for improved performance. The advent of
software agents has transformed how information is processed. For the purposes of this study they
are defined as "atomic software entities operating through autonomous actions on behalf of the user
machines and humans - without constant human intervention" (Ma, 1999).
In the last few years, researchers have posited that the key to organizational success was an
organization's ability to learn (Senge, 1990; Argote, 1993, 1999; Swigart and Johansen, 1994). For
learning to take place, the organization's goals must be known. At the same time, not only is
information necessary, but members of the organization must also receive feedback regarding their
performance.
There is one model which clearly describes how organizational design impacts organizational
learning. It has been developed in the last decade through a stream of research which has focused
on how information is processed in an organization and how learning takes place (Carley, 1990,
1992, 1996; Lin and Carley, 1993; Carley and Lin, 1995; Ye and Carley, 1995; Mihavics and Ouksel,
1996). The decision function used in this model (Ouksel, Mihavics, and Carley, 1996) is extremely
powerful. This decision function does not make any assumptions regarding the independence or
interdependence of the task information. It is particularly powerful in that it captures all the
interactions that exist among the various inputs to the decision. All relationships, such as XOR,
AND, and OR, are captured. Because it captures the full power of propositional calculus, it covers
a very large information space. This allows researchers to study large problems, with the assurance
that the decision function is relatively complete.
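
To illustrate why a pattern-based decision function subsumes relations such as XOR, consider the following C sketch. It is purely illustrative (it is not the appendix code, and the two-bit task and tally names are assumptions): by keeping an outcome tally for every complete input pattern rather than a single weight per bit, the function can represent XOR, which no independent per-bit weighting can.

    #include <stdio.h>

    /* A minimal sketch: a decision function realized as a tally per complete
     * input pattern.  With 2 input bits there are 4 patterns; tallying outcomes
     * per pattern can encode XOR, AND, OR, or any other Boolean relation,
     * which a single weight per bit cannot. */
    int main(void) {
        int votes_for_1[4] = {0, 0, 0, 0};   /* times "decision 1" was correct for each pattern */
        int votes_for_0[4] = {0, 0, 0, 0};   /* times "decision 0" was correct for each pattern */

        /* Feed the function a history in which the correct decision is XOR(b1, b0). */
        for (int trial = 0; trial < 100; trial++) {
            int pattern = trial % 4;                      /* patterns 00, 01, 10, 11 in turn */
            int correct = (pattern == 1 || pattern == 2); /* XOR of the two bits            */
            if (correct) votes_for_1[pattern]++; else votes_for_0[pattern]++;
        }

        /* The learned decision for each pattern reproduces XOR exactly. */
        for (int pattern = 0; pattern < 4; pattern++) {
            int decision = votes_for_1[pattern] > votes_for_0[pattern];
            printf("pattern %d%d -> decision %d\n", (pattern >> 1) & 1, pattern & 1, decision);
        }
        return 0;
    }

Generalizing the tally to every pattern over N bits yields a decision function capable of expressing any Boolean relation among the inputs, which is the sense in which the decision function covers the full propositional space.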
The process is the same, regardless of the organization studied. Every decision task an
organization faces is represented by a binary string of N bits. These bits are first viewed by agents
who each have access to a portion of the task. Each agent examines its local memory of prior
instances of the task as well as the corresponding outcomes of these past decisions, and uses this
information in combination with the appropriate decision function to make an informed decision.
Each agent's decision is communicated to the respective superior agent, which in turn makes its
decision based on its own decision function. This process is repeated until the organizational
"summit" (top-level agent) is reached, and the final decision is made.
While weights of evidence exist, the agents in the organization become aware of the
values of those weights only through learning. Initially the agents have no idea which bits are more
important, but over time they learn which bits should be given greater weight. This procedure
approximates reality in that individuals facing similar repeated tasks learn the differing values of
various pieces of information over time.
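
The bottom-up decision process and the experiential acquisition of the weights can be sketched in C as follows. This is a simplified, hypothetical rendering (a three-agent team with one superior, two task bits per agent, outcome tallies as memory, and an assumed "true" decision equal to the parity of two bits); it is not the simulation code listed in Appendix 1, but it shows the flow of agent decisions up to the summit and the feedback-driven updating of each agent's memory.

    #include <stdio.h>
    #include <stdlib.h>

    #define N_BITS    6      /* bits in each decision task                    */
    #define N_AGENTS  3      /* bottom-level agents reporting to one superior */
    #define BITS_EACH 2      /* slice of the task visible to each agent       */

    /* Experiential memory: tallies of past outcomes for every observable pattern. */
    static int agent1[N_AGENTS][1 << BITS_EACH], agent0[N_AGENTS][1 << BITS_EACH];
    static int top1[1 << N_AGENTS], top0[1 << N_AGENTS];

    int main(void) {
        srand(1);
        int correct = 0, trials = 100000;
        for (int t = 0; t < trials; t++) {
            int task  = rand() & ((1 << N_BITS) - 1);
            /* Assumed ground truth for this sketch only: parity of the two low bits. */
            int truth = ((task >> 0) & 1) ^ ((task >> 1) & 1);

            /* Each agent reads its slice, consults its memory, and reports upward. */
            int reports = 0, slice[N_AGENTS];
            for (int a = 0; a < N_AGENTS; a++) {
                slice[a] = (task >> (a * BITS_EACH)) & ((1 << BITS_EACH) - 1);
                reports |= (agent1[a][slice[a]] >= agent0[a][slice[a]]) << a;
            }

            /* The superior ("summit") treats the reports as its own input pattern. */
            int final = top1[reports] >= top0[reports];
            correct += (final == truth);

            /* Generalized feedback: every agent is told the true outcome and
             * updates the tally for the pattern it actually observed.          */
            for (int a = 0; a < N_AGENTS; a++)
                (truth ? agent1[a] : agent0[a])[slice[a]]++;
            (truth ? top1 : top0)[reports]++;
        }
        printf("fraction correct after %d decisions: %.3f\n", trials, (double)correct / trials);
        return 0;
    }

Because only the first agent in this sketch sees the informative bits, the superior gradually learns to rely on that agent's report; this is the kind of implicit weight learning described above.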
The model used is a rational model, which makes it particularly appropriate for modeling the
interactions among software agents. This study focuses on intelligent agents (or software agents)
for a variety of reasons. First of all, software agents are inherently rational, which satisfies one of
the key assumptions of the model. Secondly, software agents are fundamentally more consistent and
understandable in their individual behavior than their human counterparts; we know that
understanding and modeling the decision-making behavior of individual humans is notoriously
difficult, while on the other hand the behavior of a software agent is codified in the form of a
computer program. Finally, models of software agents can be regarded as proposals for, rather than
just approximate descriptions of, the behavior of boundedly rational individuals. This is in contrast
to mathematical utility functions, which are often used as models of human choices but can only
be taken to be rough approximations.
In order to model the complete process, we use three categorical variables and eight
numerical variables. The three categorical variables are organizational structure (majority team,
expert team, and hierarchy), weighting scheme (uniform, dispersed, and clustered), and feedback
type (localized and generalized), resulting in eighteen distinct groups of organizations. The following
numerical variables are used: layers of middle management in the hierarchy, agents, bits per agent,
overlap bits, missing information, incorrect information, missing feedback, and incorrect feedback.
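
One compact way to picture this design space is as a parameter record. The sketch below (in C, with illustrative field names that are not those used in the appendix code) lists the three categorical and eight numerical variables and confirms that the categorical choices alone define 3 x 3 x 2 = 18 groups.

    #include <stdio.h>

    /* Categorical design variables (three of them). */
    enum structure { MAJORITY_TEAM, EXPERT_TEAM, HIERARCHY };
    enum weighting { UNIFORM, DISPERSED, CLUSTERED };
    enum feedback  { GENERALIZED, LOCALIZED };

    /* One simulated organization: the eight numerical variables alongside
     * the categorical choices.  Field names are illustrative only.        */
    struct org_design {
        enum structure form;
        enum weighting weights;
        enum feedback  fb;
        int    middle_layers;         /* layers of middle management (hierarchy only) */
        int    agents;                /* number of bottom-level agents                */
        int    bits_per_agent;        /* task bits visible to each agent              */
        int    overlap_bits;          /* bits shared between adjacent agents          */
        double missing_information;   /* probability a task bit is unavailable        */
        double incorrect_information; /* probability a task bit is flipped            */
        double missing_feedback;      /* probability feedback is withheld             */
        double incorrect_feedback;    /* probability feedback is wrong                */
    };

    int main(void) {
        struct org_design d = { HIERARCHY, DISPERSED, LOCALIZED,
                                2, 9, 3, 1, 0.0, 0.0, 0.0, 0.0 };
        /* 3 structures x 3 weighting schemes x 2 feedback types = 18 groups. */
        printf("categorical groups: %d, agents in this example design: %d\n",
               3 * 3 * 2, d.agents);
        return 0;
    }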
Results to date indicate three main findings: organizations learn at different speeds,
maximum learning varies by organizational design, and hierarchies are better in the short run.
Unfortunately, this research is not complete, for a variety of reasons. First of all, studies have often
mixed organizational design, information processing and behavioral variables, making it difficult
to apply all results to software agents. Secondly, only small organizations have been studied, making
it impossible to know if the results would be the same as the organizational size fluctuates. Third,
the various studies have each focused on different time-frames, making it difficult and sometimes
impossible to compare the results. Finally, while there are many studies, each study uses one subset
of variables, never providing a complete picture of what happens in the presence of all the
variables or of the interactions between them.
In this study these issues are addressed in several ways. First, the focus is on intelligent agents, which are
inherently rationally bound and whose behavior can be fully described. Secondly, this study uses no behavioral
or emotional variables. Thirdly, organizational sizes are varied in order to permit a study of the
impact of organizational size on organizational performance. Fourth, this study uses a constant time
frame, allowing for the comparison of results between organizations. Finally, the interactions
between variables are analyzed.
The study has three main parts. The first verifies the robustness of the model. The second
looks at the learning curve and attempts to describe its asymptotic behavior. The third portion
introduces the new concept of localized feedback, where feedback is no longer common to all agents
in the organization, but is appropriate to the inputs to each agent's decision. In order to accomplish
this, a number of organizations are simulated. They are selected through random sampling, ensuring
that the sample sizes are sufficiently large to have statistical power and significance (Kendall, 1980;
Cohen, 1988). Every organization was simulated 50 times, over 100,000 decisions each time, to
ensure both stability in the results through the averaging of the 50 runs and that
the learning curves of even relatively large organizations have an opportunity to begin to stabilize.
Every 10 decisions, the average over the last 10 decisions, as well as the cumulative average up to that
point in time, are stored.
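
The recording discipline just described can be expressed as a small driver loop. The harness below is hypothetical (simulate_one_decision() is a stub standing in for the actual model), but it shows the bookkeeping: 50 independent runs of 100,000 decisions each, with the average over the last 10 decisions and the cumulative average computed every 10 decisions.

    #include <stdio.h>
    #include <stdlib.h>

    #define RUNS      50
    #define DECISIONS 100000
    #define WINDOW    10

    /* Placeholder for one simulated organizational decision: returns 1 if the
     * organization's decision matched the true outcome.  In the real harness
     * this would invoke the simulation model; here it is a stub so the sketch
     * compiles and runs. */
    static int simulate_one_decision(void) { return rand() % 2; }

    int main(void) {
        srand(1);
        for (int run = 0; run < RUNS; run++) {
            int window_hits = 0, total_hits = 0;
            for (int d = 1; d <= DECISIONS; d++) {
                int hit = simulate_one_decision();
                window_hits += hit;
                total_hits  += hit;
                if (d % WINDOW == 0) {
                    /* Store the average over the last 10 decisions and the
                     * cumulative average up to this point (printed here,
                     * written to a results file in a real harness).          */
                    double recent     = (double)window_hits / WINDOW;
                    double cumulative = (double)total_hits / d;
                    if (d == DECISIONS)     /* keep the sketch's output short */
                        printf("run %2d: recent %.2f, cumulative %.4f\n",
                               run, recent, cumulative);
                    window_hits = 0;
                }
            }
        }
        return 0;
    }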
Five hypotheses are studied:
1. Hierarchies are least impacted by unavailable feedback, expert teams most.
2. Hierarchies are least impacted by incorrect feedback, expert teams most.
3. Hierarchies are least impacted by unavailable data, expert teams most.
4. Hierarchies are least impacted by incorrect data, expert teams most.
5. As organizations become larger, adding layers of middle management improves initial
learning speed, yet reduces the maximum potential learning capacity of the organization.
The results confirm the robustness of past results in general. However, these results also
show that in many instances past results were correct for the subset of variables and organizational
sizes studied, but were not necessarily valid when organizational size changed.
The study of the asymptotic behavior of the learning curve shows that the organizational
learning curve resembles the traditional learning curve, in that it has three distinct points: (a) when
learning begins, (b) when learning begins to slow, and (c) the maximum potential performance that
each organization can achieve. The results also demonstrate that the best organizational design
depends on the time-frame, because no one organizational design is best at all points in time.
Finally, with regard to our hypotheses, the results demonstrate that: (a) information and
feedback distortion are significant to speed of learning, (b) overlap can cause decreased performance,
(c) information distortion is significant to maximum performance, (d) missing feedback is not
significant to maximum performance, (e) hierarchies are the most impacted by information
distortion, and (f) adding layers to a hierarchy downgrades the maximum performance potential of
an organization, yet increases its initial learning speed.
The study demonstrates that this model can be applied in many different areas, including
e-commerce, workflow management and battle management, amongst others. It also demonstrates that

hierarchies are not necessarily as robust as once thought, but that in the short run they may well offer
improved performance over alternative organizational forms.
While this study provides significant insights into how tasks can best be structured, some
significant questions remain. Among them is why hierarchies are not so robust. Another significant
question relates to the use of different decision functions, where more complex decision functions
can be studied. The ability to forecast outcomes for a given organization within the model without
using simulations is a significant step forward. The possibility of understanding the impact of various
organizational design parameters on both the maximum organizational performance and the
curve leading to that performance enables an a priori evaluation of some of the results of changing
an organization from one design to another. Organizations could combine the ability to determine
a priori the learning characteristics of a proposed organizational design, derived from this study, with
the information processing costs for that same organizational design, using the cost measures
(production, coordination, and vulnerability costs) presented by Mihavics and Ouksel (1996). This
combination could be a valuable tool to assist them in determining which of the proposed alternative
organizational structures would in fact be financially beneficial before having to invest resources in
the implementation of the new design.


Chapter I: Introduction
Since the turn of the century, researchers have sought to understand what factors lead some
organizations to succeed while others fail. Much research has been done, specifically on how
organizations should be structured and how tasks should be organized. Most research to date focuses
on structuring traditional organizations and tasks, which involve human agents. At the same time,
recent research in information technology has developed the functionality of intelligent agents, which
are often used as substitutes (at least in part) for human agents, especially during the design of new
business processes or the redesign of old ones.
This study addresses one facet of how organizations should be structured. It looks at the
impact of various information processing mechanisms on organizational performance across a
variety of tasks. While past research has focused on traditional organizations, this study focuses on
organizations partially or entirely composed of intelligent agents. For the purposes of this study,
intelligent agents are defined as atomic software entities operating through autonomous actions on
behalf of the user machines and humans - without constant human intervention (Ma, 1999). This
research is informed by past research on organizational design in general, the main focus of which
has been to identify the characteristics of successful organizations.
Today, after almost a century of research, there is little more than anecdotal evidence as to
the impact of various designs on the performance of traditional organizations, and even less on the
organization of intelligent agents. In spite of this lack of evidence, organizations operating in today's
rapidly changing and competitive business environment are finding that doing business as usual is
inadequate for long-term survival: they are forced to restructure and reorganize in order to remain
in business. Change in organizational design has become both pervasive and persistent. It is
"normality" according to Hammer and Champy (1993, p. 23).
Organizational change today is often presented using various names, such as downsizing,
rightsizing, reengineering, or flattening. Regardless of the term used, the stated goal of the
reorganizations tends to be similar: to increase productivity and effectiveness, and to align the
organizational structure with the organization's business objectives. The results of reorganization
and downsizing are often the same: the elimination of middle management. This generally requires
the individuals remaining in the organization to process more information than before the reorganization.
Information technology is often used as the enabler of downsizing, often requiring members
of the organization to handle significantly more information than before (Evaristo, Adams and
Curley, 1995). The increased use of information technology, whether related to a reorganization or
not, may cause information overload as agents are expected to deal with more information than they
are able to adequately process. Sometimes the increased information processing is planned for, but
it often occurs as an entirely unintended and unplanned byproduct of change. The increased
information processing requirements which are placed on agents as a result of downsizing have led
to serious concerns about the limits of individual information-processing capabilities and
information overload (Davenport, 1996).
Information overload, whether caused by the increased use of information technology and/or
reorganization (downsizing and/or flattening), can contribute to information loss (Evaristo, Adams
and Curley, 1995). Another contributor to information loss is the cost of capturing information,
which may cause organizations to record only a portion of available information (Levitt and March,
1988). When information is missing, organizational decision-making may be impaired, which in
turn may lead to decreased effectiveness and performance.
Some researchers believe that the elimination of middle management may be counterproductive
(Cascio, 1995; Hailey, 1995; Schenk, 1997). They attribute the varying results of reorganization
and downsizing to organizational design. Studies have found that reengineering (and its
accompanying organizational redesign) often produces a less than desirable result (Davenport, 1996).
This has motivated researchers to search for a better understanding of what constitutes good
organizational design. Recent research has shown how intelligent agents can address some of the
limitations of humans, especially the issues of information overload, information
processing costs, and information loss. In each of these areas, intelligent agents have shown promise
in effectively replacing human agents.
Many authors have written describing their ideal organization. Since early this century,
Taylor (1911), Fayol (1918), Weber (1947) and a host of others have searched for the one ideal
organizational structure that would maximize organizational performance. In recent years, however,
many have realized that there is no ideal organizational structure, but that the organizational
structure is contingent on the environment (Mintzberg, 1983). Rather than focusing on the ideal
structure, each organization should strive to find the structure which best facilitates its ability to
adapt in a dynamic environment (Senge, 1990; Johansen and Swigart, 1994). According to recent
research, improved organizational performance depends on adaptation and learning, which are in
turn dependent on the existence of (a) organizational goals, (b) information which enables the
organization to make decisions, and (c) feedback that enables agents involved in the decision-making
process to assess whether a decision is correct or not (Senge, 1990; Argote, 1993, 1999).


In order to study the impact of organizational structure on organizational learning and
organizational performance, a model was developed by Ouksel, Mihavics and Carley (1996). This
model (hereafter referred to as the OMC model) is a generalization and formalization of a number
of past studies (Carley, 1990, 1992, 1996; Lin and Carley, 1993; Carley and Lin, 1995; Ye and
Carley, 1995; Mihavics and Ouksel, 1996). Applications of this model demonstrate in various
specific cases that no organizational structure is ideal for all scenarios. Results from simulations of
specific cases of the OMC model show that (a) organizations facing new tasks learn at different
speeds and that (b) some structures learn better in the short term while others do poorly at first, but
learn better in the long run (Mihavics and Ouksel, 1996; Lin and Carley, 1993; Carley and Lin,
1995).
While the results achieved to date are interesting, the question arises as to their robustness.
This question is especially important since past results are limited by (1) the limited organizational
sizes used, and (2) the limited subset of organizational design parameters used in any one study,
which ignores many of the possible interactions among the various design characteristics.
Building upon the most comprehensive study which uses the OMC model (Mihavics and
Ouksel, 1996), this study expands past research on the impact of organizational structures and
decision making on organizational learning and organizational performance. This study uses all of
the organizational design parameters used in past studies (structure, weighting mechanism, problem
size, number of agents, information availability and correctness, feedback availability and
correctness) rather than focusing on just a subset of them. In addition, the concept of localized
feedback is introduced, where agents receive feedback which is not dependent on the overall
organizational decision, but rather on the information available to them. Using this more complete
model, the values of the numerical parameters are significantly expanded in order to test the
robustness of all past results relating to the relationship of organizational design to organizational
learning and performance, enabling the present study to determine the limitations of those results.
In this study, regression analysis is used to show that it is possible to develop a set of
mathematical equations which approximate the learning curve achieved by each simulated
organizational structure. These equations allow researchers to: (a) easily observe the impact of
changing the values of the parameters, (b) eliminate the need to simulate every organizational
structure that one might desire to explore, and (c) more clearly understand which design parameters
have the largest impact on both maximum performance and speed of learning, and therefore make
an informed organizational design choice.
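
As a purely illustrative example of the kind of functional form such a regression might yield (the symbols below are hypothetical placeholders, not the coefficients actually estimated in this study), a saturating curve with a distinct onset, slowdown, and ceiling can be written in LaTeX notation as

    P(t) \approx P_{\max}\left(1 - e^{-(t - t_0)/\tau}\right), \qquad t \ge t_0,

where P(t) is the fraction of correct organizational decisions after t tasks, P_max is the maximum attainable performance, t_0 marks the point at which learning begins, and \tau governs how quickly learning slows toward the ceiling. Under such a scheme, each of P_max, t_0, and \tau would itself be expressed as a function of the organizational design parameters.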
This model is applicable to organizations of intelligent agents, since their behavior can, at the least,
be fully comprehended and predicted. At the same time, we recognize that human behavior is so
complex that it cannot be entirely predicted, no matter how good the model is. In spite of this, this
study can assist in understanding how different organizational designs may have certain inherent
strategic advantages or disadvantages.
This study is divided into seven sections. In order to understand the reasons leading to today's
organizational design theories, the first section presents a brief survey of organizational design. The
second section describes a model for studying the impact of organizational design on organizational
learning and performance, and reviews past studies on the impact of structure on organizational
learning and performance, focusing on the results obtained by Carley, Lin, Mihavics, Ouksel, and
Ye (Carley, 1992; Lin and Carley, 1993; Carley and Lin, 1995; Mihavics and Ouksel, 1996; Ouksel,
Mihavics, and Carley, 1996; Ye and Carley, 1995). The third section describes the methodology
used to study the impact of organizational design on organizational learning and performance. The
fourth section presents a brief overview of the results as well as the results which verify the
robustness of the OMC model. The fifth section expands the present understanding of the impact
of organizational design on organizational performance, as the impact of the various design
parameters on performance is analyzed. The sixth section describes several applications where the
results of this model could be used to improve the design of organizations, especially those using
large numbers of intelligent agents. The final section presents a summary of the conclusions and
points out areas where further research should be done.


Chapter II: A Brief History of Organizational Design


Before continuing this study of the impact of organizational design on organizational learning
and performance, it is important to first understand what an organization is. At the same time, it is
important to be familiar with past research on organizational design, to permit a comparison and
contrast of past studies with the present one.
An accurate and concise definition of an organization is challenging to construct, because
there are almost as many definitions of organization as there are authors. Two basic factors are
necessary ingredients: (a) organizations are systems composed of a varying group of agents; and (b)
organizations must interact with the environment in which they operate (i.e., they must be open
systems). The following definition captures both of these elements, and will be used as the
operational definition for this study:

Organizations are systems of interdependent activities linking shifting coalitions
of participants; the systems are embedded in - dependent on continuing exchanges
with and constituted by - the environments in which they operate (Scott, 1992).

Systems (or organizations) are inherently dynamic organisms which can change and evolve
over time, in large part due to their interaction with their environment. In recent years competition
has intensified in all markets and consumers' expectations are constantly changing. Organizations
have therefore been forced to increase their interaction with and adaptation to their environment.
Intelligent agents are being used more and more in the quest to process the information derived from
these interactions. Organizations which wish to survive in this rapidly changing environment must
also increase the rate at which they adapt.

Scott's definition suggests that the study of organizations could profitably focus on the
organization as a collection of smaller decision-making units, each of which must interact with its
task-related environment, within and without the organization. Rather than attempting to design a
single homogeneous organizational structure for the entire organization, this definition suggests that
appropriate organizational design should consider the various groups and subsystems in the
organization. The organization's long-term survival is not dependent on any one design factor, but
rather on the interconnections among the various factors. It is these interconnections (or design)
which are of particular interest to us.
Research into how organizations should be designed began in the early part of the twentieth
century. Over the course of this century there have been several radical shifts in what are considered
the key factors in good organizational design. The first researchers focused on the organization itself.
Later, this focus shifted to the members of the organization, and more recently to the communications
and the learning capabilities of the organization. These three foundations for organizational
design are reviewed, demonstrating how each has added to researchers' understanding of what aspects
must be included in good organizational design, and what aspects should be considered when
designing an organization, regardless of whether the agents in the organization are human or
electronic. Current implementations of organizational design will then be considered, as well as
what limitations still exist to understanding what constitutes effective organizational design.


Organizational Design Focusing on the Organization Itself


In the first half of the twentieth century, many organizations were designed on the basis of
theories such as Scientific Management (Taylor, 1911), General Theories of Administration
(Fayol, 1918), and Bureaucracy (Weber, 1947). Each of these theories emphasized different aspects
of the organization itself. All were similarly inwardly focused, ignoring both the environment in
which the organization operated and the members of the organization.
Frederick Taylor (1911) realized that the productivity of labor was a significant factor which
needed to be studied. He therefore focused on defining each individual's task clearly to ensure that
each job would be performed in the most efficient way. Work and responsibility were to be divided
equally among managers and workers. Taylor was concerned with the task organization (or process-induced
structure) rather than the formal organizational structure. Efficiency, as measured in elapsed
time to task completion, became the essential consideration in designing the organization.
Organizational designs based on Taylor's model ensured that information was segmented, ideally
without overlap, so that each individual only dealt with the minimum information necessary to
accomplish the task at hand. Management's role was to see that the workers were focused and
efficient. Organizational design was effectively a function of the decomposition of the production
process.
Fayol (1918) chose to focus on the formal organizational structure rather than its processes.
He identified fourteen principles of management and organizational design: division of work,
authority, discipline, unity of command, unity of direction, subordination of individual interest,
remuneration, centralization, scalar chain, order, equity, stability of tenure, initiative, and esprit de
corps. He proposed that every organization should be designed like an army, with each individual
knowing his or her job, with minimal overlap, following a preset chain of command. Power,
according to Fayol, was the essence of an organization. Preservation of the organizational structure,
as opposed to individual initiative, was to be rewarded. According to this model, the organization
should focus on making sure that rules were followed, not only by those involved in production, but
also by management. Innovation was to be discouraged. Unlike Taylor, Fayol was not necessarily
concerned with maximum efficiency. Preservation of the organization was of paramount concern
and would be achieved by reinforcing the power and authority of the structure. For Fayol,
organizations were not dynamic, but static in both structure and decision-making rules.
Max Weber (1947) described the ideal organization as a bureaucracy. He was concerned that
organizations should be able to function regardless of which individual might be in any given
position. In a bureaucracy every individual is dispensable; anyone can be replaced by another, since
all decisions are codified in rules and procedures, not based on individual experience. An
organization should have division of labor, an authority hierarchy, formal selection, formal rules and
regulations, impersonality, and career orientation. As with Fayol, there was minimal overlap of
function, with every individual functioning in a well-defined, semi-isolated environment. For
Weber, stability was important; the individual, unimportant. Efficiency and productivity were of
little importance. Organizations needed to focus on self-preservation and on maintaining order,
structure, and procedures in the relationships of all the members of the organization. If procedures
were followed, the organization would survive, and that was the most important consideration.
Taylor's scientific management led to the disregard of the individual worker, while Fayol's
theory of administration removed people's incentive to be creative. By its nature Weber's
bureaucracy became rule-bound, making it difficult for the organization to adapt to new situations.
All three of these theories are rationally bound, and focus on the structuring of the formal
organization and/or its processes. Each fits into the closed-rational-structural category of
organization theories (see Scott, 1992), where there is no provision for interaction with and/or
adaptation to the environment. At the same time, the only theory which relates to intelligent agents
is Taylor's, where members of the organization are rational, and performance is the key, not
preservation or politics.
The varied results achieved by different organizations using these theories showed
researchers and managers that the variation in performance of organizations could not be
satisfactorily explained by theories focusing on the organization itself. At the same time, much of
the variation in performance was seen as dependent on the irrational behavior of the members of the
organization. This led to further research into factors influencing organizational performance.

Organizational Design Focusing on the Organization Members


Beginning in the 1930s, several researchers (Mayo, 1933; Maslow, 1954; Herzberg, 1959,
1966; McGregor, 1967) emphasized the importance of the individual to the organization. They saw
that an organization needed to make the individual feel wanted and motivated. These researchers,
known as the founders of the human relations movement, considered the people in the organization
to be at least as important as organizational structures and processes.
In the landmark Hawthorne Studies, Elton Mayo (1933) was the first to publicize that
organizations could be made more productive by paying attention to the employees. Through
experiments with the employees at the Hawthorne Plant, he demonstrated that structure (Weber),
power or authority (Fayol), and specialization (Taylor) were not the only factors influencing
productivity.
Abraham Maslow (1954) agreed that motivation and paying attention to employees were
important. He pointed out that individuals have a hierarchy of needs, and that in order to
successfully motivate an individual it was necessary to (a) recognize the position of the individual
in the hierarchy of needs, and (b) reward the individual with something meeting that need.
Productivity depended on meeting the perceived needs of the individuals. Paying attention to the
individuals (as in the Hawthorne Studies) would be sufficient for some individuals, but would
eventually lose its effectiveness. The organization should have a structure and procedures to support
this motivation effort. Because individuals in the organization often are at different levels in the
motivational hierarchy, it is necessary to have different structures and/or procedures at various levels
within the motivational hierarchy, geared to the different members of the organization.
Frederick Herzberg (1959, 1966) studied employees to determine what made them like or
dislike their jobs. He identified two main factors influencing job satisfaction and productivity: what
he called "hygiene" and "motivation." While he agreed that motivation was an important factor,
especially in job satisfaction and productivity, he found many sources affecting an individual's
perception of a lack of hygiene; these sources could be controlled by the organization. Hygiene factors
included things such as working conditions, policies and administrative practices, and salary and
benefits. For an organization to function optimally it was important to address the hygiene factors
first; then motivation could be considered.


Douglas McGregor (1967) believed that managers act on one of two opposite theories, which
he called Theory X and Theory Y. Theory X is a carrot-and-stick mentality that assumes that
most people are immature and incapable of taking responsibility, and that they need direction and
control. They are viewed as lazy, unwilling to work, and in need of a mixture of financial
inducements and the threat of losing their job to make them function. Managers subscribing to
Theory X manage by a combination of things such as threats, coercion, and tangible rewards. Theory
Y, on the other hand, assumes that people desire to fulfill themselves. They seek self-respect,
self-development, and self-fulfillment at work as in other areas of life. Managers must rethink their
dealings with individuals and explore new ways of task organization. For McGregor, organizational
design depended substantially on management's convictions about what motivated the employees:
organizational design was structured to maximize that motivation.
Maslow's, Herzberg's and McGregor's theories are all based on one foundational premise:
successful management must call upon and obtain the goodwill of the organizational members.
Their conclusion is that the organization's behavior should be contingent on the abilities, beliefs, and
attitudes of its members. As with their organizationally focused counterparts, these theories interpret
organizations and the behavior of the members of the organization as though an organization were
a closed system. They can thus be classified as Closed-Natural-Social Psychological (see Scott,
1992). They focus on individual behavioral patterns, on a social psychological level of analysis.
Because of their behavioral focus, they are difficult to apply other than on a case-by-case basis to
assist in organizational design, and are certainly not relevant to our study of rationally-bound
intelligent agents.


Organizational Design Focusing on Response and Communication


Adding motivation as a key factor in organizational design improved the understanding of
why organizational performance varied, but it still did not satisfactorily explain performance
differences across organizations or give clearer direction to the design of organizations. Researchers
discovered the existence, indeed, the inevitability, of non-traditional organizational models in
successful organizations. These new models emphasized communication linkages within the
organization as well as organizational responsiveness and communication with the environment, not
neglecting the individual, yet no longer considering the individual the main (or only) ingredient of
success.
Beginning in the late 1960s, several new key factors in organizational design were suggested.
These new designs emphasized decentralization (Drucker, 1969, 1974), quality control (Deming,
in Walton, 1986), and flexibility and rapid reaction (Peters and Waterman, 1982; Peters and Austin,
1985; Peters, 1992). Each of these designs emphasized the need for organizations to be responsive
to changes in the marketplace. Organizations that did not become responsive would rapidly become
outdated and be left behind their competitors. The idea that any one organizational design was ideal
was explicitly called into question. The determinant of success was the ability to communicate
rapidly and efficiently. Adaptation was now considered the key, in contrast to the need for stability
in an organization, as proposed by Taylor, Fayol, and Weber.
Many researchers have posited that no single organizational structure is best for all
organizations. Henry Mintzberg (1983) stated that generally organizations belonged to one of five
types (Simple Structure, Machine Bureaucracy, Professional Bureaucracy, Divisionalized Form, and
Adhocracy), where each organizational design reflects an organization which focuses on different
aspects. For Mintzberg, each organization should choose the appropriate style for its environment.
Several others (Senge, 1990; Johansen and Swigart, 1994; Senge et al., 1994; Levitt and March,
1988; Carley, 1996b) have also shown the importance of organizational flexibility, adaptability, and
communications.
The arrival of computers and information technology created new opportunities for
organizations to communicate in different ways, thus enabling different organizational designs. New
organizational forms began to appear, with differing levels of success. Some companies used matrix
organizational forms (Boeing is a prime example) (Davis et al., 1977), while others used small
groups working together (Volvo) (Morvat, 1984). The increased availability of networks and
electronic communications made the concept of physical proximity much less relevant, as is clearly
apparent in the continuing increase in telecommuting, teleconferencing, and video conferencing.

Current Implementations of Organizational Design


As the United States continues to shift towards an information economy (Nolan and Crosson,
1995), new organizational forms, such as the network form, are being enabled. These new
organizations do not have the traditional tree-like hierarchies. Instead of having a chain of
command, network organizations focus on information flows. These flows move not only vertically,
but also horizontally. Organizations which exist primarily (and sometimes exclusively) in
cyberspace are beginning to appear (Martin, 1996). This trend towards individualized organizational
designs suited to a specific task and/or environment is likely to continue, emphasizing communication
and rapid adaptation to the environment.
Johansen and Swigart (1994) state that organizations
need to be like fishnets, where each node in the organization can be elevated to primacy for any
specific responsibility and/or time, meaning that any member of the organization may be the leader
for a given task, emphasizing the dynamic nature of an organization. This "fishnet" organization, or
individualized organizational design, is quite difficult to implement in traditional organizations, but
can readily be implemented using intelligent agents.
Modern organizations have realized that past models of organizational design are useful
foundations, yet often yield unsatisfactory results in today's environment. By the early 1990s one of
the "hot topics" in management became the restructuring of organizations, especially in light of new
information technologies. This trend is reflected in the many books and articles on the subject
(Savage, 1990; Keen, 1991; Morton, 1991; Davenport, 1996). The objective of this restructuring
has been to eliminate unneeded processes and to focus on the core competencies and objectives
of the organization, all enabled through increased communication (Davenport, 1996).
Researchers have also found that coordination costs are significant in and of themselves, and
also vary significantly across different organization structures (Malone, 1987). The driving force
behind the recent emphasis on reorganization and restructuring has been the recognition of the
importance of coordination costs and the fact that information technology enables organizations to
significantly lower these costs (Morton, 1991). This lower cost, in turn, gives organizations the
opportunity to move away from "steep hierarchies" (Savage, 1990) to teams and other more
collaborative forms of organization. Mihavics and Ouksel (1996) have further differentiated these
costs into three categories: information production, coordination, and vulnerability costs.


One often-used technique to attempt to reduce costs has been business process
reengineering (BPR), with the expectation that organizations would become more efficient,
increasing productivity and often slashing costs and personnel, through (a) the elimination of
redundant processes, and (b) streamlined communication needs.
These new organizational designs have been accompanied by the creation and/or adaptation
of a number of terms to describe the new theories. Administrative reform, reorganization,
restructuring, downsizing, reengineering, rightsizing, and delayering all relate to the same
phenomenon: making significant changes in the organizational structure. Working definitions for
several of these terms are presented in Table I (adapted from Hailey, 1995). The most commonly used
of these terms has been reengineering. Reengineering is often done in response to business
pressures, and is typically effected in conjunction with a large downsizing component. The stated
objective is usually to eliminate unnecessary layers (normally in management), and to make sure
that each employee of the organization is working at maximum potential. Reengineering has been
sold as the "best" solution, the one providing significant cost savings with minimal, if any,
performance degradation (Hammer and Champy, 1993). Most organizations have hoped that by
reorganizing they would eliminate duplication and increase efficiency. Often information technology
is at the center of the reengineering process, used as a tool to streamline operations and communications
(Davenport and Short, 1990). This drive towards efficiency and standardization seems in many
ways an adaptation of the Tayloristic view of efficiency as the key and of Weber's bureaucratic view
that all decision-making processes must be well defined.


Table I - Definitions of Terms Relating to Organizational Redesign

Administrative Reform is the induced systemic improvement of operational performance.

Reorganization is a process which changes the organizational structure, modifying the distribution of authority and the prevailing lines of authority. It has political, technical and social aspects. It can be applied at different levels in the organization and implies some sort of philosophical and/or strategic shift.

Restructuring involves moving, adding, and eliminating organizational boxes or units represented by an organizational chart. It can also be defined as rebuilding the strength of an organization by changing its asset structure and its resource allocation patterns. Neither reorganization nor restructuring involves normal, expected, routine, or minor changes.

Downsizing (or workforce reduction) is a strategy to streamline, tighten, and shrink the organizational structure and thereby reduce the number of personnel the organization employs.

Reengineering is a strategy that attempts to streamline the business processes of an organization. Workforce reduction is a normal part of reengineering. Today, information technology is usually central to the reengineering of business processes.

Rightsizing can involve reducing the workforce (downsizing) as well as eliminating functions, reducing expenses, and redesigning systems and policies (e.g., to reduce costs or reduce organizational size). It can also require upsizing (increasing the workforce) in certain areas. It eliminates unnecessary work and improves and prioritizes the most important work. It is a multifaceted attempt to reshape the total organization. Rightsizing may also have a strong humanistic orientation. Other terms which may be used as synonyms: lean organization, revitalization, renewal, reinvention, total organizational performance, organizational re-design.

A number of recently published reports (Hailey, 1995; Cascio, Young and Morris, 1997; Davenport, 1996) show that reengineering, and downsizing in particular, are not the panaceas for organizational performance which their proponents have suggested. Cascio, Young and Morris (1997) show that companies engaging in mere employment downsizing did not show significantly higher returns on their investments compared with those who did not. According to a recent study by Tom Davenport (Davenport, 1996), 67% of all reengineering projects are judged to produce mediocre, marginal, or failed results.
Several reasons have been given for this: information overload, poor task organization and supervision of personnel, and inadequate human resources, among others (Davenport, 1996). The fundamental problem with reengineering (and organizational design), however, seems to lie in (a) not understanding that people and communications are at the core of any organization (Davenport, 1996) and cannot be maneuvered or directed like so many bits and bytes, and (b) not replacing the information-filtering function that was once performed by middle management. Typically the problems are not due to a lack of understanding of the business process, but rather insufficient knowledge of where and how information is communicated within the organization (Krackhardt and Hanson, 1993; Krackhardt, 1996). Because of these failures, organizations, especially in the United States, have begun reconsidering their expectations of reengineering. As Davenport states, "the reengineering fever has broken" (1996, p. 1).
While the focus on process efficiency in BPR is important, many organizational theorists believe that organizational design is a critical component of an organization's performance (Lawrence and Lorsch, 1967; Burton and Obel, 1984; Carley, 1990; Malone, 1987; Mackenzie, 1978; Scott, 1992; Krackhardt, 1994; Galbraith, 1973, 1974). Substantial evidence from empirical research (Carley, 1990, 1992; Mihavics, 1995; Mihavics and Ouksel, 1996) supports this assertion. Many researchers (Baligh, Burton and Obel, 1987, 1990; Lawrence and Lorsch, 1969; Mintzberg, 1983; Woodward, 1965) now recognize that organizations change in response to the environment, and that organizational designs need to adapt in the face of different environmental situations.


Because of the repeated failures of reengineering and reorganization, one of the issues that has emerged is the question of what makes an organizational structure good or bad within a determined context. The results of reengineering and reorganization (and of course downsizing) vary widely, regardless of the organizational or decision-making form to which they are applied. While one organizational form works in one situation, it may fail in another: one structure does not meet the needs of every organization. What is needed is a way to study the impact of various organizational designs on organizational performance, in order to determine what the key organizational design factors are.


Chapter III: Organizational Learning: What is it? How can We Study It?
The question of what makes an organizational structure good or bad within a determined context remains, especially in the light of repeated failures of reengineering and reorganization. Researchers today realize that merely changing the structure of an organization does not guarantee success and that even a complete reengineering often fails (Schein, 1996). There is increasing interest in determining what factors, in addition to the organizational structure, contribute to the long-term success of reorganization or reengineering. An organization's ability to learn from its experiences shows significant promise in helping comprehend why organizations succeed or fail (Senge, 1990; Levitt and March, 1988), and is posited by some (Moingeon and Edmondson, 1996) as providing competitive advantage to the firm.
Before understanding the impact of organizational structure on organizational learning (and therefore organizational performance), it is important to first define organizational learning. Secondly, a description and review of past research on a model which enables the study of the relationship between organizational structure and organizational learning is given.

Organizational Learning

Every organization strives to meet one or more goals. By learning from their experiences organizations are able to avoid repeating costly mistakes, and so have a better opportunity to reach their goals. For the purposes of this study, organizational learning is defined as:

the ability of an organization to measure its past experiences against some aspiration level and to adjust its future decision making behavior in order to move closer to that level (Mihavics, 1995, p. 19).


This concept of organizational learning is experience-based and is dependent on the ability of the agents in the organization to learn from their own experiences and from the feedback they receive from the environment. For organizational learning to take place there must be both knowledge of past experiences and a way to determine whether the desired objectives have been achieved. There must also be a willingness within an organization to adapt to its environment in order to achieve better performance (Levitt and March, 1988).
Several authors have focused on the need for organizations to learn quickly and accurately in order to change and adapt in order to survive (Senge, 1990; Davenport, 1993). The lifespan of traditional organizations is measured in years. On the other hand, organizations composed of intelligent agents often are created and destroyed in a matter of minutes, although some last much longer (Martin, 1998). One of the key determinants of the longevity of an organization, no matter its form, is its ability to adapt and learn (Senge, 1990). Rather than focusing on a single ideal organizational structure, organizations should focus on determining which organizational structure learns best in their current context.
Organizational learning is complex because organizations are made up of agents who have complex learning patterns. For example, the specialist within an organization will remember a large amount of information about few variables, while the generalist will retain limited information on many variables. This happens because individuals generally have a limited capacity to store and process information relating to any given task (Evaristo, Adams and Curley, 1995). Even when there is a desire to learn, human agents have a limited capacity to retain and process information (Levitt and March, 1988). Even in the case of intelligent agents, there are storage limitations due to speed and cost constraints. While it may be possible for an agent to have perfect memory, the cost might eventually outweigh the benefits of perfect information.
The recent rate of increase in technology has both forced and enabled organizations to adapt and change faster than ever before. This rapid pace of change has further strained individuals' limited memory capacity as the amount of information which they are expected to process has increased almost exponentially (Evaristo, Adams and Curley, 1995), so much so that many suffer from information overload (Reuters, 1996; Shenk, 1997). Today, instead of going to the library to try to find some information (which we may or may not be able to find), we tend to go on-line, and we receive thousands of hits, which we must sift through in an attempt to choose the correct one. Also, while in the past it was necessary to wait for mail or courier to deliver a message, today e-mail communicates almost instantly. Instead of waiting days or even weeks to have a meeting, we hold a tele-conference or even a video-conference immediately, increasing the pressure to make immediate decisions without regard to the impact this might have on decision-making performance. These technological changes only serve to increase the amount of information which must be processed for each decision. At the same time, recent research has focused on using intelligent agents to alleviate some of these problems, by delegating more and more information processing tasks to the intelligent agents, in order to reduce the volume of information to a manageable size. If past research on information processing capacity is correct, the move to flatten organizations could be counterproductive by inducing information overload, impairing an organization's ability to learn.


A Model for Studying the Impact of Organizational Structure on Organizational Learning
During the planning phase of any organization, whether it is an initial design or a redesign, it is important to assess the impact of proposed organizational structures on organizational learning. In order to accomplish this it is important to understand not only what an organization is and what organizational learning is, but also how decisions are made in organizations (Davenport, 1996; Senge et al., 1994).
Two main approaches are used to study how organizations make decisions: descriptive and normative (or prescriptive) (Vroom and Jago, 1974). Descriptive studies simply describe how decision making takes place, while normative research develops models which provide a rational basis for making decisions (Vroom and Yetton, 1973; Vroom and Jago, 1974; Simon, 1965). Descriptive research focuses on obtaining a complete understanding of the situation, portraying in detail what happens in an organization, making the results difficult to generalize across organizations. Normative research, on the other hand, tends to utilize quantitative methods which enable rational decision making. By their very nature, normative models, while not perfect, are more generalizable than descriptive models. Because of this study's focus on intelligent agents and the desire to produce generalizable results, a normative model will be used in this study.

Description of the Ouksel-Mihavics-Carley Model


The only existing model of organizations that clearly captures the complexity of the task to be modeled, as well as the structure and the decision-making process of an organization, is the OMC model (Ouksel, Mihavics, and Carley, 1996). It has been applied to agents in general and is readily applicable to intelligent agents. It has been developed through research on the relationship between organizational structure and organizational learning (Carley, 1990, 1991, 1992; Lin and Carley, 1993; Carley and Lin, 1995, 1997; Mihavics, 1995; Mihavics and Ouksel, 1996; Ouksel, Mihavics and Carley, 1996). The four main components of this model are evidence (the raw inputs to the decision), decision rules, agents (whether human or electronic), and information processing structure. Information processing structure is composed of communication channels, formal relationships between agents, and evidence input patterns.
The OMC model focuses on procedures and formal communication structures, including task composition. Therefore, it is able to capture the keys to organizational success as defined by (a) Taylor (1911), who believed that the focus of organizational design should be on the process, with decomposed tasks and minimal overlap; (b) Fayol (1918), who focused on the formal structure; and (c) Weber (1947), who was concerned with both the process and the formal structure. The social psychological model is not explicitly addressed, as this is a rational system, not a natural one, which makes it even more appropriate for organizations of intelligent agents, which are rationally bound. The OMC model posits organizational learning as a multidimensional and complex process which can be greater or lesser than agent learning, depending on the ability of agents to learn from their own experiences (Levitt and March, 1988; Argote, 1993, 1999).
The OMC model makes a number of general assumptions, which are documented by Ouksel, Mihavics, and Carley (1996):
1. Organizational decision-making behavior is historically based.
2. Organizational learning depends on the boundedly rational decision-making behaviors of the individual agents which form the organization.
3. Subordinates condense their input data into output recommendations to their superiors, and this information compression is lossy; uncertainty absorption (March and Simon, 1958) occurs at each node in the structure.
4. Overall organizational decisions do not require that a consensus be reached (e.g., a legitimate policy might be to let the majority opinion rule).
5. The organizational decision is two-valued (e.g., go/no go).
6. The organization faces quasi-repetitive, integrated decision-making tasks: quasi-repetitive in that the tasks are typically similar although not identical to the previous tasks, and integrated, meaning that the task is too complex for a single agent to handle alone, forcing the combination of sub-decisions of multiple agents to reach an overall organizational decision. The tasks of interest here are assumed to be non-decomposable, meaning that combining the correct solutions to each sub-task may not always yield the correct solution to the overall task.
Within the constraints of these general assumptions, each decision task is represented by a binary string of N bits. Each bit is denoted x_i (also called "evidence"). Each of these bits represents the presence (1) or absence (0) of an environmental feature which is relevant to the decision task at hand. These bits are first viewed by agents (the first-level sub-decision makers), who each have access to a portion of the task, x_i, ..., x_j, where 1 ≤ i ≤ j ≤ N and (j − i) < N. Each agent examines its local memory of prior instances of the task (bit patterns) as well as the corresponding outcomes of these past decisions, and uses this information in combination with the appropriate decision function (or classification function) to make an informed decision. Each agent's decision is communicated to the respective superior agent, which in turn makes its decision based on its own decision function (independent of the decision function used by lower level agents). This process is repeated until the organizational summit (top-level agent) is reached, and the final decision is made.
In early studies, all bits of evidence were modeled as having equal weight, resulting in the uniform model (Carley, 1992; Carley and Lin, 1995; Lin and Carley, 1993). More recent applications of the model have allowed each bit of evidence to be assigned an explicit weight, which is a measure of the relative importance of that evidence bit in relationship to the decision (Ouksel and Mihavics, 1996). For example, when the evidence bit pattern is 0 1 0 and the corresponding weights are 1 3 1, the weight of the second bit (3) shows that its relative importance is greater than that of the other two evidence bits combined, and it would thus be the key factor in the decision.
Agents in the organization become aware of the values of the weights through learning. If the weights were equal, the decision would be based on a simple majority of the evidence. Because agents are initially unaware of the weights, they must learn over time which bits should be given greater weight. This procedure approximates reality in that individuals facing similar repeated tasks learn to distinguish amongst various pieces of information over time.
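As a concrete illustration of this representation, the following sketch shows one way a weighted decision task might be encoded. It is a minimal illustration only, not the simulation code used in this research; the names (Task, true_decision, slice_for) are hypothetical, and the correct decision is computed here as a weighted majority, following the description above.

from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    """One decision task: N evidence bits plus a weight per bit (hypothetical)."""
    bits: List[int]       # each element is 0 or 1 (presence/absence of a feature)
    weights: List[float]  # relative importance of each evidence bit

    def true_decision(self) -> int:
        """Correct decision, taken here as a weighted majority of the bits."""
        weighted_sum = sum(w * b for w, b in zip(self.weights, self.bits))
        return 1 if weighted_sum > sum(self.weights) / 2 else 0

    def slice_for(self, agent_index: int, bits_per_agent: int) -> List[int]:
        """Evidence x_i, ..., x_j seen by one first-level agent (non-overlapping)."""
        start = agent_index * bits_per_agent
        return self.bits[start:start + bits_per_agent]

# The weighted example from the text: pattern 0 1 0 with weights 1 3 1.
print(Task(bits=[0, 1, 0], weights=[1, 3, 1]).true_decision())   # 1

# A nine-bit task with uniform weights, split across three agents.
task = Task(bits=[1, 0, 1, 0, 1, 0, 1, 0, 1], weights=[1] * 9)
print(task.true_decision(), task.slice_for(1, 3))                # 1 [0, 1, 0]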
After each decision cycle is completed, each agent is informed what the correct decision should have been, based on all the evidence for the task as well as the decision function. This provides the necessary feedback for learning. This feedback may be based on the overall organizational evidence (generalized feedback) or the evidence which was used for that individual agent's decision (localized feedback). The agent's memory slot corresponding to the last evidence pattern is updated, indicating whether this pattern should be associated with a 0 or a 1 (go/no-go). Each time a decision is to be made, agents match the evidence pattern to the corresponding memory slot. Initially, since an agent has no memory of such a pattern, it makes a decision based on a simple majority rule. Once an agent has seen a given pattern it will select either a 0 or a 1, always choosing the decision that the specific pattern has matched most often. If the number of matches is equal for 0s and 1s, the decision is made on a random basis. This mechanism means that in cases where interdependent information is not seen by the same agent, the accuracy of the decision-making process is lessened.
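The memory-based decision and update rules just described can be sketched as follows. This is a hypothetical illustration rather than the actual implementation; MemoryAgent and its methods are invented names, and the fallback for unseen patterns follows the simple majority rule stated above.

import random
from collections import defaultdict
from typing import Tuple

class MemoryAgent:
    """Minimal sketch of the memory-based agent decision rule described above."""

    def __init__(self) -> None:
        # memory[pattern] = [count of 0 outcomes, count of 1 outcomes]
        self.memory = defaultdict(lambda: [0, 0])

    def decide(self, pattern: Tuple[int, ...]) -> int:
        zeros, ones = self.memory[pattern]
        if zeros == ones:
            if zeros == 0:
                # Unseen pattern: fall back to a simple majority of the bits.
                return 1 if sum(pattern) * 2 > len(pattern) else 0
            return random.choice([0, 1])     # equal matches: decide at random
        return 1 if ones > zeros else 0       # most frequently matched outcome

    def learn(self, pattern: Tuple[int, ...], outcome: int) -> None:
        """Feedback: associate the last evidence pattern with the outcome."""
        self.memory[pattern][outcome] += 1

agent = MemoryAgent()
agent.learn((1, 0, 1), 1)
agent.learn((1, 0, 1), 1)
print(agent.decide((1, 0, 1)))   # 1, since that pattern has matched 1 most often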
Initial applications of the model assumed data independence between the evidence bits. This meant that relationships (such as an XOR, where one bit of information is only relevant in the presence or absence of another one) between evidence which was dispersed over more than one agent could not be accounted for. In the formal model Ouksel, Mihavics, and Carley have eliminated the assumption of data independence and developed a decision function which does not imply independence of all evidence bits (Ouksel, Mihavics, and Carley, 1996). This decision function is a polynomial, taking the form shown in Formula 1, where K is the set of all rational numbers:

f(v_1, \ldots, v_N) = \sum_j a_j \prod_{i \in S_j} v_i, \qquad S_j \subseteq \{1, \ldots, N\}, \; v_i \in \{0, 1\}, \; a_j \in K    (1)

This polynomial function is particularly new and powerful in that it captures all the interactions that exist among the various inputs to the decision. All relationships such as XOR, AND, and OR are captured. For example, this means that in a case such as a credit rating decision, where length of employment and wages are both being considered, the interrelationship between these two factors can be accounted for. Because the model captures the full power of propositional calculus, it covers a very large information space. This allows researchers to study complex problems, with the assurance that the decision function is relatively complete.
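To illustrate how a polynomial of this form captures interactions, the sketch below evaluates a decision function built from coefficients attached to index subsets, and encodes an XOR relationship between two evidence bits through an interaction term. The helper names are hypothetical and the coefficients are chosen purely for illustration.

from typing import Dict, Sequence, Tuple

def poly_decision(bits: Sequence[int], coeffs: Dict[Tuple[int, ...], float]) -> float:
    """Evaluate f(v_1..v_N) = sum_j a_j * prod_{i in S_j} v_i.

    `coeffs` maps an index subset S_j (a tuple of bit positions) to its
    coefficient a_j; this is a hypothetical helper, not the dissertation code.
    """
    total = 0.0
    for subset, a in coeffs.items():
        term = a
        for i in subset:
            term *= bits[i]
        total += term
    return total

# XOR of bits 0 and 1 expressed as a polynomial: v0 + v1 - 2*v0*v1.
xor_coeffs = {(0,): 1.0, (1,): 1.0, (0, 1): -2.0}
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, poly_decision(pattern, xor_coeffs))
# prints 0.0, 1.0, 1.0, 0.0 -- the interaction term captures the XOR relationship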
Two basic decision-making structures exist: an expert team and a democratic (or voting) team. Multiple layers of either type may be combined to create a hierarchy. The organizational decision-making process is dependent on the selected organizational structure.
Expert teams (Figure 1) are composed of multiple agents and a team leader. Each agent makes a decision based on the current evidence and its memory of prior events. This decision is communicated to the team leader, who makes the organizational decision based on the decisions communicated to it by subordinate agents, as well as its memory of prior events.
Figure 1 - Expert Team (final decision = leader's judgement). [Diagram: a team leader above a row of agents, each agent seeing its own bits of evidence.]

In a democratic team (Figure 2) the organizational decision is made by a simple majority vote of the member agents. Each agent makes its decision based on the evidence it receives and its memory of past events. The role of the leader is merely to tabulate the votes of the agents and to report the result as the decision.


Figure 2 - Democratic Team (final decision = majority vote). [Diagram: a row of voting agents, each seeing its own bits of evidence; the leader tallies the votes.]

Hierarchies have at least three kinds of agents: those agents who see the initial evidence, middle managers that receive information from agents and/or other middle managers, and a leader (or top decision maker) who receives information from middle managers. Each agent (regardless of which level of the hierarchy it belongs to) makes a decision based on its evidence and memory. Every agent communicates its decision to the next higher level in the hierarchy, where it becomes evidence for the superior agent. This process is repeated until the decision of the highest layer of middle managers reaches the organizational leader (or top decision maker), who then makes the organizational decision. The hierarchical process inherently causes the decision to be made with more information loss than in expert or democratic teams. In the simplest hierarchy, where there are nine agents and three middle managers (Figure 3), the decision maker would only see three bits of information, which will never be as informative as receiving all nine, as would happen in an expert team. On the other hand, this reduced number of possibilities allows for easier pattern recognition and faster learning.


Figure 3 - Hierarchy (final decision = leader's judgement). [Diagram: a leader above a layer of middle managers, each middle manager above a group of agents, each agent seeing its own bits of evidence.]

These three organizational models are foundational for all organizational structures. All decision-making structures employ one or more of these models. A hierarchy might use democratic teams at some point, and a middle manager might not be a person, but a committee. Regardless of formal organizational structure, most portions of an organization's structure can be mapped into one of these three decision-making forms. In this study the focus is on these simpler organizational designs, which are a subset of all those that could exist. This is done for the sake of simplicity, tractability, and the ability to compare results with those of prior studies.
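A rough sketch of how these three forms aggregate agent decisions is given below. To keep the illustration short, the memory-based learning described earlier is replaced by a fixed majority rule at every node; in the actual model each agent and leader decides from its own memory map, and the expert-team leader judges from memory rather than merely tallying. All names are hypothetical.

from typing import List, Sequence

def majority(values: Sequence[int]) -> int:
    """Simple majority of 0/1 values (ties fall to 0 here, arbitrarily)."""
    return 1 if sum(values) * 2 > len(values) else 0

def split(bits: Sequence[int], n_groups: int) -> List[List[int]]:
    """Non-overlapping decomposition into equal slices, one per group."""
    per = len(bits) // n_groups
    return [list(bits[i * per:(i + 1) * per]) for i in range(n_groups)]

def agent_reports(bits: Sequence[int], n_agents: int) -> List[int]:
    """First-level agents each decide on their own slice of the evidence."""
    return [majority(s) for s in split(bits, n_agents)]

def democratic_team(bits: Sequence[int], n_agents: int) -> int:
    # The leader merely tallies the agents' votes.
    return majority(agent_reports(bits, n_agents))

def expert_team(bits: Sequence[int], n_agents: int) -> int:
    # In the full model the leader judges from its own memory of past report
    # patterns; a majority of the reports stands in for that judgement here.
    return majority(agent_reports(bits, n_agents))

def hierarchy(bits: Sequence[int], n_agents: int, agents_per_manager: int) -> int:
    # Agent decisions become evidence for middle managers, whose decisions
    # become evidence for the leader.
    reports = agent_reports(bits, n_agents)
    managers = [majority(g) for g in split(reports, len(reports) // agents_per_manager)]
    return majority(managers)

task = [1, 0, 0, 1, 0, 1, 1, 0, 1]      # nine bits of evidence
print(democratic_team(task, 3), expert_team(task, 3), hierarchy(task, 9, 3))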
Organizations, unfortunately, often do not match these clean designs perfectly. Because of many factors, organizations tend to have non-symmetric designs, as well as non-hierarchical links. In today's environment, applications of information technology such as e-mail and workflow are often used to create other structures. Some organizational cultures emphasize the chain of command, while others encourage cross-functional (and therefore lateral) communication.
In order to capture some of these non-standard structures, Carley and Lin (1995) studied what they called the matrix organization (Figure 4). This model operated much in the same way as a hierarchical organization, but with each agent reporting to two different middle managers.
Figure 4 - Matrix (final decision = leader's judgement). [Diagram: as in the hierarchy, but each agent reports to two different middle managers.]

Example of the Model

Let us take the simplest and smallest organization studied: 3 agents, 3 bits per agent, with no information sharing, uniform information weight, and a simple decision function where the correct decision is a simple majority of all information bits. The size of this task is 9 bits, and the input to it could be represented as a string of 9 bits, such as those found in Table II. The table for the memory of a typical agent is found in Table III.


Table II - Sample Decision-making Input Patterns

Case    Input Pattern
1       101010101
2       111001010
3       110010111

When the task is new, agent decision-making is random; it is basically tantamount to flipping a coin. For the first decision, all agents will have a 50% probability of making a correct decision. Each agent receives feedback based on the decision finally made by the organization. For example, it would be entirely possible for the organizational decision in case #3 (Table II) to be a "0", even though the correct decision should be a "1". However, all agents would receive feedback that the correct decision was a "1", enabling them to learn for the future.
Table III - Agent Memory Map

Input Evidence    #0    #1
000               57     -
001               42    22
010               43    22
011               22    42
100               42    21
101               23    40
110               20    41
111                -    56

Table III represents one possible state for an agent's memory map after a number of decisions. In this situation, the results of our three cases of evidence would not be random. Assuming all agents' memory to be the same (a simplification for this example), we can express the outcomes from each of these cases as seen in Table IV.
Table IV - Sample Agent and Organizational Decisions

Case    Input Pattern    Agent 1    Agent 2    Agent 3    Organization    Correct?
1       101010101        1          0          1          1               yes
2       111001010        1          0          0          0               no
3       110010111        1          0          1          1               yes

As can be seen, case #2 will always produce incorrect results, given the lack of cooperation between agents and the distribution of the evidence. It should be noted that in case #3, or in any other case where two thirds of the evidence points in one direction, regardless of its distribution, the organizational decision will be correct after learning has stabilized.
The feedback received by each agent will depend on the mechanism used. When generalized feedback is given, it is based on the organizational decision. After a task such as Case #1, where the organizational decision is the correct decision, the memory map would now look as it appears in Table V, with all the changes from the original memory marked.
If localized feedback is used, all agents would be informed they had made the correct decision. This is supportive of all the agents, but will reinforce agent #2's belief that a 0 1 0 pattern supports a "0" outcome, even though the overall decision was a "1". The resulting memory map is shown in Table VI, with the changes marked.


Table V - Memory map after decision with generalized feedback (changed cells marked with *)

                  Before decisions    Agent #1        Agent #2        Agent #3
Input Evidence    #0    #1            #0    #1        #0    #1        #0    #1
000               57     -            57     -        57     -        57     -
001               42    22            42    22        42    22        42    22
010               43    22            43    22        43    23*       43    22
011               22    42            22    42        22    42        22    42
100               42    21            42    21        42    21        42    21
101               23    40            23    41*       23    40        23    41*
110               20    41            20    41        20    41        20    41
111                -    56             -    56         -    56         -    56
Table VI - Memory map after decision with localized feedback (changed cells marked with *)

                  Before decisions    Agent #1        Agent #2        Agent #3
Input Evidence    #0    #1            #0    #1        #0    #1        #0    #1
000               57     -            57     -        57     -        57     -
001               42    22            42    22        42    22        42    22
010               43    22            43    22        44*   22        43    22
011               22    42            22    42        22    42        22    42
100               42    21            42    21        42    21        42    21
101               23    40            23    41*       23    40        23    41*
110               20    41            20    41        20    41        20    41
111                -    56             -    56         -    56         -    56

Past Studies Using the Model


Past studies have used ten different information processing and organizational design parameters to study these different models of organizational structure and their impact on organizational performance and organizational learning; the sketch following the list gathers these parameters into a single configuration:
1. Number of agents. The number of individuals at the bottom layer in the organizational structure.
2. Bits per agent. The number of elements of evidence that each agent processes for any given organizational decision.
3. Decision-making structure. The organizational structure used to evaluate learning ability (expert teams, democratic teams, hierarchies and matrix organizations).
4. Evidence weighting. The distribution of evidence weights for inputs into the organizational decision. Weighting can be assigned randomly or intelligently. It can be evenly distributed or clustered. Typically weights have been assigned in one of three different ways: uniform, non-uniform dispersed, or non-uniform clustered. All evidence bits have a weight. The default mechanism has been uniform weighting, where each evidence bit is assigned identical weight (typically a weight of one). With non-uniform dispersed weights, the weights for different evidence bits are uneven, but each agent has access to evidence with similar weights. When non-uniform clustered weights are used, the weight of the evidence available to different agents varies, with one group of agents having heavily weighted information, a second group having information with average weights, and a third group having information of relative unimportance.
5. Task decomposition. Evidence is defined as seen by only one agent (non-overlapping), or by more than one (overlapping). Overlapping task decomposition can be seen from two perspectives: (a) partial, where only some of the evidence is seen by more than one agent, or total, where all evidence is seen by others; and/or (b) blocked, where the overlapping takes place within a constrained portion of the organization, or distributed, where the overlapping takes place across the whole organization.
6. Decision type. While the majority of the research has focused on binary (go/no-go) decisions, there has been one study where the decision was three-valued (go/unsure/no-go).
7. Incorrect information. The percentage of the evidence seen by an agent which is incorrect.
8. Missing information. The percentage of the evidence required by an agent which is not available.
9. Incorrect feedback. The percentage of decisions for which the feedback received by an agent is inaccurate.
10. Missing feedback. The percentage of decisions for which the agent receives no feedback.
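As noted above, one hypothetical way of grouping these ten parameters is a single configuration object; the field names and default values below are illustrative only and are not those used in the original studies.

from dataclasses import dataclass

@dataclass
class SimulationConfig:
    """Hypothetical grouping of the ten design parameters listed above."""
    n_agents: int = 9                        # 1. number of agents
    bits_per_agent: int = 3                  # 2. bits per agent
    structure: str = "hierarchy"             # 3. expert_team | democratic_team | hierarchy | matrix
    weighting: str = "uniform"               # 4. uniform | non_uniform_dispersed | non_uniform_clustered
    decomposition: str = "non_overlapping"   # 5. non_overlapping | overlapping
    decision_values: int = 2                 # 6. binary (go/no-go) or three-valued
    pct_incorrect_information: float = 0.0   # 7. incorrect evidence rate
    pct_missing_information: float = 0.0     # 8. missing evidence rate
    pct_incorrect_feedback: float = 0.0      # 9. incorrect feedback rate
    pct_missing_feedback: float = 0.0        # 10. missing feedback rate

config = SimulationConfig(structure="democratic_team", pct_missing_feedback=0.1)
print(config)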

Past Applications of the OMC Model

Previous studies have been limited to using a subset of these organizational design parameters. Kathleen Carley (1990, 1992) first used a simple case of this model to determine the impact of organizational structure and personnel turnover on organizational learning. She compared hierarchies and majority teams, using uniform evidence weights, varying the problem size (but not the number of agents) and the turnover rate. Each organizational structure was evaluated over 2,500 decisions. She found that majority teams learned faster and generally outperformed hierarchies when facing a stable task and environment. Hierarchies were more resilient in the face of turnover, however, and outperformed majority teams as the personnel turnover rate increased. This study presumed that the task which the organization faces is relatively stable, and that organizations would not want to "wipe the slate clean" and have a fresh start. These results were the first in this stream of research to point to the usefulness of the hierarchy in a changing environment.
Lin and Carley (1993) focused on the impact of agent styles on organizational decision making. They introduced the expert team concept, as well as the matrix organization. They also allowed agents to share information with each other. They found that "agent style [proactive or reactive] is a relatively weak factor in organizational decision-making performance, compared with factors such as organizational structure, task-decomposition scheme and task environment" (Lin and Carley, 1993, p. 284), again pointing to the importance of organizational design.
Carley (1995) studied the impact of agent decision-making methods on organizational performance. Using a 3-valued decision (go, no-go, unsure), the study found that varying training and procedures (agent model) had a significant effect on organizational performance. The results showed that (a) teams performed substantially better than hierarchies within a stable environment, supporting the previous research, and (b) there was a strong interaction between agent model and organizational design. This indicates that more attention must be paid to the way human decision making is characterized. It is important to realize that many agents will make decisions based on more complex rules, including the influence of the primacy and recency effects, among others.
A study by Lin and Carley (1995) used parameters of organizational design, task environment, stress, training, and agent style which resulted in over 460,000 different organizational structures. Each structure was simulated for 1,000 decisions. The results demonstrated that an increase in information sometimes results in poorer decisions than when less information is available. Because the number of decisions simulated was limited, there are two possible explanations for this: (a) information overload, because agents are incapable of processing the larger amounts of information (Evaristo, Adams and Curley, 1995), or (b) slower organizational learning due to the larger number of possible evidence patterns, without a negative impact on their maximum potential (Evaristo, Adams and Curley, 1995). The present study also supports the belief that in many cases, organizational learning does not stabilize until thousands of decisions take place. Regardless, both situations are caused by an increase of information in today's business environment.
Studies by Ye and Carley (1995) and Carley and Lin (1997) focused on the feedback (information on what the decision should have been) that each agent received after making a decision. These studies sought to understand the impact of different feedback mechanisms on organizational performance. They found that democratic teams outperformed expert teams in organizational performance as measured by the percentage of correct decisions. Democratic teams, however, were significantly more likely than expert teams to make a severe (and potentially costly) classification error. Flattened organizations may therefore make more correct decisions, but their errors are likely to be more costly than those made by a hierarchy. These results may, however, not be conclusive, since they are based on only 30 decisions per organization, and there is no indication of how the performance would be affected by simulating a larger number of decisions.
Carley and Lin (1996) studied the impact of information distortion (incorrect or misinterpreted evidence) on organizational performance. They found that (a) agents who received incorrect evidence or misinterpreted it were more likely to make mistakes than those who were missing information, and (b) teams generally outperformed hierarchies, especially when the task was simple and decomposable. They concluded that "generally, complex organizations exhibit higher performance when facing complex environments and simple organizations exhibit higher performance when facing simple environments, regardless of the information distortion" (Carley and Lin, 1996, p. 26). This suggests that large, complex problems are better dealt with by hierarchies than by democratic or expert teams.
Mihavics and Ouksel (1996) simulated organizational decision-making using three categorical variables: organizational structure, weighting mechanism, and task decomposition. This is the first study which combined information weighting, applying uniform, non-uniform clustered, and non-uniform dispersed weights to the evidence used in a given problem, thus enabling the creation of a model that took into account the fact that not all information is equally important. Three organizational structures were investigated: majority teams, expert teams, and hierarchies. Finally, Mihavics and Ouksel analyzed organizations that used segregated task decompositions (no information sharing), and those that used overlapping task decomposition (information sharing). Eighteen different cases were studied, using simulation and mathematical modeling. The results show that:
1. Different organizational structures have significantly different learning speeds and vary widely in the maximum performance level each can attain.
2. Majority teams and expert teams perform better than hierarchies when weights are uniform or dispersed.
3. Hierarchies outperform majority teams when weights are clustered.
4. Majority teams perform better with overlapping tasks when weights are clustered.
5. The performance of majority teams is negatively impacted when weights are clustered rather than dispersed.
Additionally, Mihavics and Ouksel developed equations for computing the production, coordination, and vulnerability costs of information processing. These cost equations allow the comparison not only of organizational performance across organizational designs, but also of the costs associated with a specific structure and, therefore, the information processing costs of attaining a given level of performance. This is important, as communication and coordination costs have been identified as significant factors in obtaining competitive advantage (Malone, 1987). Managers should be able to make a more appropriate selection of organizational design by combining the information regarding the information processing costs and the performance for alternative organizational designs. Overall, Mihavics and Ouksel concluded that a new approach to organizational design was needed: organizational structure should depend on the organizational expectations of learning speed and performance, as well as on the costs of information processing and communications.
These different studies have enhanced the comprehension of the relationship of organizational structure and decision-making to organizational learning. The OMC model is simple enough to comprehend, yet complex enough to capture more realistic, complex cases. This provides a vehicle for a substantially better understanding of how organizational design impacts organizational learning and organizational performance. At the same time, past studies leave a number of unresolved issues. In only one study (Carley, 1992) is the task size varied. There is no study of an organization with a varying number of decision makers. Whenever hierarchies or matrix organizations have been used, there has been only one layer of middle managers. In most cases, simulations span fewer than 1,000 decisions (2,500 and 5,000 in Mihavics and Ouksel (1996)), yet a visual inspection of the performance of an expert team with 9 agents and 27 bits of total evidence clearly shows that learning has not stopped even after 5,000 decisions.
The results from past studies are also limited in that no study integrates all organizational design and communication parameters. For example, it may be possible to understand the impact of feedback, missing information, or structure independently. However, it is necessary to capture the interactions and to integrate the various aspects into one more complete model of organizations and organizational learning. Having a more comprehensive model would enable the development of a fuller understanding of the impact of the various factors in organizational design on organizational learning, and a clearer understanding of the impact of various organization forms and reengineering on organizational effectiveness.


Chapter IV: Research Methodology


The objective of this research is to build on the work by Ouksel, Mihavics (Mihavics and Ouksel, 1996; Ouksel, Mihavics and Carley, 1996) and Carley (Carley, 1992; Carley, 1995; Carley, 1996), helping to better understand the impact of various organizational designs on organizational learning and performance. First, a description of the enhancements needed to make the model used by Mihavics and Ouksel (1996) more complete is provided. In this step the concepts of feedback and information distortion are integrated, as well as localized versus generalized feedback. Hierarchies are also expanded to more than one layer of middle management. Secondly, the data collection is described. Finally, a description is given of how the data will be used to (a) verify the robustness of the model as well as past results using the model, and (b) study the impact of the various design parameters on organizational performance at various points in time. It has been demonstrated, for example, that hierarchies learn faster at first, but then are outperformed by majority teams or expert teams. This is important, as past studies have always focused on a specific time horizon, yet the results at different points in time can be very different.
This study focuses on intelligent agents (or software agents) for a variety of reasons. First of all, software agents are inherently rational, which satisfies one of the key assumptions of the OMC model. Secondly, software agents are fundamentally more consistent and understandable in their individual behavior than their human counterparts; we know that understanding and modeling the decision-making behavior of individual humans is notoriously difficult, while on the other hand the behavior of a software agent is codified in the form of a computer program. Finally, models of software agents can be regarded as proposals for, rather than just approximate descriptions of, the behavior of boundedly rational individuals. This is in contrast to mathematical utility functions which are often used as models of human choices, which "can only be taken to be a rough approximation" (Ouksel and Klusch, 1999).
Rather than measuring our success in terms of the ability to understand individual and societal behavior, the intention is to design and study organizational structures that will work well from the perspective of an organization composed of individual software agents. These agents may be cooperative, such as when all agents belong to one organization, or uncooperative, such as when the agents belong to competitors who are required to communicate (as was the case for years with American Airlines' Sabre reservation system). This study of the collective dynamics of large numbers of software agents is not an end in itself; it is motivated by the hope that it is possible to derive principles that will help design effective software agent organizations, interaction protocols, and decision-support mechanisms (Ouksel and Klusch, 1999).

Model Enhancements

Feedback Assumptions
All organizational decision models must include feedback as a necessary component (Cyert and March, 1963). Feedback can be described as (a) complete or incomplete, (b) certain or uncertain, (c) biased or unbiased, and (d) timely or delayed. The most accurate feedback is complete-certain-unbiased-timely feedback. Unfortunately, due to human nature and software process, feedback is often missing or incomplete, not perfectly clear, and quite biased by individual perceptions. When intelligent agents are used, the type of situation where these problems occur changes, and may be reduced.

According to the organizational decision-making models used so far, every agent in an organization receives complete-certain-unbiased-timely feedback from a given decision before making the next decision. This means that no decision is made until feedback for the prior decision is received, regardless of the duration of the feedback process. If an agent makes a decision, but is never informed what the organizational decision is, that decision has no educational value. No learning of any kind can take place without feedback, and the agent is forced to discard that instance from memory. The maximum performance level for a given organizational structure should not be impacted by missing feedback, but the speed of learning would be affected. If feedback is not received in a timely fashion, learning is delayed, because of the delay in updating the agent's memory map, and it is possible that the number of incorrect decisions will therefore increase.
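A minimal sketch of how missing and incorrect feedback might be injected into this learning loop is shown below; the function and parameter names are hypothetical, and the two probabilities correspond to the missing-feedback and incorrect-feedback parameters listed in Chapter III.

import random
from typing import Optional

def deliver_feedback(correct_outcome: int,
                     p_missing: float = 0.0,
                     p_incorrect: float = 0.0) -> Optional[int]:
    """Hypothetical feedback channel.

    Returns None when feedback is missing (the agent must discard the
    instance), the flipped outcome when feedback is incorrect, and the
    true outcome otherwise.
    """
    if random.random() < p_missing:
        return None
    if random.random() < p_incorrect:
        return 1 - correct_outcome   # noisy feedback reinforces the wrong association
    return correct_outcome

# Example: an agent updates its memory only when feedback actually arrives.
memory = {(1, 0, 1): [0, 0]}         # pattern -> [count of 0s, count of 1s]
fb = deliver_feedback(correct_outcome=1, p_missing=0.1, p_incorrect=0.05)
if fb is not None:
    memory[(1, 0, 1)][fb] += 1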
Some might argue that in the case of intelligent agents feedback will always exist. This is not always the case, due to one of several possible reasons. First of all, it is possible that the software agent which should provide the feedback has not been programmed correctly to do so (a bug). Secondly, it is possible (depending on the implementation) that occasional communication breakdowns will exist. Thirdly, in interorganizational systems it is possible that the cooperation between agents of different organizations is not perfect, and that occasionally an organization chooses not to provide the normal feedback to the appropriate agents.
Another assumption is that when feedback exists, it is accurate. This is obviously not realistic. Some may wonder how it is possible to have incorrect feedback in communication between intelligent agents. There are at least two situations where this is possible. First of all, especially in an interorganizational system, it is possible for agents to attempt to deceive each other as to the realities, in order to gain competitive advantage. Secondly, sometimes agents think they understood, but the feedback was misread. An example of this second case could occur during an on-line purchasing experience. A buyer at Amazon.com browses and prepares to order a book. Amazon.com recommends another book. He believes it is interesting, so he selects it to look at it further. As he does that, he realizes that this book is exactly what he needs, but he needs it now. So, rather than ordering this new book, he exits the browser and goes to the nearest major bookstore and purchases the book. The software agent at Amazon.com will perceive that this recommendation was not good, since no action was taken on it, while in reality the recommendation was extremely useful to the consumer.
Carley and Lin (1996) found that incorrect information inputs or feedback into a decision (such as described above) have a greater negative impact on decision-making performance than missing inputs. This makes intuitive sense, since no information or feedback is better than wrong information or feedback. Incorrect or "noisy" feedback will lead to an erroneous entry (as opposed to no entry if the feedback is missing) in the individual's memory. If erroneous feedback is provided only a small percentage of the time, agents can still rely upon whatever feedback they get for future decision-making. If the erroneous feedback is given often enough, however, the end result will be to reinforce the belief that the appropriate decision for a given set of inputs is the one which is in reality incorrect. It should be expected that as incorrect feedback increases, there would be a decrease in both the maximum organizational performance and the speed of learning. The difference in speed of learning would be least noticeable in the fastest learning organization and most noticeable in the slowest learning organizational form. When the agent concerned is a member of a voting team, the impact of the erroneous learning should be more noticeable than in an expert team, because the erroneous results can be more easily ignored by an expert leader than by a voting structure.
Because middle management is by definition an information filtering and compressing mechanism for any organization, and hierarchies are, therefore, used to having information loss and compression, hierarchies would be expected to be the least impacted by missing, delayed, or incorrect feedback. This can be stated in these hypotheses:

H1a: Hierarchies are less impacted by unavailable feedback than other organizational forms, while expert teams are impacted most.

H1b: Hierarchies are less impacted by incorrect feedback than other organizational forms, while expert teams are impacted most.

The model used by Mihavics and Ouksel represented a relatively small organization (only 9 agents with 3 bits per agent). As the organizational size increases and the information processing load on each of the agents in the organization increases, it would be expected that the advantage that flatter organizations held under certain circumstances will be lost to hierarchies. The advantage formerly enjoyed by flat organizations is expected to exist in the flatter hierarchies, while the advantage which hierarchies held will now be seen in hierarchies which possess a higher number of layers of middle management.

Information Availability and Correctness


Most past applications of the OMC model have assumed that all inputs to any given agent are present whenever a decision needs to be made. In other words, all decisions are made with a full deck of cards. This assumption limits the usefulness and generalizability of the results, in that it is unreasonable to presume that all information will always be available when decisions need to be made. Information can be missing even in the case of software agents, for the same reasons that feedback can be missing: communications problems, bugs, or lack of cooperation of other agents. While this problem may be more controllable for software agents, it is still one which must be dealt with. For some decisions where information is missing, the weight of the evidence for or against a given proposal will be sufficient to keep missing information from having a significant impact on the outcome of the decision-making process; in other, more contested decisions, even one missing piece of information may adversely affect the outcome, or even lead to inconclusive evidence. Because hierarchies naturally compress and summarize information, it is expected that on the one hand they would be more resilient when faced with missing or incorrect information. At the same time, this loss could lead to more errors, since one piece of missing information may be decisive in making the correct decision for a given set of inputs.
Further, when agents use memory they need to create a memory map which allows for decision-making in the presence of missing information. There are two possible ways of doing this: (a) missing information is considered a third possible value; or (b) the agent keeps the normal memory map, but is forced to examine all patterns which match the available information. When information is not missing, an agent with three inputs has 8 possible patterns to evaluate (2^3). If missing information is represented as a third value, the number of patterns increases to 27 (3^3). If the same agent has 9 input bits, the number of patterns climbs from 512 to 19,683, making it impractical or uneconomical, if not impossible, for the agent to remember all outcomes.
Alternatively, each agent can examine all patterns which are possible given the missing information. In the case of a 3-bit input pattern 1 1 * (where * represents missing information), the agent combines the past results for 1 1 0 and 1 1 1 before making a decision.


Feedback from the decision is then provided to both 1 1 0 and 1 1 1. This method was used by Carley and Lin (1996). Because the latter alternative is more representative of the way in which people operate, and therefore more generalizable, it is the method used in this study.
In all cases it is presumed that all information for any given task exists, yet may not be available to the agents attempting to make decisions. When voting takes place, as in a majority team, the decision is a majority of those agents present and voting. For all other cases, where the agent operates based on his or her memory, the decision is made based on the available information.
There are many ways in which incorrect information can be introduced into the system. First of all, it is possible to have a data read error, which would cause information to be misread. Secondly, it is possible for users to deceive the system by entering incorrect information (lying about one's age, gender, income, or another item in an on-line profile, for example). We must recognize, however, that software agents will normally face a lower rate of incorrect information, simply because they are relatively unbiased information processors. As the proportion of incorrect information increases, organizational performance will decrease, because confusing data points are being introduced into the agent's memory.
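As an illustration only, the sketch below shows one way missing and incorrect information might be injected into an agent's view of the evidence; the function and parameter names are hypothetical, and the rates simply echo the 0%, 5%, and 10% levels used later in the simulations.

    # Illustrative sketch (not the study's code) of corrupting an agent's evidence.
    import random

    def corrupt_view(bits, p_missing=0.05, p_incorrect=0.05, rng=random):
        view = []
        for b in bits:
            r = rng.random()
            if r < p_missing:
                view.append(None)            # bit unavailable to the agent
            elif r < p_missing + p_incorrect:
                view.append(1 - b)           # bit misread or misreported
            else:
                view.append(b)               # bit seen correctly
        return view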
When evidence is missing during the decision-making process, learning should be noticeably slower for all organizational structures, and especially for expert teams, since they are already the slowest organizational learners. This slowness of learning is caused in part by more incorrect decisions, and in part by the problem of multiple possible interpretations of the available evidence. If the overall proportion of incorrect and/or missing information remains stable, this problem of multiple interpretations would cause overall performance to be even lower for flattened or downsized organizations, due to the uncertainty and errors introduced. Thus the following is hypothesized:


H2a:	Hierarchies are less impacted by unavailable data than other organizational forms, while expert teams are impacted most.

H2b:	Hierarchies are less impacted by incorrect data than other organizational forms, while expert teams are impacted most.

Hierarchical Structure
Past studies of hierarchies based on the OMC model have typically used only one level of middle managers (Carley, 1996; Mihavics and Ouksel, 1996), with every middle manager, as well as the top-level manager, acting as the leader of an expert team. This means that each decision made by a manager is based on his or her memory, not directly on the information received from the subordinates. The organizational decision is finally made by one agent for the entire organization, again acting as the expert leader of a group of middle managers. Decision-making is not based on voting, but on the agent's memory. This structure implies that the role of middle managers is not only to communicate the information they receive, but also to synthesize information passed up by lower levels.
Carzo and Yanouzas (1967) found that communications took longer in a taller structure, incurring additional processing costs. At the same time, however, taller organizations dealt better with conflict resolution and coordination. Additionally, Carley (1992) found that personnel turnover, especially in the managerial ranks, had a significantly smaller impact on taller organizations than on flatter ones; taller organizations were capable of resiliency in the face of adversity. Research by Mihavics and Ouksel (1996) demonstrated that communication costs vary across organizational structures, and that the selection of the organizational structure with the least expensive communications depends on the specific cost factors. Regardless of these results, organizations today are being flattened in order to be more responsive. While improved communications tends to make for better decisions, it is possible that organizational learning becomes more unstable over time. For organizations that desire to be in constant change, improving communications may be the most effective means to success. On the other hand, large and fairly stable organizations (General Motors, 3M, government) might find the resiliency of the hierarchy advantageous.
Research to date using the OMC model is unrealistic in that it limits all organizations to only one level of middle management. It is unreasonable to expect all tasks within an organization with even a few hundred employees to have only one level of middle management. Each additional layer of administration in a hierarchy acts as a filter, consolidating information into single bits of evidence to be transmitted to the higher level. Prior research has shown that hierarchies initially learn faster than other organizational forms, yet in the long term tend to underperform teams (Mihavics and Ouksel, 1996). In order to more fully understand how organizations make decisions in a real-world environment, it is important to investigate the impact of additional layers of management. Some of the questions which must be answered are: What happens to decision-making when hierarchies have more than one layer of middle managers? Does breaking the information down into smaller quantities (fewer subordinates per manager) lead to better initial learning, even when maintaining the same number of inputs? If having one layer of middle managers helps organizations learn faster, might two levels help even more? Would three? This leads to the following hypothesis:
H3:	As organizations become larger, adding layers of middle management improves initial learning speed, yet reduces the maximum potential learning capacity of the organization.


Appropriate Feedback
Until now, all studies using this model have presumed that all agents in the organization get the same feedback for the same decision. This means that even when one agent's decision was correct for the evidence that agent saw, the feedback received would indicate that the decision was incorrect. This, of course, would cause confusion and delay the learning process. In this study both kinds of feedback are examined: (a) generalized, one-size-fits-all feedback, where all agents receive the same feedback for the same decision; and (b) localized, or specific, feedback, where the agent receives accurate feedback that its own decision was correct or incorrect, regardless of what the organizational decision was. For example, an engineer could propose a given design, yet due to marketing constraints it is not produced. Under generalized feedback, the engineer would receive the feedback that his or her decision was incorrect. Under localized feedback the same engineer would be informed that the decision was correct. All results are presented for both generalized and localized feedback, showing where the results are significantly different.
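As an illustration of the distinction, the sketch below updates a single agent's memory under the two regimes. It assumes the memory keeps per-pattern counts of reinforced answers, which is one plausible reading of the model; the names are hypothetical and this is not the study's code.

    # Illustrative sketch of generalized versus localized feedback for one agent.
    def update_memory(memory, pattern, agent_decision, true_answer, org_decision,
                      localized=False):
        counts = memory.setdefault(pattern, [0, 0])   # counts[answer] = times reinforced
        if localized:
            correct = (agent_decision == true_answer)  # feedback about the agent's own decision
        else:
            correct = (org_decision == true_answer)    # everyone receives the organization's outcome
        reinforced = agent_decision if correct else 1 - agent_decision
        counts[reinforced] += 1

Under generalized feedback an agent whose own call was right can still be told it was wrong, which is exactly the source of confusion described above.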

Data Collection
Data was collected using computer simulations of various organizations, as has been done in all prior research using this model. This method of data collection is particularly appropriate for this model, since the model assumes all agents are boundedly rational, and by eliminating human actors it is possible to ensure that this assumption is met. Because of the desire to (a) verify the robustness of past results and (b) study the asymptotic behavior of organizational performance, each numerical parameter used must have multiple values. From past results it is known (a) that organizational performance is a non-linear phenomenon (Carley, 1992; Mihavics and Ouksel, 1996), and (b) that each parameter used must have at least high, medium, and low values (Carley, 1996b). In all cases, it is necessary to avoid using parameters which could lead to inconclusive information. This means that both the problem size and the total number of bits seen by any agent will always be odd (see Mihavics, 1995, for a complete discussion of the problem).
This study uses the same organizational design parameters as Mihavics and Ouksel (1996), incorporating additional parameters to verify the impact of feedback assumptions, information availability, and hierarchical structure. Three categorical variables are used: organizational structure (majority team, expert team, and hierarchy), weighting scheme (uniform, dispersed, and clustered), and feedback type (localized and generalized), resulting in eighteen distinct groups of organizations. In addition, the following numerical variables are used:
1.	Layers of middle management in the hierarchy. One (1), two (2), and three (3) layers of agents are studied in order to have the minimum number of values for comparison.

2.	Agents. Using an even number of agents would lead to decisions with inconclusive evidence when using uniform weights. All odd numbers beginning with 3, up to and including 81, are therefore used. It is necessary to have 81 agents in order to have the minimum number of agents in a hierarchy with 3 layers of middle management.

3.	Bits per agent. All odd numbers of bits beginning with 3, up to and including 11. By having 5 data points for each agent (3, 5, 7, 9, and 11) it should be possible to determine the behavior of the performance curve.

4.	Overlap bits. Symmetrical overlap of 0, 2, 4, and 6 bits. When overlap exists, agents view the same number of bits from the agents immediately to their left and to their right.

5.	Missing information, incorrect information, missing feedback, and incorrect feedback, each evaluated at 0%, 5%, and 10%.
The total number of possible combinations using these parameters is extremely large. The existence of two categorical variables (structure and weighting scheme) allows the division of the possible set of results into 9 groups. In addition, one major constraint is imposed: the total number of bits, including overlap, seen by any agent does not exceed 11. This means that no agent or manager sees more than 11 bits of information at any time. This is necessary because with 11 bits the number of possible evidence patterns is already 2,048; increasing this value would only slow the learning process further, making simulations impractical. Given these parameters and constraints, the total number of simulations needed for each organizational structure can be determined by enumeration. These are presented in Table VII.
Table VII - Possible Organizational Structures

Structure        Simulations Needed
Majority Team        45,360
Expert Team           5,670
Hierarchy           251,748

Note: Each of these structures is simulated for each of the weighting schemes as well as each of the feedback mechanisms, for a total of 6 distinct categories per structure.


Because the number of remaining simulations needed to exhaustively test the model is excessively high, some sampling technique is needed in order to select representative cases for this study. The sample size must satisfy two conditions: (a) it must have a sufficient number of cases to enable testing of the robustness of past results of the OMC model, and (b) it must be large enough that the results of the regression analysis of the asymptotic behavior are statistically powerful. Two alternatives exist: random sampling and fractional-factorial design (FFD).

FFD is a technique used to simplify a full factorial design. In a full factorial design, all possibilities are examined, and all main effects can be studied. FFD is used especially when it is known that the interaction between some variables is minimal. It is also used extensively when the variables are categorical in nature rather than numerical. In this study it is impossible to state whether any interactions are minimal. In addition, there are only two categorical variables (structure and weighting), which are already accounted for. FFD therefore does not appear to offer any additional power over random sampling, and may in effect limit the possibilities of testing for interactions.

According to Kendall (1980), the appropriate sample size is 10 times the number of variables. Cohen's power tables (Cohen, 1988) show that with 10 independent variables (the maximum needed for any organization studied), a minimum of 69 cases is needed at α = .01, which is less than what Kendall suggests. While 100 cases per organizational structure would be sufficient to ensure the significance of the statistical results, a minimum of 200 organizations was simulated for each group. This ensures the validity and statistical power of the results, even when adding calculated variables.
In order to have valid simulation results, one additional question needs to be addressed: how many decision iterations are necessary? This question arises for two reasons: (1) the simulation should run long enough to approximate a steady state, enabling researchers to understand the maximum performance which a given organization is able to attain; and (2) an inordinately large number of decisions should not be simulated, since organizations only look at similar decisions a finite number of times. This second limitation makes it unrealistic to simulate a structure where one million decisions are necessary for a pattern to repeat itself (as would be the case for an agent facing a 20-bit input pattern). This constraint is less important in some cases, such as when the decision is being made by an organization of intelligent agents at a busy web site, where there might be millions of decisions every day.

In the case presented by Mihavics and Ouksel (1996), maximum organizational performance occurred within 2,500 to 5,000 decisions using only three bits of evidence per agent. For a larger number of bits this number would obviously increase. Testing shows that in many cases even 10,000 decisions are not sufficient to approach a stable learning curve, but that even the larger organizations have relatively stable learning curves after 100,000 decisions. Therefore, all organizational structures were simulated for 100,000 decisions. In addition, to eliminate the effects of the random simulations, the results of 50 simulation runs for each organizational structure were aggregated.
For each simulated organization the results were stored every 10 decisions. Four results were stored: (a) the raw number of correct decisions, (b) the cumulative average, (c) the average performance over the last 100 decisions, and (d) the smoothed average over the last 100 decisions. The program source for the simulations can be found in Appendix 1.
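The following is a minimal sketch of how such bookkeeping might look; the class name, the exponential-smoothing choice for the smoothed average, and the performance scale (fractions rather than percentages) are assumptions, and the study's actual source in Appendix 1 may differ.

    # Hypothetical bookkeeping sketch for the four recorded measures.
    from collections import deque

    class PerformanceRecorder:
        def __init__(self, window=100, interval=10, alpha=0.05):
            self.window = deque(maxlen=window)   # outcomes of the last `window` decisions
            self.interval = interval
            self.correct = 0                     # raw number of correct decisions
            self.total = 0
            self.smoothed = 0.5                  # one possible smoothing of the window average
            self.alpha = alpha
            self.rows = []                       # one row stored every `interval` decisions

        def record(self, was_correct: bool):
            self.total += 1
            self.correct += int(was_correct)
            self.window.append(int(was_correct))
            window_avg = sum(self.window) / len(self.window)
            self.smoothed += self.alpha * (window_avg - self.smoothed)
            if self.total % self.interval == 0:
                self.rows.append((self.correct,                # (a) raw correct decisions
                                  self.correct / self.total,   # (b) cumulative average
                                  window_avg,                  # (c) average over last 100
                                  self.smoothed))              # (d) smoothed average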


Chapter V: Model Robustness


In order to verify the robustness of the model, the results achieved in this study are compared to past results (Mihavics and Ouksel, 1996; Ouksel, Mihavics, and Carley, 1996; Carley, 1990, 1992, 1995, 1996; Lin and Carley, 1993; Carley and Lin, 1995; Ye and Carley, 1995), looking for both similarities and differences. Where differences are found, it is noted whether the original results could at least be verified for the same cases.

Because the model presented by Mihavics and Ouksel (1996) is the most complete to date, it is used as a baseline. The results presented by Mihavics and Ouksel (1996) can be divided into two types: analytical results and simulation results. Analytical results, which are presented first, give only one figure: the maximum performance which a given organizational structure can attain. Analytical results are computed by evaluating the mapping from every possible input bit pattern for the decision task to the resulting decision. Because of the computational complexity, only analytical results for models with (a) uniform weights, (b) overlapping evidence, and (c) no communication distortion were computed. Furthermore, only cases where the total evidence was less than 64 bits were computed. The results are found in Table VIII, and a graph of these same results can be found in Figure 5. The graph clearly shows the negative impact of increasing the number of bits and/or the number of agents. The present results are consistent with what has been found in prior research.
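As a simplified, single-agent illustration of this kind of computation, the sketch below enumerates every full input pattern and, for each local pattern an agent can see, picks the answer that is correct most often. It assumes, for illustration only, that the true answer is the majority value of all problem bits; the study's analytical computation aggregates many such agents into an organizational decision and is not reproduced here.

    # Simplified single-agent illustration of the analytical maximum.
    from itertools import product

    def single_agent_max_accuracy(problem_bits: int, seen_bits: int) -> float:
        counts = {}   # local pattern -> [times answer 0 is correct, times answer 1 is correct]
        for pattern in product((0, 1), repeat=problem_bits):
            truth = int(sum(pattern) * 2 > problem_bits)   # majority of all bits (odd sizes)
            local = pattern[:seen_bits]                    # the bits this agent sees
            counts.setdefault(local, [0, 0])[truth] += 1
        best = sum(max(c) for c in counts.values())        # best fixed answer per local pattern
        return best / 2 ** problem_bits

    print(single_agent_max_accuracy(9, 3))   # e.g., one agent seeing 3 of 9 bits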


Table VIII - Analytical Results: Maximum Percentage of Correct Decisions

                                  Agents
Bits per Agent       3        5        7        9       11
       3           89.45    86.82    85.72    85.15    84.80
       5           87.18    84.68    83.73    83.24    82.94
       7           86.21    83.83    82.94    82.48     N/A
       9           85.66    83.37    82.52     N/A      N/A
      11           85.32    83.09     N/A      N/A      N/A
Figure 5 - Analytical Results: Errors Expressed as a Percentage (error rates plotted against the number of agents and the number of bits per agent)


Simulation results, which are presented second, not only give the percentage of correct decisions for a given structure, but also enable a better understanding of the speed at which an organization learns. In order to verify that the simulations ran correctly, they are first compared to the same 18 organizations which were simulated in the study by Mihavics and Ouksel. The results for both the original simulations and the new ones are found in Table IX.

Table IX - Comparison to Past Results (mean performance for the 18 baseline organizations: majority team, expert team, and hierarchy under uniform, clustered, and dispersed weights, for both non-overlapping and overlapping task decompositions; the original results are shown alongside the new simulation results)

The results are substantially similar, with only slight differences, verifying that the present simulations are equivalent to what was done by Mihavics and Ouksel.

The results confirm Mihavics's finding that majority teams and expert teams perform better than hierarchies when weights are uniform or dispersed for organizations facing a 27-bit problem. As the problem size increases, however, the expert team loses this advantage and actually performs worse than hierarchies (see Table X). An additional finding is that when both overlap and problem size increase, hierarchies actually outperform both (see Table XI).


Table XI - Impact of Evidence Overlap on Organizational Performance under Dispersed or Uniform Weights (performance of majority teams, expert teams, and hierarchies under dispersed and uniform weighting schemes, for problem sizes 27, 33, and 45, at each level of evidence overlap studied)

The assertion that hierarchies outperform majority teams under clustered weights remains true for the cases Mihavics explored. As the problem size increases, the difference in performance remains similar when there is no overlap. It should be noted, however, that as overlap increases, the performance difference disappears and the majority team performs slightly better (see Table XII).

Table X - Organizational Performance Using Dispersed and Uniform Weights

Problem   Weighting     Expert   Hierarchy   Majority
 Size      Scheme        Team                  Team
   27     Dispersed     80.93     79.17       79.97
          Uniform       82.01     81.09       82.68
   33     Dispersed     79.64     76.95       79.66
          Uniform       80.13     78.49       82.54
   45     Dispersed     76.53     76.22       75.21
          Uniform       76.26     77.99       78.77
   55     Dispersed     72.98     72.49       75.17
          Uniform       72.22     72.94       76.38
   63     Dispersed     72.76     74.43       71.91
          Uniform       71.34     75.25       73.38
   77     Dispersed     67.42     69.40       70.54
          Uniform       66.10     69.75       71.58
   81     Dispersed     69.08     73.12       68.38
          Uniform       66.27     73.80       70.51
   99     Dispersed     62.98     70.04       66.48
          Uniform       61.08     70.76       67.45
  121     Dispersed     58.78     64.02       64.74
          Uniform       56.21     61.11       64.37

At the same time, adding overlap or increasing the problem size does not negatively impact the advantage which the expert teams held when weights were clustered.

Another finding from Mihavics (1995) was that majority teams facing clustered weights perform better with overlapping tasks (Table XIII). This is especially true in cases where the agents initially had only 3 bits of evidence. If agents already have 9 bits of evidence, the additional overlap only serves to slow the learning process, which makes the organizational performance after 10,000 decisions lower than the performance without overlap. It should be noted that it is clear from the graph of organizational performance that organizational forms with large numbers of bits per agent and/or large amounts of overlap have not achieved their maximum potential after 10,000 decisions (see Figure 5).
Table XII - Organizational Performance under Clustered Weights

Problem Size   Overlap Bits   Expert Team   Hierarchy   Majority Team
                     0           84.51        80.00        75.23
                     2           82.39        79.41        76.97
                     4           85.37        81.40        82.20
                     6           85.46        81.40        82.43

                     0           80.81        75.73        73.54
                     2           79.72        76.27        77.85
                     4           82.28        80.29        80.84
                     6           83.34        79.66        81.59

Table XIII - Impact of Overlap on Clustered Majority Teams

Overlap   Result
   0      69.04
   1      70.43
   2      72.51
   3      75.08

It is also possible to confirm that the performance of majority teams is negatively impacted when weights are clustered rather than dispersed. It is interesting to note that for larger decision tasks the negative impact of clustered weights is smaller. Adding overlap bits also reduces the negative impact of clustered weights.

Overall, it is possible to state that past results were accurate for the cases analyzed; however, the reasons given were not necessarily accurate, as shall be seen later in this study. The results obtained for this study demonstrate the importance of being able to study a larger data set, as well as the importance of studying various aspects at the same time.


Chapter VI: New Results


The first step after confirming both the validity of past results and the accuracy of the present simulations was to perform a visual analysis of the data from the simulation runs. From this visual analysis, a pattern to organizational learning and performance became apparent. It can clearly be seen that the results for the cumulative averages give a much smoother curve over time, while both the smoothed and raw results for a window give a somewhat ragged curve. A further analysis of the curves shows three important points in the learning process: (a) the point when learning begins in earnest, (b) the point when organizational learning slows, and (c) the point when learning stabilizes.

Figure 6 - Organizational Performance Over Time (cumulative average and smoothed performance over 10,000 decisions)

The point when learning begins in earnest is defined as the last time the organizational performance is less than 52% or the cumulative average is less than 51%.


Figure 7 - Organizational Performance Over Time (cumulative average and smoothed performance over the first 200 decisions)

This can clearly be seen in Figure 7. The second point, where organizational learning slows substantially, can be readily ascertained by finding the point at which the difference between the performance at a given time and the cumulative average up to that time is maximized. Stable performance can be determined as the point when the average performance increase of the organization over 1,000 decisions is less than 0.2%.
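The sketch below shows one way these three points might be located in a recorded performance series (one value per 10 decisions). The thresholds follow the definitions above, but the function name, the sign convention for the gap, and the use of the total gain over 1,000 decisions as the "average increase" are illustrative assumptions.

    # Hypothetical detection of the three learning-phase points.
    def learning_points(perf, cum_avg, step=10):
        # (a) learning begins: last index where performance < 52% or cumulative average < 51%
        begin = max((i for i, (p, a) in enumerate(zip(perf, cum_avg))
                     if p < 52.0 or a < 51.0), default=0)
        # (b) learning slows: largest gap between current performance and the cumulative average
        slow = max(range(len(perf)), key=lambda i: perf[i] - cum_avg[i])
        # (c) stability: first index where the gain over 1,000 decisions (100 recorded points)
        #     drops below 0.2 percentage points
        window = 1000 // step
        stable = next((i for i in range(window, len(perf))
                       if perf[i] - perf[i - window] < 0.2), len(perf) - 1)
        return begin * step, slow * step, stable * step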


A curve-fitting analysis of the performance curve shows that the portion of the learning curve from the beginning through the point when learning slows is best modeled by a cubic function. The curve for average performance after learning has begun can be approximated using an exponential function, taking the form shown in Formula 2:

    performance = e^(b_0 - b_1/t)    (2)

It should be noted that as t (time) increases and approaches infinity, the maximum performance approaches e^(b_0). This allows the maximal performance of any simulated organization to be determined, even if it has not reached its maximum performance after 100,000 decisions. When the curve over the last 10,000 decisions is analyzed, the average error of the curve-fitting function is always less than .004%, and typically less than .0015%. An example of the trend line of the original data and the fitted curve is shown in Figure 8.

Figure 8 - Actual Versus Fitted Data, Last 10,000 Decisions (raw data and fitted curve, percentage of correct decisions)
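A minimal sketch of fitting Formula 2, assuming NumPy is available, is shown below. It linearizes the formula as ln(performance) = b_0 - b_1/t, so an ordinary least-squares fit of ln(p) against 1/t recovers b_0 and b_1, and e^(b_0) estimates the performance limit. The study's actual fitting procedure may differ; the function name and the synthetic example values are illustrative only.

    import numpy as np

    def fit_learning_limit(decisions, performance):
        t = np.asarray(decisions, dtype=float)        # e.g. 90,010, 90,020, ..., 100,000
        p = np.asarray(performance, dtype=float)      # percent correct at each point
        slope, intercept = np.polyfit(1.0 / t, np.log(p), 1)
        b0, b1 = intercept, -slope                    # ln(p) = b0 - b1 * (1/t)
        return b0, b1, np.exp(b0)                     # exp(b0) is the predicted maximum

    # Synthetic usage example: points drawn from a curve approaching 68.6% with b1 = 500
    t = np.arange(90_010, 100_001, 10)
    p = np.exp(np.log(68.6) - 500.0 / t)
    print(fit_learning_limit(t, p))                   # recovers roughly (ln 68.6, 500, 68.6)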

While these general findings are interesting, this research seeks to describe the learning curve for each of the organizational structures in mathematical form. These formulae allow the clear testing of the hypotheses presented in Chapter 4.

The results for each hypothesis are examined using both the limit values and the behavior during the learning process, under both generalized and localized feedback. Because this study uses three organizational structures and three information weighting schemes, there are a total of 18 different organization types. The numerical variables used to generate the simulations are presented in Table XIV. Table XV presents the variables which are calculated from these first 10 variables and are used in order to attempt to capture some of the composite effects. Table XVI presents the variables which are either computed directly or derived from the statistical analysis of the simulation results.
Table XIV - Variables Used

AGENTS    - The number of agents that exist in a given organizational structure
BPAGENTS  - The number of bits that each agent in a given organizational structure sees
MM1       - The number of first-level middle managers
MM2       - The number of second-level middle managers
MM3       - The number of third-level middle managers
OVERLAP   - The number of bits that an agent sees which are also seen by other agents
INCOINFO  - The percentage of information that is incorrectly read into the problem
MISSINFO  - The percentage of information which is unavailable
INCOFEED  - The percentage of time that incorrect feedback is given
MISSFEED  - The percentage of time that feedback is not received

Table XV - Calculated Parameters Based on Original Parameters

TOTINFO   - INCOINFO + MISSINFO
TOTFEED   - INCOFEED + MISSFEED
TOTMISS   - MISSINFO + MISSFEED
TOTINCO   - INCOINFO + INCOFEED
TOTERROR  - INCOINFO + MISSINFO + INCOFEED + MISSFEED
PROBBITS  - The total number of bits of information in the problem task
TOTBITS   - The total number of bits of information seen by each agent, including shared bits
TOTSEEN   - The total number of bits of information seen by each agent, including shared bits, multiplied by the number of agents
PCTBITS   - The percentage of the problem that an agent has primary responsibility for
REDUND    - The percentage of information which is seen by more than one agent
PCTBITEX  - The percentage of the information seen by one agent that is not shared
PCTBITSN  - The percentage of information seen by one agent (including sharing) as compared to what would be seen without sharing
PROBEX    - The percentage of problem bits which are seen by only one agent
PCTPROB   - The amount of information seen by all agents as compared to what would be seen without sharing
DMUPERMM  - The number of agents per first-level manager
PCTDMU    - The percentage of agents reporting to each member of the first level of middle management


Table XVI - Variables Calculated from the Simulation Results

BOSLASTK   - The b0 for the exponential function over the last 10,000 decisions
PCTLEARN   - The maximum learning potential as predicted by BOSLASTK
LASTAVG52  - The last time the cumulative average performance was below 52%
LASTSMT53  - The last time the performance at that point in time was below 53%
LASTSMT60  - The last time the performance at that point in time was below 60%
LASTAVG60  - The last time the cumulative average performance was below 60%
STABLENUM  - The point in time where learning is defined as approximating stability
STABLEVAL  - The value of performance when STABLENUM is reached
B0C1, B1C1, B2C1, B3C1 - The beta coefficients for the cubic function which runs from the beginning of learning to when learning stabilizes
MAXGAP     - The time when the maximum difference exists between the cumulative average and the performance at that point in time
MAXGAPAM   - The value of the difference between the average performance and the performance at time MAXGAP
LASTCROSS  - The last time the average performance and smoothed performance curves cross
BOSALL, B1SALL - The beta coefficients for the exponential function approximating the entire curve after learning begins
BOSASTAB, B1SASTAB - The beta coefficients for the exponential function approximating the curve after learning stabilizes

In order to determine what factors determine the behavior of learning, backward-elimination stepwise regression analysis was used, with the variables listed in Tables XIV, XV, and XVI. This method was preferred because it starts from all independent variables and then excludes those found to be statistically insignificant in the determination of the dependent variable. The SPSS code used to perform these analyses can be found in Appendix 2. Three questions were of particular interest: (1) when does learning begin in earnest? (2) when does learning stabilize? and (3) what is the maximum predicted learning for each organization? As the factors which are important in determining each of these points are identified, it is possible to clearly confirm or reject each of the hypotheses. The complete statistical results of these regression analyses are presented in Appendix 3, and the summary results are presented in Table XVII. Each row in the table presents one hypothesis, with results for both generalized and localized feedback. The columns present (a) the time when learning begins, (b) the time and value when stability occurs, and (c) the theorized maximum value, derived from Formula 2.
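For readers unfamiliar with the procedure, the sketch below illustrates backward elimination; the study itself used SPSS (Appendix 2), so this statsmodels-based version, its function name, and the significance threshold are illustrative assumptions only.

    # Hypothetical backward-elimination sketch; `frame` is a pandas DataFrame of the
    # variables in Tables XIV-XVI and `target` is, e.g., "STABLENUM" or "PCTLEARN".
    import pandas as pd
    import statsmodels.api as sm

    def backward_eliminate(frame: pd.DataFrame, target: str, alpha: float = 0.05):
        predictors = [c for c in frame.columns if c != target]
        y = frame[target]
        while predictors:
            X = sm.add_constant(frame[predictors])
            fit = sm.OLS(y, X).fit()
            pvals = fit.pvalues.drop("const")          # ignore the intercept
            worst = pvals.idxmax()
            if pvals[worst] <= alpha:                  # everything remaining is significant
                return fit, predictors
            predictors.remove(worst)                   # drop the least significant variable
        return None, []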
Missing feedback is found to have a minimal impact on organizational learning. Hierarchies are impacted more often than other organizational forms, while expert teams are never impacted. The first hypothesis, that hierarchies are least impacted by missing feedback, is therefore rejected.

With generalized feedback, hierarchies are more negatively affected by incorrect feedback than when localized feedback is given.

Note: Due to space constraints in Table XVII, EX, HI, and MA abbreviate the three fundamental organizational structures (expert team, hierarchy, and majority team), and UN, CL, and DI reflect the weighting scheme (uniform, clustered, and dispersed) used for the organizations discussed.


Table XVII - Summary of Regression Results (for each factor, the findings are listed across the time when learning begins, the stability speed and value, and the limit value, under generalized and localized feedback; the full regression results appear in Appendix 3)

Missing feedback (MISSFEED):
- Negative impact on MAUN
- Slight negative impact for HIUN
- Slows stability for MAUN; no impact on other structures
- No impact on any organizational structure
- Negative impact on HICL
- No impact at all
- Slows stability for HICL; no impact on other structures
- No impact, except a slight positive impact on HIUN

Incorrect feedback (INCOFEED):
- Negative impact on EXDI, HICL and HIUN
- Negative impact for all hierarchies and EXDI
- Negative impact only on HICL and HIUN
- Minimal negative impact on HICL, HIDI and MADI
- Negative impact on all expert teams and MAUN
- Slight negative impact on EXDI, HIDI, HIUN and MAUN
- Negative impact on all expert teams and MAUN
- Minimal negative impact on all expert structures and all structures with uniform weights

Missing information (MISSINFO):
- Positive impact on all hierarchies; positive impact on MADI and MAUN
- No impact on expert teams; positive impact for hierarchies and a larger positive impact for majority teams
- Positive impact on all hierarchies and majority teams; impact higher on hierarchies
- No impact on expert teams; negative impact on all majority teams and a larger impact on hierarchies
- Positive impact on all expert teams and MACL
- Negative impact on all organizational structures
- Positive impact on all expert teams and on MACL; no impact on hierarchies
- Negative impact on all organizational structures; coefficients lowest for hierarchies

Incorrect information (INCOINFO):
- Negative impact on all hierarchies; negative impact on MADI, MAUN and EXUN
- Negative impact on all hierarchies and majority teams; slight negative impact on EXCL
- Negative impact on all hierarchies
- No impact on EXUN; slight negative impact on EXCL and EXDI; larger impact on hierarchies and largest impact on majority teams
- Slightly negative for expert teams; large impact on hierarchies; no impact on majority teams
- Negative impact on all structures; majority teams least affected, expert teams most affected
- Negative impact on all expert teams and MACL
- Negative impact on all structures; majorities impacted least, with expert teams impacted most

Additional layers of middle management:
- Negative for HICL
- All hierarchies negatively impacted
- Negative impact on hierarchies with uniform and dispersed weights
- Negative impact on performance for all hierarchies
- Negative for HICL
- All hierarchies negatively impacted
- Negative impact on hierarchies with clustered weights
- Negative performance for all hierarchies; performance penalty greater than for generalized feedback


Because hierarchies are negatively affected regardless of the aspect examined, the second hypothesis, which posited that hierarchies are more resilient in the face of incorrect feedback, is rejected in the case of generalized feedback. While hierarchies are negatively impacted even with localized feedback, the impact is greater on majority teams and greater still on expert teams. The second hypothesis is therefore sustained in the presence of localized feedback.
Missing information actually assists hierarchies using generalized feedback in beginning to learn and in reaching stability. This comes at a price, however: maximum learning is stunted relative to other organizational forms. Therefore, the third hypothesis, that hierarchies could better deal with missing information, is partially supported: missing information is helpful until relative stability is reached, after which it becomes a hindrance to hierarchies. If organizations have a localized feedback mechanism, hierarchies appear to hold no particular advantage or disadvantage relative to other organizational forms, and under localized feedback the third hypothesis is therefore rejected.
While the hypothesis was that hierarchies would be more resilient in the face of incorrect information, the results show that hierarchies are impacted more than any other organizational form when generalized feedback exists, and more than majority teams even when localized feedback exists. This leads to the rejection of the fourth hypothesis.
Finally, it had been hypothesized that adding layers of middle management would improve initial learning speed, yet ultimately reduce the maximum performance of the organization. The results show that additional layers of management negatively affect all aspects of organizational performance, regardless of the feedback mechanism. The fifth hypothesis must therefore also be rejected.


Through these results it is evident that hierarchies do not necessarily behave as expected, with resiliency in the face of information-processing difficulties. It is necessary to ask: why not? While not pretending to have a definitive answer to that question, it should be noted that hierarchies seem to perform better with less information overall, as well as with less information per agent, and missing information can even be positive for their initial learning process. Is it possible that their initial performance gains are nothing other than the benefits of a divide-and-conquer game? The results in this study certainly suggest so. It appears that the initial performance advantage the hierarchies hold is a function of each agent seeing fewer bits of information. However, this information segmentation and the resulting information loss may be the cause of their lower maximum performance. Further research is needed to understand the reasons behind this behavior.


Chapter VII: Model Applications


The results of this study show that there are significant differences in organizational performance, and that these differences depend both on the organizational structure and on how the information is processed. These results confirm the now-common belief that no single information-processing structure is best. What is important today is to find ways to select the appropriate structure from the myriad of choices.

Carley and Svoboda (1996) have suggested that organizations can adapt over time, eventually finding the optimal organizational form for the decision task. While this may be true, the process is time-consuming and can have significant financial costs. Most organizations are unwilling and/or unable to invest the time, energy, and money to reorganize their processes multiple times in search of the ideal solution, making this approach impractical.
Because this research is most readily applied to an organization composed of intelligent agents, the examples focus on settings where intelligent agents are needed. Several authors have recently discussed the use of computer-based agents in decision-making in various areas, including e-commerce (Ma, 1999; Maes, Guttman and Moukas, 1999), battle management (Baxter and Hepplewhite, 1999), and workflow (Banerjee, Chrysanthis, and Pollack, 1999), as well as many others. Each of these authors points out the usefulness of intelligent agents, yet no mention is made of how to structure the organization and/or the information to maximize performance.
Maes, Guttman and Moukas (1999) clearly show that electronic agents are capable of assisting in the commercial process, especially where the decision to be made is binary, such as whether to proceed or not, or whether to deal with a given vendor or not. It is interesting to note that, as has been pointed out, learning is extremely slow as the volume of information seen by any agent increases. This problem demands a solution, such as using multiple agents for any one function. In e-commerce there are several points where this can be valuable: whether it is the decision to sell or buy at a given price, or to judge whether an item is appropriate or not, many areas of commerce rely on binary choices, making intelligent agents a very useful tool. The need to coordinate the task and to structure the communication between the various agents leads to the use of this model.
A second significant area where binary decision-making is often needed is battle management, where the decision to shoot or not is far less trivial than whether to purchase or not. In a battlefield context there is a greater number of sources and forms of information, resulting in more information than can be effectively processed by any single agent. Because of this, agents must be organized so that their information processing is efficient and effective and appropriate decisions can be made. In this situation, again, the model presented here would be a useful tool in the design of the various processes.
A third example of where intelligent agents need structured communications and coordination is workflow systems. Agents in workflow systems need to make decisions in order to route task information to the appropriate task, as well as to make decisions during each of the tasks which may be part of the system. Traditionally these routing decisions were based on relatively simple decision rules. Banerjee, Chrysanthis, and Pollack (1999) discuss how intelligent agents could be used to enable more complex decision rules. They also point out that many of the functional decisions made by people in the past could potentially be assigned to intelligent agents, enabling many processes to be completed in a shorter time frame. If intelligent agents in workflow are studied as an organizational structure, the results of this model can again be used to assist in the structuring of the process. In addition, it is quite probable that many of the decisions that need to be made require large amounts of information, necessitating an array of agents to accomplish a single decision task.
In addition to the applications this model can have in designing organizations of intelligent agents, this research can inform organizational design in traditional organizations. For example, it can inform managers during the redesign of current business processes and the design of new processes. The model can also be applied to securities markets, where the decision to buy or sell depends on the ability to effectively process large volumes of information. Finally, this model can be applied to personal security or policing, where human agents must decide whether or not to investigate a situation further.
In today's environment, the final objective is to be cost-effective. While it may be possible to design an organization which will perform better, this often comes at a cost. The results from this study need to be combined with the cost model published by Mihavics and Ouksel (1996). In this cost model there are seven distinct costs: processing time, interprocess delay, processing cost, communications cost, delay costs, personnel costs, and link-outage costs. It is useful to notice that each of these costs still exists when organizations are composed of intelligent agents. What has changed is the cost factor, since some of these costs may be substantially lower in an organization of intelligent agents.


Regardless of whether the model is applied to intelligent agents or individuals, it is important to remember that its focus is on the rational organization and its information processing, not on its behavioral aspects.


Chapter VIII: Conclusions and Further Research


This study enhances the understanding of the impact of organizational design on organizational performance in several ways. First of all, it has been demonstrated that the OMC model is a robust model, and one that is a useful tool for this kind of study. It has also been demonstrated that past results were accurate. It has been noted, however, that in several cases these results held only for a limited set of organizations.

It is important to note that, through a series of simulations, it was possible to determine that the simulated learning curve follows the same pattern as a typical learning curve, with three phases: start-up, learning, and stabilization. Not only do these three phases exist, but it is also known how they can be approximated, and what the key factors are in determining at what point in time they occur. The results also show that it is possible to reasonably approximate the maximum performance through an exponential function. This enables the calculation of the maximum possible learning for an organization, regardless of whether the maximum performance has been achieved during a simulation run.
The ability to forecast outcomes for a given organization within the model without running simulations is a significant step forward. The possibility of understanding the impact of various organizational design parameters, both on the maximum organizational performance and on the curve leading to that performance, enables an a priori evaluation of some of the results of changing an organization from one design to another. Organizations could combine the ability to determine a priori the learning characteristics of a proposed organizational design, derived from this study, with the information-processing costs for that same design, using the cost measures (production, coordination, and vulnerability costs) presented by Mihavics and Ouksel (1996). This combination could be a valuable tool for determining which of the proposed alternative organizational structures would in fact be financially beneficial, before resources have to be invested in the implementation of the new design.
The hypotheses were largely unsupported, in large part because prior results were correct only for a limited set of organizations. As the number of cases and the time frame are expanded, results which might have been true for one case are not true in another. This reaffirms the need to examine more complete study designs before making assertions about organizational design and structure.
While this study increases the understanding of how organizational design impacts organizational learning and performance, there are at least three areas in which further research is necessary but was beyond the scope of this project. First, the impact of various decision functions needs to be evaluated; concepts such as information interdependence (Mihavics and Ouksel, 1996) and data mining must be explored, especially as databases become larger and more pervasive. Second, the impact of primacy and recency effects needs to be studied, focusing on the possibility that agents adapt their decision functions. Finally, fieldwork would be necessary to test the validity of the simulation results, with both intelligent agents and human agents.


Cited Literature
Argote, Linda. "Group and organizational learning curves: Individual, system and environmental components." British Journal of Social Psychology, 32: 31-51.
Argote, Linda. Organizational Learning: Creating, Retaining and Transferring Knowledge. Boston: Kluwer Academic Publishers, 1999.
Baker, Wayne E. "The Network Organization in Theory and Practice." Nohria, Nitin and Robert G. Eccles, eds. Networks and Organizations: Structure, Form and Action. Boston, MA: Harvard Business School Press. 1992.
Baligh, Helmy H., Richard M. Burton and Borge Obel. "Design of Organizational Structures: An Expert System Method." Roos, J. L., ed. Economics and Artificial Intelligence. Oxford: Pergamon Press. 1987.
Baligh, Helmy H., Richard M. Burton and Borge Obel. "Devising Expert Systems in Organization Theory: The Organizational Consultant." Masuch, Michael, ed. Organization, Management, and Expert Systems. Berlin: Walter De Gruyter. 1990, 35-57.
Banerjee, Sujata, Panos K. Chrysanthis and Martha E. Pollack (1999), "Converging Technologies: Workflow Management and Plan Management."
Baxter, Jeremy and Richard Hepplewhite. "Agents in Tank Battle Simulations," Communications of the ACM. March 1999 (42:3), 74-75.
Burton, Richard M. and Borge Obel. Designing Efficient Organizations: Modeling and Experimentation. Amsterdam: Elsevier Science. 1984.

Carley, Kathleen. CCOR Workshop, Carnegie-Mellon University. Pittsburgh, PA: 1996.


Carley, Kathleen and Zhiang Lin. "A Theoretical Study of Organizational Performance under Information Distortion." Management Science. October, 1997.
Carley, Kathleen M. "Computational and Mathematical Organization Theory: Perspective and Directions." Computational and Mathematical Organization Theory. 1, 1, 1995, 39-56.
Carley, Kathleen. "Coordinating for Success: Trading Information Redundancy for Task Simplicity." Proceedings of the 23rd Annual Hawaii International Conference on System Sciences. 1990.

Carley, Kathleen. "Organizational Learning and Personnel Turnover." Organization Science. 3,1,
February, 1992, 20-46.
Carley, Kathleen. Designing Organizational Structures to Cope with Communication Breakdowns:
A Simulation Model, Industrial Crisis Quarterly, 5, 19-57.
Carley. Kathleen M. and Zhiang Lin. "Organizational Designs Suited to High Performance Under
Stress." IEEE Transactions on Systems. Man. and Cybernetics. 25. 2. February. 1995.
221-230.
Carzo, Rocco and John N. Yanouzas. Formal Organization: A Systems Approach. Homewood. IL:
Richard D. Irwin. 1967.
Cascio, Wayne F. Guide to Responsible Restructuring. Washington, D.C.: U.S. Department of Labor, 1995.
Cascio, Wayne F., Clifford E. Young and James R. Morris. "Financial Consequences o f
Employment-Change Decisions in Major U.S. Corporations." Academy o f Management
Journal. 40, 5, October, 1997. 1175-1189.
Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates. 1988.
Cyert, Richard M. and James G. March. A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1963.
Davenport, Thomas H. Process Innovation: Reengineering Work through Information Technology.
Boston, M A: Harvard Business School Press. 1993.
Davenport, Thomas H. "The Fad that Forgot People." Fast Company. 1,1, February. 1996.70-74.
Davenport, Thomas H. and James E. Short. "The New Industrial Engineering: Information
Technology and Business Process Redesign." Sloan Management Review. 31, Summer
1990, 11-27.
Davis, Stanley et al. Matrix. Reading, M A: Addison-Wesley, 1977.
Drucker, Peter F. The Age o f Discontinuity: Guidelines to our Changing Society. New York:
Harper & Row. 1969.
Drucker, Peter F. Management: Tasks, Responsibilities, Practices. New York: Harper & Row.
1974.


Evaristo, Roberto, Carl Adams and Shawn Curley. "Information Load Revisited: A Theoretical
Model." DeGross, Jan and et al eds. Proceedings o f the Sixteenth International Conference
on Information Systems. Amsterdam. The Netherlands: 1995,197-206.
Fayol, Henri. Administration Industrielle et Générale: prévoyance, organisation, commandement, coordination, contrôle. Paris: Dunod. 1918.
Galbraith, Jay. Designing Complex Organizations. Reading, MA: Addison-Wesley Publishing
Company. 1973.
Galbraith, Jay R. Organizational Design. Reading, M A: Addison-Wesley Pub. Co.. 1977.
Galbraith, Jay. "Organization Design: An Information Processing View." Interfaces. 4,3,1974,
28-36.
Hailey, Alexis A. "Downsizing." The Meridian International Institute on Governance, Leadership,
Learning, and the Future. Washington, D. C.: 1995.
Hammer, Michael and James Champy. Reengineering the corporation: a manifesto fo r business
revolution. New York: HarperCollins, 1993 .
Herzberg, Frederick. The Motivation to Work. New York: Wiley. 1959.
Herzberg, Frederick. Work and the Nature of Man. New York: Thomas Y. Crowell. 1966.
Johansen, Robert and Rob Swigart. Upsizing the Individual in the Downsized Organization:
Managing in the wake o f reengineering, globalization, and overwhelming technological
change. Reading, M A: Addison-Wesley, 1994.
Keen, Peter G. W. Shaping the Future: Business Design through Information Technology.
Cambridge, M A: Harvard Business School Press. 1991.
Kendall, Maurice. Multivariate Analysis, 2nd ed. New York, NY: Macmillan Publishing Company. 1980.

Krackhardt, David. CCOR Workshop. Pittsburgh, PA: Carnegie-Mellon University. 1996.


Krackhardt. David. "Graph Theoretical Dimensions o f Informal Organizations." Carley, Kathleen
and M. J. Prietula eds. Computational Organization Theory. Hillsdale, NJ: Lawrence
Erlbaum Associates. 1994.
Krackhardt, David and Jeffrey R. Hanson. "Informal Networks: the Company Behind the Chart."
Harvard Business Review. 71, July-August 1993, 104-111.


Lawrence, Paul R. and Jay Lorsch. Organization and Environment; Managing Differentiation and
Integration. Homewood, IL: Richard D. Irwin. 1967.
Lawrence, Paul R. and Jay Lorsch. Developing Organizations: Diagnosis and Action. Reading,
M A: Addison-Wesley, 1969.
Levitt, Barbara and James G. March. "Organizational Learning." American Review o f Sociology.
1988.
Lin, Zhiang and Kathleen M. Carley. "DYCORP: A Computational Framework for Examining
Organizational Performance Under Dynamic Conditions." Journal o f Mathematical
Sociology. 20, 1995, 193-217.
Lin, Zhiang and Kathleen M. Carley. "Proactive or Reactive: An Analysis o f the Effect o f Agent
Style on Organizational Decision-making Performance." International Journal o f Intelligent
systems in Accounting, Finance and Management. 2,4, December. 1993, 271-287.
Ma, Moses. Agents in E-Commerce," Communications o f the ACM, March 1993 (42:3), 81.
Maes, Pattie, Robert H. Guttman and Alexandras G. Moukas, Agents that Buy and Sell,"
Communications o f the ACM, March 1993 (42:3), 82-83.
Mackenzie, Kenneth D. Organizational Structures. Arlington Heights, IL: AHM Publishing Corp..
1978.
Malone, Thomas W. "Modeling Coordination in Organizations and Markets."
Science. 33,10,1987,1317-1332.

Management

March, James G. and Herbert A. Simon. Organizations. New York: John Wiley & Sons, Inc..
1958.
Martin, James. Cybercorp: The new Business Revolution. New York: American Management
Association. 1996.
Maslow, Abraham. Motivation and Personality. New York: Harper. 1954.
Mayo, Elton. The Human Problems o f an Industrial Civilization. New York: The Macmillan
Company. 1933.


McGregor, Douglas. The Professional Manager. New York: McGraw-Hill Book Company, 1967.
Mihavics, Kenneth. A Model for the Study of the Effects of Organization Structure on Organizational
Learning. University of Illinois at Chicago, 1995.
Mihavics, Ken W. and Aris M. Ouksel. "Learning to Align Organizational Design and Data."
Journal of Computational and Mathematical Organization Theory. 1, 2, 1996.
Mintzberg, Henry. Structure in Fives: Designing Effective Organizations. Englewood Cliffs, NJ:
Prentice Hall, 1983.
Moingeon, Bertrand and Amy Edmondson. Organizational Learning and Competitive Advantage.
Thousand Oaks, CA: Sage Publishers, 1996.
Morton, Michael S. The Corporation of the 1990s: Information Technology and Organizational
Transformation. New York: Oxford University Press, 1991.
Morvat, J. "The Problem Solvers: Quality Circles." In Robson, M., ed., Quality Circles in Action.
Aldershot, UK: Gower Publishing, 1984.
Nolan, Richard L. and David C. Croson. Creative Destruction: A Six-Stage Process for
Transforming the Organization. Boston, MA: Harvard Business School Press, 1995.
Ouksel, Aris M. and Mathias Klusch. "Dynamic Coalition Formations of Software Agents in
Cooperative and Non-Cooperative Environments." Working Paper, University of Illinois at
Chicago, 1999.
Ouksel, Aris M., Kenneth W. Mihavics and Kathleen M. Carley. "A Mathematical Model of
Organizational Performance." University of Illinois at Chicago, 1996.
Peters, Thomas J. and Robert H. Waterman. In Search of Excellence: Lessons from America's
Best-run Companies. New York: Warner Books, 1982.
Peters, Thomas J. and Nancy Austin. A Passion for Excellence: The Leadership Difference. New
York: Random House, 1985.
Peters, Thomas J. Liberation Management: Necessary Disorganization for the Nanosecond
Nineties. London: Macmillan, 1992.
Reuters. Dying for Information? An Investigation into the Effects of Information Overload in the
USA and Worldwide. London, England: Reuters Limited, 1996.


Savage, Charles M. 5th Generation Management. Bedford, MA: Digital Press, 1990.
Schein, Ed. "Three Cultures of Management: The Problem of Managing Across Conceptual
Boundaries." MIT: 1996.
Scott, W. Richard. Organizations: Rational, Natural, and Open Systems, 3d ed. Englewood
Cliffs, NJ: Prentice Hall, Inc., 1992.
Senge, Peter M., Art Kleiner, Charlotte Roberts and Richard B. Ross. The Fifth Discipline
Fieldbook. New York: Doubleday, 1994.
Senge, Peter M. The Fifth Discipline: The Art and Practice of the Learning Organization. New York:
Doubleday Currency, 1990.
Shenk, David. Data Smog: Surviving the Information Glut. London: Abacus, 1997.
Simon, Herbert A. "Administrative Decision Making." Public Administration Review. 65, March
1965, 31-37.
Taylor, Fred Manville. Principles of Economics. Ann Arbor, MI: University of Michigan, 1911.
Vahtera, Jussi, Mika Kivimäki and Jaana Pentti. "Effect of Organisational Downsizing on Health
of Employees." Lancet. 350, October 18, 1997, 1124-1128.
Vroom, V. and P. Yetton. Leadership and Decision Making. Pittsburgh, PA: University of
Pittsburgh Press, 1973.
Vroom, Victor H. and Arthur G. Jago. "Decision Making as a Social Process: Normative and
Descriptive Leader Behavior." Decision Sciences. 5, 1974, 743-755.
Walton, Mary. The Deming Management Method. New York: Putnam, 1986.
Weber, Max. The Theory of Social and Economic Organization. New York: Oxford University
Press, 1947.
Woodward, Joan. Industrial Organization: Theory and Practice. London: Oxford University
Press, 1965.
Ye, Mei and Kathleen M. Carley. "Radar-SOAR: Towards an Artificial Organization Composed
of Intelligent Agents." Journal of Mathematical Sociology. 20, 1995, 219-246.


Appendix 1
The programs for the simulations use three program files and one text file, in addition to a
temporary file and the simulation results. The first text file contains the parameters of
all the organizational structures to be simulated. This text file is read by a stub (stub.c), which begins
each of the organizational simulations by calling simulate.c. This second program then generates
a temporary text file containing the information-processing structure of the simulated
organization, using the KrackPlot format (a standard format for recording social network links). The
second program then calls the simulator (start.out), which performs the simulation of the
organization and writes out the results. Because two different feedback mechanisms were simulated,
two different sets of simulator source exist: startgen.c and startloc.c.
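As a point of reference, one line of the parameter file might look like the sketch below; the numbers and the output file name (run01) are purely illustrative, but the field order follows the arguments parsed by simulate.c: agents, bits per agent, overlap, the three middle-manager counts, the output file name, instances, decisions per instance, the structure code, the weighting code, and the four missing/incorrect information and feedback percentages.

    9 5 1 3 1 0 run01 1 100000 1 1 0 0 0 0

stub.c appends each such line to the command ./simulate.out and passes the result to system(), so every line in the parameter file produces one simulated organization.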

stub.c
#include <sys/stat.h>
#include <sys/types.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
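/* stub.c reads doit.txt one line at a time, appends each line to the command
   ./simulate.out, and executes it; if doit.txt is modified while the runs are
   in progress, the file is reopened so that new parameter lines are picked up. */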

int main(void)
{
struct stat statbuf, statbufnew;
time_t orig_time;
char fname[23];
char textbuf[200];
char commline[200];
FILE *stream;
/* open a file for update */
strcpy(fname, "doit.txt");
stream = fopen(fname, "r");
printf(fname);


/* get information about the file */


fstat(fileno(stream), &statbuf);
/* read one line at a time */
while (1) {
orig_time = statbuf.st_mtime;
if (feof(stream))
return 0;
fgets(textbuf, 200, stream);
strcpy(commline, "./simulate.out "); /* trailing space assumed so the parameters form separate arguments */
strcat(commline, textbuf);
system(commline);
stat("doit.txt", &statbufnew);
if (statbufnew.st_mtime != orig_time)
{
fclose(stream);
stream = fopen(fname, "r");
fstat(fileno(stream), &statbuf);
}
} /* end while */
}


Simulate.c
#include <stdlib.h>
#include <stdio.h>
#include <ctype.h>
#include <math.h>
#include <time.h>
#include <string.h>
#include <unistd.h> /* added for access(); not shown in the scanned copy */
#ifdef DOS
#include <dos.h>
#else
#include <sys/types.h>
#endif
#define mmlevels    1
#define AGENTMAX    82   /* agents + 1 */
#define MAXBITS     12   /* bpagent + bleft + bright + 1 */
#define AGENTPOSS   2049 /* 2^maxbits + 1 */
#define LEADOPTIONS 2049 /* max(2^midmanagers + 1, 2^agents + 1) */
#define MMOPTIONS   2049 /* 2^mmagents + 1 */
#define MAXTOTBITS  892  /* agents * bpagent + 1 */

/* ------------------ 02-25-96 11:41am  Definitions of global variables ------------------ */
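/* simulate.c builds the information-processing structure of one simulated
   organization from its command-line parameters, writes that structure to a
   KrackPlot-format file (<name>.kp), invokes ./start.out on it, and then
   deletes the temporary .kp file. */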
int main(int argc, char *argv[]){
int x, agentsper;
int initialrun = 1;
int dmu, bpdmu, overlap, mmgrs1, mmgrs2, mmgrs3, instances;
int howmanydec, strutodo, wttodo;
int infomiss, infoerror, feedmiss, feederror, manager;
char fname[25];
char kpout[30];
int i, j, totalcount, midpoint, balanced;
FILE *kpoutf;
int missing[4];
unsigned short evidstr[1200][1200];
int realweights[1000];
int totalrecs, begbit, firstrow, firstcol;
int dmusper[30];


int realbits, avgbits, midbits, bit, avgdmus;
char commline[200];
dmu = atoi(argv[1]);
bpdmu = atoi(argv[2]);
overlap = atoi(argv[3]);
mmgrs1 = atoi(argv[4]);
mmgrs2 = atoi(argv[5]);
mmgrs3 = atoi(argv[6]);
manager = 1; /* assumed default of one top manager; the initialization is not legible in the scanned copy */
if (mmgrs3 < 3)
manager = 0;
strcpy(fname, argv[7]);
strcpy(kpout, fname);
strcat(kpout, ".kp");
instances = atoi(argv[8]);
howmanydec = atoi(argv[9]);
strutodo = atoi(argv[10]);
if (strutodo < 3)
mmgrs1 = 1;
wttodo = atoi(argv[11]);
infomiss = atoi(argv[12]);
infoerror = atoi(argv[13]);
feedmiss = atoi(argv[14]);
feederror = atoi(argv[15]);
printf("File Name      ");
printf(fname);
printf("\n");
printf("Decision Type: %d\n", strutodo);
printf("Agents         %d \n", dmu);
printf("Weights        %d \n", wttodo);
printf("Overlap        %d \n", overlap);
printf("Bits           %d \n", bpdmu);
printf("Midmanagers #1 %d \n", mmgrs1);
printf("Midmanagers #2 %d \n", mmgrs2);
printf("Midmanagers #3 %d \n", mmgrs3);
printf("Instance");
if (access(fname, 0) == 0)
exit(0);
totalcount = bpdmu*dmu + dmu + mmgrs1 + mmgrs2 + mmgrs3 + manager;
kpoutf = fopen(kpout, "w");
fprintf( kpoutf, "%3d\n", totalcount);


realbits = dmu * bpdmu;


avgbits = realbits/3+.5;
midbits = realbits - 2 * avgbits;
switch(wttodo)
{
case 1: for(bit=1; bit<=realbits; bit++)
realweights[bit] = 1;
break;
case 2: for(bit=1; bit<=avgbits; bit++)
realweights[bit] = 9;
for(bit=avgbits+1; bit<=(avgbits+midbits); bit++)
realweights[bit] = 1;
for(bit=(avgbits+midbits+1); bit<=realbits; bit++)
realweights[bit] = 5;
break;
case 3: for(bit=1; bit<=realbits; bit+=3)
{
realweights[bit] = 1;
if (bit+1 <= realbits)
realweights[bit+1] = 5;
if (bit+2 <= realbits)
realweights[bit+2] = 9;
}
break;
} /* end switch weights */
for (i=0; i<1000; i++){
for (j=0; j<1000; j++){
evidstr[i][j] = 0;
}
}

for (i = 1; i<= realbits; i++){
fprintf( kpoutf, "25 25 EVID");
fprintf( kpoutf, "%d", i);
fprintf( kpoutf, " EVIDENCE ");
fprintf( kpoutf, "%2d", realweights[i]);
fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");

}

for (i = 1; i<=dmu; i++){
fprintf( kpoutf, "25 25 DMU");
fprintf( kpoutf, "%d", i);
fprintf( kpoutf, " MEMORY ");
fprintf( kpoutf, "%3d", 1); //weights
fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");
begbit = (i-1) * bpdmu + 1 - overlap;
for (j= begbit; j<=begbit + bpdmu + (overlap * 2) - 1; j++) {
if (j < 1)
bit = realbits + j;
else if (j > realbits)
bit = j - realbits;
else
bit = j;
evidstr[bit][realbits + i] = 1;
}
}

avgdmus = dmu/mmgrs1;
if (avgdmus%2 == 0)
avgdmus--; /* drop one if I wind up with an even number */
midpoint = mmgrs1/2 + 1;
missing[1] = dmu - avgdmus*mmgrs1;
firstrow = realbits + 1;
firstcol = realbits + dmu + 1;
for (i=1; i<=mmgrs1; i++){
dmusper[i] = avgdmus;
if ((missing[1] == 2) && (i == midpoint)){
dmusper[i] += 2;
}
if ((missing[1] == 4) && ((i == 1) || (i == mmgrs1))){
dmusper[i] += 2;
}
if ((missing[1] == 8) && ((i < 2) || (i >= mmgrs1 - 1))){
dmusper[i] += 2;
}
if ((missing[1] == 6) && ((i == 1) || (i == midpoint) || (i == mmgrs1))) {
dmusper[i] += 2;
}


fprintf( kpoutf, "25 25 MMGR1");
fprintf( kpoutf, "%d", i);
if (strutodo == 2)
fprintf( kpoutf, " VOTE ");
else
fprintf( kpoutf, " MEMORY ");
fprintf( kpoutf, "%3d", 1); //weights
fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");
for (j=firstrow; j<=dmusper[i] + firstrow - 1; j++){
evidstr[j][i + firstcol - 1] = 1;
}
firstrow += dmusper[i];
} /* end for all mmgrs1 */
if (mmgrs2 > 0){
avgdmus = mmgrs1/mmgrs2;
if (avgdmus%2 == 0)
avgdmus--; /* drop one if I wind up with an even number */
midpoint = mmgrs2/2 + 1;
missing[2] = mmgrs1 - avgdmus*mmgrs2;
firstrow = realbits + dmu + 1;
firstcol = realbits + dmu + mmgrs1 + 1;
for (i=1; i<=mmgrs2; i++){
dmusper[i] = avgdmus;
if ((missing[2] == 2) && (i == midpoint)){
dmusper[i] += 2;
}
if ((missing[2] == 4) && ((i == 1) || (i == mmgrs2))){
dmusper[i] += 2;
}
if ((missing[2] == 8) && ((i < 2) || (i >= mmgrs2 - 1))){
dmusper[i] += 2;
}
if ((missing[2] == 6) && ((i == 1) || (i == midpoint) || (i == mmgrs2))) {
dmusper[i] += 2;
}
fprintf( kpoutf, "25 25 MMGR2");
fprintf( kpoutf, "%d", i);
fprintf( kpoutf, " MEMORY ");


fprintf( kpoutf, "%3d", 1); //weights


fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");
for (j=firstrow; j<=dmusper[i] + firstrow - 1; j++){
evidstr[j][i + firstcol - 1] = 1;
}
firstrow += dmusper[i];
}
} /* end if mmgrs2 > 0 */
if (mmgrs3 > 0){
avgdmus = mmgrs2/mmgrs3;
if (avgdmus%2 == 0)
avgdmus--; /* drop one if I wind up with an even number */
midpoint = mmgrs3/2 + 1;
missing[3] = mmgrs2 - avgdmus*mmgrs3;
firstrow = realbits + dmu + mmgrs1 + 1;
firstcol = realbits + dmu + mmgrs1 + mmgrs2 + 1;
for (i=1; i<=mmgrs3; i++){
dmusper[i] = avgdmus;
if ((missing[3] == 2) && (i == midpoint)){
dmusper[i] += 2;
}
if ((missing[3] == 4) && ((i == 1) || (i == mmgrs3))){
dmusper[i] += 2;
}
if ((missing[3] == 8) && ((i < 2) || (i >= mmgrs3 - 1))){
dmusper[i] += 2;
}
if ((missing[3] == 6) && ((i == 1) || (i == midpoint) || (i == mmgrs3))) {
dmusper[i] += 2;
}
fprintf( kpoutf, "25 25 MMGR3");
fprintf( kpoutf, "%d", i);
fprintf( kpoutf, " MEMORY ");
fprintf( kpoutf, "%3d", 1); //weights
fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");


for (j=firstrow; j<=dmusper[i] + firstrow - 1; j++){
evidstr[j][i + firstcol - 1] = 1;
}
firstrow += dmusper[i];
}
} /* end if mmgrs3 > 0 */


if (manager > 0){
avgdmus = mmgrs3;
midpoint = 1;
firstrow = realbits + dmu + mmgrs1 + mmgrs2 + 1;
firstcol = realbits + dmu + mmgrs1 + mmgrs2 + mmgrs3 + 1;
for (i=1; i<=manager; i++){
dmusper[i] = avgdmus;
fprintf( kpoutf, "25 25 TOPMGR");
fprintf( kpoutf, "%d", i);
fprintf( kpoutf, " MEMORY ");
fprintf( kpoutf, "%3d", 1); //weights
fprintf( kpoutf, "%3d", infomiss);
fprintf( kpoutf, "%3d", infoerror);
fprintf( kpoutf, "%3d", feedmiss);
fprintf( kpoutf, "%3d", feederror);
fprintf( kpoutf, "\n");
for (j=firstrow; j<=dmusper[i] + firstrow - 1; j++){
evidstr[j][i + firstcol] = 1;
}
firstrow += dmusper[i];
}
} /* end if manager > 0 */
for (i=1; i<=totalcount; i++){
for (j=1; j<=totalcount; j++){
fprintf( kpoutf, "%1d", evidstr[i][j]);
}
fprintf( kpoutf, "\n");
}

fprintf( kpoutf, "arrows=y\n");
fprintf( kpoutf, "names=y\n");
fprintf( kpoutf, "fields =TYPE WEIGHT MISSINFO INCOINFO MISSFEED INCOFEED\n");
fclose(kpoutf);
strcpy(commline, "./start.out "); /* trailing space assumed so the file name is a separate argument */
strcat(commline, kpout);
strcat(commline, " 100000 50 0 ");


system(commline);
strcpy(commline, " rm ");
strcat(commline, kpout);
system(commline);
}


Startgen.c
#include <stdlib.h>
#include <stdio.h>
#include <ctype.h>
#include <math.h>
#include <time.h>
#include <string.h>
#include <unistd.h> /* added for access(); not shown in the scanned copy */
#ifdef DOS
#include <dos.h>
#else
#include <sys/types.h>
#endif
#define DMUPOSS 2049
#define POSSOUTCOMES 2
#define MAXDECISIONS 100000
struct evidence {
char name[20];
int weight;
int value;
int incoinfo;
int missinfo;
int lineno;
};

struct dmu {
char name[20];
int layer;
int memory[DMUPOSS][POSSOUTCOMES];
int lastdecision;
int incoinfo;
int missinfo;
int incofeed;
int missfeed;
int decisiontype;
int evidcount;
int dmucount;
int lineno;
int evidread[11];
int dmuread[11];

};

int missinfoord, incoinfoord, missfeedord, incofeedord, weightord, typeord;


struct evidence evidall[900];
int results[10000];
struct dmu dmuall[150];
int dmus, evids;
short int correct;
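/* initdmu parses the KrackPlot-format configuration file produced by
   simulate.c: it fills evidall[] with the evidence bits (weights and error
   rates) and dmuall[] with the decision-making units (error rates, decision
   type, and the evidence bits and subordinate DMUs each one reads). */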
void initdmu(FILE *cnf){
char lines[2420][1200];
char currfield[20];
char name[20], values[7][20];
int reallines, linestodo;
int i, j, k;
fgets(lines[0], 200, cnf);
reallines = atoi(lines[0]);
linestodo = reallines*2 + 3;
for (i = 1; i<=linestodo; i++){
if (i <= reallines || i == linestodo)
fgets(lines[i], 60, cnf);
else
fgets(lines[i], linestodo-3, cnf);
for (j=0; j<strlen(lines[i]); j++){
lines[i][j] = toupper(lines[i][j]);
}
}
fclose(cnf);
/*while (lines[linestodo][0] == 32){
lines[linestodo][0] = lines[linestodo][1];
}
*/
lines[linestodo][strlen(lines[linestodo])-1] = 0;
strcpy(currfield, strtok(lines[linestodo], "="));
if (strcmp("FIELDS ", currfield) == 0){
i = 0;
while (strlen(currfield)>3){
strcpy(currfield, strtok(NULL, " ")); /* space delimiter assumed; it is not legible in the scanned copy */
i++;
if (strncmp("MISSINFO", currfield, 8) == 0)
missinfoord = i;
if (strncmp("INCOINFO", currfield, 8) == 0)


incoinfoord = i;
if (strncmp("MISSFEED", currfield, 8) == 0)
missfeedord = i;
if (strncmp("INCOFEED", currfield, 8) == 0){
incofeedord = i;
break;
}
if (strncmp("WEIGHT", currfield, 6) == 0)
weightord = i;
if (strncmp("TYPE", currfield, 4) == 0)
typeord = i;
}
}
else
printf("No Field Definitions Found \n");
k = i;
dmus = 0;
evids = 0;
for (i=0; i< reallines; i++){
strtok(lines[i+1], " "); /* remove row */
strtok(NULL, " "); /* remove column */
strcpy(name, strtok(NULL, " "));
for (j = 0; j < k; j++){
strcpy(values[j], strtok(NULL, " "));
}
if (strcmp(values[typeord-1], "EVIDENCE") == 0){
evidall[i].weight = atoi(values[weightord-1]);
evidall[i].incoinfo = atoi(values[incoinfoord-1]);
evidall[i].missinfo = atoi(values[missinfoord-1]);
strcpy(evidall[i].name, name);
evidall[i].lineno = i+1;
evids++;
}

else { // is a dmu of some sort
strcpy(dmuall[i-evids].name, name);
dmuall[i-evids].incoinfo = atoi(values[incoinfoord-1]);
dmuall[i-evids].missinfo = atoi(values[missinfoord-1]);
dmuall[i-evids].incofeed = atoi(values[incofeedord-1]);
dmuall[i-evids].missfeed = atoi(values[missfeedord-1]);
dmuall[i-evids].evidcount = 0;
dmuall[i-evids].dmucount = 0;


if (strcmp(values[typeord-1], "MEMORY") == 0)
dmuall[i-evids].decisiontype = 1;
else
dmuall[i-evids].decisiontype = 2;
dmuall[i-evids].lineno = i+1;
dmus++;
} //end the else section
} // end for all reallines
for (i = reallines + 1; i<= reallines*2; i++){ // do all the correspondence parameters lines
for (j=evids; j< reallines; j++){ //do all the items in each row
if (lines[i][j] == 49){ // there is a connection
if (i - reallines <= evids) { // is a line for an evidence
dmuall[j-evids].evidread[dmuall[j-evids].evidcount] = i-reallines-1;
dmuall[j-evids].evidcount++;
dmuall[j-evids].layer = 1;
}
else {// is a line for another dmu
dmuall[j-evids].dmuread[dmuall[j-evids].dmucount] = i-reallines-1-evids;
dmuall[j-evids].dmucount++;
dmuall[j-evids].layer = (dmuall[i-reallines-1-evids].layer + 1);
}
} // end if there is a connection
} // end for all row elements (j's)
} //end for all lines in the connections (i's)
} // end procedure
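/* genevid draws a random value for every evidence bit, accumulates the
   weighted sum, optionally logs it, and returns the "correct" binary decision
   implied by the weighted majority of the evidence. */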
short int genevid(FILE *evid, int dmode){
int i, totalweight, evidprod;
totalweight = 0;
evidprod = 0;
for (i=0; i< evids; i++){
evidall[i].value = rand()%POSSOUTCOMES;
evidprod += evidall[i].value * evidall[i].weight;
totalweight += evidall[i].weight;
}
if (dmode == 1)
fprintf(evid, "%5d %5d", evidprod, totalweight);
return((evidprod*POSSOUTCOMES)/totalweight);
} //end genevid


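/* makedecisions: each DMU pulls its evidence bits and subordinate decisions
   (subject to missing and incorrect information), consults its memory (or
   votes) to choose 0 or 1, and then updates its memory using the
   organization-wide feedback value corrvalue (subject to missing and
   incorrect feedback). */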
void makedecisions(short int corrvalue){
int dmuno, epullno, dpullno;
int decnum;
int diff;
int missing, missbit[11];
int i, done;
int evidnumnew, poss, oldposs, evidcalc;
int miss;
for (dmuno = 0; dmuno < dmus; dmuno++){
decnum = 0;
missing = 0;
for (i=0; i<11; i++){
missbit[i] = 0;
} // end for i clearing out missing bit information
if (dmuall[dmuno].decisiontype == 1) {
for (epullno = 0; epullno < dmuall[dmuno].evidcount; epullno++){
if ((evidall[dmuall[dmuno].evidread[epullno]].missinfo) < (rand()/327.68)) {
if (rand()/327.68 > evidall[dmuall[dmuno].evidread[epullno]].incoinfo)
decnum += (evidall[dmuall[dmuno].evidread[epullno]].value)*
pow(POSSOUTCOMES, epullno);
else
decnum += (!evidall[dmuall[dmuno].evidread[epullno]].value)*
pow(POSSOUTCOMES, epullno);
} // end if missing information
else {// if information is missing
missbit[missing] = epullno;
missing++;
} //end if missing information
} // end pulling all evidence
for (dpullno = 0; dpullno < dmuall[dmuno].dmucount; dpullno++){
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].missinfo) {
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].incoinfo)
decnum += (dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision
)* pow(POSSOUTCOMES, dpullno+dmuall[dmuno].evidcount);
else
decnum += (!dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision
)* pow(POSSOUTCOMES, dpullno+dmuall[dmuno].evidcount);
} // end not missing information
else {// information is missing
missbit[missing] = dpullno;
missing++;
} // end information is missing


} // end pulling all dmus


diff = 0;
if (missing > 0){
for (poss = pow(2, missing)-1; poss >= 0; poss--) { // start at top and end at bottom
evidnumnew = decnum;
oldposs = poss;
for (miss = missing-1; miss >= 0; miss--){ // let's look at each bit
evidcalc = pow(2, miss);
if (oldposs >= evidcalc) {
evidnumnew += pow(2, missbit[miss]);
oldposs -= evidcalc;
}
} //next missing bit
diff += (dmuall[dmuno].memory[evidnumnew][0] -
dmuall[dmuno].memory[evidnumnew][1]);
if (rand()/327.68 > dmuall[dmuno].missfeed) { // give feedback to all possibilities
if (rand()/327.68 > dmuall[dmuno].incofeed)
dmuall[dmuno].memory[evidnumnew][corrvalue]++;
else
dmuall[dmuno].memory[evidnumnew][!corrvalue] ++; // opposite feedback
// for errors
} // end of feedback
} // for end of possibilities
} // end if missing
diff += dmuall[dmuno].memory[decnum][0] - dmuall[dmuno].memory[decnum][1];
if (rand()/327.68 > dmuall[dmuno].missfeed) { // give feedback to all possibilities
if (rand()/327.68 > dmuall[dmuno].incofeed)
dmuall[dmuno].memory[decnum][corrvalue]++;
else
dmuall[dmuno].memory[decnum][!corrvalue] ++; // opposite feedback for errors
} // end of feedback
if (diff > 0)
dmuall[dmuno].lastdecision = 0;
if (diff < 0)
dmuall[dmuno].lastdecision = 1;
if (diff == 0)
dmuall[dmuno].lastdecision = rand()%POSSOUTCOMES;
} // end for all decisiontype = 1 (is memory based)
if (dmuall[dmuno].decisiontype == 2) {
done = 0;
decnum = 0;
for (epullno = 0; epullno < dmuall[dmuno].evidcount; epullno++){
if ((evidall[dmuall[dmuno].evidread[epullno]].missinfo) < (rand()/327.68)) {


done++;
if (rand()/327.68 > evidall[dmuall[dmuno].evidread[epullno]].incoinfo)
decnum += evidall[dmuall[dmuno].evidread[epullno]].value;
else
decnum += !evidall[dmuall[dmuno].evidread[epullno]].value;
} // end if missing information
else {// if information is missing
missing++;
} //end if missing information
} // end pulling all evidence
for (dpullno = 0; dpullno < dmuall[dmuno].dmucount; dpullno++){
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].missinfo) {
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].incoinfo){
done++;
decnum += (dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision);
}
else
decnum += (!dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision);
} // end not missing information
else {// information is missing
missing++;
} // end information is missing
} // end pulling all dmus
if ((decnum * POSSOUTCOMES) == done || done == 0)
dmuall[dmuno].lastdecision = rand()%POSSOUTCOMES;
else
dmuall[dmuno].lastdecision = (decnum * POSSOUTCOMES)/done;
} // end if decisiontype = 2 (VOTING)
} // end for all dmus
} // end makedecisions and getting feedback
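/* outputresults appends to the output file, for each block of ten decisions,
   the raw number of correct decisions, the cumulative total, and point and
   moving averages of performance. */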

void outputresults(char *filename, int loops, int outformat, int decisions){


int i;
int total[10000];
FILE *outfile;

total[0] = results[0];
for (i=1; i< (decisions/10); i++){
total[i] = total[i-1] + results[i];

}

outfile = fopen(filename, "a");
fprintf(outfile, "Block RAW TOTALS Point Average\n");
fprintf(outfile, "-------------------------------------------\n");
for (i=0; i<10; i++){ // no smoothing yet...
fprintf(outfile, "%5d ", i+1);
fprintf(outfile, "%6d ", results[i]);
fprintf(outfile, "%6d ", total[i]);
fprintf(outfile, "%8.4f ", (float) (total[i]*10)/((i+1)*loops));
fprintf(outfile, "%8.4f\n", (float) (total[i]*10)/((i+1)*loops));
}
for (i=10; i<decisions/10; i++){ // no smoothing yet...
fprintf(outfile, "%5d ", i+1);
fprintf(outfile, "%6d ", results[i]);
fprintf(outfile, "%6d ", total[i]);
fprintf(outfile, "%8.4f ", (float) (total[i] - total[i-10])/loops);
fprintf(outfile, "%8.4f\n", (float) (total[i]*10)/((i+1)*loops));
}
fclose(outfile);
} //end output of results


int main(int argc, char *argv[]){
char fname[30], fnameout[30], fnameevid[30];
int decisionstodo, loops, outformat, dmode;
FILE *cfgfile, *evifile;
int loop, decisionnum;
int passes;
int i, j, k;
time_t seed;
passes = 0;
if (argc < 4){
printf("Not enough parameters\n");
exit(0);
}
decisionstodo = atoi(argv[2]);
loops = atoi(argv[3]);
outformat = atoi(argv[4]);
if (argc == 4)
dmode = 0;


else
dmode = atoi(argv[4]);
k = strcspn(argv[1], "."); /* delimiter assumed to be "."; it is not legible in the scanned copy */
strncpy(fnameout, argv[1], k);
if (strlen(fnameout) > k)
fnameout[k] = 0;
if (access(fnameout, 0) == 0)
exit(0);
else
{
cfgfile = fopen(fnameout, "a");
fclose(cfgfile);
}
if ((cfgfile = fopen(argv[1], "r")) == NULL){
printf("Open configuration file %s error!\n", argv[1]);
exit(2);
}
/* now, parse the input file */
if (dmode == 1){
strcpy(fnameevid, fnameout);
strcat(fnameevid, ".evi");
evifile = fopen(fnameevid, "w");
}
initdmu(cfgfile);
for (k=0; k< 10000; k++){
results[k] = 0;
}
for (loop=0; loop < loops; loop++){ //for all necessary loops
printf("%3d", loop+1);
fflush(stdout);
for (i=0; i< dmus; i++){
for (j=0; j < DMUPOSS; j++){
for (k=0; k < POSSOUTCOMES; k++){
dmuall[i].memory[j][k] = 0;
} //end POSSOUTCOMES
} //end DMUPOSS
} // end dmus
seed = time(NULL);
srand((unsigned)time(&seed));
for (decisionnum = 0; decisionnum < decisionstodo; decisionnum++){ //for all decisions


correct = genevid(evifile, dmode);
if (dmode == 1)
fprintf(evifile, " corr %1d ", correct);
makedecisions(correct);
if (dmode == 1)
fprintf(evifile, "Corr#2 %1d Made %1d\n", correct, dmuall[dmus-1].lastdecision);
if (dmuall[dmus-1].lastdecision == correct){
results[decisionnum/10] ++; // only add the correct decisions together
passes++;
}
} //end for all decisions
} // end for all loops
outputresults(fnameout, loops, outformat, decisionstodo); // send the output file name
if (dmode == 1)
fclose(evifile);
}

Startloc.c
#include <stdlib.h>
#include <stdio.h>
#include <ctype.h>
#include <math.h>
#include <unistd.h> /* added for access(); not shown in the scanned copy */
#include <time.h>
#include <string.h>
#ifdef DOS
#include <dos.h>
#else
#include <sys/types.h>
#endif
# define DMUPOSS 150
# define POSSOUTCOMES 2
# define MAXDECISIONS 10000
struct evidence {
char name[20];
int weight;
int value;
int incoinfo;

int missinfo;
int lineno;
};

struct dmu {
char name[20];
int layer;
int memory[DMUPOSS][POSSOUTCOMES];
int lastdecision;
int incoinfo;
int missinfo;
int incofeed;
int missfeed;
int decisiontype;
int evidcount;
int dmucount;
int lineno;
int evidread[11];
int dmuread[11];
int evidnum;
int evidweight;
};

int missinfoord, incoinfoord, missfeedord, incofeedord, weightord, typeord;


struct evidence evidall[900];
int results[10000];
struct dmu dmuall[150];
int dmus, evids;
short int correct;
void initdmu(FILE *cnf){
char lines[2420][1200];
char currfield[20];
char name[20], values[7][20];
int reallines, linestodo;
int i, j, k;
fgets(lines[0], 200, cnf);
reallines = atoi(lines[0]);
linestodo = reallines*2 + 3;
for (i = 1; i<=linestodo; i++){
if (i <= reallines || i == linestodo)

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

107

fgets(lines[i],60, cnf);
else
fgets(lines[i],linestodo-3, cnf);
for (j=0; j<strlen(lines[i]); j++){
lines[i][j] = toupper(lines[i][j]);
}
}

fclose(cnf);
/*while (lines[linestodo][0] == 32){
lines[linestodo][0] = lines[linestodo][1];
}
*/
lines[linestodo][strlen(lines[linestodo])-1] = 0;
strcpy(currfield, strtok(lines[linestodo],"="));
if (strcmp("FIELDS ", currfield) == 0){
i = 0;
while (strlen(currfield)>3){
strcpy(currfield, strtok(NULL, " ")); /* space delimiter assumed; it is not legible in the scanned copy */
i++;
if (strncmp("MISSINFO", currfield, 8) == 0)
missinfoord = i;
if (strncmp("INCOINFO", currfield, 8) == 0)
incoinfoord = i;
if (strncmp("MISSFEED", currfield, 8) == 0)
missfeedord = i;
if (strncmp("INCOFEED", currfield, 8) == 0) {
incofeedord = i;
break;
}
if (strncmp("WEIGHT", currfield, 6) == 0)
weightord = i;
if (strncmp("TYPE", currfield, 4) == 0)
typeord = i;
}
}
else
printf("No Field Definitions Found \n");
k = i;
dmus = 0;
evids = 0 ;
for (i=0; i< reallines; i++){
strtok(lines[i+1], " "); /* remove row */
strtok(NULL, " "); /* remove column */
strcpy(name, strtok(NULL, " "));
for (j = 0; j < k; j++){
strcpy(values[j], strtok(NULL, " "));
}
if (strcmp(values[typeord-1], "EVIDENCE") == 0){
evidall[i].weight = atoi(values[weightord-1]);
evidall[i].incoinfo = atoi(values[incoinfoord-1]);
evidall[i].missinfo = atoi(values[missinfoord-1]);
strcpy(evidall[i].name, name);
evidall[i].lineno = i+1;
evids++;
}

else { /* is a dmu of some sort */
strcpy(dmuall[i-evids].name, name);
dmuall[i-evids].incoinfo = atoi(values[incoinfoord-1]);
dmuall[i-evids].missinfo = atoi(values[missinfoord-1]);
dmuall[i-evids].incofeed = atoi(values[incofeedord-1]);
dmuall[i-evids].missfeed = atoi(values[missfeedord-1]);
dmuall[i-evids].evidcount = 0;
dmuall[i-evids].dmucount = 0;
if (strcmp(values[typeord-1], "MEMORY") == 0)
dmuall[i-evids].decisiontype = 1;
else
dmuall[i-evids].decisiontype = 2;
dmuall[i-evids].lineno = i+1;
dmus++;
} /* end the else section */
} /* end for all reallines */
for (i = reallines + 1; i<= reallines*2; i++){ /* do all the correspondence parameters lines */
for (j=evids; j< reallines; j++){ /* do all the items in each row */
if (lines[i][j] == 49){ /* there is a connection */
if (i - reallines <= evids) { /* is a line for an evidence */
dmuall[j-evids].evidread[dmuall[j-evids].evidcount] = i-reallines-1;
dmuall[j-evids].evidcount++;
dmuall[j-evids].layer = 1;
}
else { /* is a line for another dmu */
dmuall[j-evids].dmuread[dmuall[j-evids].dmucount] = i-reallines-1-evids;
dmuall[j-evids].dmucount++;
dmuall[j-evids].layer = (dmuall[i-reallines-1-evids].layer + 1);
}
} /* end if there is a connection */
} /* end for all row elements (j's) */
} /* end for all lines in the connections (i's) */
} /* end procedure */

short int genevid(FILE *evid, int dmode){
int i, totalweight, evidprod;
totalweight = 0;
evidprod = 0;
for (i=0; i< evids; i++){
evidall[i].value = rand()%POSSOUTCOMES;
evidprod += evidall[i].value * evidall[i].weight;
totalweight += evidall[i].weight;
}
if (dmode == 1)
fprintf(evid, "%5d %5d", evidprod, totalweight);
return((evidprod*POSSOUTCOMES)/totalweight);
} /* end genevid */
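/* In this version the feedback used to update a DMU's memory is computed
   locally: corrvalue is overwritten with the weighted majority of the
   evidence that this DMU (and its subordinates) actually sees, rather than
   the organization-wide correct answer. This is the local feedback
   mechanism. */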
void makedecisions(short int corrvalue){
int dmuno, epullno, dpullno;
int decnum;
int diff;
int missing, missbit[11];
int i, done;
int evidnumnew, poss, oldposs, evidcalc;
int miss;
int dmuevidnum, dmuevidweight;
for (dmuno = 0; dmuno < dmus; dmuno++){
decnum = 0;
missing = 0;
dmuall[dmuno].evidnum = 0;
dmuall[dmuno].evidweight = 0;
for (i=0; i<11; i++){
missbit[i] = 0;
} /* end for i clearing out missing bit information */
if (dmuall[dmuno].decisiontype == 1){
for (epullno = 0; epullno < dmuall[dmuno].evidcount; epullno++){
dmuall[dmuno].evidnum += evidall[dmuall[dmuno].evidread[epullno]].value *
evidall[dmuall[dmuno].evidread[epullno]].weight;
dmuall[dmuno].evidweight += evidall[dmuall[dmuno].evidread[epullno]].weight;
if ((evidall[dmuall[dmuno].evidread[epullno]].missinfo) < (rand()/327.68)) {
if (rand()/327.68 > evidall[dmuall[dmuno].evidread[epullno]].incoinfo)

decnum += (evidall[dmuall[dmuno].evidread[epullno]].value)*
pow(POSSOUTCOMES, epullno);
else
decnum += (!evidall[dmuall[dmuno].evidread[epullno]].value)*
pow(POSSOUTCOMES, epullno);
} /* end if missing information */
else {/* if information is missing */
missbit[missing] = epullno;
missing++;
} /* end if missing information */
} /* end pulling all evidence */
for (dpullno = 0; dpullno < dmuall[dmuno].dmucount; dpullno++){
dmuall[dmuno].evidnum += dmuall[dmuall[dmuno].dmuread[dpullno]].evidnum *
dmuall[dmuall[dmuno].dmuread[dpullno]].evidweight;
dmuall[dmuno].evidweight += dmuall[dmuall[dmuno].dmuread[dpullno]].evidweight;
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].missinfo) {
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].incoinfo)
decnum += (dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision
)* pow(POSSOUTCOMES, dpullno+dmuall[dmuno].evidcount);
else
decnum += (!dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision)*
pow(POSSOUTCOMES, dpullno+dmuall[dmuno].evidcount);
} /* end not missing information */
else {/* information is missing */
missbit[missing] = dpullno;
missing++;
} /* end information is missing */
} /* end pulling all dmus */
diff = 0;
corrvalue = (dmuall[dmuno].evidnum * 2)/dmuall[dmuno].evidweight;
if (missing > 0){
for (poss = pow(2, missing)-1; poss >= 0; poss--) { /* start at top and end at bottom */
evidnumnew = decnum;
oldposs = poss;
for (miss = missing-1; miss >= 0; miss--){ /* let's look at each bit */
evidcalc = pow(2, miss);
if (oldposs >= evidcalc) {
evidnumnew += pow(2, missbit[miss]);
oldposs -= evidcalc;
}
} /* next missing bit */
diff += (dmuall[dmuno].memory[evidnumnew][0] -
dmuall[dmuno].memory[evidnumnew][1]);


if (rand()/327.68 > dmuall[dmuno].missfeed) { /* give feedback to all possibilities */
if (rand()/327.68 > dmuall[dmuno].incofeed)
dmuall[dmuno].memory[evidnumnew][corrvalue]++;
else
dmuall[dmuno].memory[evidnumnew][!corrvalue] ++; /* opposite feedback
for errors */
} /* end of feedback */
} /* for end of possibilities */
} /* end if missing */
diff += dmuall[dmuno].memory[decnum][0] - dmuall[dmuno].memory[decnum][1];
if (rand()/327.68 > dmuall[dmuno].missfeed) { /* give feedback to all possibilities */
if (rand()/327.68 > dmuall[dmuno].incofeed)
dmuall[dmuno].memory[decnum][corrvalue]++;
else
dmuall[dmuno].memory[decnum][!corrvalue] ++; /* opposite feedback for errors */
} /* end of feedback */
if (diff > 0)
dmuall[dmuno].lastdecision = 0;
if (diff < 0)
dmuall[dmuno].lastdecision = 1;
if (diff == 0)
dmuall[dmuno].lastdecision = rand()%POSSOUTCOMES;
} /* end for all decisiontype = 1 (is memory based) */
if (dmuall[dmuno].decisiontype == 2) {
done = 0;
decnum = 0;
for (epullno = 0; epullno < dmuall[dmuno].evidcount; epullno++){
if ((evidall[dmuall[dmuno].evidread[epullno]].missinfo) < (rand()/327.68)) {
done++;
if (rand()/327.68 > evidall[dmuall[dmuno].evidread[epullno]].incoinfo)
decnum += evidall[dmuall[dmuno].evidread[epullno]].value;
else
decnum += !evidall[dmuall[dmuno].evidread[epullno]].value;
} /* end if missing information */
} /* end pulling all evidence */
for (dpullno = 0; dpullno < dmuall[dmuno].dmucount; dpullno++){
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].missinfo) {
done++;
if (rand()/327.68 > dmuall[dmuall[dmuno].dmuread[dpullno]].incoinfo){
decnum += (dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision);
}
else


decnum += (!dmuall[dmuall[dmuno].dmuread[dpullno]].lastdecision);
} /* end not missing information */
} /* end pulling all dmus */
if ((decnum * POSSOUTCOMES) == done || done == 0)
dmuall[dmuno].lastdecision = rand()%POSSOUTCOMES;
else
dmuall[dmuno].lastdecision = (decnum * POSSOUTCOMES)/done;
} /* end if decisiontype = 2 (VOTING) */
} /* end for all dmus */
} /* end makedecisions and getting feedback */

void outputresults(char *filename, int loops, int outformat, int decisions){
int i;
int total[10000];
FILE *outfile;
total[0] = results[0];
for (i=1; i< (decisions/10); i++){
total[i] = total[i-1] + results[i];
}
outfile = fopen(filename, "a");
fprintf(outfile, "Block RAW TOTALS Point Average\n");
fprintf(outfile, "-------------------------------------------\n");
for (i=0; i<10; i++){ // no smoothing yet...
fprintf(outfile, "%5d ", i+1);
fprintf(outfile, "%6d ", results[i]);
fprintf(outfile, "%6d ", total[i]);
fprintf(outfile, "%8.4f ", (float) (total[i]*10)/((i+1)*loops));
fprintf(outfile, "%8.4f\n", (float) (total[i]*10)/((i+1)*loops));
}
for (i=10; i<decisions/10; i++){ // no smoothing yet...
fprintf(outfile, "%5d ", i+1);
fprintf(outfile, "%6d ", results[i]);
fprintf(outfile, "%6d ", total[i]);
fprintf(outfile, "%8.4f ", (float) (total[i] - total[i-10])/loops);
fprintf(outfile, "%8.4f\n", (float) (total[i]*10)/((i+1)*loops));
}
fclose(outfile);


} //end output of results


int main(int argc, char *argv[]){
char fname[30], fnameout[30], fnameevid[30];
int decisionstodo, loops, outformat, dmode;
FILE *cfgfile, *evifile;
int loop, decisionnum;
int passes;
int i, j, k;
time_t seed;
passes = 0;
if (argc < 4){
printf("Not enough parameters\n");
exit(0);
}
decisionstodo = atoi(argv[2]);
loops = atoi(argv[3]);
outformat = atoi(argv[4]);
if (argc == 4)
dmode = 0;
else
dmode = atoi(argv[4]);
k = strcspn(argv[1], "."); /* delimiter assumed to be "."; it is not legible in the scanned copy */
strncpy(fnameout, argv[1], k);
if (strlen(fnameout) > k)
fnameout[k] = 0;
if (access(fnameout, 0) == 0)
exit(0);
else
{
cfgfile = fopen(fnameout, "a");
fclose(cfgfile);
}
if ((cfgfile = fopen(argv[1], "r")) == NULL){
printf("Open configuration file %s error!\n", argv[1]);
exit(2);
}
/* now, parse the input file */
if (dmode == 1){
strcpy(fnameevid, fnameout);


strcat(fnameevid, ".evi");
evifile = fopen(fnameevid, "w");
}
initdmu(cfgfile);
for (k=0; k< 10000; k++){
results[k] = 0;
}
for (loop=0; loop < loops; loop++){ //for all necessary loops
printf("%3d", loop+1);
fflush(stdout);
for (i=0; i< dmus; i++){
for (j=0; j < DMUPOSS; j++){
for (k=0; k < POSSOUTCOMES; k++){
dmuall[i].memory[j][k] = 0;
} //end POSSOUTCOMES
} //end DMUPOSS
} // end dmus
seed = time(NULL);
srand((unsigned)time(&seed));
for (decisionnum = 0; decisionnum < decisionstodo; decisionnum++){ //for all decisions
correct = genevid(evifile, dmode);
if (dmode == 1)
fprintf(evifile, " corr %1d ", correct);
makedecisions(correct);
if (dmode == 1)
fprintf(evifile, "Corr#2 %1d Made %1d\n", correct, dmuall[dmus-1].lastdecision);
if (dmuall[dmus-1].lastdecision == correct){
results[decisionnum/10] ++; // only add the correct decisions together
passes++;
}
} /* end for all decisions */
} /* end for all loops */
outputresults(fnameout, loops, outformat, decisionstodo); /* send the output file name */
if (dmode == 1)
fclose(evifile);
}


Appendix 2
Each regression run used the database file disstsml.dbf as its data source.
GET TRANSLATE
FILE='D:\rv\dissert\DISSTSML.DBF'
/TYPE=DBF /MAP.
compute totbits =bpdmu+overlap*2.
compute probbits=bpdmu*dmu.
compute totmiss=missinfo+missfeed.
compute totinco=incoinfo+incofeed.
compute totinfo=missinfo+incoinfo.
compute totfeed=missfeed+incofeed.
compute toterror=totfeed+totinfo.
compute totseen=totbits*dmu.
compute pctbits =bpdmu*100/probbits.
if (bpdmu - overlap * 2 >0 ) pctbitex = (bpdmu - overlap*2)/bpdmu.
if (bpdmu - overlap * 2 <=0 ) pctbitex = 0.
if (bpdmu - overlap * 2 >0 ) probex = (bpdmu - overlap*2)/probbits.
if (bpdmu - overlap * 2 <=0 ) probex = 0.
compute pctbitsn = (bpdmu + overlap*2)/bpdmu.
compute pctprob = (bpdmu + overlap/2 )/probbits.
compute dmupermm=dmu/mm1.
compute redund=(totseen/probbits -1 )*100.
compute pctlearn=exp(bOslastk).
compute pctdmu=dmupermm/dmu*100.
compute beginlrn=max(lastavg5, lastsmt5).
execute.
COMPUTE filter_$=(feedback = 'NEW' AND weights = 'UN' AND stru = 'HI').
VARIABLE LABEL filter_$ "feedback = 'NEW' AND weights = 'UN' AND stru = 'HI' (FILTER)".
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMAT filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.
REGRESSION
/DESCRIPTIVES MEAN STDDEV
/MISSING LISTWISE
/STATISTICS COEFF R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT bOslastk


/METHOD=BACKWARD bpdmu dmu incofeed incoinfo missfeed missinfo overlap totbits
probbits totmiss totinco totinfo totfeed toterror totseen pctbits pctbitex pctbitsn dmupermm redund
pctdmu mm1 mm2 mm3
/casewise OUTLIERS(2.5).


Appendix 3
Regression Results for dependent BEGINLRN

Regression Results for STABLENUM


Regression Results for StableVal

Regression Results for PCTLearn


VITA
Name: Ronald Elmar Vyhmeister

EDUCATION
Courses in Business Administration, Andrews University, Berrien Springs,
Michigan, 1981
Bachelor of Theology, Universidad Adventista del Plata, Entre Rios, Argentina,
1982
M.B.A. with a concentration in Finance, Andrews University, Berrien Springs,
Michigan, 1985
Studies in Finance & Information Systems, Western Michigan University, 1985
Studies in Computer Science, Andrews University, Berrien Springs, Michigan,
1990
Ph.D. in Business Administration, with a concentration in Management
Information Systems, University of Illinois at Chicago, 2000
EXPERIENCE
Chief Technology Officer, Universidad Adventista del Plata, Entre Rios, Argentina,
8/98-
Director, Information Systems Programs, Universidad Adventista del Plata, Entre
Rios, Argentina, 1/98-
Consultant, Martin's Supermarkets, South Bend, Indiana. Designed and
implemented a system for tracking gift certificates, including integration with the
administrative database, 8/95-2/98
Consultant, Teachers Credit Union, South Bend, Indiana. Designed the database
and procedures for analysis of customer information and transaction system.
Currently involved in the process of converting administrative systems to a new
client-server environment, 7/95-3/96
Consultant, World Bank. Performed initial needs assessment for updating of
financial administration systems of the institutions of higher education in Guinea.
Assessment included budgeting process, expenditure control, and information
requirements needed for computerizing the entire system, 9/94
Assistant Professor of Management Information Systems, Andrews University.
Responsibilities include management and supervision of the shared computing
facilities of the School of Business, 7/93-12/97


Director, College of Business Microcomputer Lab, University of Illinois at
Chicago, 1/92-7/93
Consultant, Montemorelos University, Mexico. Primary resource person for the
development of the curriculum for a new Masters in Business Administration
degree, 1/92-10/92
Instructor, Dept. of Information and Decision Science, University of Illinois at
Chicago, 8/91-12/91
Acting Treasurer, Adventist University of Central Africa, Rwanda, 7/91-8/91
Executive Officer, MicroRwanda (University Computer Assembly, Repair, and
Software Development Industry), Adventist University of Central Africa, Rwanda,
6/89-8/91
Dean, School of Business, Adventist University of Central Africa, Rwanda.
Achieved Rwanda Government accreditation for Bachelor's in Business
Information Systems, 6/89-7/91
Visiting Professor in Finance, Management Information Systems, Finance, and
Accounting, Andrews University, 6/88-8/88
Assistant Professor, School of Business, Adventist University of Central Africa,
Rwanda, 2/87-6/89
Instructor in Finance and Information Systems, Andrews University, Berrien
Springs, Michigan. Secretary of School of Business Curriculum Committee.
Revamped curriculum for the Bachelor's of Business Administration in
Management Information Systems, 6/85-12/86
Graduate Instructor, Andrews University, Berrien Springs, Michigan, 9/84-6/85
Accountant, Enterprise Academy, Enterprise, Kansas, 1/84-8/84
Production Manager, University Bakery, Dominican Adventist University, Bonao,
Dominican Republic, 9/93-12/93
Assistant Manager, Westico Foods, Mandeville, Jamaica. Responsibilities
included supervision of marketing, sales and accounting, 6/93-9/93
PUBLICATIONS
Vyhmeister, Ronald and Aris M. Ouksel. "A Study of the Impact of Organizational
Design on Organizational Learning and Performance." Computational and
Mathematical Organizational Theory Workshop, Cincinnati, May 1999.


Ya'ir Babad, Sharon Reeves, and Ronald Vyhmeister. "University-Industry
Alliance: Past, Present, and Future" in Working Papers, Center for Research in
Information Management, University of Illinois at Chicago, June 1994.
"Quelle Education Informatique au Rwanda?" In proceedings of the Premier Forum
Informatique au Rwanda, February 1990.

PROFESSIONAL M EM BERSHIPS
Association for Computing Machinery (ACM)
Institute for Operations Research and the Management Sciences (INFORMS)
HONORS
Membership in Delta Mu Delta, Business Honors Society, November 1981.


A STUDY OF THE IMPACT OF
ORGANIZATIONAL DESIGN
ON ORGANIZATIONAL LEARNING AND PERFORMANCE

Ronald E. Vyhmeister, Ph.D.
Department of Information and Decision Sciences
University of Illinois at Chicago
Chicago, Illinois (2000)

This research explores the relationship between organizational design and organizational
performance, focusing particularly on organizational learning. The specific focus is how
organizational design and information processing impact decision-making in organizations facing
two-valued decisions.
The primary contribution is to extend the work done by Mihavics and Ouksel (1996) using
the model formalized by Ouksel, Mihavics and Carley (1996), and incorporating all organizational
design and information processing variables into one study. By so doing, we are able to study not
only the impact of a specific variable, but also the interaction between the variables.
Our results demonstrate that there is no one best organizational structure. Rather, the choice
of organizational and task design must depend on (a) a multitude of factors relating to organizational
design and information processing in the organization, (b) the time frame being analyzed, and (c)
the economic factors associated with the task, such as coordination costs (Malone, 1987) and other
information processing costs (Mihavics and Ouksel, 1996).
Various examples of areas where these results can be applied are presented. While the model
and the present results are applicable in a variety of situations, it is demonstrated that they are
particularly applicable to areas where intelligent agents are used, such as workflow, battle
management, and e-commerce.
