Creating Moral Buffers in Weapon Control Interface Design

M.L. CUMMINGS


To the extent that humans inappropriately attribute agency to [computer] systems, humans may well consider the computational systems, at least in part, to be morally responsible for the effects of computer-mediated or computer-controlled actions [2].


With recent tremendous leaps in computer technology, significant improvements in computer interfaces now provide greater computing accessibility to a diverse population, and indeed, an intense focus in all areas of human-computer interaction has been to make human-computer interfaces more user-friendly. While most individuals would agree that making computer technology accessible to more people in a friendly and easy-to-use format is more equitable and indicative of a socially progressive trend, the ubiquitous move to make human-computer interface environments more user-friendly can actually lead to new ethical dilemmas that did not previously exist with older technologies. For example, when designing human-computer interfaces that control weapons in real time, the concept of user-friendliness can be a double-edged sword. The ability of weapon controllers to easily interact with a computer and to ensure commands are both well understood and easily executed provides obvious tactical and safety advantages. However, it is possible that designing friendly interfaces for the purpose of destruction can afford a moral buffer that diminishes a controller's sense of responsibility and autonomy, which could then allow people to make decisions more quickly and without proper consideration of all the consequences.

With the increasing presence of computing in all facets of life, the impact of computer control on a user's sense of autonomy and moral responsibility, and the responsibilities of designers when building such systems, have become topics of considerable ethical interest. Friedman and Kahn [2] contend that computer decision-making systems have the ability to diminish a user's sense of moral agency, or may even mitigate moral responsibility for computer-initiated actions. Thomas Sheridan, a noted researcher in the field of supervisory control and automation, has expressed concern that operators interfacing with technology could have the tendency to trust technology without question and abandon responsibility for their own actions [17].

This paper will examine the ethical and social issues surrounding the design of human-computer interfaces intended for the control of highly autonomous weapons systems. With improvements in global positioning satellite communications and control algorithms, it is currently possible to launch weapons such as medium- and short-range missiles and redirect them in flight, in a matter of minutes, to emerging targets. These systems are highly autonomous in that they do not need constant human supervision and manual control for operation; indeed, they require human intervention only when a change in a goal state is required. In addition, in the not-so-distant future, the United States military envisions that it will be able to deploy swarms of flying robots for reconnaissance and prosecution of potential threats. In the swarming vision [19], a group of highly autonomous unmanned aerial vehicles (UAVs) will be able to communicate amongst themselves to determine the best course of action with only cursory input from human agents. In the swarm concept, human decision makers are removed even further from the decision-making loop.

The implementation of these current and future smart weapons systems means not only that battlefield commanders will have more flexibility and options; it also means that a more abstract layer of human cognitive control will be needed where none previously existed. In place of manually controlling or constantly monitoring a weapon state, commanders will now be asked to make near-instantaneous decisions about networks of smart weapons that can easily generate more information than a human can process. As the human is further removed from direct manual control of a system, the decision-making process becomes more abstract and often difficult for the human to grasp. For example, UAVs are currently flown by a pilot, and this direct manual control allows the pilot to be more in tune with the system's intentional states because the human is generating them. However, as autonomous control improves, it will be possible for a human to monitor perhaps two or three UAVs simultaneously (indeed, this is a current goal of the U.S. military). In this case, the pilot will intervene only in the event of a system failure or when the weapons must be redirected in response to changing battlefield conditions. Because the pilot is further removed from the control scheme, it is more difficult for the pilot to determine quickly how to intervene in the case of an unanticipated event. This difficulty in understanding intentional states will only grow with swarming autonomous vehicles that make decisions intra-swarm and include the human only for extremely high-level decisions, such as final approval for weapons release.

Highly autonomous weapons provide the military with an undeniable tactical advantage. However, developing technologies of this sort also have the potential to create moral buffers that allow humans to act without adequately considering the consequences. I argue that when highly autonomous weapons are directed through a human-computer interface that provides a virtual, user-friendly world, moral buffers are more easily created as a consequence of both psychological distancing and compartmentalization. Indeed, this sense of both physical and emotional distancing can be exacerbated by any automated system that provides a division between a user and his or her actions, and it could become more pronounced in systems that directly impact human life, not only in military domains but also in medical settings. These moral buffers, in effect, allow people to ethically distance themselves from their actions and diminish their sense of accountability and responsibility.

Distance, Remoteness, and the Creation of a Moral Buffer

A sense of distance, both physical and emotional, is a significant element in the development of a moral buffer. Computer interfaces provide a sense of physical and emotional distance, as well as a sense of remoteness, because they allow humans to act through a device to reach a particular goal instead of having to act on a sentient being to achieve that same goal. The Milgram obedience studies of the early 1960s illustrate how remoteness from the consequences of one's actions can influence human behavior in a negative, sometimes highly unethical fashion. The primary focus of Milgram's obedience research was to determine how a legitimate authority (a scientist in a laboratory coat) could influence the behavior of subjects, and whether subjects would be obedient to requests from someone in a position of authority even if they could cause serious physical harm. These were deception experiments: the subjects thought the real purpose of the research was to study learning and memory, and that they, the subjects, would be the "teachers." As teachers, the subjects would administer increasing levels of electric shock to a "learner," who was actually a confederate, whenever this person made mistakes on a memory test. Several experimental sessions were conducted under varying conditions; however, the trial most pertinent to this discussion of moral buffers is the experiment in which the teacher could or could not see the learner. When the learner was in sight of the teacher, 70% of the subjects refused to administer the shocks, as compared to the 35% who resisted when the learner was remotely located [10].

Milgram [10] hypothesized that several factors could influence the tendency of subjects to resist shocking another human who was in sight, one of which is the presence of empathetic cues. When people remotely administer potentially painful stimuli to other humans, they are aware only in a conceptual sense that suffering could result from their actions. Milgram proposed several other factors to account for the distance/obedience effect, including a narrowing of the cognitive field for subjects, which is essentially the "out of sight, out of mind" phenomenon. In addition, the physical separation between an action and its resultant consequences, known as the experienced unity of act, is another factor that contributes to the ability of humans to inflict greater suffering under remote conditions. All of these factors play into the formation of a moral buffer: lack of empathetic cues, out of sight, out of mind, and physical distance. Moreover, these same factors are present in the design and use of a weapons delivery computer interface that can launch weapons from hundreds of miles away from an intended target.

Indeed, the factors that contribute to the formation of a moral buffer can be seen in current U.S. military policy. The military is increasingly relying on precision weapons, which can be delivered from safe, remote locations with a high degree of accuracy, protecting friendly troops and minimizing loss of both civilian life and property. These smart weapons are sometimes referred to as surgical strike weapons, which are generally seen as in keeping with the principle of discrimination in Just War Theory. The principle of discrimination prohibits intentional attacks on noncombatants as well as on nonmilitary targets [13]. In other words, targets must be legitimate military targets, and civilian damage should be avoided to the greatest extent possible.

The desire to achieve the destruction of a target across ever greater physical distances is clearly seen in the increasing military funding for the development of highly autonomous weapons, which can be used to destroy targets at great distances with little human intervention. This desire to destroy targets from afar is termed "distant punishment" by military historian and psychologist Dave Grossman.

He contends that the desire to kill the enemy from afar is an innate human construct with deep historical roots. Grossman [5] argues that military personnel have an instinctive desire to avoid personal confrontation, and that the desire to use weapons from a distance is an attempt to exert military will without having to face the consequences of personal combat. Fig. 1 illustrates Grossman's projection of human resistance to killing as a function of proximity to the enemy.

Fig. 1. Resistance to killing as a function of distance [4]. (Resistance to killing, from low to high, rises as physical distance from the target shrinks: from maximum range (bomber, artillery) and long range (sniper, anti-armor, missiles) through hand-grenade, close (pistol/rifle), bayonet, and knife range to hand-to-hand combat and sexual range.)

He reports that there are significant instances of refusal to kill when soldiers are presented with hand-to-hand combat scenarios, yet there have been virtually no instances of noncompliance in firing weapons from removed distances, such as dropping bombs [6]. This sentiment was echoed in the Milgram results, and Milgram had this to say about the lack of empathy in military weapons delivery: "The bombardier can reasonably suppose that his weapons will inflict suffering and death, yet this knowledge is divested of affect and does not arouse in him an emotional response to the suffering he causes" [10].

In addition to the physical distance that makes it easier for people to kill, Grossman [4] contends that emotional distance is a significant contributor as well, a notion akin to Milgram's concept of a lack of empathetic cues. A sense of moral superiority and cultural elements such as racial and ethnic differences can contribute to emotional distance; however, for engineers of complex sociotechnical systems, the primary emotional distancing element that factors into design is mechanical distancing. In this form of emotional distancing, some technological device provides the remote distance that makes it easier to kill. These devices can be any mechanical apparatus that provides a psychological buffer,
such as TV and video screens and thermal sights, which Grossman contends will provide an environment of "Nintendo warfare" [4]. Smart weapons and UAVs are now supported by computer interfaces that resemble popular video games (although it could be argued that the video gaming market is influenced by current military technology), and as a result an even greater sense of psychological detachment can occur, through not only physical and emotional distancing but also a desensitization that blurs the line between reality and virtual gaming. In general, interfaces are currently designed to promote mission accomplishment, and the incorporation of sensor data and symbology occurs more as a function of mission requirements than of any conscious attempt to promote emotional distancing. However, as this author can personally attest as a former fighter pilot, it is more palatable to drop a laser-guided missile on a building than it is to send a Walleye bomb into a target area with a live television feed that transmits back to the pilot real-time images of people who, in seconds, will no longer be alive.

Assigning Moral Agency to Computers

It is well established that humans have a strong tendency to anthropomorphize computers [16]. As part of this tendency to assign human attributes to computers, researchers have demonstrated that when interacting with a computer, people apply social rules to the interaction, despite self-reports that such attributions are inappropriate [12].

When the physical and emotional distance, sense of remoteness, and psychological detachment that a computer interface can provide are coupled with the human tendency to anthropomorphize, it is possible that, without consciously recognizing it, people assign moral agency to computers despite the fact that they are inanimate objects. In a research study designed to determine subjects' views about computer agency and moral responsibility, results suggested that educated individuals with significant computer experience do hold computers at least partially responsible for computer error [2]. If computer systems can diminish users' sense of their own moral agency and responsibility, this could lead to an erosion of accountability, especially when the stakes are high, as in military and medical settings [2]. Because of this diminished sense of agency, when errors occur in the operation of complex automated equipment, computers can be seen as the culprits. When this diminished sense of agency occurs, human dignity is eroded, and individuals may consider themselves largely unaccountable for the consequences of their computer use [2].

As mentioned previously, military settings are not the only settings in which humans can distance themselves from their actions, thus eroding accountability. Medical computer interfaces that directly affect human well-being can also provide a potential moral buffer. The Acute Physiology and Chronic Health Evaluation (APACHE) system provides an example of just how a computer decision support tool can provide a moral buffer and blur the line between human and computer moral status. The APACHE system is a tool used in hospitals to help determine at what stage of a terminal illness treatment would be futile.

While it could be seen as a decision support tool that provides a recommendation as to when a person should be removed from life support, it is generally viewed as a highly predictive prognostic system for groups, not individuals [7]. Despite the intent to use the APACHE system as a post-hoc predictive system, this type of technology could provide a moral buffer in that medical personnel could begin to rely on the system to make the decision and then distance themselves from a very difficult choice ("I didn't make the decision to turn off the life support systems, the computer did"). By allowing the APACHE system the authority to make a life-and-death decision, the moral burden could be seen as shifting from the human to the computer.

Most engineers and users recognize that automation can never be 100% reliable; however, when faced with decisions in complex sociotechnical systems such as air traffic control, or in large problem spaces such as inventory management, people have a tendency to rely heavily on automated recommendations, especially as those recommendations prove to work consistently. There is a well-established cognitive decision-making bias that often occurs when automation is introduced into the decision-making process in complex systems, known as automation bias. Humans have a tendency to disregard contradictory information in light of a computer-generated recommendation, and this automation bias can cause erroneous decisions, often at high cost [11], [14], [15]. In an experiment in which subjects were required to both monitor low-fidelity gauges and participate in a tracking task, 39 out of 40 subjects committed errors of commission; i.e., these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible [18]. The more complex and difficult a decision is, the more likely it is that automation bias will result.

In the case of the APACHE system, the designers recommend that it be used only as a consultation tool to aid in the difficult decision of removing life support, and that it not be a closed-loop system [2]. The ethical difficulty arises when technologies like APACHE become entrenched in the culture. The APACHE system has been shown to make consistent and accurate recommendations, and because of this, the propensity for automation bias and over-reliance on the technology could lead medical personnel, who are already overwhelmed, to rely increasingly upon this technology to make tough decisions.

When systems like the APACHE system are deemed to be a legitimate authority for these types of decisions, the system could in effect become a closed-loop system, which was never the original intent. Instead of guidance, the automated recommendations could become a heuristic, a rule of thumb, that becomes the default condition requiring little cognitive investigation. This potential could become especially problematic in the complex and abstract decision making involved in the control of highly autonomous weapons.
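
The distinction between a consultation tool and a closed-loop system is, at bottom, a question of which outcome is the default. The following minimal Python sketch illustrates that design distinction only; it is not based on the APACHE system's actual software, and the class, the function names, and the step of surfacing the rationale before asking for confirmation are hypothetical assumptions introduced for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Recommendation:
    """A decision aid's output: the advice plus the evidence behind it (hypothetical structure)."""
    advice: str
    rationale: List[str] = field(default_factory=list)  # supporting data and contraindications

def consultation_tool(recommend: Callable[[dict], Recommendation],
                      human_confirms: Callable[[Recommendation], bool],
                      case_data: dict) -> str:
    """Consultation pattern: the system only advises. The rationale is shown,
    and nothing happens unless a named human explicitly accepts the advice."""
    rec = recommend(case_data)
    for item in rec.rationale:
        print(f"evidence: {item}")  # the operator must at least see the evidence
    if human_confirms(rec):
        return f"action authorized by human: {rec.advice}"
    return "no action: recommendation declined after human review"

def closed_loop_system(recommend: Callable[[dict], Recommendation],
                       case_data: dict) -> str:
    """Closed-loop (anti-)pattern: the recommendation executes by default,
    inviting automation bias and shifting the moral burden onto the software."""
    rec = recommend(case_data)
    return f"action taken automatically: {rec.advice}"
```

The design point is the asymmetry of defaults: in the consultation pattern inaction is the default and the human's explicit acceptance is the event of record, whereas in the closed-loop pattern the computer's choice is the default, which is exactly the condition under which a recommendation hardens into the rule of thumb described above.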

Designing a Moral Buffer


Distance, automation bias, and psychological detachment can together create moral buffers in the use of computer interfaces for difficult decisions, whether in military or medical domains; indeed, any computerized system that could potentially inflict harm upon people is subject to this possibility. Acting through an apparatus like a computer interface instead of interacting directly with another person can diminish human dignity, especially when an operator can make a potentially fatal decision in directing weapons with the click of a mouse. The loss of human dignity and sense of agency could permit people to perceive themselves as unaccountable for whatever consequences result from their actions, however indirect.

An example of how an interface design element can aid in the erosion of accountability can be seen in Fig. 2, a screenshot of the conceptual design for a planning decision support program, based in Microsoft's Excel software package, that aids a missile strike planner in optimally assigning missiles to predetermined targets and generating the required communication messages.

Fig. 2. A military strike planning human-computer interface screenshot.

The task of strike planning carries with it great responsibility, as millions of dollars in missiles, countless hours of manpower, and the scheduling of resources are at the disposal of the planner. In addition, loss of life is likely to result from the strikes that follow this planning. With such serious responsibility, which profoundly impacts both the U.S. military and enemy forces, it is curious that the interface designers chose to represent the help feature using a happy, cute, and non-aggressive dog. A help feature is no doubt a useful tool in any human interaction with complex computing systems, but adding such a cheerful, almost funny graphic only helps to reinforce the moral buffer created by the remote and detached character of planning certain death through such an innocuous and anthropomorphic medium. The friendly icon no doubt helps to make the task easier, more familiar, and more comfortable, but it also makes the task more abstract and introduces the emotional distance discussed earlier.

In designing a human-computer interface for a weapon control system, it is imperative that engineers understand not only what physical and cognitive human limitations exist, but also how human behavior in response to automation can compromise both the system's mission and a sense of moral responsibility. In the recent conflict in Iraq, the U.S. Army's Patriot missile system shot down a British Tornado and an American F/A-18, killing three aircrew. While a number of issues led to these tragic events, a fundamental flaw in the Patriot system is that, in both cases and in typical operations, the Patriot missile operates under the management-by-exception (MBE) principle. In MBE, the Patriot missile is set to fire automatically on a threat unless stopped by a human, who has on the order of 10-15 seconds to stop the firing sequence. A more appropriate design strategy would have been management-by-consent (MBC), in which a system will fire only when directed to do so by a human. While the Army was understandably concerned about incoming Iraqi missile threats, enabling a system to fire essentially at will removes a sense of accountability from human decision makers, who can then offload responsibility onto the inanimate computer when mistakes are made.
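
The difference between the two doctrines can likewise be reduced to which outcome holds when the human says nothing. The sketch below is a minimal Python illustration of that difference; it is not the Patriot system's actual fire-control logic, and the threading-based signals, the hypothetical threat label, and the 12-second default veto window are illustrative assumptions based only on the 10-15 second figure cited above.

```python
import threading

def management_by_exception(threat: str, veto: threading.Event,
                            veto_window_s: float = 12.0) -> str:
    """MBE: firing is the default. The human can only interrupt, and only
    within a short veto window (the article cites roughly 10-15 seconds)."""
    if veto.wait(timeout=veto_window_s):
        return f"hold fire on {threat}: human vetoed in time"
    return f"engaged {threat}: no veto received, automation acted by default"

def management_by_consent(threat: str, consent: threading.Event,
                          decision_window_s: float) -> str:
    """MBC: holding fire is the default. The system engages only after an
    explicit, affirmative human command."""
    if consent.wait(timeout=decision_window_s):
        return f"engaged {threat}: human explicitly authorized weapons release"
    return f"hold fire on {threat}: no authorization given"

# With no operator input at all, MBE fires and MBC does not.
if __name__ == "__main__":
    print(management_by_exception("inbound track 42", threading.Event(), veto_window_s=0.1))
    print(management_by_consent("inbound track 42", threading.Event(), decision_window_s=0.1))
```

Under MBE, silence or a missed veto window is treated as authorization; under MBC, silence means the weapon does not fire, so accountability stays with the human who must act affirmatively.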
Understanding that responsibility and accountability for the deployment and operational use of highly autonomous weapons reside primarily with military officials, engineers can approach weapon interface design with both functionality and morality in mind. The possibility exists that interfaces for autonomous weapons, even with the most elegant user design, will provide a moral buffer that allows users to distance themselves from the potential lethality of their decisions and lessens any sense of accountability for negative consequences. Acting through a seemingly innocuous apparatus like a computer interface and making potentially fatal decisions in directing weapons through the click of a mouse can diminish human dignity, which would then allow people to perceive themselves as unaccountable for whatever consequences result.


Designer Awareness
Engineers should be aware of the potential to create a moral buffer when designing weapons that require a very quick human decision, and they should be very careful when adding elements that make a computer interface more like a form of entertainment than an interface that will be responsible for lost lives. In addition, if computers are seen as moral agents (i.e., "I was only following the recommendations of the automation"), the temptation may exist to succumb to automation bias and use highly autonomous weapons systems in a speedy and reckless manner, without the same forethought that was required by older, less user-friendly systems.

In his evaluation of the ethics of computer systems design, Brey [1] states, "To analyze design features for their social and political implications, one must first determine what social and political implications are relevant." Addressing changes in technology, Johnson [8] states, "Evaluation can and should take place at each stage of a technology's development, and can and should result in shaping the technology so that its potential for good is better realized and negative effects are eliminated or minimized." For engineers designing user interfaces, examining the ethical and societal impact of new technological advancements, such as the addition of a more abstract layer of human control for a weapons system, is critical from both a design and a moral perspective. In a prophetic statement made almost 25 years ago, philosopher Hans Jonas wrote, "The speed of technologically fed developments does not leave itself the time for self-correction ... the further observation that in whatever time is left the corrections will become more and more difficult and the freedom to make them more and more restricted" [9]. The future of autonomous weapon control interface design embodies Jonas's concerns quite literally: the ease and speed of delivering lethal weapons with the click of a mouse from hundreds of miles away, and the culture that this technology creates, leave little room for the correction of errors. Faced with the task of designing an interface for a complex sociotechnical system that can impact human life with the click of a mouse, the designing engineer must understand the social and ethical implications of critical design elements, as well as cognitive biases such as automation bias and the shifting of accountability through a potential moral buffer.

Author Information
The author is with the Massachusetts Institute of Technology, 77 Massachusetts Avenue, Room 33-305, Cambridge, MA 02139; email: MissyC@mit.edu. An earlier version of this paper was presented at ISTAS'03, Amsterdam, The Netherlands.

References

[1] P. Brey, "The politics of computer systems and the ethics of design," in Computer Ethics: Philosophical Enquiry, J. van den Hoven, Ed. Rotterdam, The Netherlands: Rotterdam Univ. Press, 1998.
[2] B. Friedman and P.H. Kahn, "Human agency and responsible computing: Implications for computer system design," in Human Values and the Design of Computer Technology, B. Friedman, Ed. Stanford, CA: CSLI Publications, 1997, pp. 221-235.
[3] B. Friedman and L.I. Millet, "Reasoning about computers as moral agents: A research note," in Human Values and the Design of Computer Technology, B. Friedman, Ed. Stanford, CA: CSLI Publications, 1997, p. 205.
[4] D. Grossman, On Killing. Boston, MA: Little, Brown & Co., 1995.
[5] D. Grossman, "The morality of bombing: Psychological responses to distant punishment," presented at the Center for Strategic and International Studies Dueling Doctrines and the New American Way of War Symp., Washington, DC, 1998.
[6] D. Grossman, "Evolution of weaponry," in Encyclopedia of Violence, Peace, and Conflict. Academic Press, 2000.
[7] P.R. Helft, M. Siegler, and J. Lantos, "The rise and fall of the futility movement," New England J. Medicine, vol. 343, no. 4, pp. 293-296, 2000.
[8] D.G. Johnson, Computer Ethics, 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2001.
[9] H. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago, IL: Univ. of Chicago Press, 1979.
[10] S. Milgram, Obedience to Authority. New York, NY: Harper and Row, 1975.
[11] K.L. Mosier and L.J. Skitka, "Human decision makers and automated decision aids: Made for each other?," in Automation and Human Performance: Theory and Applications, R. Parasuraman and M. Mouloua, Eds. Mahwah, NJ: Lawrence Erlbaum Assoc., 1996, pp. 201-220.
[12] C. Nass, J. Steuer, and E.R. Tauber, "Computers are social actors," presented at CHI '94: Human Factors in Computing Systems, Boston, MA, 1994.
[13] W.V. O'Brien, The Conduct of Just and Limited War. New York, NY: Praeger, 1981.

[14] R. Parasuraman and V. Riley, "Humans and automation: Use, misuse, disuse, abuse," Human Factors, vol. 39, no. 2, pp. 230-253, 1997.
[15] R. Parasuraman, T.B. Sheridan, and C.D. Wickens, "A model for types and levels of human interaction with automation," IEEE Trans. Systems, Man, and Cybernetics, vol. 30, no. 3, pp. 286-297, 2000.
[16] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford, CA: CSLI Publications; New York, NY: Cambridge Univ. Press, 1996.
[17] T.B. Sheridan, "Speculations on future relations between humans and automation," in Automation and Human Performance, M. Mouloua, Ed. Mahwah, NJ: Lawrence Erlbaum Assoc., 1996, pp. 449-460.
[18] L.J. Skitka, K.L. Mosier, and M.D. Burdick, "Does automation bias decision-making?," Int. J. Human-Computer Studies, vol. 51, no. 5, pp. 991-1006, 1999.
[19] D.L. Wells, "Opening remarks," presented at Swarming: Network Enabled C4ISR, McLean, VA, 2003.
