
Running Head: MACHINES AND MORALITY

Machines and Morality: A Review of the Literature


Taylor R. Dodson
RWS 1302
Riley Welcker

Introduction


As humans quickly approach the peak of the capabilities of automated computer processing, many researchers involved in the fields of artificial intelligence (also known as A.I) and robotics have begun to ask serious questions about the possibilities of the mechanical mind. The question is this: if scientists were to create a robot with intellectual abilities matching those of a living human being, how might one go about making such a machine safe? Some A.I theorists have come to the conclusion that such machines need two things: consciousness on par with the human mind, and a system of morality, that is, some way of determining the rights and wrongs of any given situation.
Since its conception, the idea of an intelligent, moral A.I has been the subject of a
considerable amount of discussion within the scientific community and beyond. The main
questions of concern are:
1. Is machine morality worth attending to?
2. What are the requirements of creating an intelligent, morally behaved A.I?
3. What are the possible dangers or complications of intelligence or morality in
machines?
Because artificial intelligence is quickly becoming a central part of society, from communication to transportation, it is important to understand just what its intellectual and ethical limits are in order to ensure the safety of any and all users of new technology. Thus, in this literature review, various opinions and arguments in response to the questions above will be presented in order to aid the public in reaching an informed conclusion.
Why do some believe that machine morality and ethics are worth attending to, and why do others disagree?


Participants in this debate often begin with the question of whether or not implementing a morality system in machines is worth consideration. In this regard, there are a few perspectives. Adam Keiper (director of the Science and Technology Program at the Ethics and Public Policy Center) and Ari Schulman (a computer science major and former research assistant at The New York Times) (2011) detail a balanced but critical opinion of the concept in their article for the journal The New Atlantis, asserting that while implementing morality and ethics in the development of artificial intelligences in an increasingly autonomous world is definitely worth taking seriously, friendly A.I theorists are far removed from the practical realities of programming and building functioning moral machines (p. 81). This, according to them, stems from a single unresolved question: which system of rules or ethics should we program robots to follow? They point out that, because answering it would require reaching a consensus not only on robotic ethics but on ethics in general, scientists have not been able to give a satisfactory answer when asked whether creating a moral robot is even possible. Hence, they are decidedly skeptical of the concept, but not at all dismissive (p. 82).
In contrast, Alan Winfield (2013), a professor of electronics engineering at the University of the West of England, offers another tenable argument: he believes that morality and ethics are an unquestionably possible and necessary part of developing intellectually advanced robots, stating that "for any cognitive system to be trusted, it must be safe and ethical" (Ethical Robots). He argues this by use of some brief examples, pointing out that we trust potentially dangerous technology such as passenger planes because they are held to stringent design and safety standards, and that any machine interacting with humans must be held to the same standards. In order to meet such standards, Winfield claims, a robot needs to be able to follow a set of strict rules of its own, in this case, moral codes.
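To make the idea of strict, machine-checkable rules concrete, consider the following minimal sketch. It is not drawn from Winfield's own work; the rule names, thresholds, and action format are assumptions invented purely to illustrate how a rule-checking layer might veto a robot's proposed actions:

```python
# Hypothetical sketch of a rule-checking layer, in the spirit of Winfield's
# claim that a trusted robot must follow strict rules of its own.
# All rules, thresholds, and action fields below are invented for illustration.

from typing import Callable, Dict, List

Action = Dict[str, object]        # e.g. {"name": "hand_object", "risk_of_harm": 0.002}
Rule = Callable[[Action], bool]   # returns True if the action is permitted

def no_harm_rule(action: Action) -> bool:
    """Reject any action whose estimated risk of harming a human is too high."""
    return float(action.get("risk_of_harm", 1.0)) < 0.01

def obey_stop_rule(action: Action) -> bool:
    """Reject any action that ignores an explicit human stop command."""
    return not action.get("stop_requested", False)

RULES: List[Rule] = [no_harm_rule, obey_stop_rule]

def permitted(action: Action) -> bool:
    """The robot carries out an action only if every rule allows it."""
    return all(rule(action) for rule in RULES)

if __name__ == "__main__":
    proposed = {"name": "hand_object", "risk_of_harm": 0.002, "stop_requested": False}
    print(permitted(proposed))  # True: no rule objects to this action
```

Even this tiny sketch exposes the issue the skeptics in this debate raise: a human still has to decide which rules exist and how quantities such as risk of harm are estimated.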


Laszlo Versenyi (1974), a philosophy professor at Williams College, uses his knowledge of philosophical concepts in relative agreement with Winfield: he believes that developing a moral robot is both possible and useful, but for a different reason. Versenyi posits that virtue is a result of a combination of skills and experience, all of which are things that, because of their allegedly objective nature, are possible to re-create via computer algorithms. What is more, he believes that such an agent would have practical use, for "a wise and virtuous robot would recognize the extent to which its own well-being depends on its cooperation with men [. . .] precisely to the extent that men and robots remain different (isomorphic but not identical), each species will have capabilities to perform what the other cannot" (p. 258). By this, he means that by creating an intelligent yet benevolent A.I, we would be creating a race that would willingly use its special properties in order to help humans accomplish things they normally could not.
Note the rhetorical significance of this argument: Versenyi presents a scenario that is logical (logos), explicitly connecting the nuances of human thinking with the possibilities of algorithms, a display of quality (if somewhat subjective) reasoning. He also subtly but effectively appeals to his audience's emotions by presenting the desirable scenario of advanced machines using their capabilities to aid mankind.
Insight from the past, however, returns us to the air of skepticism created by Keiper and Schulman; according to the article Can Computers Think? in the newspaper The Ledger, Joseph Weizenbaum (1977), a computer science professor at the Massachusetts Institute of Technology, argues that the idea of programming a moral robot is dangerous to pursue because a computer's knowledge might become limited to what a computer could understand. In other words, Weizenbaum believes that human minds and computer minds are too different to unite, and thus, a computer simply has limits that bar it from ever truly equaling the thinking processes of a human being (p. 2A).
Bernd Carsten Stahl (2004), a professor of Critical Research in Technology, agrees with Weizenbaum, stating in his article for the journal Minds & Machines that "there are no algorithms to decide a priori which data is relevant and which is not. Human beings are able to make that decision because they are in the situation and they create the relevant reality through interaction. Computers [. . .] do not know which part of the data they process is relevant for the ethical evaluation because they miss its meaning" (p. 78). In saying this, he echoes Keiper, Schulman, and Weizenbaum's idea that the human mind has capacities too complex to recreate by means of code: computers, as he puts it, simply cannot understand information in terms of what is relevant and what is not, and thus cannot truly decide how to behave morally in most situations, given the enormous amount of information any moral scenario would require them to process and sort.
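Stahl's point about relevance can be illustrated with a small, hypothetical sketch: in code, what counts as "relevant" has to be fixed in advance by the programmer, for instance through a hard-coded keyword list, rather than discovered by the machine itself. The keywords and example observations below are invented purely for illustration:

```python
# A toy illustration of the limitation Stahl describes: the machine does not
# decide what is morally relevant; the programmer does, here via a fixed
# keyword list chosen in advance. All terms and examples are invented.

ETHICALLY_RELEVANT_TERMS = {"injury", "consent", "deception", "privacy"}

def filter_relevant(observations):
    """Keep only observations containing a term the programmer deemed relevant."""
    return [obs for obs in observations
            if any(term in obs.lower() for term in ETHICALLY_RELEVANT_TERMS)]

observations = [
    "The user withdrew consent for data sharing.",
    "The room temperature is 21 degrees.",
    "A bystander could suffer an injury if the door closes now.",
]
print(filter_relevant(observations))
# Anything that does not match the pre-chosen terms never enters the ethical
# evaluation at all, however important it might be in context.
```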
Further debate on this question can be found among the general public as well. In a survey titled Morality and Artificial Intelligence, conducted from March 13 to March 16, 2015, using the online SurveyMonkey application, 19 individuals were questioned on their views on this subject. The results are pictured in the accompanying graphic, in which the percentage of those who believe morality in artificial intelligences is worth attending to is represented by the green bar, while the percentage of those opposed is displayed in blue. As shown, most of these individuals are as skeptical as Stahl, Keiper, and Weizenbaum. Upon being questioned further, the majority of participants either gave criticisms similar to those of Keiper and Schulman (asking just whose moral system the machines would be operating on) or expressed doubts, much like Stahl, about the idea that something as nuanced as morality could ever be created by means of code. These results, while by no means generalizable or conclusive given the small sample size, do give insight into opinions about machine morality from a layperson's perspective.

What do some propose as the requirements for creating morally behaved, human-like robots?

What, exactly, does the mind of a moral robot contain? Keiper and Schulman, by use of a seemingly trivial yet detailed hypothetical example, stress that the requirements for the morally infallible being A.I theorists imagine may be difficult to fulfill:
A friendly robot has been assigned by a busy couple to babysit [. . .] During the day, one
of the children requests to eat a bowl of ice cream. Should the robot allow it? The
immediate answer seems to be yes [. . .] Yet if the robot has been at all educated in human
physiology, it will understand the risks posed by consuming foods high in fat and sugar. It
might then judge the answer to be no. Yet the robot may also be aware of the dangers of a
diet too low in fat, particularly for children [. . .] before the robot could even come close
to acting [. . .] it would first have to resolve a series of outstanding questions in medicine,
child development, and child psychology, not to mention parenting and the law [. . . ] (pp.
84-85)


By presenting the scenario above, Keiper and Schulman assert that a moral machine needs to be able to weigh countless complex factors at once before it can even begin to make moral decisions.
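A brief, hypothetical sketch may help show why. Even a drastically simplified version of the babysitting scenario, reduced to three invented considerations, already produces conflicting verdicts that the program itself has no principled way to resolve:

```python
# A drastically simplified, invented version of Keiper and Schulman's
# ice-cream scenario: three toy considerations already disagree, and nothing
# in the code says which consideration should outweigh the others.

def grants_request(request):
    return request == "ice cream"                   # the child asked for it

def avoids_high_sugar(request):
    return request not in {"ice cream", "candy"}    # nutrition concern says no

def avoids_overly_low_fat_diet(child_age):
    return child_age < 10                           # too little dietary fat is also a risk

verdicts = {
    "grant the request": grants_request("ice cream"),
    "avoid high-sugar foods": avoids_high_sugar("ice cream"),
    "avoid an overly low-fat diet": avoids_overly_low_fat_diet(7),
}
print(verdicts)
# {'grant the request': True, 'avoid high-sugar foods': False,
#  'avoid an overly low-fat diet': True}
```

Deciding whether the two permissive verdicts outweigh the one objection would require exactly the medical, developmental, and legal knowledge Keiper and Schulman describe, which is the heart of their skepticism.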
In this source, the use of rhetorical devices is quite notable, mainly in the form of pathos and logos: not only does the example of a care-taking robot highlight the personal harm that could arise from moral complications in A.I (thus warranting the reader's concern), but it is also a simple yet profound example that most readers can understand and, thus, find logic in.
Professor Alan Winfield's (2014) idea is similar to Keiper and Schulman's, although his explanation is more global and abstract; in his article for The Guardian, he explains that a human-level A.I would need to be "generalist, with the capacity to learn, understand meaning and context, and be self-aware," for that is how morality is acquired by humans ("Artificial Intelligence Will Not Turn into a Frankenstein's Monster"). By this, he means that, in order for a robot to have the human-level capacities needed for morality, it must, like a human being, be a fully realized person.
Eliezer Yudkowsky (2001), a well-known artificial intelligence researcher and co-founder of the Machine Intelligence Research Institute, agrees with Winfield insofar as an advanced A.I must have its own agency to some extent. However, unlike Winfield, he stresses that a moral A.I must be inherently friendly, stating that it "will perform the Friendly action even if one programmer gets the Friendly action wrong; a truly perfect Friendly AI will perform the Friendly action even if all programmers get the Friendly action wrong" (p. 5).
Unlike Yudkowsky, Luciano Floridi, a professor in philosophy and ethics, and J. W. Sanders, a member of the Information Ethics Group at the University of Oxford (2004), do not believe that an agent truly capable of morality is simply friendly at all times, claiming that "an action is said to be morally qualifiable if and only if it can cause moral good or evil. An agent is said to be a moral agent if and only if it is capable of morally qualifiable action" (p. 364). In other words, they believe that a truly moral artificial intelligence must be capable of both good and evil; its actions must simply be able to be measured from a moral standpoint.
James Hughes (2014), a professor of sociology, offers a more spiritual, less pragmatic perspective than the sources above; approaching the matter as a Buddhist, he claims that a moral human being possesses five constantly evolving substrates (body, sensation, perception, volition, and consciousness); thus, an A.I equal to a human in morality, understanding, and intelligence must have these five things as well (p. 70).
What are some concerns about the dangers/complications of an intelligent,
moral A.I?
The primary concern of many A.I theorists is the possible risk of such technology. Winfield (2014), however, while he has stressed the importance of ethics in the construction of machinery in previous writings (such as his Ethical Robots slide show cited above), says that "we don't need to be obsessing about the risk of super-intelligent A.I, but I do think we need to be cautious and prepared." He goes on to justify this stance by stating that A.I with the ability to operate on dangerous levels is simply too far in the future to warrant great concern at the present time, and thus presents little risk ("Artificial Intelligence Will Not Turn into a Frankenstein's Monster").
Keiper and Schulman (2011), on the other hand, cite possible complications of a moral A.I as their reason for being skeptical about existing theories; as mentioned in the previous section, they assert that morality has too many variations, too many shades of gray, for an A.I to handle, stating that the child-and-ice-cream example given above comprises "just a few of the countless imaginable ethically fraught situations whose solutions cannot obviously be found by increased powers of prediction and computation" (p. 85).
Participants in the Morality and Artificial Intelligence survey seem to agree with Keiper and Schulman's sentiments. The accompanying graphic displays the percentages of two opposing opinions: those who believe that giving machines morals is dangerous (green) and those who do not (blue). Out of the 19 individuals surveyed, over 90% believe that implementing morality systems in robots is a dangerous affair, as opposed to the roughly 5% who do not. Upon being asked for their reasoning, many stated that morality was simply too complex to be safely implemented through the numerical operations of code, much like Schulman and Keiper, while others expressed concern over using human morality in general as the basis for robot behavior, describing it as inherently imperfect and thus dangerous.
Conclusion
Perspectives on the matter of moral A.I, it seems, are mixed: while Winfield, Versenyi, and (although they are skeptical) Schulman and Keiper agree that robot morality is worth attending to for safety reasons, Weizenbaum, Stahl, and the survey participants seem opposed to the idea due to technological limitations and the inherent nature of morality. In addition, nearly every source presented seems to agree that intelligent, moral robots need some sense of self or agency in order to be moral, although they also seem to agree that morality in robots is a matter of considerable complication and even danger (only Winfield believes that the prospect is too distant to consider so deeply at the present time).

Overall, it seems that many people participating in this discussion are beginning to reach a general agreement that robot morality could be difficult, perhaps impossible, and dangerous. Only time and further innovation will tell just how accurate each of their unique perspectives and theories is.

References
Can Computers Think? (1977, May 8). The Ledger, p. 2A.
Carsten Stahl, B. (2004). Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. Minds & Machines, 14(1), 67-83. Retrieved from: http://0-web.b.ebscohost.com.lib.utep.edu/ehost/pdfviewer/pdfviewer?sid=d7e34d36-b6a8-487d-90d8-b845a96f57de%40sessionmgr110&vid=14&hid=107
Floridi, L., & Sanders, J. W. (2004). On The Morality of Artificial Agents. Minds & Machines, 14(3), 349-379. Retrieved from: http://0-web.a.ebscohost.com.lib.utep.edu/ehost/pdfviewer/pdfviewer?sid=a28f762a-5049-4773-90b7-d39433824875%40sessionmgr4004&vid=1&hid=4114
Hughes, J. (January 10, 2014). Compassionate AI and Selfless Robots: A Buddhist Approach. In
P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot Ethics: The Ethical and Social
Implications of Robotics (pp. 130-139). Cambridge, Mass: MIT Press.
Keiper, A., & Schulman, A. (2011). The Problem with 'Friendly' Artificial Intelligence. The New Atlantis: A Journal of Technology and Society, 32, 80-89. Retrieved from: http://0-web.a.ebscohost.com.lib.utep.edu/ehost/pdfviewer/pdfviewer?sid=dbf58df2-1a93-48b1-9eb0-ea3c3234f637%40sessionmgr4002&vid=1&hid=4114
Morality and Artificial Intelligence [Online survey]. (2015). Retrieved from: https://www.surveymonkey.com/s/7FLVYX3
Winfield, A. (2014, August 9). Artificial intelligence will not turn into a Frankenstein's monster.
The Guardian. Retrieved from:
http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield
Winfield, A. (2013, October). Ethical robotics: Some technical and ethical challenges
[Presentation slides]. Retrieved from:
https://drive.google.com/file/d/0BwjY2P_eeOeiOThZX3VlRFU1Xzg/view
Versenyi, L. (1974). Can Robots Be Moral? Ethics, 84, 248-259.

Yudkowsky, E. (2001). Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal
Architectures. Machine Intelligence Research Institute. Retrieved from:
http://intelligence.org/files/CFAI.pdf

