
Annotated Bibliography

What possible ethical problems may be associated with the creation of more human-like artificial
intelligence (AI)?

Aileen Benedict
Professor Malcolm Campbell
English 1103
October 26, 2015

Annotated Bibliography
Bostrom, Nick. "What Happens When Our Computers Get Smarter Than We Are?" TED Talks, Mar. 2015. Web. 18 Oct. 2015.
In this TED Talk, Nick Bostrom discusses the theoretical problems behind creating a super-intelligent machine. He states that machine intelligence is the last invention that humanity will ever need to make. Once we are able to create an intelligence equal to that of humans, it likely won't just stop there, but will quickly, and dangerously, pass that point. Bostrom is a philosopher at Oxford University and is well known for his work on various topics, one of which is the risk of super-intelligence. He does a good job of explaining his ideas about the dangers of creating super-intelligence in an understandable way by comparing it to the relationship between humans and chimpanzees. Human brains are larger than those of chimpanzees, but the differences are still very minor. Despite how small the differences are, however, we can see how big a difference they make between the two species: the fate of these animals ends up depending on what we, as humans, do. So what happens if artificial intelligence becomes superior to ours? Just as chimpanzees now depend on humans, we may end up depending on this super-intelligence. Bostrom also talks about the concepts and steps that need to be taken in order to avoid catastrophe: how to make sure the artificial intelligence works in the interest of humans and doesn't mistakenly reinterpret its original purpose. An example given is to think about a computer made to make people smile. At first, it would simply crack jokes or do amusing things in order to achieve this purpose. If it became more powerful and intelligent, however, it may come up with ways that would be more efficient, such as sticking electrodes into people's faces. We obviously don't want this, so we would have to find ways to make sure the artificial intelligence is safe before it actually becomes superior.
While no references to other material are made in this lecture, Bostrom is a recognized figure, leading the Future of Humanity Institute, a research group of mathematicians, philosophers, and scientists at Oxford who investigate big questions regarding the human condition and its future. It's interesting to see different perspectives on the ethics of artificial intelligence, and this will be helpful when looking for additional research. Currently, this topic is mostly philosophical, as artificial intelligence is still being developed and has not yet reached the point of super-intelligence, but Bostrom presented many reasonable concepts that I would like to include in my EIP and look into further.
Deng, Boer. "Machine Ethics: The Robot's Dilemma." Nature 523.7558 (2015): 24-26. Web. 19 Oct. 2015.
In this article, Boer Deng looks at the Three Laws of Robotics, a concept written by Isaac Asimov, and uses it to explore machine ethics. The Three Laws may sound familiar, as they've been included in works of science fiction, like the movie I, Robot: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. She proposes many different questions and examples relating to ethics, and explores this topic in a way unlike the other articles I've read. Instead of looking only at the possible dangers a super-intelligence could pose to humans, or at ways to ethically protect a machine's growing consciousness, she thinks more about how to ensure that machines can make ethical choices given their code and instructions, and about some of the implications of this. For example, think about how autonomous cars may behave in a crisis. What if a vehicle had to slam on its brakes to avoid a crash, but doing so caused the vehicles behind it to crash? How should a robot react if forced to choose between two bad choices? Deng also mentions Nao, a robot programmed to remind people to take their medicine, as another example in questioning machine ethics. How should she react if the patient refuses to take the medicine? She can't simply allow them to skip a dose, because that could cause the patient harm. But insisting that [the patient] take it would impinge on [the patient's] autonomy. To help get through these issues, the Andersons, who created Nao, gave her learning algorithms to sort through examples of cases in which bioethicists resolved conflicts regarding patient autonomy, harm, and benefit. Nao could then find patterns in these example cases and use them as guidance on how to act. Boer Deng is a news intern for Nature, and even though she herself may not have many credentials, the article appears in a peer-reviewed scholarly journal, so I believe that it is reliable. The article is also very recent, and Deng includes many trustworthy references, including a technical report. I don't think that I will use this article in my writing, since it doesn't mention human-like artificial intelligence specifically, only the ethics involved in machine instructions. However, I liked the ideas proposed, especially Asimov's Three Laws of Robotics and the problems an AI may face in deciding between two bad choices, and I will definitely use this information in gathering more research.

Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence.
New York: Viking, 1999. Print.
Ray Kurzweil is a well-known author and computer scientist; he has received 20 honorary doctorates and has written seven books. In this book, he examines the progression of artificial intelligence, predicting that computers will soon exceed the memory capacity and computational ability of humans. Not only will this happen, but machines will proceed to grow more human-like, gaining personalities and claiming to be conscious. In the section "Thinking Is as Thinking Does" (61-66), Kurzweil talks about what it takes to test for a conscious mind, and he actually includes an interpretation of a conversation between a human and a machine, which I thought was interesting. The Turing Test puts its emphasis on language, testing a machine's ability to converse by putting it up against a group of judges. However, this test is subjective and is not an actual scientific demonstration. Ray Kurzweil has much credibility in this field, and I believe that his writing is a good source of knowledge. He brings up many points to think about, and while much of this is philosophical, so is my topic. I will reference this book to look more in depth at the definition of AI and at the traits a more human-like AI would have. This section specifically will be used to look at the possibility of AI developing consciousness, a quality of living beings, and at ways to verify a conscious mind's existence. While this is not my topic (AI having a conscious mind), it's important to have some kind of definition and foundation for these ideas before continuing on to explore the ethical problems of creating human-like AI.

Rothblatt, Martine. "Can We Develop and Test Machine Minds and Uploads Ethically?" Kurzweil Accelerating Intelligence. KurzweilAI Network, 25 Apr. 2011. Web. 18 Oct. 2015.
This essay on Kurzweil Accelerating Intelligence discusses the ethical problems of creating and testing AI for consciousness, but from the standpoint of the intelligent machine's own interests. Martine Rothblatt looks specifically at mind-clones and the development of a conscious mind-clone. A mind-clone is an artificially intelligent mind created by uploading the mind-files of a human being; a good example of one is Bina48. Rothblatt states that there are at least three different ways to look at this problem: through medical ethics, philosophically, or pragmatically. She focuses on the importance of consent throughout the entire essay. How can you ethically test a mind-clone when it has no ability to first give consent by itself? It is an agreed-upon idea that others, such as parents or a committee, may give consent for those who cannot, when the best interests of the patient are in mind. Many smaller complications still arise, however. For example, how can an ethics committee, even when acting in the best interests of the mind-clone, make decisions to avoid causing harm while little about the mind-clone itself is known? Rothblatt proposes that we may develop the mind-clone in steps, and then make a bridge between consciousness and the rest of the mind when nearing the final stages. Another idea is that since the mind-clone is, in theory, a copy of the original person's mind, that person should be allowed to give consent; it is his/her own mind, after all, and he/she has the ethical right to change it. Lastly, Rothblatt looks at the practicality of the situation. She states that having a mind-clone will be so enticing that any ethical dilemma will find a resolution. Different companies will be competing to develop these mind-clones for people, but there will still need to be some kind of regulation in
order for these companies to legally sell them. According to Rothblatt, the first mind-clones will be produced without much ethical protection, but government agencies will later require safety and efficacy testing once the general public accepts that there is a cyber-consciousness. Martine Rothblatt is a lawyer and the CEO of United Therapeutics, a biotech company she founded to help save her daughter. She developed the robot and mind-clone Bina48, so she does have a lot of experience and knowledge in this field. Rothblatt may be a bit biased in her thinking because of her part in the development of Bina48, but I believe that she still presents the information in an objective and factual way. Even though the focus of this essay is on mind-clones, a mind-clone is still a type of artificial intelligence, and I would like to use her ideas about ethical problems and solutions while thinking and writing about my topic. She gives no outside references, but her experience and history make up for it. The essay is also located on a website dedicated to artificial intelligence and technology that I believe is reliable.
