
Perhaps one of the biggest problems with the future of Artificial Intelligence is the prospect of machines eventually gaining consciousness. While this concept might seem like it is deep in the realm of science fiction, it actually has very little to do with science at all. As philosopher Gary Gutting explains in his essay "Mary and the Zombies: Can Science Explain Consciousness?", the question of consciousness is one that fits best within the realm of philosophy (or theology, depending on the person), because there is virtually no empirical understanding of the subject. We don't know what consciousness is or what could or couldn't be considered conscious. The only thing each individual human knows for a fact is that they themselves are conscious.
One might ask why this is such a big problem with the advent of AI. That becomes apparent when dealing with the idea of human exceptionalism. Human exceptionalism is the belief that humans have inherently more existential significance than anything else in the natural world. The main basis for this concept is that language, rationality, and essentially any trait that can be linked to intelligence form the foundation of being. It is an idea that is typically seen in opposition to animal rights, but with the possibility of sentient AI in the future, it seems inevitable that it will have to oppose any potential machine rights as well. Human exceptionalism, displayed in acts like deforestation, pastoral farming, pollution, and poaching, seems to be the dominant theory regarding our relationship to other beings, or to what we think are non-beings. Regarding the possibility of sentient AI, this brings us to a new moral dilemma: if machines ever appeared to display signs of consciousness, should we as humans treat them as moral equals?

Well, one argument that may help fortify the side of human exceptionalism against machine sentience is John Searle's Chinese Room. In this famous thought experiment, Searle imagines himself in a room with Chinese speakers outside of it. He, having no idea how to speak Chinese, is given a set of instructions that tell him how to react when a note written in Chinese is slid under the door. Specifically, the instructions tell him how to draw the correct characters on a separate sheet of paper in response to the characters on the original note. Once he writes down the proper response, he slides it back under the door to the other side. Hypothetically, if Searle were working very quickly and the instructions were very efficient, the Chinese speakers on the other side of the door would get the impression that they were passing notes to a Chinese speaker who understood the conversation they were having.

This case gives an example of how a talking computer could be designed with the best and most complex of algorithms (in this case, the instructions that Searle was following regarding the characters) and still not have any understanding of the conversations it is a part of. Therefore, in Searle's mind, a computer can never actually be considered sentient, since the activities in which it plays a role don't require it to actually possess consciousness or mind. While it is easy to agree that displaying signs of intelligence does not necessarily make one conscious, Searle's thought experiment doesn't completely rule out the possibility of machine consciousness. In the years following the Chinese Room, there have been plenty of rebuttals that come down on the side of possible consciousness.
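
To make the point concrete, the room can be modeled as nothing more than a lookup table: symbols come in, rule-matched symbols go out, and no step requires comprehension. Here is a minimal Python sketch of that idea; the rule table, the example phrases, and the chinese_room function are all invented for illustration, not anything from Searle's paper.

# A toy version of the Chinese Room: the "room" maps input notes to
# output notes by pure rule lookup. Nothing in here understands Chinese.
# The rules and phrases below are made up for illustration.
RULES = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def chinese_room(note: str) -> str:
    """Follow the instruction book: look up the note, copy out the reply."""
    return RULES.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    # A fluent-looking reply is produced with zero comprehension.
    print(chinese_room("你好吗？"))

However large and sophisticated the rule table grows, the lookup step never requires the program to understand the symbols it shuffles, which is exactly Searle's point.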

Two very similar rebuttals are the Systems Reply and the Brain Reply, which both argue, in a sense, that just because the individual components of a system do not understand the conversation in which they are taking part doesn't mean that the system as a whole doesn't. In the case of the Brain Reply specifically, it is argued that while the individual cells and structures of the brain do not necessarily experience consciousness, the whole brain clearly does. Part of the problem with Searle's argument is the inherent assumption that humans scientifically know where their own consciousness comes from.
According to David Chalmers's philosophical zombie concept, it is theoretically possible to display all the outward signs of consciousness without being conscious. Since morality is, at its core, a means of encouraging behavior by rational beings that yields positive results for other conscious beings, and since we currently have no real way of objectively determining who or what is conscious and what is not, we have a moral obligation to treat all beings that show signs of feeling and experience with the same ethical consideration. While I currently don't believe that machines will ever be conscious or experience existence the way that we do, I understand that I have just as much evidence that they won't as I have that the reader of this blog is conscious.
