
Examination number: B099411

Word Count: 1492

Could a computer ever think?

In this essay I will argue that computers could never think. I will first discuss the link
between the question and mind-body theory. Then I will focus on functionalism and the
counterarguments it faces, namely Searle's Chinese Room and Block's Absent Qualia
argument, and how these bear on the question. I will address the objections both
counterarguments face, and conclude that it is not possible for computers to think.

I will define 'computer' as a digital computer (a Turing machine), as this is how it is
defined in Searle's paper, which I will address in this essay.1 I will also define
'thinking' as exhibiting mental states, such as thoughts or emotions.

The answer to the question 'Can computers think?' rests heavily on the mind-body
theory we choose to apply. The chosen theory determines whether computers have
what is necessary for exhibiting mental states. For example, on substance dualism,
computers could never think, because they lack the mental substance that is present
only in living beings.2 On functionalism, computers would be able to think, because
functionalism defines mental states by the functional roles they carry out.3 Since we
can conceive of a computer perfectly replicating the functional roles of human mental
states (even if current technology cannot), computers could think. I will focus on
functionalism, as it is the most prominent mind-body theory on which computers could
think. Therefore, in order to argue that computers cannot think, it is necessary to show
that functionalism faces problems it cannot satisfactorily address.

The first key argument against functionalism is Searle's Chinese Room thought
experiment. Searle sets up a system consisting of a non-Chinese-speaking individual
locked inside a room, who is given Chinese symbols and asked to manipulate them
using a manual.4 The symbols fed in correspond to questions, and the symbols the
human outputs correspond to answers. From outside the room, it looks as if the human
were fluent in Chinese, because the symbols being output are perfect answers to the
questions asked.5 However, the human cannot understand a single word of Chinese: all
they are doing is following the manual and manipulating the symbols without
understanding their meaning. Searle claims that this is analogous to how a computer
works; the manual corresponds to the computer program, and the human to the
computer. Therefore, all computers can ever do is manipulate symbols without grasping
their semantic content (understanding their meaning), which is why they can never
think.6

1 Searle 1983, p. 36.
2 Robinson 2016.
3 Levin 2013.
4 Searle 1983, p. 32.
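To make the purely syntactic character of the room concrete, here is a minimal sketch (my illustration, not part of Searle's paper): the 'manual' becomes a lookup table pairing input strings with output strings, and nothing in the program represents what any symbol means. All entries are invented placeholders.

```python
# A minimal sketch of purely syntactic symbol manipulation, in the spirit of
# Searle's Chinese Room. The "manual" is a lookup table pairing input strings
# with output strings; the entries are invented placeholders, and nothing in
# the program represents what any symbol means.

RULEBOOK = {
    "你好吗？": "我很好。",      # hypothetical rule: question -> canned answer
    "树是什么？": "树是植物。",  # another invented entry
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever the manual pairs with the input, or a default symbol.

    The function never consults meanings, only the shape of the input string.
    This is Searle's point: however fluent the output looks from outside the
    room, syntax alone involves no understanding."""
    return RULEBOOK.get(input_symbols, "请再说一遍。")

print(chinese_room("你好吗？"))  # looks like fluent Chinese from outside
```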

Searle anticipates and successfully refutes several objections, such as the systems reply7
and the robot reply.8 He does not, however, discuss the possibility of implementing a
different program: one that would teach the room not only to manipulate symbols, but
also to understand their meaning (i.e. their semantic content). If such a thing is possible,
it would invalidate Searle's Chinese Room argument. This suggestion was put forward by
Simon and Eisenstadt, who claim that by implementing the right program, the Chinese
room can be taught to understand a language.9 I will, however, follow a different
argument from theirs.

First we need to ask: what constitutes human understanding? Let us continue with the
example of Chinese. Intuitively, to understand what 树 means, you need to be able to
connect it with the English word 'tree', an image of a tree, and so on. Therefore, it seems
that what constitutes understanding is being able to make connections between certain
words, objects and concepts. A program which teaches this kind of understanding is
conceivable: instead of merely teaching the Chinese room to manipulate symbols, the
program could be expanded to also create connections and associations between
symbols. For example, when the room is given the symbol 树, it could also be given a
photograph of a tree and the English translation of the word.

5 Ibid., p. 32.
6 Ibid., p. 32.
7 Ibid., p. 34.
8 Ibid., p. 35.
9 Simon and Eisenstadt 2002, p. 95.

One might, however, argue that this definition of understanding is too simplistic: in the
human mind there are millions of complex connections, and making the room learn
these is impossible.

To that I would respond by pointing to human learning processes. Consider Bob, who
is beginning to learn Chinese. At first he has no syntactic or semantic knowledge of the
language. He is given a textbook which teaches him, for example, that 树 means tree.
This creates a connection in his mind between the two, and he gains some semantic
understanding. Just as the textbook teaches Bob to associate 树 with 'tree' and thereby
makes him gradually understand, a computer's program could make the computer
associate 树 with 'tree', a photograph of a tree, types of trees, and everything else that
constitutes the human understanding of what a tree is. Both Bob and the computer
would begin with zero understanding of Chinese, and this would develop gradually as
more connections are added.
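A minimal sketch of this connection-based picture of understanding (my illustration, with invented class names and data, not a real learning system): each symbol accumulates associations such as translations, images, and related concepts, just as Bob's textbook adds links one lesson at a time.

```python
# A minimal sketch of understanding-as-connections: each symbol accumulates
# associations (translations, images, related concepts), mirroring how Bob's
# textbook adds links one at a time. All names and data are illustrative.

from collections import defaultdict

class AssociativeLexicon:
    def __init__(self):
        # symbol -> set of associated items (words, image files, concepts)
        self.links = defaultdict(set)

    def learn(self, symbol: str, association: str) -> None:
        """Add one connection, the way a single textbook lesson does."""
        self.links[symbol].add(association)

    def understanding_of(self, symbol: str) -> set:
        """'Understanding' here is just the set of accumulated connections."""
        return self.links[symbol]

lexicon = AssociativeLexicon()
lexicon.learn("树", "English word: tree")
lexicon.learn("树", "image: tree_photo.jpg")
lexicon.learn("树", "concept: plant with a trunk and branches")
print(lexicon.understanding_of("树"))  # grows gradually, like Bob's
```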

Now a further question arises: if we accept that computers are capable of
understanding, must we necessarily accept that computers can think? Intuitively,
thought seems to include much more than understanding. While understanding is based
on knowledge and connections between things, thought seems to include other mental
states too, such as feelings, emotions, and the qualities of experience (qualia). Therefore,
even if we accept that computers can be taught to understand and Searle's argument
fails, it is still unclear whether computers can think.

To address this issue, I will continue to the other major objection to functionalism:
Block's Absent Qualia argument. Block's argument is broader than Searle's, as it
considers mental states as a whole and attacks functionalism in general. Block describes
two systems which intuitively cannot have mental states, but which according to
functionalism do, thus creating a problem for functionalism.10 The first system is a being
('Blockhead') which is identical to you in all ways, except that instead of having 1 billion
neurons inside its brain cavity, 'Blockhead' has 1 billion homunculi, each of which
carries out the role of a neuron. The second system is the 'Nation of China', which
replaces all your neurons with 1 billion Chinese people (who again carry out the same
functional roles as your neurons).11 Block argues that most of us would intuitively deny
that 'Blockhead' exhibits the exact same mental states as you do, despite being
functionally identical to you. It is even clearer that the Chinese system cannot exhibit the
same mental states as you: we are not even capable of imagining what it would mean
for a whole nation to collectively exhibit mental states.12 This is a major problem for
functionalism, as it would mean that mental states cannot be defined solely in terms of
functional roles.

10 Block 2003, pp. 222-3.
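To make the functionalist premise behind these substitutions concrete, here is a minimal sketch (my framing, not Block's): functionalism cares only about the input-output role a unit plays, so a neuron and a homunculus implementing the same rule count as interchangeable. The toy threshold rule and class names are invented for illustration.

```python
# An illustrative sketch of why functionalism licenses Block's substitutions:
# only the input-output role matters, so a neuron and a homunculus count as
# the same "unit" if they compute the same function. The threshold rule and
# names are invented; this is not Block's own formulation.

from typing import Protocol

class FunctionalUnit(Protocol):
    def fire(self, inputs: list[float]) -> float: ...

class Neuron:
    def fire(self, inputs: list[float]) -> float:
        return 1.0 if sum(inputs) > 0.5 else 0.0  # toy threshold rule

class Homunculus:
    def fire(self, inputs: list[float]) -> float:
        # a tiny person following the same rule by hand
        return 1.0 if sum(inputs) > 0.5 else 0.0

def run(unit: FunctionalUnit, inputs: list[float]) -> float:
    # from the system's point of view, the implementations are indistinguishable
    return unit.fire(inputs)

assert run(Neuron(), [0.3, 0.4]) == run(Homunculus(), [0.3, 0.4])
```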

Several objections can be raised against Block, but none of them significantly challenges
his argument. One such objection is that the Chinese system works on the wrong
timescale, and therefore cannot be functionally equivalent to a human.13 Block replies
that if your own mental processes were slowed down, surely they would not change in
nature; they would just be slower.14 This is a satisfying response that we can intuitively
agree with. A further objection that Block anticipates is the claim that the Chinese
system could be disrupted by various outside factors, such as floods, and so cannot be
functionally identical to a human brain. Block responds that these disruptions are
irrelevant to the mental inputs and outputs that are responsible for mental states, and
therefore have no impact on the functional roles that are equivalent to mental states.15
This too seems intuitively plausible: humans can also suffer from 'system disruptions',
such as our body being physically harmed.

Due to the lack of strong counterarguments against Block, it seems that his argument
holds and functionalism fails as a theory, since it attributes mental states to systems
that we do not deem capable of exhibiting them. Therefore it seems that computers
cannot think, as functionalism is the main mind-body theory on which they could think.
While it is possible to counter Searle's Chinese Room objection and show that
computers can be taught a kind of semantic understanding, this is not sufficient for
computers being able to think (because thought seems to encompass more than simply
the ability to understand).

11 Ibid., p. 223.
12 Ibid., p. 223.
13 Ibid., p. 224.
14 Ibid., p. 225.
15 Ibid., p. 225.

However, we need to keep in mind that the question of whether computers can think
rests heavily on the mind-body theory we apply. Since there is currently no universally
accepted theory, the question cannot be closed with a definite 'no': we must allow for
the possibility of some as yet undiscovered mind-body theory that would face no
objections, and on which computers would be able to think. But as I have concluded
above, on our present mind-body theories, it seems that computers could never think.

Bibliography

• Block, Ned. "Troubles with Functionalism." Philosophy of Mind: Contemporary Readings.
Ed. Timothy O'Connor and David Robb. New York: Taylor & Francis, 2003. 222-233.
Print.

• Levin, Janet. "Functionalism." Stanford Encyclopedia of Philosophy. Stanford University,
3 July 2013. Web. 25 Oct. 2017. <https://plato.stanford.edu/entries/functionalism>.

• Robinson, Howard. "Dualism." Stanford Encyclopedia of Philosophy. Stanford University,
29 Feb. 2016. Web. 25 Oct. 2017.
<https://plato.stanford.edu/entries/dualism/#SubDua>.

• Searle, John R. "Can Computers Think?" Minds, Brains, and Science. Cambridge, MA:
Harvard University Press, 1983. 28-41. Print.

• Simon, Herbert A., and Stuart A. Eisenstadt. "A Chinese Room That Understands." Views
into the Chinese Room: New Essays on Searle and Artificial Intelligence. Ed. John Preston
and Mark Bishop. Oxford: Clarendon, 2002. 95-108. Print.
