
Harnad, S. (2003) Can a Machine Be Conscious? How?

Journal of Consciousness Studies (to appear)

Can a machine be conscious? How?

Stevan Harnad
Centre de Neuroscience de la Cognition (CNC)
Université du Québec à Montréal
CP 8888 Succursale Centre-Ville
Montréal, Québec, Canada H3C 3P8
harnad@uqam.ca
http://cogsci.soton.ac.uk/~harnad/
Abstract: A "machine" is any causal physical system, hence we are machines, hence machines can be conscious (feel). The question is: which *kinds* of machines can feel, and how? Chances are that robots that can pass the Turing Test, completely indistinguishable from us in their behavioral capacities, can feel, but we can never be sure (because of the "other-minds" problem). And we can never explain or understand *how* or *why* they manage to feel, if they do, because of the "mind/body" problem. We can only know how they pass the Turing Test. That is why this problem is not just *hard* -- it is insoluble.

Asking whether a machine can be conscious is rather like asking whether one has stopped beating one's wife: The question is so heavy with assumptions that either answer would be incriminating!

The answer, of course, is: It depends entirely on what you mean by "machine"! If you mean the current generation of man-made devices (toasters, ovens, cars, computers, today's robots), the answer is: almost certainly not.


Empirical Risk. Why "almost"? Two reasons, the first being the usual one: (1) empirical risk. We know since at least Descartes that even scientific "laws" are merely very probable, not certain. Only mathematical laws -- which describe consequences that follow provably (i.e., on pain of contradiction) from our own assumptions -- are necessarily true. But this certainty and necessity are
unnecessary for physics; almost-certainty will do. "Can a particle
travel faster than the speed of light?" Almost certainly not (at least
on current-best theory -- or at least the last word of it that trickled
down to this non-physicist). "Could a particle travel faster than
light?" That certainly is not provably impossible, though it might be
impossible given certain assumptions. (But those assumptions are
not necessarily correct.)

The Other-Minds Problem. Empirical risk besets all scientific hypotheses, but let us agree that it is not something we will worry about here. There is no need for roboticists to be holier than physicists. The second reason for the "almost" is peculiar to robotics, however, and it is called (2) the "other-minds problem" (Harnad 1991): There is no way to be certain that any other entity than myself
is conscious (I am speaking deictically: please substitute yourself for
me, if you too are conscious). This too we owe to Descartes.

In the long history of philosophy the other-minds problem has been puzzled over for a variety of reasons, usually variants on questions about what one can and cannot know for sure. These epistemic questions are interesting, but we will not worry about them here, for the usual reason, which is the following: It looks on the face of it as if the right strategy for handling the other-minds problem is identical to the strategy for handling empirical risk, namely, to note that although we can only be 100% certain about two things -- about (1) mathematics and about (2) our own consciousness -- all else being just a matter of probability, some things, such as scientific laws and the consciousness of our fellow human beings, are nevertheless so close to 100% sure that it is a waste of time worrying about them. (Let us also note, though, that in the empirical science of robotics, that extra layer of risk that comes from the other-minds problem might just come back to haunt us.)

So let us agree not to worry about the other-minds problem for now:
People other than myself are almost certainly conscious too, and
toasters and all other human artifacts to date are almost certainly
not.

What Is a Machine? Have we now answered the question "Can a machine be conscious?" It sounds as if, at the very least, we have
answered the question "Is any machine we have built to date
conscious?" That makes the original question sound as if it was only
asking about what we can and cannot build, which is like asking
whether we can build a rocket that can reach Alpha Centauri (a
rather vague and arbitrary question about quantitative limitations on
future technology). But is "machine" defined as "what human beings
can build"? I think that defines "artifact" -- but "machine"? Or rather,
do we really want to ask merely: "Can a man-made artifact be
conscious?"

And even that would be rather vague, for "man-made" is itself rather
vague. Common sense dictates that human procreation does not
count as "man-making" in this context. But what about genetic or
other biological engineering? If the day comes when we can craft
organisms, even humans, molecule by molecule, in the laboratory,
does anyone -- or rather, anyone who has agreed to discount the
other-minds problem when it comes to naturally crafted fellow-humans -- doubt that such a bottom-up construction of a clone would
be conscious too?

So "man-made" is a wishy-washy term. It does not pick out what we


mean by "machine" here. Surely a toaster (that very same device)
would not become more eligible for consciousness if it happened to
grow on a tree instead of being fabricated by one of us. By the same token, a toaster would not become any less of a "machine" (whatever that turns out to mean) by growing on a tree: Two
toasters, identical right down to the last component, one of which I
built and the other of which grew on a tree, are surely both
"machines" (whatever that means) if either one of them is. Another
way to put this is that we need a definition of "machine" that is strictly
structural/functional, and not simply dependent on its historic origins,
if we want to make our question about what machines can and
cannot do (or be) into a substantive rather than an arbitrary one.

Kinds of Machines. But I am afraid that if we do follow this much more sensible route to the definition of "machine," we will find that a machine turns out to be simply: any causal physical system, any
"mechanism." And in that case, biological organisms are machines
too, and the answer to our question "Can a machine be conscious"
is a trivial "Yes, of course." We are conscious machines. Hence
machines can obviously be conscious. The rest is just about what
kinds of machines can and cannot be conscious, and how -- and
that becomes a standard empirical research program in "cognitive
science": The engineering side of cognitive science would be the
forward-engineering of man-made conscious systems and the
biological side of cognitive science would be the reverse-engineering
of natural conscious systems (like ourselves, and our fellow-organisms): figuring out how our brains work.

Except for one problem, and it is the one that risked coming back to
haunt us: What does it mean to "forward-engineer" (or, for that
matter, to "reverse-engineer") a conscious system? It is to give a
causal explanation of it, to describe fully the inner workings of the
mechanism that gives rise to the consciousness.

Forward- and Reverse-Engineering the Heart and Brain. Let us take a less problematic example: To forward-engineer a cardiac system (a heart) is to build a mechanism that can do what the heart
can do. To reverse-engineer the heart is to do the same thing, but in
such a way as to explain the structure and the function of the
biological heart itself, and not merely create a prosthesis that can
take over some of its function. Either way, the explanation is a
structural/functional one. That is, both forward and reverse
engineering explain everything that a heart can do, and how,
whereas reverse engineering goes on to explain what the heart is
(made out of), and how it in particular happens to do what hearts
can do.

Now let us try to carry this over to the brain, which is presumably the
organ of consciousness. To forward-engineer the brain is to build a
mechanism that can do what the brain can do; to reverse engineer
the brain is to do the same thing, but in such a way as to explain the
structure and function of the biological brain itself. Either way, the
explanation is a structural/functional one. That is, both forward and
reverse engineering explain everything that a brain can do, and how,
whereas reverse engineering goes on to explain what the brain is
(made out of), and how it in particular happens to do what brains
can do: how it works.

How does the ghost of the other-minds problem spoil this seemingly
straightforward extension of the cardiac into the cogitative? First,
consider the forward-engineering: If we were forward-engineering
cardiac function, trying to build a prosthesis that took over doing all
the things the heart does, we would do it by continuing to add and
refine functions until we eventually built something that was
functionally indistinguishable from a heart. (One test of our success
might be whether such a prosthesis could be implanted into humans from cradle to grave with no symptom that it was missing any vital
cardiac function.) This forward-engineered cardiac system would still
be structurally distinguishable from a natural heart, because it had
omitted other properties of the heart -- noncardiac ones, but
biological properties nonetheless -- and to capture those too may
require reverse-engineering of the constructive, molecular kind we
mentioned earlier: building it bottom-up out of biological
components.

The thing to note is that this cardiac research program is completely unproblematic. If a vitalist had asked "Can a machine be cardiac?"
we could have given him the sermon about "machines" that we
began with (i.e., you should instead be asking "What kind of machine can and cannot be cardiac, and how?"). Next we could
have led him on through forward-engineering to the compleat
reverse-engineered heart, our constructed cardiac clone, using
mechanistic principles (i.e., structure, function, and causality) alone.
At no point would the cardiac vitalist have any basis for saying: "But
how do we know that this machine is really cardiac?" There is no
way left (other than ordinary empirical risk) for any difference even to
be defined, because every structural and functional difference has
been eliminated in the compleat reverse-engineered heart.

The same would be true if it had been life itself and not just cardiac
function that had been at issue: If our question had been "Can a
machine be alive?" the very same line of reasoning would show that
there is absolutely no reason to doubt it (apart from the usual
empirical risk, plus perhaps some intellectual or technological doubts
of the Alpha Centauri sort). Again, the critical point is when we ask
of the man-made, reverse-engineered clone: "But how do we know
that this machine is really alive?" If there are two structurally and
functionally indistinguishable systems, one natural and the other man-made, and their full causal mechanism is known and understood, what does it even mean to ask "But what if one of them
is really alive, but the other is not?" What property is at issue that
one has and the other lacks, when all empirical properties have
already been captured by the engineering (Harnad 1994)?

The Animism at the Heart of Vitalism. Yet this last worry -- "How
can we know it's alive?" -- should sound familiar. It sounds like the
other-minds problem. Indeed, I suspect that, if we reflect on it, we
will realize that it is the other-minds problem, and that what we are
really worrying about in the case of the man-made system is that
there's nobody home in there, there is no ghost in the machine. And
that ghost, as usual, is consciousness. That's the property that we
are worried might be missing.

So chances are that it was always animism that was at the heart of
vitalism. Let us agree to set vitalism aside, however, as there is
certainly no way we can know whether something can be alive yet
not conscious (or incapable of returning to consciousness). Plants
and micro-organisms and irreversibly comatose patients will always
be puzzles to us in that respect. So let us not dwell on these
inscrutable cases and states. Logic already dictates that any vitalist
who does accept that plants are not conscious would be in exactly
the same untenable position if he went on to express scepticism
about whether the compleat artificial plant is really alive as the
sceptic about the compleat artificial heart (worried about whether it's
really a heart): If there's a difference, what's the difference? What
vital property is at issue? If you can't find one (having renounced
consciousness itself), then you are defending an empty distinction.

Turing-Testing. But the same is most definitely not true in the case
of worries about consciousness itself. Let us take it by steps. First
we forward-engineer the brain: We build a robot that can pass the
Turing Test (Turing 1950; Harnad 1992): It can do everything a real
human can do, for a lifetime, indistinguishably from a real human
(except perhaps for appearance: we will return to that).

Let us note, though, that this first step amounts to a tall order,
probably taller than the order of getting to Alpha Centauri. But we
are talking about "can" here, that is, about what is possible or
impossible (for a machine), and how and why, rather than just what
happens to be within our actual human technological reach.

Doing vs. Feeling: The Feeling/Function Problem. So supposing we do succeed in building such a Turing-scale robot (we are no
longer talking about toasters here). Now, the question is whether he
is really conscious: On the face of it, the only respect in which he is
really indistinguishable from us is in everything he can do. But
conscious is something I am, not something I do. In particular, it is
something I feel; indeed, it is the fact that I feel. So when the sceptic
about that robot's consciousness -- remember that he cannot be a
sceptic about machine consciousness in general: we have already
eliminated that by noting that people are a kind of machine too -- wants to say that that robot is the wrong kind of machine, that he
lacks something essential that we humans have, we all know exactly
what difference the sceptic is talking about, and it certainly is not an
empty difference. He is saying that the robot does not feel, it merely
behaves -- behaves exactly, indeed Turing-indistinguishably -- as if it
feels, but without feeling a thing.

Empirical Robotics. It is time to remind ourselves of why it is that we agreed to set aside the other-minds problem in the case of our
fellow-human beings: Why is it that we agreed not to fret over
whether other people really have minds (as opposed to merely
acting just as if they had minds, but in reality being feelingless
Zombies)? It was for the same kind of reason that we don't worry
about empirical risk: Yes, it could be that the lawful regularities that
nature seems to obey are just temporary or misleading; there is no
way to prove that tomorrow will be like today; there is no way to
guarantee that things are as they appear. But there is no way to act
on the contrary either (as long as the empirical regularities keep
holding).

Empirical risk is only useful and informative where there is still actual
uncertainty about the regularities themselves: where it is not yet
clear whether nature is behaving as if it is obeying this law or that
law; while we are still trying to build a causal explanation. Once that
is accomplished, and all appearances are consistently supporting
this law rather than that one, then fretting about the possibility that
despite all appearances things might be otherwise is a rather empty
exercise. It is fretting about a difference that makes no difference.

Of course, as philosophers are fond of pointing out, our question about whether or not our Turing robot feels is (or ought to be) an
ontic question -- about what really is and is not true, what really does
or does not, can or cannot, exist -- rather than merely an epistemic
question about what we can and cannot know, what is and is not
"useful and informative," what does or does not make an empirical
difference to us. Epistemic factors (what's knowable or useful to
know) have absolutely no power over ontic ones (what there is, what
is true).


It would be wise for mere cognitive scientists to concede this point.
Just as it is impossible to be certain that the laws in accordance with
which nature seems to behave are indeed the true laws of nature, it
is impossible to be certain that systems that behave as if they feel,
truly feel. Having conceded this point regarding certainty, however,
only a fool argues with the Turing-Indistinguishable: Yes, the true
laws could be other than the apparent laws, but if I can't tell the two
apart empirically, I'd best not try to make too much of that distinction!
By the same token, a robot that is indistinguishable for a lifetime
from a feeling person might be a Zombie, but if I can't tell the two
apart empirically, I'd best not try to make too much of that distinction
(Harnad 2000).

Indistinguishable? But surely there are plenty of ways to distinguish a robot from a human being. If you prick us, do we not bleed? So
perhaps the sceptic about the forward-engineered robot should hold
out for the reverse-bio-engineered one, the one made out of the right
stuff, Turing-indistinguishable both inside and out, and at both the
macro and micro levels. It is only about that machine that we can
reply to our reformulated question -- "What kinds of machine can
and cannot be conscious?" -- that only that kind can.

But would we be right, or even empirically or logically justified in concluding that? To put it in a more evocative way, to highlight the
paradoxical polarity of the "risks" involved: Would we be morally
justified in concluding that whereas the reverse-bioengineered
machines, because they are empirically indistinguishable from
natural machines like us, clearly cannot be denied the same human
rights as the rest of us, the forward-engineered machines, because
they are merely Turing-indistinguishable from us in their (lifelong) behavioral capacity, can be safely denied those rights and treated as unfeeling Zombies (toasters)?

Other-Mind Reading and Turing-Testing. To answer this question we need to look a little more closely at both our empirical
methodology in cognitive science and our moral criteria in real life.
Let us consider the second first. Since at least 1978 (Premack & Woodruff 1978)
there has grown an area of research on what is sometimes called
"theory of mind" and sometimes "mind-reading," in animals and
children. This work is not a branch of philosophy or parapsychology
as it might sound; it is the study of the capacity of animals and
children to detect or infer what others "have in mind." (As such, it
should really be called research on "other-mind perception.")

It has been found that children after a certain age, and certain
animals, have considerable skill in detecting or inferring what others
(usually members of their own species) are feeling and thinking
(Whiten 1991; Baron-Cohen 1995). The propensity for developing
and exercising this mind-reading skill was probably selected for by
evolution, hence is inborn, but it also requires learning and
experience to develop. An example of its more innate side might be
the capacity to understand facial expressions, gestures and
vocalizations that signal emotions or intentions such as anger and
aggression; a more learning-dependent example might be the
capacity to detect that another individual has seen something, or
wants something, or knows something.

Let us note right away that this sort of mind-reading is a form of Turing-testing: inferring mental states from behavior. The "behavior"
might be both emitted and detected completely unconsciously, as in
the case of the release and detection of pheromones, or it might be based on very particular conscious experiences such as when I notice that you always purse your lips in a certain way when you
think I have lied to you. And there is everything in between; my
sense of when you are agitated vs. contented, along with their likely
behavioral consequences, might be a representative midpoint.
Language (which, let us not forget, is also a behavior) is probably
the most powerful and direct means of mind-reading (Harnad 1990;
Cangelosi & Harnad 2000).

Hence, apart perhaps from direct chemical communication between brains, all mind-reading is based on behavior: Turing-testing. It could
hardly have been otherwise. We know, again since at least
Descartes, that the only mind we can read other than by Turing-testing is our own! As far as all other minds are concerned,
absent genuine telepathic powers (which I take to be a fiction, if not
incoherent), the only database available to us for other-mind-reading
is other-bodies' behavior.
We do have to be careful not to make the ontic/epistemic conflation
here: The foregoing does not mean that all there is to mind is
behavior (as the blinkered behaviorists thought)! But it does mean
that the only way to read others' minds is through their behavior, i.e.,
through Turing-testing.

Functional- vs. Structural/Functional-Indistinguishability. Now, back to our two robots, the reverse-bioengineered one to whom we were ready to grant human rights and the merely forward-engineered one about whom we were not sure: Both are Turing-indistinguishable from us behaviorally, but only the first is
anatomically correct. We're all machines. Is only the first one the
right kind of machine to have a mind? On what basis could we
possibly conclude that? We have ascertained that all mind-reading is

just behavior-based Turing-testing, and all three of us (the two man-made robots and me) are indistinguishable in that respect. What
else is there? The rest of the neuromolecular facts about the brain?
Which facts?

There are countless facts about the brain that could not possibly be relevant to the fact that it has a mind: its weight, for example. We know this, because there is a huge range of variation in human brain mass, from the massive brain of a huge man to the minute brain of a microcephalic, who nevertheless feels pain when he is pinched. Now imagine trying to narrow down the properties of the brain to those that are necessary and sufficient for its having a mind. This turns out to be just another variant of our original question: "What kinds of machines can and cannot be conscious?" We know brains can be, but how? What are their relevant properties (if their weight, for example, is not)? Now imagine paring down the properties of the brain, perhaps by experimenting with bioengineered variations, in order to test which ones are and are not needed to be conscious. What would the test be?

Turing-Filtering Relevant Brain Function. We are right back to Turing-testing again! The only way to sort out the relevant and irrelevant properties of the biological brain, insofar as consciousness is concerned, is by looking at the brain's behavior. That is the only non-telepathic methodology available to us, because of the other-minds problem. The temptation is to think that "correlations" will
somehow guide us: Use brain imaging to find the areas and
activities that covary with conscious states, and those will be the
necessary and sufficient conditions of consciousness. But how did
we identify those correlates? Because they were correlates of
behavior. To put it another way: When we ask a human being (or a
reverse-bioengineered robot) "Do you feel this?" we believe that the
accompanying pattern of activity is conscious because we believe
him when he says (or acts as if) he feels something -- not the other
way round: It is not that we conclude that his behavior is conscious
because of the pattern of brain activity; we conclude that the brain
activity is conscious because of the behavior.

So, by the same token, what are we to conclude when the forward-engineered robot says the same thing, and acts exactly the same
way (across a lifetime)? If we rely on the Turing criterion in the one
case and not the other, what is our basis for that methodological
(and moral) distinction? What do we use in its place, to conclude
that this time the internal correlates of the very same behavior are
not conscious states?

The answer to our revised question -- "What kinds of machines can be conscious (and how)?" -- has now come into methodological focus.
The answer is: The kinds that can pass the Turing Test, and by
whatever means are necessary and sufficient to pass the Turing
Test.

Darwin and Telepathy. If we have any residual worries about Zombies passing the Turing Test, there are two ways to console
ourselves. One is to remind ourselves that not even the Blind
Watchmaker who forward-engineered us had a better way: survival
and reproduction are just Turing functions too: Darwin is no more
capable of telepathy than we are. So there is no more (or less)
reason to worry that Zombies could slip through the Turing filter of
evolution than that they could slip through the Turing filter of robotic
engineering (Harnad 2002).

Turing and Telekinesis. Our second consolation is the realization that the problem of explaining how (and why) we are not Zombies
(Harnad 1995) (otherwise known as the mind/body problem) is
a hard problem (Shear 1997), and not one we are ever likely to
solve. It would be easy if telekinetic powers existed: Then feelings
would be physical forces like everything else. But there is no
evidence at all that feelings are causal forces. That is why our forward- and reverse-engineering can only explain how it is that we can do things, not how it is that we can feel things. And that is why
the ghost in the machine is destined to continue to haunt us even
after all cognitive science's empirical work is done (Harnad 2001).

Bibliography

Baron-Cohen, S. (1995) Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.

Cangelosi, A. & Harnad, S. (2001) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evolution of Communication 4(1): 117-142.
http://cogprints.soton.ac.uk/documents/disk0/00/00/20/36/index.html

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
http://cogprints.soton.ac.uk/documents/disk0/00/00/06/15/index.html

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-54.
http://cogprints.soton.ac.uk/documents/disk0/00/00/15/78/index.html

Harnad, S. (1992) The Turing Test Is Not a Trick: Turing Indistinguishability Is a Scientific Criterion. SIGART Bulletin 3(4) (October 1992): 9-10.
http://cogprints.soton.ac.uk/documents/disk0/00/00/15/84/index.html

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.
http://cogprints.soton.ac.uk/documents/disk0/00/00/15/91/index.html

Harnad, S. (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1: 164-167.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/01/index.html

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (Special issue on "Alan Turing and Artificial Intelligence")
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/16/index.html

Harnad, S. (2001) No Easy Way Out. The Sciences 41(2): 36-42.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/24/index.html

Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness. Amsterdam: John Benjamins. Pp. 3-18.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/15/index.html

Premack, D. & Woodruff, G. (1978). Does the chimpanzee have a


theory of mind? Behavioral & Brain Sciences, 4, 515-526.

Shear, J. (Ed.) (1997) Explaining Consciousness: The "Hard Problem." Cambridge, MA: MIT Press.

Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460.
http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html

Whiten, A. (Ed.) (1991) Natural Theories of Mind: Evolution, Development, and Simulation of Everyday Mindreading. Oxford: Blackwell.
