Journal of Consciousness Studies (to appear)
Stevan Harnad
Centre de Neuroscience de la Cognition (CNC)
Université du Québec à Montréal
CP 8888 Succursale Centre-Ville
Montréal, Québec, Canada H3C 3P8
harnad@uqam.ca
http://cogsci.soton.ac.uk/~harnad/
Abstract: A "machine" is any causal physical system, hence we are machines, hence machines can be conscious (feel). The question is: which *kinds* of machines can feel, and how? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can feel, but we can never be sure (because of the "other minds" problem). And we can never explain or understand *how* or *why* they manage to feel, if they do, because of the "mind/body" problem. We can only know how they pass the Turing Test. That is why this problem is not just *hard* -- it is insoluble.
The answer, of course, is: It depends entirely on what you mean by "machine"! If you mean the current generation of man-made devices (toasters, ovens, cars, computers, today's robots), the answer is: almost certainly not.
Empirical Risk. Why "almost"? Two reasons, the first being the usual one of empirical risk.
In the long history of philosophy the other-minds problem has been puzzled over for a variety of reasons, usually variants on questions about what one can and cannot know for sure. These epistemic questions are interesting, but we will not worry about them here, for the usual reason, which is the following: It looks on the face of it as if the right strategy for handling the other-minds problem is identical to the strategy for handling empirical risk, namely, to note that although we can only be 100% certain about two things -- about (1) mathematics and about (2) our own consciousness -- all else being just a matter of probability, some things, such as scientific laws and the consciousness of our fellow human beings, are nevertheless so close to 100% sure that it is a waste of time worrying about them. (Let us also note, though, that in the empirical science of robotics, that extra layer of risk that comes from the other-minds problem might just come back to haunt us.)
So let us agree not to worry about the other-minds problem for now:
People other than myself are almost certainly conscious too, and
toasters and all other human artifacts to date are almost certainly
not.
And even that would be rather vague, for "man-made" is itself rather
vague. Common sense dictates that human procreation does not
count as "man-making" in this context. But what about genetic or
other biological engineering? If the day comes when we can craft
organisms, even humans, molecule by molecule, in the laboratory,
does anyone -- or rather, anyone who has agreed to discount the
other-minds problem when it comes to naturally crafted fellow humans -- doubt that such a bottom-up construction of a clone would
be conscious too?
Except for one problem, and it is the one that risked coming back to
haunt us: What does it mean to "forward-engineer" (or, for that
matter, to "reverse-engineer") a conscious system? It is to give a
causal explanation of it, to describe fully the inner workings of the
mechanism that gives rise to the consciousness.
Now let us try to carry this over to the brain, which is presumably the
organ of consciousness. To forward-engineer the brain is to build a
mechanism that can do what the brain can do; to reverse engineer
the brain is to do the same thing, but in such a way as to explain the
structure and function of the biological brain itself. Either way, the
explanation is a structural/functional one. That is, both forward and
reverse engineering explain everything that a brain can do, and how,
whereas reverse engineering goes on to explain what the brain is
(made out of), and how it in particular happens to do what brains
can do: how it works.
How does the ghost of the other-minds problem spoil this seemingly
straightforward extension of the cardiac into the cogitative? First,
consider the forward-engineering: If we were forward-engineering
cardiac function, trying to build a prosthesis that took over doing all
the things the heart does, we would do it by continuing to add and
refine functions until we eventually built something that was
functionally indistinguishable from a heart. (One test of our success
might be whether such a prosthesis could be implanted into humans
from cradle to grave with no symptom that it was missing any vital
cardiac function.) This forward-engineered cardiac system would still
be structurally distinguishable from a natural heart, because it had
omitted other properties of the heart -- noncardiac ones, but
biological properties nonetheless -- and to capture those too may
require reverse-engineering of the constructive, molecular kind we
mentioned earlier: building it bottom-up out of biological
components.
The same would be true if it had been life itself and not just cardiac
function that had been at issue: If our question had been "Can a
machine be alive?" the very same line of reasoning would show that
there is absolutely no reason to doubt it (apart from the usual
empirical risk, plus perhaps some intellectual or technological doubts
of the Alpha Centauri sort). Again, the critical point is when we ask
of the man-made, reverse-engineered clone: "But how do we know
that this machine is really alive?" If there are two structurally and
functionally indistinguishable systems, one natural and the other man-made, what basis is left for doubting that the man-made one is really alive?
The Animism at the Heart of Vitalism. Yet this last worry -- "How
can we know it's alive?" -- should sound familiar. It sounds like the
other-minds problem. Indeed, I suspect that, if we reflect on it, we
will realize that it is the other-minds problem, and that what we are
really worrying about in the case of the man-made system is that
there's nobody home in there, there is no ghost in the machine. And
that ghost, as usual, is consciousness. That's the property that we
are worried might be missing.
So chances are that it was always animism that was at the heart of
vitalism. Let us agree to set vitalism aside, however, as there is
certainly no way we can know whether something can be alive yet
not conscious (or incapable of returning to consciousness). Plants
and micro-organisms and irreversibly comatose patients will always
be puzzles to us in that respect. So let us not dwell on these
inscrutable cases and states. Logic already dictates that any vitalist
who does accept that plants are not conscious would be in exactly
the same untenable position if he went on to express scepticism
about whether the compleat artificial plant is really alive as the
sceptic about the compleat artificial heart (worried about whether it's
really a heart): If there's a difference, what's the difference? What
vital property is at issue? If you can't find one (having renounced
consciousness itself), then you are defending an empty distinction.
Turing-Testing. But the same is most definitely not true in the case
of worries about consciousness itself. Let us take it by steps. First
we forward-engineer the brain: We build a robot that can pass the
Turing Test (Turing 1950; Harnad 1992): It can do everything a real
human can do, for a lifetime, indistinguishably from a real human
(except perhaps for appearance: we will return to that).
Let us note, though, that this first step amounts to a tall order,
probably taller than the order of getting to Alpha Centauri. But we
are talking about "can" here, that is, about what is possible or
impossible (for a machine), and how and why, rather than just what
happens to be within our actual human technological reach.
Empirical risk is only useful and informative where there is still actual
uncertainty about the regularities themselves: where it is not yet
clear whether nature is behaving as if it is obeying this law or that
law; while we are still trying to build a causal explanation. Once that
is accomplished, and all appearances are consistently supporting
this law rather than that one, then fretting about the possibility that
despite all appearances things might be otherwise is a rather empty
exercise. It is fretting about a difference that makes no difference.
It would be wise for mere cognitive scientists to concede this point.
Just as it is impossible to be certain that the laws in accordance with
which nature seems to behave are indeed the true laws of nature, it
is impossible to be certain that systems that behave as if they feel,
truly feel. Having conceded this point regarding certainty, however,
only a fool argues with the Turing-Indistinguishable: Yes, the true
laws could be other than the apparent laws, but if I can't tell the two
apart empirically, I'd best not try to make too much of that distinction!
By the same token, a robot that is indistinguishable for a lifetime
from a feeling person might be a Zombie, but if I can't tell the two
apart empirically, I'd best not try to make too much of that distinction
(Harnad 2000).
It has been found that children after a certain age, and certain
animals, have considerable skill in detecting or inferring what others
(usually members of their own species) are feeling and thinking
(Whiten 1991; Baron-Cohen 1995). The propensity for developing
and exercising this mind-reading skill was probably selected for by
evolution, hence is inborn, but it also requires learning and
experience to develop. An example of its more innate side might be
the capacity to understand facial expressions, gestures and
vocalizations that signal emotions or intentions such as anger and
aggression; a more learning-dependent example might be the
capacity to detect that another individual has seen something, or
wants something, or knows something.
just behavior-based Turing-testing, and all three of us (the two man-made robots and me) are indistinguishable in that respect. What
else is there? The rest of the neuromolecular facts about the brain?
Which facts?
There are countless facts about the brain that could not possibly be relevant to the fact that it has a mind: its weight, for example. We know this because there is a huge range of variation in human brain mass -- from the massive brain of a huge man to the minute brain of a microcephalic, who nevertheless feels pain when he is pinched. Now imagine trying to narrow down the properties of the brain to those that are necessary and sufficient for its having a mind. This turns out to be just another variant of our original question: "What kinds of machines can and cannot be conscious?" We know brains can be, but how? What are their relevant properties (if their weight, for example, is not)? Now imagine paring down the properties of the brain, perhaps by experimenting with bioengineered variations, in order to test which ones are and are not needed to be conscious. What would the test be?
So, by the same token, what are we to conclude when the forward-engineered robot says the same thing, and acts exactly the same
way (across a lifetime)? If we rely on the Turing criterion in the one
case and not the other, what is our basis for that methodological
(and moral) distinction? What do we use in its place, to conclude
that this time the internal correlates of the very same behavior are
not conscious states?
Bibliography
Harnad, Stevan (1995) "Why and How We Are Not Zombies." Journal of Consciousness Studies 1: 164-167.
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/01/index.html