Notes
1 Heidegger was very anxious to get away from standard philosophical vocabulary and its Cartesian
connotations. So he coined a philosophical vocabulary of his own, much of which consists of
hyphenated phrases like 'being-in-the-world'. The hyphenation indicates that some unitary conception
is intended, and that the phrase will function by and large as a technical term. Neither Heidegger nor
Heideggerians should be expected to apologize for this, especially since Heidegger's proposed
vocabulary is explained at length in his text, and used at least as consistently as most technical
vocabularies ever are.
Being-in-the-world is a fundamental feature of Dasein. Just as Dasein replaces the Cartesian
conscious subject in Heidegger's framework, so being-in-the-world replaces the Cartesian construal of
the relationship between intelligent beings and their environment as an epistemological relation
between subject and object. Heidegger's point is that we do not primarily know about our world - we
live in it. In other words - to co-opt a later and very similar distinction made by Gilbert Ryle - our
primary relationship with the world is a knowing-how rather than a knowing-that. Dreyfus explicates
being-in-the-world in terms of the ongoing and apparently effortless coping with familiar things that is
our normal relationship to the world under normal circumstances. (See especially pp. 102 ff.)
BOOK REVIEWS 357
2 Indeed, he occasionally says as much. See, e.g., the beginning of his discussion of the "unavailable",
pp. 69-72.
3 Although perhaps not in Division II. It can be argued that the notion of authenticity in Division II is
an expression of precisely the kind of individualism that Heidegger apparently rejects in Division I and
that consequently the (inauthentic) everyday Dasein of Division I and the authentic Dasein of Division
II represent inconsistent conceptions of human existence. Dreyfus does touch briefly on this issue,
which is a major topic of debate in the literature.
1. Preamble
Philosophers of mind and AI have long debated the consciousness of both minds
and machines. Such debate is often trying, since the participants generally cannot
offer any account of exactly what consciousness is. Now Dennett does offer an
account, one which, if it is even close to the truth, shows exactly how and why an
appropriately constructed machine would be as conscious as you or (more to the
point) I. Dennett's account, which is a mixture of new philosophy and leading
science, requires us to rethink a lot of old positions and adopt many new
perspectives. Such work is never quick and easy, but the slow and difficult road
on which he has set out is well worth a reconnoitre.
2. Multiple Drafts
Although materialism had done away with spirit substances, many a materialist is
still eager to cling to the Cartesian Theatre - a centre of conscious experience and
the mediating point for conscious (and free?) action; Dennett terms such thinkers
Cartesian Materialists. He argues that if you trace afferent inputs up through the
brain, and efferent outputs backwards, there will be no special point at which they
come together. There is no neurogeographical or psychofunctional central
executive. Of course, there is nothing wrong with central executives per se, only
those that have all the qualities - consciousness, intentionality, etc. - that a theory
of mind sets out to explain. He charges that cognitive science is often all too keen
to explain the function of various "support" modules, whilst maintaining an
embarrassing silence concerning the central system they implicitly serve. Such
doubts are not unique to Dennett - what is interesting here is both the vehemence
of their expression and the fact that he has a positive alternative to offer.
The Multiple Drafts theory asserts that there is no point in the brain where
experience is collected together - no one locus of consciousness. The brain is
constantly monitoring the world outside, along with its own states, and forming
judgements about what it observes. Different parts of the brain form different
judgements, and over time judgements about exactly what is being observed will
change.
At different times and different places, various "decisions" or "judgments" are made; more literally,
parts of the brain are caused to go into states that discriminate different features, e.g., first mere onset
of stimulus, then location, then shape, later color, later still (apparent) motion, and eventually object
recognition. (p. 134.)
The brain constantly redrafts its set of judgements about the current state of the
world (and itself), and it is these drafts that make up the contents of conscious-
ness. If not put to use, drafts will simply be discarded, but if needed to modulate
action, then they will be adopted and sometimes stored in memory. Whilst the
clock ticks, you are unaware of it; you make the judgement that it ticks, but put
that judgement to no use. When it stops, that judgement comes to the fore and is
used to modulate behaviour - it might cause a speech act or just trigger off an
inner monologue. So sometimes, when drafts are put to use, we then regard the
judgements that make them up as conscious, but they are just the same as the
drafts that never get the chance to modulate action. (Draft chances are decided
by "pandemonium"-style competition).
And so Multiple Drafts attacks the notion of a sharp line between conscious
and non-conscious experience. Some important judgements modulate a lot of
behavior; these clearly fall on the conscious side of the divide. Other judgements
can be shown to have been made - as in many psychology experiments - but
cannot be readily reported. But a whole host of judgements are marginal, and
there is no motivation for drawing a sharp line between pre- and post-conscious
experiences. In general, Dennett argues against determinant (i.e., not vague)
experience. What you experience is not fixed, because it does not come together
in one final draft. But drafts modulate behaviour, and the action so produced is
determinate. When we describe our experience - out loud or to ourselves - it
seems as though we describe something fixed and determinate. But that is just
what Dennett's theory would predict. The very act of producing a description is
modulated by multiple competing drafts, but the nature of action is not multiple;
only one description, out of the many possible, can result.
As well as rejecting the notion of a definitive publication in favour of multiple
drafts, Dennett makes some important points about the temporal aspect of
experience. Firstly, he notes that it is simply a mistake to assume that the
temporal properties of the representational vehicle must be the same as the
temporal properties of its content. Just because brain event p occurs before brain
event q does not mean that what is represented by q must be represented as
5. Conclusions
This book provides a healthy antidote to the predominant pessimism and
confusion surrounding the study of consciousness, but it still leaves you worrying,
albeit about different issues. The similarities between the Cartesian Theatre and
the display screen of the stream-of-consciousness virtual machine are alarming,
giving rise to a nagging doubt that Dennett may himself have succumbed to some
form of Cartesian Materialism. There is a tension between the no-fixed-line
approach offered by multiple drafts, and the relatively determinate, or Cartesian,
picture of consciousness suggested by the user-illusion story. And the argument
that the virtual-machine user need not be conscious, though not without
precedent (Rosenthal 1986), is clearly underdeveloped here, thus opening
Dennett to the charge of homuncularism.
Throughout the book, language is used to prop up Dennett's theory; conscious-
ness is largely a matter of telling stories. (And it might be said that the book is
more about ideas of self than ideas of consciousness.) Such a reliance, of course,
means denying consciousness, or at least a particular flavour of it, to non-
language users. But there seems no obvious link between phenomenal experience
and language use, no reason to think the experience of cats, dogs, and monkeys is
restricted because they are relatively inarticulate. I think there is a way out for
Dennett on this issue. What matters is not language as such, but judgements or,
perhaps, to use his vocabulary, "beliefs" (non-linguistic states), not "opinions"
(linguistic states) (p. 77n4). Conscious experience is the availability of vast
amounts of information for the purposes of modulating action. We can argue that
we describe our inner world to ourselves in terms of behavioural possibilities, as
well as with word strings. Rather than saying to oneself that one sees an apple,
one can simply internally rehearse the action of reaching out for it. Hence, we can
form a useful picture of consciousness, for both language and non-language users.
Even with such revisions, more work needs to be done to explain conscious-
ness, and there is still room for doubt as to whether it can ever really be done.
However, with its wealth of new ideas and fresh angles, this is a book that nobody
studying the mind can afford to ignore.
Reference
1. Rosenthal, David (1986), 'Two Concepts of Consciousness', Philosophical Studies 49: 329-359.
sufficient for mentality, but asserts that "access to the output of a powerful
subsymbolic processor" will provide the missing ingredient: content (p. 175).
In the course of laying out his ecumenical proposal, Clark covers much
territory. The book has two parts, called "The Mind's-Eye View" and "The
Brain's-Eye View." The first part is devoted to GOFAI and its critics, while the
second part is devoted to PDP and its critics. Thus, the book could serve as an
introduction to the schism, and Clark sometimes writes with this in mind. But
usually the introductory aims take a back seat to Clark's own battles, and this will
leave the lay reader frustrated. The professional reader can be frustrated by the
fact that Clark presents his views, as he admits in one case, "in a somewhat
distributed array" (p. 186). Clark's last chapter is, appropriately, called "Reassembling
the Jigsaw", and it is only another prelude to a short Epilogue and a
substantive Appendix ("Beyond Eliminativism"). Nevertheless, assembling
Clark's significant proposal is very rewarding and stimulating.
I now turn to three criticisms.
(1) In the Appendix, Clark first raises a problem that reared its head whenever
he discussed the multiplicity of the mind. If the mind is a set of virtual machines
and independent subsystems, we have a naturalistic variant of the Cartesian
problem of interactionism: How do the connectionist subsystems communicate?
Clark calls this "the problem of communication" (p. 206), and he suggests that
physical symbol structures, namely, what is modeled by GOFAI, might be needed
"to pass messages between the various subsystems of a single cognizer" (p. 206).
This solution undercuts the primacy of sub-symbolic processing. Recall that for
Clark not only will different tasks require different computational models, but
single tasks may require different processors. Hence, these single tasks depend on
a solution to the communication problem, and if symbol systems are needed to
solve this problem, the completion of these tasks requires symbol systems. For
these tasks, symbol processing is not "merely ingenious icing on the computation-
al cake" (p. 135). It is one of the cake's ingredients.
The issue then boils down to this: Are there any psychological tasks that do not
involve communication between various subsystems? Pattern matching, and
speedy and flexible processing are candidates, but it is not obvious that these
features in and of themselves are cognitive, psychological capacities. Maybe all
psychological tasks require symbolic processing to achieve communication be-
tween processors. If this is so, it might be true that subsymbolic systems, in and of
themselves, only implement cognitive tasks.
The problem of communication is not limited to the interaction between
connectionist subsystems. Clark suggests that PDP processors and virtual sym-
bolic processors interact and pass representations to each other (pp. 151-152,
171-175). The symbolic processors of a contentful mind will have "access to the
output of a powerful subsymbolic processor" (p. 175). The problem is how do
the connectionist and the symbolic processors communicate?
(2) An ingenious feature of Clark's book is his strategy against the Fodor and
Pylyshyn critique of connectionism (Fodor and Pylyshyn 1988; cf. Fodor and
McLaughlin 1990). Clark responds by driving a wedge between thought ascrip-
tions (e.g., 'Jones believes that 2 + 2 = 4' or 'Jones wants to do arithmetic') and
in-the-head processing. Behavior is the wedge. The meaning of thought ascrip-
tions is the conditions under which they are warranted, and these conditions are
networks of behavior (actual and counterfactual behavior), where behavior
involves relations to external objects (pp. 48-50). Thought ascriptions are "a
holistic net thrown across a body of behavior of an embodied being acting in the
world" (p. 5). Since a thought (e.g., a belief or desire) just is what gets ascribed
in a thought ascription (p. 179), a thought is a network of behavior.
Behavior blocks Fodor and Pylyshyn's argument for symbolic in-the-head
processing, because the systematicity of behavior "exerts no obvious pressure" to
posit in-the-head physical symbol systems (p. 149). The locus of explanation for
cognitive science is the behavior that warrants thought ascriptions, not the
thought ascriptions themselves.
This argument plays a central role in Clark's account of how to build a thinker.
He distinguishes descriptive cognitive science from causal cognitive science
(pp. 153-152). Descriptive cognitive science develops models or formal theories
that capture the systematic relations of thought ascriptions. This is the appropri-
ate domain for GOFAI. Causal cognitive science attempts to model in-the-head
processing that causes the behavior described by thought ascriptions. PDP
processing is needed to explain the flexible behavior that grounds thought
ascriptions. Building a thinker must involve more than instantiating a computa-
tional model produced by descriptive cognitive science. A descriptive theory will
only model the systematicity of belief and desire sentences, which "merely
describe regularities in the behavior and are not geared to pick out the syntactic
entities that are computationally manipulated to produce behavior" (pp. 179-
180).
Unfortunately, Clark never offers a clear argument for why the systematicity of
behavior does not call for a compositional internal symbol system. I think that
one of Clark's reasons is that although behavior is systematic, it is not composi-
tional. Clark admits that if "the locus of systematicity in need of explanation lay
in thought-ascribing sentences," then we would have an argument for in-the-head
symbol systems (pp. 148-149). But, according to Clark, only the systematicity of
the behavior described by thought ascriptions requires a psychological expla-
nation.
As a matter of fact, sentences are in need of explanation. Even if we ignore
sentences that ascribe thoughts to others (e.g., 'Jones believes that 2 + 2 = 4'), we
are still left with sentences that express thoughts (e.g., Jones's saying "2 + 2 =
4"). The network of behavior that warrants thought-ascribing sentences contains
linguistic behavior, which expresses thoughts, and this behavior is both systematic
and compositional. One relevant feature of linguistic behavior is first-person
thought ascriptions (e.g., Jones's saying "I believe that 2 + 2 = 4"). It is not
References
1. Fodor, Jerry, and McLaughlin, Brian (1990), 'Connectionism and the Problem of Systematicity: Why Smolensky's Solution Doesn't Work', Cognition 35: 183-204.
2. Fodor, Jerry, and Pylyshyn, Zenon (1988), 'Connectionism and Cognitive Architecture: A Critical Analysis', Cognition 28: 3-71.
Recent work by philosophers in cognitive science tends to fall into two categories.
One is traditional philosophical work. William Lycan's Consciousness (1987) and
John Searle's Minds, Brains, and Science (1984) are good examples. They bring
traditional conceptual methods to bear on issues in cognitive science. While they
may make reference to empirical work in artificial intelligence (AI) or cognitive
psychology, their central theses don't (or aren't intended to) rest on empirical
evidence. The second category is philosophical work that attempts to argue, at
least in part, from empirical results. Examples in this category are Patricia
Churchland's Neurophilosophy (1986) and Alvin Goldman's Epistemology and
Cognition (1986). How to Build a Conscious Machine, by Leonard Angel, does
not yield easily to the above classification. Is it a "how to" manual, or is it a work
on the possibility of building a conscious machine? In fact, How to Build a
Accordingly, the challenge we are setting for ourselves is to see whether we might take a robot arm
mounted on motorized wheels of some sort and attach it to a program that receives information about
the environment in a primary sensory mode (presumably visual, tactile, or auditory) and, to a
program that operates upon this information so as to yield motor output, the implementation of which
exhibits the structure of rational agency. (p. 13.)
In Chapter 3, "Integrated System Programming", Angel attempts to set out a
model of rational agency. But it isn't clear what counts as a model for Angel. Is
the program that he hopes to provide a program for a model of an agent, or is the
running program an actual rational agent? The difficulty is exacerbated by Angel's
talk of "model agents", "model ontologies", and "model environments". Angel
writes about giving "our agent the ability to recognize danger and flee it" (p. 23).
This passage is indexed, however, under the heading "modeling danger". So is it
a model or the real thing?
Angel claims that he is doing AI in the micro-world tradition. Programs like
SHRDLU, by Terry Winograd (1972), might be said to model the behavior
associated with the manipulation of blocks, because the machine never really
manipulates any blocks at all, but only attempts to exhibit the linguistic behavior
of someone who can manipulate blocks. Winograd did not try to build a robot
arm, a visual system, or any of the other things that would be required to
complete a machine that actually manipulated blocks. Angel, while identifying
with this micro-world tradition, believes that he is doing something more. So it is
difficult to understand what he means by 'model' when his project differs quite
fundamentally from micro-world modeling.
Both the title of the book and several claims in it suggest that Angel is doing
AI. Angel talks about programs, where presumably he means computer pro-
grams, and the discussion of the so-called programs is juxtaposed with remarks on
actual AI work, such as SHRDLU. But even Angel admits that what he offers
"must be strategic" (p. 12). I guess that means that the details are left to the
reader. I would not characterize the first half of How to Build a Conscious
Machine as work in AI, though it isn't much like philosophy either. Angel talks
over and over again about programming, but there isn't a line of code or
pseudo-code in the book, and there is virtually no reference to the programming
environment in which the model is to be realized. The only comments on the
architecture of the proposed machine come in the form of hasty dismissals of
alternatives. Here, for example, is all Angel has to say about connectionism:
And whereas it may be appropriate to scoff at the idea that strictly nonconnectionist approaches to the
"complexifications" will suffice to transform a discrete digitalized agent functioning linguistically in a
fine-grained environment, the need for neural-net or connectionist realizations of the various units of
the agency-attributive language using device does not undercut the boosters' theory that the
construction of agency-attributive language using devices constitutes the location of the continuum.
On the contrary. Although connectionists thus far haven't claimed to know how to use connectionism
in solving the really difficult AI-cognitive science problems of rational agency, possession of a
complete agency-attributive language using computational grid to squeeze neural-net flesh into, if
anything, justifies the boosters' approach to the current situation, and may well provide connectionists
with something of the framework they are looking for. (p. 56.)
If you have trouble with this passage, you are not alone. Even when you figure
out that by 'booster', Angel means someone who, like himself, thinks that
micro-world programming can be used to build intelligent machines, and that 'the
continuum' refers to something like the range of intelligence, from non-intelligent
behavior to human-like intelligence, there are inescapable problems with Angel's
text here. One is his flippant and uninformed discussion of an entire direction of
absent qualia objection, a defense that he endorses. The rest of the literature is
dismissed in a parenthetical remark (p. 78).
The consideration of objections to functionalism by Searle (1984) and others
dominates the second part of the book. The objections discussed center largely on
the question of the appropriateness of ascriptions of intentionality to machines.
Angel believes that attributions of intentionality to the machine he described in
the first part of his book are warranted. He concludes, without argument, that
attributions of consciousness are also appropriate.
It is perhaps this last flaw that is most likely to make reading this book a
disappointing effort, particularly for philosophers. Angel must think that if you
take care of intentionality, consciousness will take care of itself. If that's his
position, I don't see any argument for it. Unfortunately, his stated views are so
unclear that it is difficult to tell. Arguments for reducing the problem of
consciousness to the problem of intentionality would be of interest to the many
philosophers who have claimed that the phenomenon of consciousness is a special
challenge for functionalist theories of mind.
I said above that How to Build a Conscious Machine is an example of what
happens when philosophers stray into unfamiliar territory. The first half of the
book demonstrates how difficult it is for philosophers to talk intelligently about
AI. But this excuse cannot be made for the second half of the book. Here, Angel
should be on his own turf. Unfortunately, this part of the book is as uninformed
as the first.
References
1. Churchland, Patricia Smith (1986), Neurophilosophy: Toward a Unified Science of the Mind-Brain,
Cambridge, MA: MIT Press.
2. Dennett, Daniel C. (1978), 'Towards a Cognitive Theory of Consciousness', in C. W. Savage (ed.),
Perception and Cognition: Issues in the Foundations of Psychology; Minnesota Studies in the
Philosophy of Science, Vol. 9 (Minneapolis: University of Minnesota Press).
3. Goldman, Alvin I. (1986), Epistemology and Cognition, Cambridge, MA: Harvard University
Press.
4. Lycan, William G. (1981), 'Form, Function and Feel', Journal of Philosophy 78: 24-49.
5. Lycan, William G. (1987), Consciousness, Cambridge, MA: MIT Press.
6. McDermott, Drew (1976), 'Artificial Intelligence Meets Natural Stupidity', SIGART Newsletter of
the Association for Computing Machinery, No. 57 (April 1976); reprinted in J. Haugeland, Mind
Design, Cambridge, MA: MIT Press, 1981, pp. 143-160.
7. Searle, John R. (1984), Minds, Brains and Science, Cambridge, MA: Harvard University Press.
8. Winograd, Terry (1972), Understanding Natural Language, Orlando, FL: Academic Press.
Geoffrey Brown, Minds, Brains and Machines, Mind Matters Series, New York:
St. Martin's Press, 1989, xi + 163 pp., $24.95 (cloth), ISBN 0-312-03144-0.
while others (those on Dennett and Kant) state only that their views can't be
summarized. It would have been far more useful simply to have given an index of
subjects and names (perhaps with their dates) - which is peculiarly lacking.
Distracting for American readers, especially students, might be the British
spellings, and especially the excessive use of the word 'whilst' - in some cases,
used dozens of times over several pages.
These flaws should probably not prevent philosophically-inclined instructors of
courses in AI or cognitive science from using this volume as a fine introduction to
historical theories of the nature of the mind.
David M. Rosenthal (ed.), The Nature of Mind, New York: Oxford University
Press, 1991, x + 642 pp., $49.95 (cloth), ISBN 0-19-504670-6; $19.95 (paper),
ISBN 0-19-504671-4.
The first section ("Problems About Mind") has classical writings of Descartes,
Locke, and Reid, followed by contemporary critiques by Gilbert Ryle ("De-
scartes' Myth"), P. F. Strawson ("Self, Mind and Body"), Gareth B. Matthews
("Consciousness and Life"), and G. E. M. Anscombe ("The First Person"). The
second section ("Self and Other") deals with our knowledge of our own mental
states and the mental states of others. The third section ("Mind and Body")
comprises discussions of the relation of mental states to bodily states. The fourth
section ("The Nature of Mind") deals with allegedly special features of the
mental such as intentionality, phenomenological quality, subjectivity, free will,
and consciousness. The last section ("Psychological Explanation") deals with
some problems in psychological theory, such as the validity of the computational
approach, the role of social and environmental context in defining psychological
states, and the place of everyday concepts ("folk psychology") in the science of
psychology. The editor, David M. Rosenthal, provides a general introduction to
the whole anthology and specific introductions to the five sections, all of which are
very helpful in clarifying the issues. The anthology ends with an extensive
bibliography paralleling the subsections of the book.
The book begins (where else?) with Descartes's theorizing about a non-physical
substance with a unique property, thinking. Only the existence of such a
substance, he argues, can explain certain undeniable features of human life. It is
the predominant negative message of this book, which accurately reflects our
times, that it is bad theorizing. Alas, there is no predominant positive message in
the book. There is almost universal agreement that Descartes was wrong, but no
agreement on what is right.
Even if we do not postulate, with Descartes, a special substance, we still have
to deal with mental phenomena - events, states, processes, properties, or aspects.
As Rosenthal points out in his perspicuous introductory discussions, there are two
major tendencies in current approaches to mental phenomena. Some stress the
continuity of the mental with the non-mental ("They are not really so different")
and others stress the non-continuity ("Yes they are"). Many authors in the
anthology argue against the mental/non-mental dichotomy or offer explanations
of the mental in terms of the physical. Two groups reject attempts to assimilate
the mental into a physicalistic view of the world. One group concludes that
eventually we will come to see that there are no such things as mental
phenomena. Paul Feyerabend ("Mental Events and the Brain") assigns mental
states the same outmoded status as the theologian's postulation of devil-posses-
sion, and Paul M. Churchland ("Eliminative Materialism and the Propositional
Attitudes") likens the mental to the alchemist's fundamental spirits. A small
group - Thomas Nagel ("What Is It Like to Be a Bat?"), Keith Campbell (in an
excerpt from "Central State Materialism"), Ned Block ("Troubles with Functionalism"),
Saul A. Kripke (in an excerpt from Naming and Necessity), Frank
Jackson ("The Existence of Mental Objects" and "What Mary Didn't Know"),
Christopher Peacocke ("Colour Concepts and Colour Experiences"), and the
author of this review ("Mental Events and the Brain") - argues that mental states
do exist and resist any presently available explanation in physical terms. But the
majority, with much internal disagreement, work within a physicalistic frame-
work.
The readers of this journal may find especially interesting the last section of the
book, which deals with current issues in the philosophy of psychology, namely,
how the science of psychology is related to computer science, the physical and
social environment of the psychological subject, and "folk psychology" (our
everyday understanding of behavior). The section starts with Fodor's "Meth-
odological Solipsism Considered as a Research Strategy in Cognitive Psychology,"
which further elaborates the earlier, ground-breaking functionalism of his Psycho-
logical Explanation (1968) by defending a computer model of mental processes.
This commits him to what he calls 'methodological solipsism', the view that
intentional mental processes are to be understood simply as a succession of purely
internal states following certain rules. Stephen Stich ("Paying the Price for
Methodological Solipsism"), Putnam ("Computational Psychology and Interpre-
tation Theory"), and Tyler Burge ("Individualism and the Mental") argue against
Fodor that such a computational functionalism would rob mental states of their
intentional, representational aboutness, their content, which requires that the
internal states be related to external situations. Searle ("Minds, Brains, and
Programs") uses his famous "Chinese Room" example (the upshot of which is
that there is more to understanding Chinese than the manipulation of Chinese
inscriptions according to rules) to argue that the mere instantiation of a computer
program cannot yield intentionality, nor will tying the computer to environmental
causes and effects yield it either. Fodor ("Author's Response") presents a
typically spirited reply to Stich and, for good measure, two replies to Searle (who
gets in the last, not previously published, word on this occasion).
These debates are couched in terms of "folk psychology", our everyday
concepts of believing, wanting, understanding a language, etc. Stich, Paul
Churchland, and Daniel Dennett deny that such concepts can play any role in a
science of psychology. Stich ("Autonomous Psychology and the Belief-Desire
Thesis") argues that such concepts apply not to individual subjects but to subjects
in particular environmental, social, and historical contexts, and therefore cannot
be used to describe the individual psychological states a science of psychology
would require. Churchland ("Eliminative Materialism and the Propositional
Attitudes") argues that folk psychology is so defective that it should, and
eventually will, go the way of alchemy, phlogiston theory, and vitalism, and will
be replaced by neuroscience. Dennett ("Three Kinds of Intentional Psychology")
sees more good in folk psychology, taken as a useful instrument for predicting
behavior, but since it succeeds, he claims, only by presupposing normatively,
ideally, and abstractly that individuals are rational agents, the states ascribed by
folk psychology can have no causal or explanatory power.
The field of philosophy of mind today is in great ferment, stimulated by results
References
1. Dretske, Fred (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT
Press.
2. Fodor, Jerry (1968), Psychological Explanation, New York: Random House.
3. Millikan, Ruth Garrett (1984), Language, Thought and Other Biological Categories, Cambridge,
MA: MIT Press.
4. Papineau, David (1987), Reality and Representation, Oxford: Basil Blackwell.
5. Putnam, Hilary (1967), 'Psychological Predicates', in W. H. Capitan and D. D. Merrill (eds.), Art, Mind,
and Religion, Pittsburgh: University of Pittsburgh Press, pp. 37-48.
6. Ryle, Gilbert (1949), The Concept of Mind, London: Hutchinson.
Department of Philosophy, JEROME A. SHAFFER
University of Connecticut,
Storrs, CT 06269, U.S.A.