
Book Reviews

Hubert L. Dreyfus, Being-in-the-World: A Commentary on Heidegger's Being and
Time, Division I; Cambridge, MA: MIT Press, 1991, xiv + 370 pp., $15.95
(paper), ISBN 0-262-54056-8.

In 1969, Hubert Dreyfus initiated a critique of AI and cognitive science that
claimed the early work of Martin Heidegger as its main inspiration. This book is
Dreyfus's long-awaited interpretation of Heidegger's Being and Time, based on
more than twenty years of lecture notes. It will be of particular interest to
members of the cognitive science community who are concerned to understand
Dreyfus's critique and its Heideggerian antecedents. It will also be of interest to
Heidegger scholars, both as a contribution to the interpretation of Being and
Time and as a resource for teaching it.
This book is billed as a commentary, to be read in conjunction with Heidegger's
text, not as a substitute for it. However, the clarity and comprehensiveness of
Dreyfus's writing is such that his characterization of Heidegger's ideas should be
intelligible even to those whose familiarity with the text is minimal. Being and
Time is divided roughly in half. Division I contains an analysis of the existential
structures underlying the everyday way of being-in-the-world of Dasein (persons),
complete with ontological and epistemological implications. 1 Division II contains
an account of "authentic" Dasein and a more fundamental account of the
existential structures of Division I in terms of temporality.
Dreyfus concentrates on Division I, which he sees - not uncontroversially - as
containing Heidegger's most original contributions to philosophy from this period.
The fifteen chapters of Being-in-the-World take up the main topics introduced in
Division I in order and in detail. Division II is dealt with only in an appendix,
co-authored by Jane Rubin, in which Heidegger's notion of authenticity is
interpreted as a secularization of Kierkegaard's theistic existentialism. The Divi-
sion II sections on temporality are not discussed.
This book functions beautifully as a commentary in many respects. It is replete
with references to Heidegger's lectures from the period of Being and Time, now
available in English, which deal with many of the same issues but are generally
much more straightforward. Here, Dreyfus has a considerable advantage over
previous expositors who were not in a position to make use of this material.
Another nice feature is that Dreyfus is at pains to contextualize Heidegger's
thought. For example, he regularly points out Heidegger's affinities with other
writers whose work may be more familiar, e.g., Dewey, Kuhn, and Wittgenstein.
Likewise, he often refers to the differences between Heidegger and his pre-
decessor, Husserl, on one side, and Heidegger and such contemporary
philosophers as John Searle on the other.
On the other hand, it is a failing of this book that it gives the reader little
guidance with regard to the literature on Heidegger. This literature includes
several other commentaries on Being and Time, half a dozen general intro-
ductions to Heidegger, and something like a denumerably infinite set of interpre-
tive essays. Yet Being-in-the-World has no bibliography, never mind an annotated
one. Moreover, virtually every time Dreyfus mentions another Heidegger scholar,
it is to criticize her or him for having allegedly misunderstood Heidegger in some
respect. This might not even be worth mentioning if Being-in-the-World were
billed as an interpretation of Division I. But one expects somewhat more
even-handedness from a commentary, not to mention somewhat more help in
identifying worthwhile further reading.
I should also mention that Dreyfus offers new translations of some key
Heideggerian terminology, most of which are indeed improvements. I suspect that
the standard translations into English by Macquarrie and Robinson are so
entrenched that attempts to alter them at this point - even for good and sufficient
reason - may only result in annoying the veteran reader of Heidegger and
confusing the novice. However, the explanations of Heidegger's terminology that
accompany the new translations are valuable and useful.
Dreyfus organizes his discussion largely around the notion of intentionality.
This might seem odd, since Heidegger uses the term only once or twice in the
entire book; but it is in fact an excellent way of approaching the text. In the first
place, it is profitable to view Heidegger not as having abandoned the concerns of
Husserlian phenomenology along with its vocabulary, but rather as having recast
those concerns in fundamental ways. Second, the notion of intentionality is as
central to Anglo-American philosophy as it is to phenomenology, so, by em-
phasizing it, Dreyfus makes Heidegger's thought much more accessible to an
Anglo-American audience. Heidegger shows up as having many of the same
underlying concerns as the members of this audience do - e.g., about the nature
of action or the role of language in cognition - but as having addressed them in
rather different ways.
Specifically, Dreyfus presents Heidegger as having a non-mentalistic theory of
intentionality. Dasein is not the conscious subject of the Cartesian tradition, but a
practical, active, and embodied being. Examining everyday Dasein, Heidegger
finds that it is for the most part not conscious of the circumstances of its activity;
and when it tries to become aware of the wellsprings of this activity, it does not
find explicit theories or rule-like principles - what Heidegger calls "thematized
knowledge". For example, there are culture-specific practices governing how
close one stands to other people. But one is normally not aware of the distance
between oneself and others unless these practices are being violated, and even on
reflection one is hard put to describe the practices. So, Dreyfus says, for
Heidegger the intentional nature of activity is not primarily derived from mental
intentionality, from an internal, explicitly represented system of beliefs and
desires, either conscious or unconscious. Moreover, it is neither possible nor
desirable to make the background of practices that underwrites the intentionality
of activity explicit in the form of a theory. Readers familiar with Dreyfus's work
will recall this line of argument and the consequences he draws with regard to the
feasibility of cognitive science. But with this book in hand, you will be able to see
exactly where these ideas are supposed to be found in Heidegger's own text, with
the significant increase in detail and depth that this affords.
Readers may be disappointed to find that Heidegger has no very specific
account to offer of how non-mentalistic intentionality is supposed to work. It
would be unfair to criticize him for this. His main concern is to get the
phenomenology right, since presumably only on this basis can further philosophi-
cal or psychological theorizing avoid operating under debilitating misapprehen-
sions. But the phenomenology underdetermines further theorizing, so it is difficult
to say what position Heidegger would have taken on issues that were not so
salient for him as they are for us.
In this connection, Dreyfus sometimes comes perilously close to construing
Heidegger as a behaviorist. About distance-standing practices, for instance, he
says that "learning to do it changes our brain, but there is no evidence and no
argument that rules or principles or beliefs are involved" (p. 19). You can find
this sort of claim in the Skinnerian corpus virtually by opening it at random, but
you cannot really find anything like it in Heidegger. Dasein is not the brain any
more than it is the conscious subject.
This instance points to a more general issue: in an effort to make sense of the
text, Dreyfus sometimes goes beyond what the text will obviously support. 2 These
Dreyfusian extensions of Heidegger are neither illegiti-
mate nor uninteresting. But they do indicate again that this is more an interpreta-
tion of Division I than a commentary on it in the usual sense. This does not at all
compromise its value; it merely calls for a more critical stance on the part of the
reader.
A secondary emphasis concerns Heidegger's rejection of individualism. This is
related to the well-known individualism/holism debate in the social sciences. The
issue there is whether social structures are to be understood as the collective
activities of individuals, or whether, on the contrary, you have to start with the
social structures in order to understand the activities of individuals in the first
place. In Division I, Heidegger clearly opts for the latter position,3 which he raises
to the level of a general epistemological and ontological principle. For him, the
intelligibility of the shared world is not to be accounted for by investigating how
individuals represent the world to themselves. Rather, both the intelligibility and
the shared-ness are underwritten by the inculcation of social practices - practices
that need not be consciously represented as such by individuals in order to
effectively structure behavior and thus disclose a particular world. The world is
invested with significance not in virtue of how I see it, but in virtue of what one
does. Heidegger's discussion of this issue is confusing and sometimes outright
confused. It is perhaps here that Dreyfus displays his talent as an expositor to best
advantage, combing out tangled distinctions and rendering the main ideas in clear
and interesting perspective.
A third emphasis here is Dreyfus's frequent recurrence to the theme of
Heidegger's self-advertised break with the Cartesian tradition. For Dreyfus, this
tradition encompasses everybody from Descartes to Husserl, vast stretches of
current Anglo-American philosophy, and the philosophical foundations of cogni-
tive science. Heidegger's phenomenologically grounded position has implications
for many traditional philosophical problems, including such hoary sceptical
difficulties as the existence of the external world and other minds, as well as the
realism/idealism debate, the status of scientific knowledge, and the nature of
truth. In some cases, Heidegger sets out to dissolve the problem rather than
resolve it. Just to render the flavor of this enterprise: Kant said that the great
scandal of philosophy was that no one had succeeded in proving the existence of
an external world; Heidegger replies that the real scandal is that anyone should
ever have thought it necessary to try. But Heidegger does also have more
substantive things to say about some of these issues. For example, he approaches
questions about the nature and status of scientific knowledge by considering
science as a practice, thus anticipating some themes developed independently by
Kuhn and others. Likewise, his notion of truth as unhiddenness (as opposed to
correspondence, for instance) has been highly influential.
Being-in-the-World provides an excellent introduction to Heidegger for those
who are still strangers to his writing, and a thought-provoking interpretation of
Division I for longstanding acquaintances. Last but not least, it will clarify for
everybody the foundations of Dreyfus's ongoing Heideggerian critique of AI and
cognitive science.

Notes
1 Heidegger was very anxious to get away from standard philosophical vocabulary and its Cartesian
connotations. So he coined a philosophical vocabulary of his own, much of which consists of
hyphenated phrases like 'being-in-the-world'. The hyphenation indicates that some unitary conception
is intended, and that the phrase will function by and large as a technical term. Neither Heidegger nor
Heideggerians should be expected to apologize for this, especially since Heidegger's proposed
vocabulary is explained at length in his text, and used at least as consistently as most technical
vocabularies ever are.
Being-in-the-world is a fundamental feature of Dasein. Just as Dasein replaces the Cartesian
conscious subject in Heidegger's framework, so being-in-the-world replaces the Cartesian construal of
the relationship between intelligent beings and their environment as an epistemological relation
between subject and object. Heidegger's point is that we do not primarily know about our world - we
live in it. In other words - to co-opt a later and very similar distinction made by Gilbert Ryle - our
primary relationship with the world is a knowing-how rather than a knowing-that. Dreyfus explicates
being-in-the-world in terms of the ongoing and apparently effortless coping with familiar things that is
our normal relationship to the world under normal circumstances. (See especially pp. 102 ff.)
2 Indeed, he occasionally says as much. See, e.g., the beginning of his discussion of the "unavailable",
pp. 69-72.
3 Although perhaps not in Division II. It can be argued that the notion of authenticity in Division II is
an expression of precisely the kind of individualism that Heidegger apparently rejects in Division I and
that consequently the (inauthentic) everyday Dasein of Division I and the authentic Dasein of Division
II represent inconsistent conceptions of human existence. Dreyfus does touch briefly on this issue,
which is a major topic of debate in the literature.

Department of Philosophy BETH PRESTON
and Artificial Intelligence Program,
University of Georgia,
107 Peabody Hall,
Athens, GA 30602, USA

Daniel C. Dennett, Consciousness Explained, Boston: Little, Brown and Co.,
1991, xiii + 511 pp., $27.95 (cloth), ISBN 0-316-18065-3.

1. Preamble
Philosophers of mind and AI have long debated the consciousness of both minds
and machines. Such debate is often trying, since the participants generally cannot
offer any account of exactly what consciousness is. Now Dennett does offer an
account, one which, if it is even close to the truth, shows exactly how and why an
appropriately constructed machine would be as conscious as you or (more to the
point) I. Dennett's account, which is a mixture of new philosophy and leading
science, requires us to rethink a lot of old positions and adopt many new
perspectives. Such work is never quick and easy, but the slow and difficult road
on which he has set out is well worth a reconnoitre.

2. Multiple Drafts

Although materialism had done away with spirit substances, many a materialist is
still eager to cling to the Cartesian Theatre - a centre of conscious experience and
the mediating point for conscious (and free?) action; Dennett terms such thinkers
Cartesian Materialists. He argues that if you trace afferent inputs up through the
brain, and efferent outputs backwards, there will be no special point at which they
come together. There is no neurogeographical or psychofunctional central
executive. Of course, there is nothing wrong with central executives per se, only
those that have all the qualities - consciousness, intentionality, etc. - that a theory
of mind sets out to explain. He charges that cognitive science is often all too keen
to explain the function of various "support" modules, whilst maintaining an
embarrassing silence concerning the central system they implicitly serve. Such
doubts are not unique to Dennett - what is interesting here is both the vehemence
of their expression and the fact that he has a positive alternative to offer.
The Multiple Drafts theory asserts that there is no point in the brain where
experience is collected together - no one locus of consciousness. The brain is
constantly monitoring the world outside, along with its own states, and forming
judgements about what it observes. Different parts of the brain form different
judgements, and over time judgements about exactly what is being observed will
change.

At different times and different places, various "decisions" or "judgments" are made; more literally,
parts of the brain are caused to go into states that discriminate different features, e.g., first mere onset
of stimulus, then location, then shape, later color, later still (apparent) motion, and eventually object
recognition. (p. 134.)

The brain constantly redrafts its set of judgements about the current state of the
world (and itself), and it is these drafts that make up the contents of conscious-
ness. If not put to use, drafts will simply be discarded, but if needed to modulate
action, then they will be adopted and sometimes stored in memory. Whilst the
clock ticks, you are unaware of it; you make the judgement that it ticks, but put
that judgement to no use. When it stops, that judgement comes to the fore and is
used to modulate behaviour - it might cause a speech act or just trigger off an
inner monologue. So sometimes, when drafts are put to use, we then regard the
judgements that make them up as conscious, but they are just the same as the
drafts that never get the chance to modulate action. (Draft chances are decided
by "pandemonium"-style competition).
And so Multiple Drafts attacks the notion of a sharp line between conscious
and non-conscious experience. Some important judgements modulate a lot of
behavior; these clearly fall on the conscious side of the divide. Other judgements
can be shown to have been made - as in many psychology experiments - but
cannot be readily reported. But a whole host of judgements are marginal, and
there is no motivation for drawing a sharp line between pre- and post-conscious
experiences. In general, Dennett argues against determinate (i.e., not vague)
experience. What you experience is not fixed, because it does not come together
in one final draft. But drafts modulate behaviour, and the action so produced is
determinate. When we describe our experience - out loud or to ourselves - it
seems as though we describe something fixed and determinate. But that is just
what Dennett's theory would predict. The very act of producing a description is
modulated by multiple competing drafts, but the nature of action is not multiple;
only one description, out of the many possible, can result.
As well as rejecting the notion of a definitive publication in favour of multiple
drafts, Dennett makes some important points about the temporal aspect of
experience. Firstly, he notes that it is simply a mistake to assume that the
temporal properties of the representational vehicle must be the same as the
temporal properties of its content. Just because brain event p occurs before brain
event q does not mean that what is represented by q must be represented as
occurring after what is represented by p. Secondly, he stresses that brain
operations take a finite and non-trivial amount of time. The result is that not only
is there not always an answer to the question as to what is consciously
experienced, but also there is not always an answer as to when it is experienced.
When the British Empire fought the battle of New Orleans, was it aware of the
peace treaty signed days before? An answer is that bits of it were, but the bits that
fought the battle were not. Similarly, one part of the brain may form some
judgement but may not have time to inform another part of the brain that is
controlling some relevant action. Taking these thoughts on board gives a more
sophisticated, but less mysterious, picture of consciousness.
But the multiple-drafts thesis alone does not really explain consciousness.
Primarily, it offers a solution to the problem of understanding or, in Dennett's
words, the problem of what it is for the "given" to be "taken". He runs an
argument, reminiscent of Wittgenstein, to the effect that the phenomenology of
understanding could never ground understanding, and then introduces Multiple
Drafts, a theory that links perception to action, understanding to practice. To
"take" a "given" one way or another is for your behaviour, or counterfactual
behaviour, to be modulated by the "given" in a particular way. By establishing an
account of understanding that does not require any extra mediation by conscious-
ness, Dennett makes the task of explaining that much more straightforward.

3. Evolution and Virtual Machines


Having established the Multiple Drafts theory, Dennett turns his attention to the
origin of consciousness. He argues for a three-phase evolution: genetic,
phenotypic, and cultural (or "memetic"). Phenotypic evolution refers to the
accelerated process that occurs when organisms are able to receive a significant
degree of education in their own lifetime. The thought is that benefits that accrue
from such learning are very advantageous, and so natural selection will highly
favour brains demonstrating plasticity once they have arrived on the scene.
Dennett argues, drawing from Richard Dawkins, that culture evolves in the same
way that species do, only instead of genes we talk of "memes". "Examples of
memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or
of building arches" (Dawkins, quoted by Dennett, p. 202). Some memes are
preserved throughout successive generations, others are discarded, and while it is
not necessary that to be a successful meme you need to help your carrier (i.e.,
human society), it is no coincidence that many successful memes are very useful
to u s - the idea of the wheel, for example, is well preserved.
Dennett now argues that both language and consciousness are relatively recent
phenomena, less supported by dedicated hardware in the brain, but, rather,
largely implemented by memetic software, which is uploaded by the process of
growing up in the right sort of rich cultural environment. Now, although Multiple
Drafts clearly commits Dennett to a form of parallelism, he does not want to deny
that some aspects of consciousness are markedly serial. So he suggests that our
stream of consciousness arises from a von-Neumann-style virtual machine im-
plemented in the parallel architecture of the brain. Tied closely to this virtual-
machine idea is that of the "user illusion". The user illusion is the world that
software presents to its operator. On a modern computer, this will involve
windows, icons, menus, and so forth. But if consciousness consists in the user
illusion of the serial virtual machine, who is the user being fooled? Dennett
recognises the homuncular temptation, but quickly squashes it. He argues, though
not without difficulty, that the user of the virtual machine need not itself be a
conscious system. If he succeeds in this argument, and it certainly needs
refinement, then maybe, just here, there is the glimmering of a real account of
consciousness.

4. The Self and How It Feels


In the final phase of his book, Dennett directly tackles the problem of phenom-
enal experience. He suggests that we can treat each individual's phenomenal
world, as described by verbal reports and evidenced by external behaviour, in the
same way that we treat the worlds of fiction. This approach is ontologically
agnostic about the nature of raw experiences, or qualia, and allows Dennett to
push an anti-realist line. It also dovetails nicely with his assertion that there are
many questions about experience that have no answer, just as there are of fiction.
No amount of investigation will reveal the colour of Sherlock Holmes's wallpaper,
unless it is actually referred to by the text.
How do you come to report (and make claims about) your inner world? It
certainly does not seem as though you unconsciously produce word strings - a
behaviour modulated by inner states quite unlike their textual descriptions - but
this is exactly what Dennett suggests. This may seem to deny us our feels, our
phenomenal experience, but it is stressed that the fact that we seem to have a rich
inner life is not questioned; rather, it is emphasized that this inner life is no more
than a seeming one. It is a tricky argument, often reminiscent of similar
arguments by Ryle and Wittgenstein. But, disturbingly, it often seems to be very
dependent on our linguistic skills. Is Dennett claiming that we simply talk our
inner world into existence? This would be unappealing, since it certainly seems
that we can experience without actually describing, or even being able to
describe, what it is we experience.
Rather than concentrating on the nature of experiences available to a conscious
system, Dennett wants to place emphasis on the kind of system capable of
supporting experiences at all. He argues that what is special is that a conscious
system has a notion of self that is not only apparent to external observers - since
in this sense even a lowly mollusc can be seen to be some kind of self - but that is
apparent to the system itself. We can readily see how a non-conscious system
could attribute selfhood to other systems in the world. This is done by explaining
events in a certain way - telling stories with characters as well as mere events.
Dennett asks that this apparatus be included in the brain and then turned in on
itself. The self arises from the stories people tell about themselves. "Our tales are
spun, but for the most part we don't spin them; they spin us. Our human
consciousness, and our narrative selfhood, is their product, not their source"
(p. 418). So, although there is a sense in which the self is an illusion, it is, like
centres of gravity, still involved in pushing and pulling. And once the illusion of
self, the centre of narrative gravity, has been created, a subject is realized that
can phenomenally experience.

5. Conclusions
This book provides a healthy antidote to the predominant pessimism and
confusion surrounding the study of consciousness, but it still leaves you worrying,
albeit about different issues. The similarities between the Cartesian Theatre and
the display screen of the stream-of-consciousness virtual machine are alarming,
giving rise to a nagging doubt that Dennett may himself have succumbed to some
form of Cartesian Materialism. There is a tension between the no-fixed-line
approach offered by multiple drafts, and the relatively determinate, or Cartesian,
picture of consciousness suggested by the user-illusion story. And the argument
that the virtual-machine user need not be conscious, though not without
precedent (Rosenthal 1986), is clearly underdeveloped here, thus opening
Dennett to the charge of homuncularism.
Throughout the book, language is used to prop up Dennett's theory; conscious-
ness is largely a matter of telling stories. (And it might be said that the book is
more about ideas of self than ideas of consciousness.) Such a reliance, of course,
means denying consciousness, or at least a particular flavour of it, to non-
language users. But there seems no obvious link between phenomenal experience
and language use, no reason to think the experience of cats, dogs, and monkeys is
restricted because they are relatively inarticulate. I think there is a way out for
Dennett on this issue. What matters is not language as such, but judgements or,
perhaps, to use his vocabulary, "beliefs" (non-linguistic states), not "opinions"
(linguistic states) (p. 77n4). Conscious experience is the availability of vast
amounts of information for the purposes of modulating action. We can argue that
we describe our inner world to ourselves in terms of behavioural possibilities, as
well as with word strings. Rather than saying to oneself that one sees an apple,
one can simply internally rehearse the action of reaching out for it. Hence, we can
form a useful picture of consciousness for both language and non-language users.
Even with such revisions, more work needs to be done to explain conscious-
ness, and there is still room for doubt as to whether it can ever really be done.
However, with its wealth of new ideas and fresh angles, this is a book that nobody
studying the mind can afford to ignore.

Reference
1. Rosenthal, David (1986), 'Two Concepts of Consciousness', Philosophical Studies 49: 329-359.

School of Cognitive and Computing Sciences, MATTHEW ELTON
University of Sussex,
Falmer,
Brighton BN1 9QH,
United Kingdom

Andy Clark, Microcognition: Philosophy, Cognitive Science, and Parallel Distrib-
uted Processing, Cambridge, MA: MIT Press, 1989, xiv + 226 pp., $10.95 (paper),
ISBN 0-262-53095-3.

Andy Clark's Microcognition is an attempt to resolve ecumenically the great
schism that has developed between computational modelers of the mind: the
connectionists, or the adherents of parallel distributed processing (PDP), and the
followers of conventional physical symbol processing, or Good Old Fashioned
Artificial Intelligence (GOFAI). Clark embraces both models. Different cognitive
tasks "may positively require a variety of algorithmic models . . . . [T]he project of
psychological explanation may involve the construction both of microfunctional,
PDP accounts and in some cases the construction of serial, symbol processing
accounts" (p. 182). Conventional physical symbol systems are appropriate for
higher level cognitive capacities such as planning, rule-following, frame-based
reasoning, or language production and comprehension. PDP models are needed
to account for capacities such as fast expert problem solving, flexible pattern
recognition, or flashes of insight. Clark also maintains that not only do different
tasks require different computational models, but different aspects of a single task
may require PDP as well as GOFAI models (p. 174).
In short, Clark's solution to the "holy war" is to reject a shared assumption,
namely that all cognitive capacities are "psychologically explicable using only the
formal apparatus of a single computational architecture" (p. 128). The mind
consists "of a multiplicity of virtual machines" (p. 2).
However, Clark's ecumenism has its limits. Symbol processing is a by-product
of subsymbolic, connectionist microcognition. GOFAI accurately (and not just
approximately) models mature and evolutionarily recent cognitive capacities that
emerge from microcognitive capacities that are best modeled by PDP. PDP
models the basic architecture of in-the-head processing, and symbolic processing
is a product of our ability to create, manipulate, and then mentally model symbols
in the environment (pp. 131-136). However, the "rich pattern matching [con-
nectionist] substructure" is needed to endow symbolic representations with
content (p. 135). Clark agrees with John Searle that symbolic processing is not
sufficient for mentality, but asserts that "access to the output of a powerful
subsymbolic processor" will provide the missing ingredient: content (p. 175).
In the course of laying out his ecumenical proposal, Clark covers much
territory. The book has two parts, called "The Mind's-Eye View" and "The
Brain's-Eye View." The first part is devoted to GOFAI and its critics, while the
second part is devoted to PDP and its critics. Thus, the book could serve as an
introduction to the schism, and Clark sometimes writes with this in mind. But
usually the introductory aims take a back seat to Clark's own battles, and this will
leave the lay reader frustrated. The professional reader can be frustrated by the
fact that Clark presents his views, as he admits in one case, "in a somewhat
distributed array" (p. 186). Clark's last chapter is, appropriately, called "Reas-
sembling the Jigsaw", and it is only another prelude to a short Epilogue and a
substantive Appendix ("Beyond Eliminativism"). Nevertheless, assembling
Clark's significant proposal is very rewarding and stimulating.
I now turn to three criticisms.
(1) In the Appendix, Clark first raises a problem that reared its head whenever
he discussed the multiplicity of the mind. If the mind is a set of virtual machines
and independent subsystems, we have a naturalistic variant of the Cartesian
problem of interactionism: How do the connectionist subsystems communicate?
Clark calls this "the problem of communication" (p. 206), and he suggests that
physical symbol structures, namely, what is modeled by GOFAI, might be needed
"to pass messages between the various subsystems of a single cognizer" (p. 206).
This solution undercuts the primacy of sub-symbolic processing. Recall that for
Clark not only will different tasks require different computational models, but
single tasks may require different processors. Hence, these single tasks depend on
a solution to the communication problem, and if symbol systems are needed to
solve this problem, the completion of these tasks requires symbol systems. For
these tasks, symbol processing is not "merely ingenious icing on the computation-
al cake" (p. 135). It is one of the cake's ingredients.
The issue then boils down to this: Are there any psychological tasks that do not
involve communication between various subsystems? Pattern matching, and
speedy and flexible processing are candidates, but it is not obvious that these
features in and of themselves are cognitive, psychological capacities. Maybe all
psychological tasks require symbolic processing to achieve communication be-
tween processors. If this is so, it might be true that subsymbolic systems, in and of
themselves, only implement cognitive tasks.
The problem of communication is not limited to the interaction between
connectionist subsystems. Clark suggests that PDP processors and virtual sym-
bolic processors interact and pass representations to each other (pp. 151-152,
171-175). The symbolic processors of a contentful mind will have "access to the
output of a powerful subsymbolic processor" (p. 175). The problem is how do
the connectionist and the symbolic processors communicate?
(2) An ingenious feature of Clark's book is his strategy against the Fodor and
Pylyshyn critique of connectionism (Fodor and Pylyshyn 1988; cf. Fodor and
McLaughlin 1990). Clark responds by driving a wedge between thought ascrip-
tions (e.g., 'Jones believes that 2 + 2 = 4' or 'Jones wants to do arithmetic') and
in-the-head processing. Behavior is the wedge. The meaning of thought ascrip-
tions is the conditions under which they are warranted, and these conditions are
networks of behavior (actual and counterfactual behavior), where behavior
involves relations to external objects (pp. 48-50). Thought ascriptions are "a
holistic net thrown across a body of behavior of an embodied being acting in the
world" (p. 5). Since a thought (e.g., a belief or desire) just is what gets ascribed
in a thought ascription (p. 179), a thought is a network of behavior.
Behavior blocks Fodor and Pylyshyn's argument for symbolic in-the-head
processing, because the systematicity of behavior "exerts no obvious pressure" to
posit in-the-head physical symbol systems (p. 149). The locus of explanation for
cognitive science is the behavior that warrants thought ascriptions, not the
thought ascriptions themselves.
This argument plays a central role in Clark's account of how to build a thinker.
He distinguishes descriptive cognitive science from causal cognitive science
(pp. 153-152). Descriptive cognitive science develops models or formal theories
that capture the systematic relations of thought ascriptions. This is the appropri-
ate domain for GOFAI. Causal cognitive science attempts to model in-the-head
processing that causes the behavior described by thought ascriptions. PDP
processing is needed to explain the flexible behavior that grounds thought
ascriptions. Building a thinker must involve more than instantiating a computa-
tional model produced by descriptive cognitive science. A descriptive theory will
only model the systematicity of belief and desire sentences, which "merely
describe regularities in the behavior and are not geared to pick out the syntactic
entities that are computationally manipulated to produce behavior" (pp. 179-
180).
Unfortunately, Clark never offers a clear argument for why the systematicity of
behavior does not call for a compositional internal symbol system. I think that
one of Clark's reasons is that although behavior is systematic, it is not composi-
tional. Clark admits that if "the locus of systematicity in need of explanation lay
in thought-ascribing sentences," then we would have an argument for in-the-head
symbol systems (pp. 148-149). But, according to Clark, only the systematicity of
the behavior described by thought ascriptions requires a psychological expla-
nation.
As a matter of fact, sentences are in need of explanation. Even if we ignore
sentences that ascribe thoughts to others (e.g., 'Jones believes that 2 + 2 = 4'), we
are still left with sentences that express thoughts (e.g., Jones's saying "2 + 2 =
4"). The network of behavior that warrants thought-ascribing sentences contains
linguistic behavior, which expresses thoughts, and this behavior is both systematic
and compositional. One relevant feature of linguistic behavior is first-person
thought ascriptions (e.g., Jones's saying "I believe that 2 + 2 = 4"). It is not
obvious, and I think it is false, that first-person thought ascriptions describe a
network of behavior. First-person thought ascriptions simply express one's
thoughts. They don't describe a network of behavior; they are systematic and
compositional linguistic behavior that expresses the speaker's thoughts. The
obvious pressure for a system of compositional internal representations, then, is
the fact that we use language to express our thoughts.
(3) Finally, I want to comment on Clark's claim that GOFAI models give the
"Mind's-Eye View", namely, intuitive models that codify our thought ascriptions,
while PDP models give the "Brain's-Eye View". Both labels are misleading.
Although neurally inspired, connectionist architecture is not neural "architec-
ture". For one thing, neurons do not have negative weights. Second, GOFAI
models are also quite unintuitive, because they fully abstract from phenomeno-
logical foundations. The details of phrase-structure models of conventional psy-
cholinguistics or the visual models developed to account for, say, the scanning
effect have little to do with our ordinary, commonsensical thought ascriptions.

References
1. Fodor, Jerry, and McLaughlin, Brian (1990), 'Connectionism and the Problem of Systematicity:
Why Smolensky's Solution Doesn't Work', Cognition 35: 183-204.
2. Fodor, Jerry, and Pylyshyn, Zenon (1988), 'Connectionism and Cognitive Architecture: A Critical
Analysis', Cognition 28: 3-71.

Department of Philosophy, MICHAEL LOSONSKY
Colorado State University,
Ft. Collins, CO 80523, USA

Leonard Angel, How to Build a Conscious Machine, Boulder, CO: Westview
Press, 1989, xii + 131 pp., $42.50 (cloth), ISBN 0-8133-0944-1.

Recent work by philosophers in cognitive science tends to fall into two categories.
One is traditional philosophical work. William Lycan's Consciousness (1987) and
John Searle's Minds, Brains, and Science (1984) are good examples. They bring
traditional conceptual methods to bear on issues in cognitive science. While they
may make reference to empirical work in artificial intelligence (AI) or cognitive
psychology, their central theses don't (or aren't intended to) rest on empirical
evidence. The second category is philosophical work that attempts to argue, at
least in part, from empirical results. Examples in this category are Patricia
Churchland's Neurophilosophy (1986) and Alvin Goldman's Epistemology and
Cognition (1986). How to Build a Conscious Machine, by Leonard Angel, does
not yield easily to the above classification. Is it a "how to" manual, or is it a work
on the possibility of building a conscious machine? In fact, How to Build a
Conscious Machine tries to do both, but in the process demonstrates the
difficulties philosophers face when they stray into unfamiliar territory.
How to Build a Conscious Machine is an ambitious work. Angel attempts both
to do AI and to provide its philosophical foundations. The first half of the book
details what Angel calls a "pure human-to-machine interactive interagency
attributive program" (p. 19). That's a mouthful, and I'm not sure that it is a
grammatical mouthful, but Angel's idea is to build a micro-world environment
with two agents, one controlled directly by a human being, the second controlled
by a micro-worlds AI program or set of programs. Most of Part I, "Interagency
Attribution and Android Skepticism", describes this environment. Angel argues
that we can attribute goals, beliefs, desires, wishes, and other mental states to the
programmed agent. Indeed, Angel maintains, we can build intentional states right
into the agent.
What does this have to do with consciousness? Is an agent with intentional
states an agent with consciousness? Remarkably little is said about consciousness
in the first several chapters. In fact, almost nothing of what, in recent literature, is
discussed under the rubric of consciousness can be found anywhere in How to
Build a Conscious Machine. I'll return to this important point below.
Angel tries to do AI before providing the philosophical foundations for his
peculiar brand of AI. It might appear that Angel has matters backward. Shouldn't
he show that it is possible to build a conscious machine before attempting to build
one? In fact, Angel's procedure shows some promise; he argues from a
description of a machine to the claim that the machine described is in fact
conscious. Of course, one can only proceed in this way by putting aside a host of
in-principle objections, and Angel is quite willing to do so. There are obvious
dangers with such an approach.
In Chapter 3, "Integrated System Programming", Angel tries to set out the
minimal conditions for something to count as a conscious entity. His main claim is
that a conscious machine is one to which we would attribute purposive behavior.
Angel tries to list the components of any system that exhibits such behavior.
There are several fundamental difficulties here, but I want to put them aside for a
moment and instead consider Angel's suggestions for "modeling" a purposive
system.
One difficulty is Angel's notion of modeling. In Chapter 2, "Pure Versus
Impure Models of Agency", Angel describes his task in the following words:

Accordingly, the challenge we are setting for ourselves is to see whether we might take a robot arm
mounted on motorized wheels of some sort and attach it to a program that receives information about
the environment in a primary sensory mode (presumably visual, tactile, or auditory) and, to a
program that operates upon this information so as to yield motor output, the implementation of which
exhibits the structure of rational agency. (p. 13.)
In Chapter 3, "Integrated System Programming", Angel attempts to set out a
model of rational agency. But it isn't clear what counts as a model for Angel. Is
the program that he hopes to provide a model of an agent, or is the
running program an actual rational agent? The difficulty is exacerbated by Angel's
talk of " m o d e l agents", "model ontologies', and "model environments". Angel
writes about giving "our agent the ability to recognize danger and flee it" (p. 23).
This passage is indexed, however, under the heading "modeling danger". So is it
a model or the real thing?
Angel claims that he is doing AI in the micro-world tradition. Programs like
SHRDLU, by Terry Winograd (1972), might be said to model the behavior
associated with the manipulation of blocks, because the machine never really
manipulates any blocks at all, but only attempts to exhibit the linguistic behavior
of someone who can manipulate blocks. Winograd did not try to build a robot
arm, a visual system, or any of the other things that would be required to
complete a machine that actually manipulated blocks. Angel, while identifying
with this micro-world tradition, believes that he is doing something more. So it is
difficult to understand what he means by 'model' when his project differs quite
fundamentally from micro-world modeling.
Both the title of the book and several claims in it suggest that Angel is doing
AI. Angel talks about programs, where presumably he means computer pro-
grams, and the discussion of the so-called programs is juxtaposed with remarks on
actual AI work, such as SHRDLU. But even Angel admits that what he offers
"must be strategic" (p. 12). I guess that means that the details are left to the
reader. I would not characterize the first half of How to Build a Conscious
Machine as work in AI, though it isn't much like philosophy either. Angel talks
over and over again about programming, but there isn't a line of code or
pseudo-code in the book, and there is virtually no reference to the programming
environment in which the model is to be realized. The only comments on the
architecture of the proposed machine come in the form of hasty dismissals of
alternatives. Here, for example, is all Angel has to say about connectionism:
And whereas it may be appropriate to scoff at the idea that strictly nonconnectionist approaches to the
"complexifications" will suffice to transform a discrete digitalized agent functioning linguistically in a
fine-grained environment, the need for neural-net or connectionist realizations of the various units of
the agency-attributive language using device does not undercut the boosters' theory that the
construction of agency-attributive language using devices constitutes the location of the continuum.
On the contrary. Although connectionists thus far haven't claimed to know how to use connectionism
in solving the really difficult AI-cognitive science problems of rational agency, possession of a
complete agency-attributive language using computational grid to squeeze neural-net flesh into, if
anything, justifies the boosters' approach to the current situation, and may well provide connectionists
with something of the framework they are looking for. (p. 56.)
If you have trouble with this passage, you are not alone. Even when you figure
out that by 'booster', Angel means someone who, like himself, thinks that
micro-world programming can be used to build intelligent machines, and that 'the
continuum' refers to something like the range of intelligence, from non-intelligent
behavior to human-like intelligence, there are inescapable problems with Angel's
text here. One is his flippant and uninformed discussion of an entire direction of
AI research. Another is that the passage is ungrammatical and contorted. I offer
it here as a sample both of Angel's writing and his style of argument.
The unfortunate prose just quoted is not an isolated glitch. The problems with
Angel's writing include simple typographical errors such as 'thse' (p. 29), pseudo-
words like 'cognitivized' (p. 27), and sentences like "It mapmakes" (p. 23). Some
of these errors should have been caught by Westview's editorial staff. Other
problems with Angel's writing are less local, and they make the book quite
difficult to read. Only at the end of the micro-world programming section of the
book does Angel let the reader know that he takes himself to be doing micro-
world programming. In many places, Angel assumes a familiarity with a specific
literature. For example, he discusses an admittedly well-known flowchart in
Daniel Dennett's "Toward a Cognitive Theory of Consciousness" (1978) without
ever explaining it or its place in Dennett's paper. Even if this article is familiar to
many readers, the way it is introduced and discussed makes reading the passage
unnecessarily difficult.
When it comes to the details of Angel's system, we get little more than a rather
superficial account of human agency, not a general formula for building a
conscious machine. Angel slides easily between necessary features of rational
agency and what appear to be contingent features of human agency. For example,
Angel says that a rational system will be one that exhibits survival behavior. So
far, so good. But Angel concludes: "Accordingly, we will model the process of
getting hungry, finding food, eating, and receiving nutrition" (pp. 21-22). To
"model marks of satisfaction and distress," Angel explains: "Our agent will have
three modes of behavior that we call smile, frown, and neutral" (p. 22). No
argument is presented for the necessity of these features. It's a long jump from
the claim that a rational creature exhibits survival behavior to the claim that any
rational creature will have the capacity to smile. Indeed, human beings often
indicate satisfaction by smiling, but Angel gives no other reasons for including in his
rational agent the ability to smile. If "smile" is just intended as a place holder for
"satisfaction-indicator", then Angel should say so. Naming psychological func-
tions, however, doesn't amount to AI programming but only to a list of what Drew
McDermott called "wishful primitives" (1976: 145).
The second part of How to Build a Conscious Machine is entitled "The
Philosopher's Project". Here, Angel explores the philosophical conclusions that
one can draw from "the possibility of a formal entity interacting with a human via
an input device leading to a representative agent in the formal entity's mi-
croworld" (p. 65). I see two major problems with this part of the book. First, it
depends on the so-called engineering work that precedes it. Second, while Angel
recognizes that a central issue for any investigation of consciousness is the
question of the nature of qualitative states, remarkably, there is almost no
discussion of this issue in the book and scant references to the enormous literature
on qualia. The discussion of absent qualia arguments, for example, is limited to a
couple of paragraphs in which Angel reports Lycan's defense (1981) against the
absent qualia objection, a defense that he endorses. The rest of the literature is
dismissed in a parenthetical remark (p. 78).
The consideration of objections to functionalism by Searle (1984) and others
dominates the second part of the book. The objections discussed center largely on
the question of the appropriateness of ascriptions of intentionality to machines.
Angel believes that attributions of intentionality to the machine he described in
the first part of his book are warranted. He concludes, without argument, that
attributions of consciousness are also appropriate.
It is perhaps this last flaw that is most likely to make reading this book a
disappointing effort, particularly for philosophers. Angel must think that if you
take care of intentionality, consciousness will take care of itself. If that's his
position, I don't see any argument for it. Unfortunately, his stated views are so
unclear that it is difficult to tell. Arguments for reducing the problem of
consciousness to the problem of intentionality would be of interest to the many
philosophers who have claimed that the phenomenon of consciousness is a special
challenge for functionalist theories of mind.
I said above that How to Build a Conscious Machine is an example of what
happens when philosophers stray into unfamiliar territory. The first half of the
book demonstrates how difficult it is for philosophers to talk intelligently about
AI. But this excuse cannot be made for the second half of the book. Here, Angel
should be on his own turf. Unfortunately, this part of the book is as uninformed
as the first.

References
1. Churchland, Patricia Smith (1986), Neurophilosophy: Toward a Unified Science of the Mind-Brain,
Cambridge, MA: MIT Press.
2. Dennett, Daniel C. (1978), 'Toward a Cognitive Theory of Consciousness', in C. W. Savage (ed.),
Perception and Cognition: Issues in the Foundations of Psychology; Minnesota Studies in the
Philosophy of Science, Vol. 9 (Minneapolis: University of Minnesota Press).
3. Goldman, Alvin I. (1986), Epistemology and Cognition, Cambridge, MA: Harvard University
Press.
4. Lycan, William G. (1981), 'Form, Function and Feel', Journal of Philosophy 78: 24-49.
5. Lycan, William G. (1987), Consciousness, Cambridge, MA: MIT Press.
6. McDermott, Drew (1976), 'Artificial Intelligence Meets Natural Stupidity', SIGART Newsletter of
the Association for Computing Machinery, No. 57 (April 1976); reprinted in J. Haugeland (ed.), Mind
Design, Cambridge, MA: MIT Press, 1981, pp. 143-160.
7. Searle, John R. (1984), Minds, Brains and Science, Cambridge, MA: Harvard University Press.
8. Winograd, Terry (1972), Understanding Natural Language, Orlando, FL: Academic Press.

Department of Philosophy SAUL TRAIGER
and Cognitive Science Program,
Occidental College,
Los Angeles, CA 90041, USA

Geoffrey Brown, Minds, Brains and Machines, Mind Matters Series, New York:
St. Martin's Press, 1989, xi + 163 pp., $24.95 (cloth), ISBN 0-312-03144-0.

This is an excellent introduction to the philosophical issues involved in artificial
intelligence (AI), viewed from a historical perspective. It would be especially
well-suited - perhaps even the best available text - for an undergraduate course
on the "philosophy of artificial intelligence" or on the philosophy of mind where
there is keen interest in basic issues such as the possibility of "artificial"
intelligence.
This is nevertheless a book with both strengths and weaknesses. Its main
strength is in giving a lucid, even-handed, historical survey of philosophical
contributions to topics of current interest for research in AI and cognitive science
since early Modern philosophy beginning with Descartes. I here use capitalized
'Modern' in the strict historical sense to mean Western intellectual history after
the Renaissance. This history includes quite good and appropriate discussions of
conceptions of the mind from Descartes, Spinoza, Malebranche, Leibniz, Ber-
keley, Kant, Fichte, Hegel, Brentano, James, Wittgenstein, Ayer, Ryle, Russell,
Strawson, Smart, to philosophers who are more au courant, such as Dennett, Kripke,
and Searle. The discussions of solipsism (the view that there may be only one
mind - one's own) from Descartes to Wittgenstein are particularly thoughtful and
well-organized. There are remarkably few accounts of the history of modern
theories of mind that have this sweep and that are also focused upon areas of
interest for AI. One unfortunately gets the impression from reading much
contemporary work in the philosophy of mind, and especially work from AI and
cognitive-science circles, that there is little or no awareness of the exact
contributions of Kant, Leibniz, or Spinoza, for example, not to mention
Descartes. It is true that there is indeed a good question about how much of this
diffuse and often difficult material really is useful for modern investigations, but
at least Brown does not begin with the assumption that our clever and ruthlessly
"scientific" contemporaries invented the subject and have made all the best
arguments and observations in the history of the field. For those readers from AI
and cognitive science who do not have some awareness of these historical
traditions, this book may be hard going, and the relevance for modern projects
sometimes unclear. There are nevertheless very few other places where one could
go to get such succinct accounts of Modern philosophical theories of the nature of
mind. Even if these theories are not all splendidly clear and attractive (e.g.,
Spinoza's and Fichte's), it should be useful for everyone to see where modern
concepts of the mind and its nature - including now commonplace ones - arose,
e.g., in Descartes.
Although I do not fault the historical approach Brown uses, and in fact find it a
useful antidote to the aura of trendy "We're tops!" now-centrism that seems to
dominate many discussions in the theory of mind, it is nevertheless not altogether
clear that he has chosen the historical material most useful and interesting for
contemporary researchers. He seems to have focused almost exclusively on
metaphysical issues of the nature and existence of mind (a.k.a. the mind-body
problem), which often involve extremely speculative arguments with complex
methodological assumptions, if not outright theological in nature. More useful
might have been detailed theories of the workings of the mind, whatever it exactly
"is". These would include the Theory of Ideas in Leibniz and Locke (including
clear and unclear ones), historical work on natural kinds and categories - includ-
ing Kant's a priori categories - theories of the nature of reasoning and inference,
theories of symbols and signs (e.g., in Peirce), and theories of components and
workings of the mind (including the will, unconscious projections, emotions,
desires, and their objects). The one excursion into such more detailed accounts of
the structure of minds and their objects is a short section on Brentano. One might
also reasonably object to Brown's restricting - without argument - his survey to
Modern works, since Aristotle and medieval philosophy also contain many useful
contributions on components of the mind, and on the nature of mind, action,
inference, and representation.
If the broad historical sweep of the account Brown gives of Modern philosophy
of mind is this book's strength, then his accounts of contemporary philosophical
views are somewhat thin, and the accounts of contemporary computer-related
issues in AI, such as computer design, natural-language processing, logic and
reasoning, learning, neural nets, connectionism, and parallel processing, are
nonexistent or rather unsophisticated and (now) dated. The last three topics are
not even mentioned. His account of Turing's famous essay is short and somewhat
unsympathetic. There is almost no discussion of technical areas such as com-
putability and recursive functions. His account of Searle's Chinese Room is brief
and sympathetic, but hardly does justice to the enormous difficulty of this
conceptual puzzle and to the vast commentary it has spawned. There is no
mention of attractive views that do not consider minds and representations in
them as unambiguous "stand-alone" systems, such as Putnam's socio-functional-
ism. Far more successful is Brown's remarkably clear and careful discussion of
Wittgenstein's Private Language Argument. One might propose that Wittgen-
stein's views were over-discussed in the 60s and 70s, and quite under-discussed
now - with the Private Language Argument and its now-vast literature actually
being of considerable importance for many issues in the AI-related philosophy of
mind, such as what it is "to follow a rule". Although "symptoms" of mind are
briefly mentioned, one misses in Brown's discussion some reference to the careful
discussion of "symptoms" and underlying "illnesses" in Wittgenstein's Blue and
Brown Books (1958), which have such a close analogy to "symptoms" of the mind
and of intelligence.
It is, I think, very unlikely that anyone would find the short discussion of
computers and of branches and techniques of AI, in Chapter 9 ("The Computer
and the Brain"), to be anything but overly simple-minded and outdated. If the
book were used as a course text, however, this material could easily be replaced
by more recent articles or specialized texts dealing specifically with artificial
intelligence. The discussions of the brain and neurology are slightly more useful,
although also somewhat simplistic.
This book is clearly more of a fair-minded textbook and survey than it is a
rhetorical monograph. One can nevertheless discern the positions toward which
Brown tilts. The positions he treats most positively, and the arguments he uses
against other positions, suggest an attraction to the position of Wittgenstein and
Searle, and away from the quasi-behaviorism of Turing and many optimistic
researchers in AI and cognitive science. One peculiar statement is that Brown
feels AI researchers are "cautious in their optimism" (p. 60). Although short-
term optimism is rare - certainly since Schank's famous "AI winter" attacks on the
overblown promise of natural-language processing and expert systems - most AI
researchers seem quite wedded to the eventual success of their project, and for
obscure reasons. Perhaps such enthusiasm is, however, both appropriate and
necessary for making progress, if there is to be any - even if it may strike
outsiders as dogmatic. That is, I am suggesting that optimism about the
representation of mental phenomena by artificial systems is "justified" more by
the motivation it produces, and which in turn requires it, than by convincing
evidence for its possibility in the usual sense.
Another place where Brown gently and substantially inserts himself into the
dialectic is what he calls the "curious anomaly" (p. 60) that standards of evidence
for intelligence appropriate to human beings do not seem appropriate to artifacts,
such as man-made computers, and vice versa. By Searle's argument, for
example, knowledge of internal workings seems to disqualify certain entities from
having intelligence, or at least understanding, while the present state of medical
knowledge seems irrelevant to our assessment of human beings' capabilities. This
elegant way of putting it, however, then drifts into the usual Searlean boilerplate
about systems that "have" semantics and those that don't. It is far from clear,
except in some rather pre-theoretical and exclusively first-person sense, what it
"is" ("feels like"?) to "have" a semantics. There has perhaps been too much
philosophical and grammatical tolerance of talk of things (other than formal
theories) "having" semantics.
St. Martin's Press is not entirely to be commended for its production of this
volume. The sans serif type is clearly intended to strike the reader as impressively
modern, but it impedes both reading and scanning for keywords (as a number of
studies have shown). The jaunty lowercase titles of chapters and sections likewise
strike one as merely a failed attempt at trendiness, and as simply distracting. The
several unsophisticated cartoons might be perceived by alert college students as
insulting; these give way to somewhat clumsily hand-drawn technical diagrams in
the later chapters (e.g., of the brain and a Turing machine). There are a fair
number of typographical errors, and excessive and smeared inking, especially on
boldface words. The intended function of the glossary of thinkers' names and
positions at the end of the book is unclear. Some entries give useful summaries,
while others (those on Dennett and Kant) state only that their views can't be
summarized. It would have been far more useful simply to have given an index of
subjects and names (perhaps with their dates) - which is peculiarly lacking.
Distracting for American readers, especially students, might be the British
spellings, and especially the excessive use of the word 'whilst' - in some cases
used dozens of times over several pages.
These flaws should probably not prevent philosophically-inclined instructors of
courses in AI or cognitive science from using this volume as a fine introduction to
historical theories of the nature of the mind.

Reference
1. Wittgenstein, Ludwig (1958), Preliminary Studies for the "Philosophical Investigations," Generally
Known as the Blue and Brown Books, New York: Harper and Row.

Department of Philosophy, RANDALL R. DIPERT
State University of New York College at Fredonia,
Fredonia, NY 14063, U.S.A.

David M. Rosenthal (ed.), The Nature of Mind, New York: Oxford University
Press, 1991, x + 642 pp., $49.95 (cloth), ISBN 0-19-504670-6; $19.95 (paper),
ISBN 0-19-504671-4.

We can mark the emergence of the philosophy of mind as a separate field with the
publication of Gilbert Ryle's The Concept of Mind (1949). It was Ryle's
purpose to expose as fallacious the idea that there was something special that
distinguishes the mind and mental phenomena from other phenomena, but,
ironically, his work produced a frenzy of theorizing about the very distinction he
tried to deconstruct.
This anthology contains most of the widely discussed and highly influential
articles at the center of present thought in the philosophy of mind. I know of no
better anthology covering the current issues in this field. It consists of 62
better anthology comprising the current issues in this field. It consists of 62
selections divided into five major sections, further divided into subsections. All
articles have been published previously except for an interchange between Jerry
Fodor and John Searle, entitled "After-thoughts: Yin and Yang in the Chinese
Room" and "Yin and Yang Strike Out", respectively. Ned Block's "Troubles
with Functionalism" is his revised and abridged version of an earlier publication
with the same title. Hilary Putnam's "The Nature of Mental States" was originally
entitled "Psychological Predicates" (1967); his move from the formal to the
material mode exemplifies the bold new willingness of philosophers to talk about
things rather than words about things.
The first section ("Problems About Mind") has classical writings of Descartes,
Locke, and Reid, followed by contemporary critiques by Gilbert Ryle ("Descartes'
Myth"), P. F. Strawson ("Self, Mind and Body"), Gareth B. Matthews
("Consciousness and Life"), and G. E. M. Anscombe ("The First Person"). The
second section ("Self and Other") deals with our knowledge of our own mental
states and the mental states of others. The third section ("Mind and Body")
comprises discussions of the relation of mental states to bodily states. The fourth
section ("The Nature of Mind") deals with allegedly special features of the
mental such as intentionality, phenomenological quality, subjectivity, free will,
and consciousness. The last section ("Psychological Explanation") deals with
some problems in psychological theory, such as the validity of the computational
approach, the role of social and environmental context in defining psychological
states, and the place of everyday concepts ("folk psychology") in the science of
psychology. The editor, David M. Rosenthal, provides a general introduction to
the whole anthology and specific introductions to the five sections, all of which are
very helpful in clarifying the issues. The anthology ends with an extensive
bibliography paralleling the subsections of the book.
The book begins (where else?) with Descartes's theorizing about a non-physical
substance with a unique property, thinking. Only the existence of such a
substance, he argues, can explain certain undeniable features of human life. The
predominant negative message of this book, one that accurately reflects our
times, is that this is bad theorizing. Alas, there is no predominant positive message in
the book. There is almost universal agreement that Descartes was wrong, but no
agreement on what is right.
Even if we do not postulate, with Descartes, a special substance, we still have
to deal with mental phenomena - events, states, processes, properties, or aspects.
As Rosenthal points out in his perspicuous introductory discussions, there are two
major tendencies in current approaches to mental phenomena. Some stress the
continuity of the mental with the non-mental ("They are not really so different")
and others stress the non-continuity ("Yes, they are"). Many authors in the
anthology argue against the mental/non-mental dichotomy or offer explanations
of the mental in terms of the physical. Two groups reject attempts to assimilate
the mental into a physicalistic view of the world. One group concludes that
eventually we will come to see that there are no such things as mental
phenomena. Paul Feyerabend ("Mental Events and the Brain") assigns mental
states the same outmoded status as the theologian's postulation of devil-posses-
sion, and Paul M. Churchland ("Eliminative Materialism and the Propositional
Attitudes") likens the mental to the alchemist's fundamental spirits. A small
group - Thomas Nagel ("What Is It Like to Be a Bat?"), Keith Campbell (in an
excerpt from "Central State Materialism"), Ned Block ("Troubles with
Functionalism"), Saul A. Kripke (in an excerpt from Naming and Necessity), Frank
Jackson ("The Existence of Mental Objects" and "What Mary Didn't Know"),
Christopher Peacocke ("Colour Concepts and Colour Experiences"), and the
BOOK REVIEWS" 375

author of this review ("Mental Events and the Brain") - argues that mental states
do exist and resist any presently available explanation in physical terms. But the
majority, with much internal disagreement, work within a physicalistic frame-
work.
The readers of this journal may find especially interesting the last section of the
book, which deals with current issues in the philosophy of psychology, namely,
how the science of psychology is related to computer science, the physical and
social environment of the psychological subject, and "folk psychology" (our
everyday understanding of behavior). The section starts with Fodor's "Meth-
odological Solipsism Considered as a Research Strategy in Cognitive Psychology,"
which further elaborates the earlier, ground-breaking functionalism of his Psycho-
logical Explanation (1968) by defending a computer model of mental processes.
This commits him to what he calls 'methodological solipsism', the view that
intentional mental processes are to be understood simply as a succession of purely
internal states following certain rules. Stephen Stich ("Paying the Price for
Methodological Solipsism"), Putnam ("Computational Psychology and Interpre-
tation Theory"), and Tyler Burge ("Individualism and the Mental") argue against
Fodor that such a computational functionalism would rob mental states of their
intentional, representational aboutness, their content, which requires that the
internal states be related to external situations. Searle ("Minds, Brains, and
Programs") uses his famous "Chinese Room" example (the upshot of which is
that there is more to understanding Chinese than the manipulation of Chinese
inscriptions according to rules) to argue that the mere instantiation of a computer
program cannot yield intentionality, nor will tying the computer to environmental
causes and effects yield it either. Fodor ("Author's Response") presents a
typically spirited reply to Stich and, for good measure, two replies to Searle (who
gets in the last, not previously published, word on this occasion).
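
For readers who want a concrete picture of the "manipulation of Chinese inscriptions according to rules" that Searle targets, the following minimal Python sketch may help. It is purely illustrative and is not drawn from the anthology or from Searle; the rule book, the strings, and the function name are all invented. The program does nothing beyond table lookup on the shape of the symbols, which is just the sort of process Searle claims cannot, by itself, constitute understanding.

# Minimal illustrative sketch (not from the anthology or from Searle): a purely
# syntactic "rule book" pairing Chinese inscriptions with canned replies.
# The strings and pairings are invented for illustration only.
RULE_BOOK = {
    "你好吗?": "我很好。",          # hypothetical question/reply pairing
    "今天天气如何?": "天气很好。",  # hypothetical question/reply pairing
}

def chinese_room_operator(inscription: str) -> str:
    """Return whatever reply the rule book pairs with the given inscription.

    Only the shape of the symbols is consulted (string equality); nothing here
    models a grasp of what the symbols mean.
    """
    return RULE_BOOK.get(inscription, "对不起。")  # default reply, equally uncomprehended

print(chinese_room_operator("你好吗?"))  # emits the paired reply without "understanding" it

Searle's further claim, noted above, is that even hooking such a system to environmental causes and effects would not by itself yield intentionality; whether that is so is exactly what his exchange with Fodor disputes.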
These debates are couched in terms of "folk psychology", our everyday
concepts of believing, wanting, understanding a language, etc. Stich, Paul
Churchland, and Daniel Dennett deny that such concepts can play any role in a
science of psychology. Stich ("Autonomous Psychology and the Belief-Desire
Thesis") argues that such concepts apply not to individual subjects but to subjects
in particular environmental, social, and historical contexts, and therefore cannot
be used to describe the individual psychological states a science of psychology
would require. Churchland ("Eliminative Materialism and the Propositional
Attitudes") argues that folk psychology is so defective that it should, and
eventually will, go the way of alchemy, phlogiston theory, and vitalism, and will
be replaced by neuroscience. Dennett ("Three Kinds of Intentional Psychology")
sees more good in folk psychology, taken as a useful instrument for predicting
behavior, but since it succeeds, he claims, only by presupposing normatively,
ideally, and abstractly that individuals are rational agents, the states ascribed by
folk psychology can have no causal or explanatory power.
The field of philosophy of mind today is in great ferment, stimulated by results
in the philosophy of language, logic, computer science, and cognitive psychology.
This anthology captures that ferment.
It struck me that one omission in this anthology is a representative of the newly
emerging historical version of functionalism. The earlier functionalism seems to
beg the question; if mental states are to be analyzed in terms of the "role" they
play in achieving the "goals", "end", or "aims" of the system, then we will need
an analysis of those intentional terms. Ruth Millikan (in Language, Thought and
Other Biological Categories, 1984), Fred Dretske (in Explaining Behavior:
Reasons in a World of Causes, 1988), David Papineau (in Reality and Representa-
tion, 1987), and others offer a historical account of function modeled on the
function of the heart (via natural selection) or the function of words (via cultural
selection or individual conditioning). Perhaps Rosenthal will include this ap-
proach in the next edition. At any rate, he has given us a marvelous amount to
chew on for quite a while.

References
1. Dretske, Fred (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT
Press.
2. Fodor, Jerry (1968), Psychological Explanation, New York: Random House.
3. Millikan, Ruth Garrett (1984), Language, Thought and Other Biological Categories, Cambridge,
MA: MIT Press.
4. Papineau, David (1987), Reality and Representation, Oxford: Basil Blackwell.
5. Putnam, Hilary (1967), 'Psychological Predicates', in W. H. Capitan and D. D. Merrill (eds.), Art,
Mind, and Religion, Pittsburgh: University of Pittsburgh Press, pp. 37-48.
6. Ryle, Gilbert (1949), The Concept of Mind, London: Hutchinson.

Department of Philosophy, JEROME A. SHAFFER
University of Connecticut,
Storrs, CT 06269, U.S.A.
