
New Waves in Philosophy

Series Editors: Vincent F. Hendricks and Duncan Pritchard

Titles include:

Jesús H. Aguilar, Andrei A. Buckareff and Keith Frankish (editors)
NEW WAVES IN PHILOSOPHY OF ACTION
Michael Brady (editor)
NEW WAVES IN META-ETHICS
Thom Brooks (editor)
NEW WAVES IN ETHICS
Thom Brooks (editor)
NEW WAVES IN GLOBAL JUSTICE
Otavio Bueno and Oystein Linnebo (editors)
NEW WAVES IN PHILOSOPHY OF MATHEMATICS
Boudewijn De Bruin and Christopher F. Zurn (editors)
NEW WAVES IN POLITICAL PHILOSOPHY
Maksymilian Del Mar (editor)
NEW WAVES IN PHILOSOPHY OF LAW
Allan Hazlett (editor)
NEW WAVES IN METAPHYSICS
Vincent F. Hendricks and Duncan Pritchard (editors)
NEW WAVES IN EPISTEMOLOGY
P.D. Magnus and Jacob Busch (editors)
NEW WAVES IN PHILOSOPHY OF SCIENCE
Yujin Nagasawa and Erik J. Wielenberg (editors)
NEW WAVES IN PHILOSOPHY OF RELIGION
Jan Kyrre Berg Olsen, Evan Selinger and Søren Riis (editors)
NEW WAVES IN PHILOSOPHY OF TECHNOLOGY
Greg Restall and Gillian Russell (editors)
NEW WAVES IN PHILOSOPHICAL LOGIC
Jesper Ryberg, Thomas S. Petersen and Clark Wolf (editors)
NEW WAVES IN APPLIED ETHICS
Sarah Sawyer (editor)
NEW WAVES IN PHILOSOPHY OF LANGUAGE
Mark Sprevak and Jesper Kallestrup (editors)
NEW WAVES IN PHILOSOPHY OF MIND
Kathleen Stock and Katherine Thomson-Jones (editors)
NEW WAVES IN AESTHETICS
Cory D. Wright and Nikolaj J. L. L. Pedersen (editors)
NEW WAVES IN TRUTH

New Waves in Philosophy


Series Standing Order ISBN 978–0–230–53797–2 (hardcover)
Series Standing Order ISBN 978–0–230–53798–9 (paperback)
(outside North America only)
You can receive future titles in this series as they are published by placing a standing order.
Please contact your bookseller or, in case of difficulty, write to us at the address below with
your name and address, the title of the series and one of the ISBNs quoted above.
Customer Services Department, Macmillan Distribution Ltd, Houndmills, Basingstoke,
Hampshire RG21 6XS, England
New Waves in Philosophy of Mind
Edited by

Mark Sprevak
School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK

and

Jesper Kallestrup
School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK
Selection and editorial matter © Mark Sprevak and Jesper Kallestrup 2014
Chapters © Individual authors 2014
Softcover reprint of the hardcover 1st edition 2014 978-1-137-28671-0
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2014 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN 978-1-137-28672-7 ISBN 978-1-137-28673-4 (eBook)
DOI 10.1057/9781137286734
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
Contents

Series Editors’ Preface vii

Preface viii

Notes on Contributors x

Part I Metaphysics of Mind

1 The Cartesian Argument against Physicalism 3
Philip Goff

2 A Call for Modesty: A Priori Philosophy and the Mind-Body Problem 21
Eric Funkhouser

3 Verbs and Minds 38
Carrie Figdor

4 Meanings and Methodologies 54
Justin C. Fisher

5 Entangled Externalisms 77
Mark Sprevak and Jesper Kallestrup

6 The Phenomenal Basis of Epistemic Justification 98
Declan Smithies

7 The Metaphysics of Mind and the Multiple Sources of Multiple Realizability 125
Gualtiero Piccinini and Corey J. Maley

8 The Real Trouble with Armchair Arguments against Phenomenal Externalism 153
Adam Pautz

Part II Mind and Cognitive Science

9 Problems and Possibilities for Empirically Informed Philosophy of Mind 185
Elizabeth Irvine

10 Psychological Explanation, Ontological Commitment and the Semantic View of Theories 208
Colin Klein

11 Naturalizing Action Theory 226
Bence Nanay

12 The Architecture of Higher Thought 242
Daniel A. Weiskopf

13 Significance Testing in Neuroimagery 262
Edouard Machery

14 Lack of Imagination: Individual Differences in Mental Imagery and the Significance of Consciousness 278
Ian Phillips

15 A Beginner's Guide to Group Minds 301
Georg Theiner

Index 323
Series Editors’ Preface

The aim of the New Waves in Philosophy series was to gather the young and
up-and-coming scholars in philosophy to give their view of the subject
now and in the years to come and to serve a documentary purpose –
that is, ‘this is what they said then, and this is what happened’. These
volumes provide a snapshot of cutting-edge research that will be of
vital interest to researchers and students working in all subject areas of
philosophy. Our goal was to have a New Waves volume in every one of
the main areas of philosophy, and with this volume on the philosophy
of mind, we believe that this goal has been achieved. Accordingly, this
volume is the final book in this series. The principles that underlie the
New Waves in Philosophy series will live on, however, in the new Palgrave
Innovations in Philosophy series.

Vincent F. Hendricks and Duncan Pritchard

Preface

Philosophy of mind has always been one of the core disciplines in philosophy, and it continues to play a dominant role in contemporary
philosophy. The topics broached by philosophers working in this field are
profound, vexed and intriguing. The posed problems pertain, amongst
other things, to the nature and kinds of consciousness we enjoy, to the
differences, if any, between our phenomenal and intentional states and
to the causal and putatively reductive relationships between our mental
states and the physical states of our body and the environment. Some
of these problems are old – or are new versions of old ones – and can
be traced to our great philosophical ancestors; others are entirely novel,
prompted by landmark empirical findings in the cognitive sciences. The
methods philosophers avail themselves of in attempting to provide illu-
minating responses to these problems range from armchair conceptual
analysis to brain-scanning techniques. Very few theories of mind are
widely considered refuted – Malebranche’s occasionalism and Ryle’s
analytic behaviourism are candidates – and most current debates of such
theories exhibit the same vigour and rigour as those that the theories’
introduction generated, not least because of how they mesh with issues
in cognitive science, philosophy of science, metaphysics and philos-
ophy of language, to name but a few. Surely this indicates that which
contenders are correct and to what extent they are so are still very much
open questions.
This volume gives promising younger researchers in the philosophy
of mind a chance to stir up new ideas. They have been asked to think
afresh, to consider what a new wave might look like. The contributors
offer what they take to be the right philosophical account of the topic
in question, examining along the way where the philosophy of mind
and cognition is and where they think it ought to be going. A recurrent
feature is the way insights and results in neuroscience, artificial intelli-
gence, cognitive psychology and cognate areas influence philosophical
theorizing about cognition and the mind.
The volume consists of two parts. Part I is devoted to topics in the
metaphysics of mind. Common to most of the contributions is the ques-
tion of what bearing armchair methods such as thought experiments
and a priori conceptual analysis have on the nature of meaning, mental content, phenomenal character or the physical realizability of the mental. Part II is more directly concerned with topics and problems at the
intersection of philosophy of mind and cognitive science. The contribu-
tions share an interest in the question of what light empirical findings
and methodologies in cognitive science shed on explanatory practices
and ontological commitments in more traditional parts of philosophy
of mind. What emerges from the chapters is that the volume’s bipartite
division is not a sharp one; an important recent trend blurs traditional
metaphysical concerns and empirical work.
Prior to the publication of this volume, we organized an online
conference under the auspices of Eidyn: The Edinburgh Centre for
Epistemology, Mind and Normativity. During the three weeks of the
New Waves in Philosophy of Mind Online Conference, more than
500 participants registered, hundreds of posts were made in the online
forums, and the conference papers were downloaded more than 3,000
times. In this, its final form, the volume reflects the input that authors
received from the wide-ranging international scholarly community that
the conference drew upon. We thank everyone for making it a hugely
successful event. We are also very grateful to Brendan George, Melanie
Blair and the rest of the editorial team at Palgrave Macmillan for their
support and encouragement throughout. Finally, we thank Jamie Collin
for his excellent work on the index.
Notes on Contributors

Carrie Figdor is Associate Professor of Philosophy and core faculty in the Interdisciplinary Graduate Program in Neuroscience at the University of Iowa. She received her B.A. from Swarthmore College and her M.A. and Ph.D.
from the Graduate Center of the City University of New York. Her
primary research interests are in philosophy of mind, psychology, and
neuroscience, metaphysics and neuroethics. Her work has appeared in
the Journal of Philosophy, Philosophy of Science, Topics in Cognitive Science,
Neuroethics and other venues. She is also a former reporter and editor for
the Associated Press and a contributor to the Wall Street Journal.

Justin C. Fisher is Assistant Professor of Philosophy at Southern Methodist University. His Ph.D. is from the University of Arizona.
Before coming to SMU he also spent time as a postdoctoral researcher at
Harvard University and as a visiting assistant professor at the University
of British Columbia. He has written on a variety of topics in philos-
ophy of mind, cognitive science and metaphysics, including papers in
Philosophical Studies, Noûs, Philosophical Psychology and the British Journal
for Philosophy of Science.

Eric Funkhouser is Associate Professor of Philosophy at the University of Arkansas, USA. He received his Ph.D. from Syracuse University in
2002. His research is primarily in metaphysics, as well as philosophy
of mind and action. He has published articles in journals such as Noûs,
Philosophical Studies and Philosophy Compass, and his book The Logical
Structure of Kinds is forthcoming. In particular, he specializes in proper-
ties, mental causation, belief and rationality.

Philip Goff is Lecturer in Philosophy at the University of Liverpool. He is currently working on a book titled Consciousness and Fundamental
Reality, which takes our special relationship with consciousness to be
a crucial source of data in metaphysics and from that starting point
tries to work out the fundamental nature of reality. The book argues
against physicalism, defends a distinctive form of Russellian monism
and explores the relationship between thought and consciousness. He
has published articles on these themes in Philosophical Studies, Philosophy
and Phenomenological Research and Australasian Journal of Philosophy and has papers in three forthcoming Oxford University Press volumes on consciousness, panpsychism and Russellian monism.

Elizabeth Irvine is a postdoctoral fellow at the Australian National University and a lecturer (on leave for 2013/14) at the University of
Cardiff, UK. She received her Ph.D. from the University of Edinburgh in
2011. Her primary interests are in philosophy of psychology and cogni-
tive science and in philosophy of science. She has been published in the
British Journal for Philosophy of Science and Philosophical Psychology and is
the author of the book Consciousness as a Scientific Concept: A Philosophy
of Science Perspective.

Jesper Kallestrup holds the Chair in Mental Philosophy at the University of Edinburgh. He obtained his Ph.D. from the University of St. Andrews.
His primary research interests are at the intersection of philosophy of
mind, philosophy of language and epistemology. He is the author of
Semantic Externalism and has published more than 30 research articles in
journals such as Australasian Journal of Philosophy, Analysis, Philosophical
Studies, Synthese, American Philosophical Quarterly, European Journal of
Philosophy, Pacific Philosophical Quarterly, Dialectica and Philosophy and
Phenomenological Research.

Colin Klein is Associate Professor at the University of Illinois at Chicago. His research focuses on philosophy of mind and philosophy of science, particularly where they intersect in philosophy of
psychology. He is interested in general questions about theory testing
and intertheoretic explanation, as well as more specific questions
about the methodology of functional brain imaging and pain percep-
tion. He has published in journals such as the Journal of Philosophy,
Philosophy of Science, British Journal for the Philosophy of Science,
Philosophical Psychology, Synthese and the volume Foundational Issues
in Human Brain Mapping.

Edouard Machery is Professor in the Department of History and Philosophy of Science at the University of Pittsburgh, a Fellow of the
Center for Philosophy of Science at the University of Pittsburgh and
a member of the Center for the Neural Basis of Cognition (University
of Pittsburgh–Carnegie Mellon University). He is the author of Doing
without Concepts, as well as the editor of The Oxford Handbook of
Compositionality, La Philosophie Expérimentale, Arguing about Human
Nature and Current Controversies in Experimental Philosophy. He has been the editor of the Naturalistic Philosophy section of Philosophy Compass since 2012, and he was awarded the Stanton Prize by the Society for
Philosophy and Psychology in 2013.

Corey J. Maley is a Ph.D. candidate (ABD) at Princeton University and Adjunct Assistant Professor at the University of Missouri–Kansas City. As
an undergraduate at the University of Nebraska–Lincoln, Corey received
a B.S. in computer science, mathematics and psychology and a B.A. in
philosophy. He works in moral psychology and the philosophy of cogni-
tive science.

Bence Nanay is Professor of Philosophy and BOF Research Professor at the University of Antwerp and Senior Research Associate at Peterhouse,
University of Cambridge. He received his Ph.D. from the University
of California, Berkeley, in 2006. He is the author of Between Perception
and Action and Aesthetics as Philosophy of Perception and the editor of
Perceiving the World. He has published numerous articles on philosophy
of mind, philosophy of biology and aesthetics.

Adam Pautz is Associate Professor of Philosophy at the University of Texas at Austin. He received his Ph.D. from New York University in
2004. He has also worked at David Chalmers’s Centre for Consciousness
at the Australian National University. He is interested in consciousness,
the philosophy of perception, the sensible qualities and ‘the naturaliza-
tion program’. He is currently working on two book projects: one on
perception and another developing a ‘consciousness first’ program in
the philosophy of mind.

Ian Phillips is CUF Lecturer and Gabriele Taylor Fellow at St Anne's College, Oxford. He was awarded his Ph.D. from UCL in 2009 and
returned there as a lecturer in 2010. He also held a Fellowship by
Examination at All Souls College, Oxford, from 2005 to 2012. He has
authored papers in philosophy of mind and cognitive science for Mind &
Language, Philosophy and Phenomenological Research, Philosophical Studies,
Philosophical Perspectives, the Philosophical Quarterly and the Proceedings
of the Aristotelian Society. In 2011 he was awarded the William James
Prize for Contributions to the Scientific Study of Consciousness by the
Association for the Scientific Study of Consciousness. Supported by a
Leverhulme Research Fellowship, he is currently working on a book
entitled Our Experience of Time.

Gualtiero Piccinini is Associate Professor and Chair of the Department of Philosophy at the University of Missouri–St. Louis, where he is also a
member of the Center for Neurodynamics. He received his Ph.D. from
the University of Pittsburgh in 2003. He has published extensively on the nature of computation, computational theories of cognition, the relationship
between psychology and neuroscience, concepts and consciousness. He
has received several awards and fellowships, including a fellowship at
the Institute for Advanced Studies at the Hebrew University of Jerusalem
and a Scholars’ Award by the National Science Foundation. He is the
founder of Brains (www.philosophyofbrains.com), an academic group
blog on the philosophy of mind, psychology and neuroscience.

Declan Smithies is Associate Professor of Philosophy at the Ohio State University. He received his Ph.D. from New York University in 2006 and
was recently Postdoctoral Fellow in Philosophy at the Australian National
University. He works on various issues at the intersection of episte-
mology and the philosophy of mind and cognitive science. His papers
have been published in the Journal of Philosophy, Noûs, Philosophy and
Phenomenological Research, Philosophical Perspectives and the Australasian
Journal of Philosophy. He is also co-editor of Attention: Philosophical
and Psychological Essays and Introspection and Consciousness, and he is
currently writing a monograph on the epistemic role of consciousness.

Mark Sprevak is Lecturer in Philosophy at the University of Edinburgh. He obtained his Ph.D. from the University of Cambridge in 2006. His
primary research interests are philosophy of mind, philosophy of science
and metaphysics, with particular focus on the cognitive sciences. He
has published articles in the Journal of Philosophy, the British Journal for
the Philosophy of Science, Synthese, Philosophy, Psychiatry & Psychology and
Studies in History and Philosophy of Science, among other journals. His
book The Computational Mind is forthcoming from Routledge.

Georg Theiner is Assistant Professor at Villanova University in Philadelphia. After obtaining M.A. degrees in Philosophy and Linguistics
from the University of Vienna, he received a Fulbright scholarship to
study at Indiana University, where he earned his Ph.D. in Philosophy,
together with a joint Ph.D. in Cognitive Science in 2008. He was a Killam
Postdoctoral Fellow at the University of Alberta from 2008 to 2010. His
main areas of research are the philosophy of mind and cognitive science,
particularly theories of embodied cognition, the ‘extended mind’ thesis
and various forms of group cognition. He is the author of Res Cogitans
Extensa: A Philosophical Defense of the Extended Mind Thesis.

Daniel A. Weiskopf is Associate Professor of Philosophy at Georgia State University. He received his Ph.D. from the Philosophy–Neuroscience–Psychology program at Washington University in St. Louis in 2003. He
works on the philosophy of cognitive science, mind and language and has published papers in the British Journal for the Philosophy of Science,
Philosophy and Phenomenological Research, Mind & Language and Synthese,
among other places. He and Fred Adams are currently completing a book
on the philosophy of psychology for Cambridge University Press.
Part I
Metaphysics of Mind
1 The Cartesian Argument against Physicalism
Philip Goff

I’m an analytic metaphysician who thinks analytic metaphysicians don’t think enough about consciousness. By ‘consciousness’ I mean the
property of being a thing such that there’s something that it’s like to be
that thing. There’s something that it’s like for a rabbit to be cold or to be
kicked or to have a knife stuck in it. There’s nothing that it’s like (or so
we ordinarily suppose) for a table to be cold or to be kicked or to have a
knife stuck in it. There’s nothing that it’s like from the inside, as it were,
to be a table. We mark this difference by saying that the rabbit, not the
table, is conscious.
The property of consciousness is special because we know for certain
that it is instantiated. Not only that, but we know for certain that
consciousness as we ordinarily conceive of it is instantiated. I am not
claiming that we know everything there is to know about consciousness
or that we never make mistakes about our own conscious experience.
My claim is simply that one is justified in being certain – believing with
a credence of 1 – that there is something that it’s like to be oneself,
according to one’s normal understanding of what it would be for there
to be something that it’s like to be oneself.
This makes our relationship with consciousness radically different
from our relationship with any other feature of reality. Much meta-
physics begins from certain ‘Moorean truths’ – truths of common sense
that it would be intolerable to deny. Perhaps it is a Moorean truth that
some or all of the following things exist: persons, time, space, freedom,
value, solid matter. But it would be difficult to justify starting meta-
physical enquiry from the conviction that these things must exist as
we ordinarily conceive of them. We must remain open to science and
philosophy overturning our folk notions of what it is for someone to be
free or for something to be solid or for time to pass.

Matters are different when it comes to consciousness. It is not simply that I can gesture at some property of ‘consciousness’ with folk platitudes and have confidence that something satisfies the bulk of those
platitudes. When I entertain the proposition <there is something that
it’s like to be me>, I know that that very proposition (not it or some revi-
sion of it containing a slightly different concept of ‘being something
such that there’s something that it’s like to be it’) is true.
You can’t build a satisfactory metaphysical theory wholly from the
datum that there is consciousness; that datum is after all consistent with
solipsism. We must continue to rely on Moorean truths, empirical data and
the weighing of theoretical virtues in trying to formulate our best guess
as to what reality is like. But because the datum that there is conscious-
ness (as we ordinarily conceive it) is unrevisable it ought to occupy a
central place in enquiry, a fixed point around which other considerations
revolve. I call an approach to analytic metaphysics that grants the reality
of consciousness this central place ‘analytic phenomenology’.
The potential of this datum is grossly underexplored; it has arguable
implications for the nature of time, persistence, properties, composition,
objecthood and personal identity. Time will tell, but it is possible that
with an agreed source of unrevisable data, analytic phenomenologists
may achieve some degree of consensus on certain key questions – a goal
which has so far eluded other schools of metaphysics.
Perhaps the most famous alleged implication of the reality of conscious-
ness is the falsity of physicalism. In this paper I focus on Descartes’s
conceivability argument against the historical ancestor of physicalism:
materialism. In my undergraduate lectures, these arguments against
materialism were presented as objects for target practice rather than
serious evaluation. At the time it seemed to me that there was more
to the arguments than they were being given credit for. I now think
Descartes’s Meditations provides the resources for a sound argument
against standard contemporary forms of physicalism. In what follows I
present this argument.
In the final section, I highlight a distinctive advantage of this argu-
ment: if sound, it demonstrates the non-physicality not only of sensory
experience but also of thought.

1 The second meditation and the refutation of analytic functionalism

Physicalism is the metaphysical view that nothing in actual concrete reality is anything over and above the physical. There is a great divide
amongst physicalists over the epistemological implications of that metaphysical doctrine. A priori physicalists, whom I consider in this first
section, believe that all facts are a priori entailed by the physical facts.
If you knew the intrinsic and extrinsic properties of every fundamental
particle and field and you were clever enough, you could in principle
work out a priori all the other facts: what the chemical composition of
water is, who won the Second World War, how many number 1 hits the
Beatles had, and so on.1
Perhaps the trickiest case for the a priori physicalist is mentality. Prima
facie it doesn’t seem possible to move a priori from the kind of facts
brain science delivers to the facts about consciousness. A colour-blind
brain scientist could know all the physical facts about colour experience
without knowing what it’s like to see colours. A brain scientist who’s
never tasted a lemon could not work out how one tastes from poking
around in someone’s brain.2 At least that’s how things seem. If she wants
to have a plausible, fully worked out view, the a priori physicalist cannot
just brutally assert that contrary to appearances, the mental facts do
follow a priori from the physical facts but must give some plausible
account of mental concepts which has this implication.
The standard way of doing this is to adopt some form of analytic func-
tionalism; that is, to give some kind of causal analysis of mental concepts.
The straightforward analytic functionalist says that mental concepts
denote higher-order functional states. For example, the concept of pain
denotes the state of having some more fundamental state that ‘plays the
pain role’ – that is, roughly speaking, that responds to bodily damage by
instigating avoidance behaviour. On the more subtle ‘Australian’ form
of analytic functionalism defended by David Armstrong and David
Lewis, mental concepts are non-rigid designators which pick out certain
states in virtue of the higher-level functional states they realize.3 Just
as the concept ‘head of state’ picks out in each country the individual
that happens to be the head of state in that country, so ‘pain’ picks out
in each population the state that happens to play the pain role in that
population.
It is clear that both forms of analytic functionalism are forms of a
priori physicalism. Suppose Jennifer’s c-fibres are firing, and the firing of
c-fibres is the state that plays the pain role both in Jennifer and in the
human population in general. For the straightforward analytic function-
alist, if I know all the physical facts I will be able to work out that Jennifer
is in a state that plays the pain role and can infer from this information
that Jennifer is in pain. For the Australian analytic functionalist, if I
know all the physical facts I will know that Jennifer instantiates the state
that plays the pain role in humans and can infer from this information
that Jennifer is in pain. In either case, the mental facts can be deduced
from the physical facts.
The second meditation provides the resources for a decisive refuta-
tion of both of these forms of analytic functionalism. By the end of the
second meditation I have doubted the existence of my body and my
brain and of the entire physical world around me. For all I know for
certain, my apparent experience of all these things might be an espe-
cially vivid hallucination instigated by an omnipotent evil demon. This
demon might have brought me into existence just a moment ago – with
false memories of a long history and expectations of a similar future –
and may destroy me a moment hence. I discover that the only thing the
demon cannot be deceiving me about is my own existence as a thinking
thing: no matter how much the demon is deceiving me, I must exist as
a thinking thing in order to be deceived.
At the end of this guided meditation, when I have doubted the exist-
ence of anything physical whilst at the same time enjoying the certain
knowledge that I exist as a thinking thing, I find I am conceiving of
myself as a pure and lonely thinker: a thing that has existence only in the
present moment, and that has no characteristics other than its present
mode of thought and experience.4 The fact that I can conceive of myself
as a pure and lonely thinker is inconsistent with the analytic func-
tionalist analysis of mental concepts. For the straightforward analytic
functionalist, it is a priori that something has a given mental state if
and only if it has the higher-order state of having some other state that
plays the relevant causal role. However, a pure and lonely thinker has
no states other than the mental states themselves: its mental states are
not realized in anything more fundamental. If straightforward analytic
functionalism is true, a pure and lonely thinker is inconceivable. And
yet a pure and lonely thinker is not inconceivable; the second medita-
tion guides us to its conception.
For the Australian analytic functionalist, it is a priori that something
is in pain if and only if it has the state that plays the pain role in its
population. But a pure and lonely thinker does not have a population;
it is alone in its world. If Australian analytic functionalism were true, a
pure and lonely thinker would be inconceivable. Yet by the end of the
second meditation we end up conceiving of one.
Lewis does suggest at one point that the population relevant to deter-
mining the application of mental concepts might be the concept user’s
population rather than the population of the creature the concept
is being applied to.5 However, when I reach the end of the second
meditation, I am supposing that I am alone in the universe and hence am not a member of any population. If Lewis were right about the reference-fixing description of pain, then ‘pain’ would have no application
in such a conceivable scenario, just as the concept ‘head of state’ has no
application in a scenario where there are no countries. Yet if I read the
second meditation when I have a headache, I end up conceiving of a
scenario in which the concept ‘pain’ evidently has application.
Why have analytic functionalists been so complacent about this
incredibly powerful argument against their view, an argument which –
on the assumption that they took a philosophy degree – they cannot
possibly have been ignorant of? I think that straightforward analytic
functionalists have felt unthreatened by Cartesian considerations
because their view entails that the mental is multiply realized and allows
that in certain non-actual scenarios the mental may be realized by non-
physical goings-on. Whilst functional states in the actual scenario may
be realized by fleshly mechanisms, in non-actual scenarios they are real-
ized by ectoplasm. Therefore, the fact that we can conceive of mental
processes without physical processes – as we do at the end of the second
meditation – is consistent with straightforward analytic functionalism.
However, whilst it is true that straightforward analytic functionalism
is consistent with the conceivability of minds without brains, it is not
true that straightforward analytic functionalism is consistent with the
scenario we finish up conceiving of at the end of the second meditation.
The thing we end up conceiving of at the end of the second meditation
is not just a thing with mentality not realized in physical stuff; it is a
thing with mentality not realized in any stuff. And it is not coherent
to suppose that ‘the higher-order state of having some other state that
plays the pain role’ exists in the absence of some other state that plays
the pain role.
The Australian analytic functionalist avoids this problem by identi-
fying pain with the realizer of the pain role rather than with the pain
role itself. It is then conceivable that pain is a fundamental state, as
there are scenarios where a fundamental state plays the pain role in the
population being considered. Furthermore, there are coherent scenarios
in which pain does not play the pain role. In cases of what Lewis calls
‘mad pain’, there is an individual who instantiates the state which plays
the pain role in her population, without it being the case that it plays
the pain role in her.6 However, despite the ingenious flexibility of the
Australian view, it does not allow that ‘pain’ has application in scenarios
in which nothing in existence plays the pain role. Yet when we reach
the end of the second meditation and have a headache, we find ourselves
conceiving of a scenario in which nothing plays the pain role and yet
'pain' still has application.
8 Philip Goff
In none of this discussion have we moved from the epistemological
to the metaphysical. Analytic functionalists make certain claims about
mental concepts, claims that have implications for what it is coherent
to suppose. Those claims are inconsistent with the state of conceiving
we end up in at the end of the second meditation. We are able to
refute analytic functionalism by refuting its epistemological elements.
Descartes admits in the second meditation that physical things – ‘these
very things which I am supposing to be nothing, because they are
unknown to me’ – may ‘in reality be identical with the “I” of which I
am aware’.7 The leap from the epistemological to the metaphysical must
wait until the sixth meditation.

2 The sixth meditation and the refutation of a posteriori physicalism

In the sixth meditation, we find the following argument against materialism:

First, I know that everything which I clearly and distinctly understand
is capable of being created by God so as to correspond exactly
with my understanding of it. Hence the fact that I can clearly and
distinctly understand one thing apart from another is enough to make
me certain that the two things are distinct, since they are capable of
being separated, at least by God. The question of what kind of power
is required to bring about such a separation does not affect the judge-
ment that the two things are distinct ... on the one hand I have a
clear and distinct idea of myself, in so far as I am simply a thinking,
non-extended thing; and on the other hand I have a distinct idea of
body, in so far as this is simply an extended, non-thinking thing. And
accordingly it is certain that I am really distinct from my body and
can exist without it.8

We can lay the argument out as follows:

Premise 1: Anything I can clearly and distinctly conceive of is possible.
Premise 2: I can clearly and distinctly conceive of my mind and
brain/body existing independently of each other.
Conclusion 1: My mind and my brain/body could exist independently of each other.
Premise 3: If my mind and brain/body could exist independently of
each other, then they are distinct substances.
Conclusion 2: My mind and brain/body are distinct substances.

Let us consider the premises of this argument in more detail.

2.1 Premise 1
When I was a first-year philosophy undergraduate, I was taught that
premise 1 of this argument could be swiftly refuted with the counterex-
ample of water existing independently of H2O. It seems that we can
conceive of a scenario in which water exists in the absence of H2O – for
example, a scenario in which experiments reveal water to have some
other chemical composition. Yet if we infer from this the real possibility
of water existing in the absence of H2O, we are quickly led to the non-
identity of water and H2O, contrary to what is in fact the case.
This rejection of premise 1 is far too quick. Descartes doesn’t say that
any old conceiving implies possibility, only that a clear and distinct
conception implies possibility. I take it that whatever else having a clear
and distinct conception involves, it involves understanding what you’re
conceiving of. Suppose I think of negative charge as 'that thing Dave (my
physicist chum) was talking about the other night’ (where I use this
description as a rigid designator) but have zero understanding of the
defining characteristics of negative charge. It is clear that such a concep-
tion of negative charge is not clear and distinct. I can refer to negative
charge, but there is a clear sense in which I don’t know what it is: I have
no idea what it is for something to be negatively charged. I don’t have
the understanding of the nature of negative charge that, let us suppose,
my physicist chum Dave has. Although I can involve negative charge in
what I am conceiving of, to the extent that I do I bring opacity into my
conception.
Such opacity brings in its wake coherent conceivability without possi-
bility. I can coherently conceive of all sorts of scenarios in which ‘nega-
tive charge’ features – I might suppose that negative charge is what
underlies a wizard’s ability to teleport – without this implying that
negative charge really could be as I am supposing. My ignorance of the
nature of negative charge licenses a conceptual free-for-all.
Our concept ‘water’ is also opaque in this sense. For something to be
water is for it to be H2O. But this is not apparent or a priori accessible to
me when I conceive of water as such. Again we have a licence for a
conceptual free-for-all: the fact that I can coherently conceive of water's having a
chemical composition other than H2O has no modal implications.9 None
of this takes us away from Descartes’s view as to the relationship between
conceivability and possibility. Perhaps Descartes had a false view about
the nature of our ordinary concept of water, but had he taken it to be
an opaque concept – that is to say, a concept that reveals little or nothing
about the nature of its referent – he would no doubt have denied that our
conception of water when deploying that concept is clear and distinct,
and hence denied that such conceptions have modal ramifications. On the
assumption that a clear and distinct conception must involve only trans-
parent concepts – that is, concepts that reveal the complete essence of the
states they denote – the conceivably impossible scenario of water existing
in the absence of H2O does not constitute a counterexample to premise 1.
Indeed, once premise 1 is clarified in this way, it is not clear that
there are any counterexamples to it. Putting aside the mind-body case
as contentious, the examples one finds in the literature – individuals
with origins distinct from their actual origins or natural kinds with
essences distinct from their actual essences – all seem to involve things
being thought about under opaque concepts. In each case, if we knew
the essence or essential origins of the thing being thought about, the
scenario in question would not be conceivable. Furthermore there are
clear benefits to the view that conceivability and possibility are linked
in something like the way Descartes took them to be. It provides a clear
and plausible account of how we know about possibility, and it offers
the hope of an attractive reduction of modal truths in terms of facts
about ideal conceivability (under transparent concepts).10 Were it not
for the trouble it makes for physicalism, perhaps this traditional view of
the relationship between conceivability and possibility might not have
fallen from favour.

2.2 Premise 2
When I reach the end of the second meditation, when I have stripped
away everything it is possible to doubt and alighted upon the certain
knowledge of my existence as a thinking/experiencing thing, I end up
conceiving of my mind existing in the absence of anything physical. But
is this conception clear and distinct? If either the concept of my mind or
the general concept of the physical is not fully transparent, the resulting
conception will fail to be clear and distinct.
Arnauld complained that Descartes had not demonstrated that our
concepts of body and mind were adequate. Certainly they seem to reveal
something of the nature of the substance they denote, but how can we
know that they reveal its entire nature? Arnauld supports his argument
by means of an analogy. Two things are worth noting about Arnauld’s
analogy: (i) it involves properties rather than substances (as Descartes
notes in his reply);11 (ii) it involves subtle a priori knowledge concerning
those properties:

Suppose someone knows for certain that the angle in a semicircle is a
right angle, and hence that the triangle formed by this angle and the
diameter of the circle is right-angled. In spite of this, he may doubt,
or not yet have grasped for certain, that the square on the hypot-
enuse is equal to the squares on the other two sides; indeed he may
even deny this if he is misled by some fallacy.12

For this to be analogous to the mind-body case, the following would
have to be the case. The reason we are able, at the end of the second
meditation, to doubt the instantiation of physical properties without
doubting the instantiation of mental properties is that we have not
worked out the subtle conceptual connection between the mental and
the physical. On further reflection, it would turn out to be incoherent
to suppose that, say, my current feeling of pain exists whilst nothing
physical does, just as it turns out to be incoherent to suppose that the
angle in a semicircle is a right angle and yet the square of the hypot-
enuse (of a triangle formed from this angle) is not equal to the squares
on the other two sides.
However, consider what such a subtle conceptual connection would
involve. Nobody takes seriously the idea that there is a conceptual
connection between mental properties and specific physical properties,
such as the firing of c-fibres. If this were the case, neuroscience would be
an a priori science. And so inevitably any supposed conceptual connec-
tion must be between mental properties and certain functional role
properties, such that in the actual world those functional properties
are realized by physical properties. If Arnauld’s objection is to have any
force, we must turn to analytic functionalism.
As we have seen, at the end of the second meditation we are conceiving
of mental properties in the absence of such functional role properties, as
we are doubting the latter but certain of the former. The analytic func-
tionalist development of Arnauld’s point would go as follows:

We are only able to simultaneously suppose the existence of mental
properties and doubt the existence of causal role properties because
we have not reflected enough. Further reflection would reveal such a
scenario to be incoherent.

However, it is not plausible to suppose that there is a conceptual connection
between mental and causal role properties which is too subtle to
be noticed without some incredibly sophisticated reflection. We are
not dealing with complicated mathematics here. Rather, the analytic
functionalist proposal is that causal role properties constitute the basic
a priori content of mental concepts, that to suppose that someone is
in pain is just to suppose that someone has an inner state that plays
the pain role. If this were true, then at the end of the second medita-
tion Descartes would be contradicting himself in the most perverse and
straightforward way. This is simply not plausible.
Therefore, understood as a point about subtle conceptual connec-
tions between mental properties and physical or functional properties,
Arnauld’s concern has little force. However, there is still a serious issue
concerning how Descartes can rule out that mental and physical concepts
fail to reveal the complete essences of the entities they denote. It is espe-
cially difficult to see how Descartes can rule this out concerning our
concepts of mental and physical substances, as opposed to properties.
At the end of the second meditation I am thinking of myself in terms
of my mental properties. But how can I rule out that there is more to
my nature than those mental properties I am using to think about
myself? For the reasons I give above, it is implausible to suppose that
there are some subtle corporeal aspects to my nature that are concep-
tually implied by the way I am conceiving of myself at the end of the
second meditation. Nonetheless, there may be corporeal aspects of the
‘I’ I am conceiving of at the end of the second meditation which have no
conceptual association with the mental properties in terms of which I
am conceiving of that ‘I’. For this reason it seems to me that Descartes’s
argument fails as an argument for substance dualism, as Descartes is
unable to demonstrate that the concept of myself I have at the end of
the second meditation reveals my complete nature, and, hence is unable
to demonstrate that premise 2 is true.
However, I want to suggest that the argument can be modified to form
a successful argument for property dualism by substituting the following
for premise 2:

Premise 2*: I can clearly and distinctly conceive of my mental
properties existing in the absence of any neurophysiological or functional
properties.
and the following for premise 3:

Premise 3*: If my mental properties could exist independently of
any neurophysiological or functional properties, then my mental
properties are not identical to any neurophysiological or functional
properties.

When I reach the end of the second meditation, I am conceiving of
my mental properties existing in the absence of any physical or functional
properties. But is this a clear and distinct conception? Could it be
that the properties denoted by my mental concepts are in fact physical
or functional properties, even though this is not apparent a priori? Or
could it be that the properties denoted by my physical concepts are in
fact mental properties, even though this is not apparent a priori? Let us
take these possibilities in turn.

2.3 Mental concepts


These days analytic functionalism isn’t so popular, and most philoso-
phers of mind accept the existence of distinctively mental concepts
which bear no a priori connection to physical or functional concepts.
However, physicalists tend to embrace a semantic externalist account of
the reference of such concepts. The reference of our mental concepts,
on such a view, is determined by facts outside what is a priori accessible:
causal connections, subpersonal recognitional capacities of the concept
user or facts about the evolved function of our mental concepts.13 If the
reference of our mental concepts is determined by facts outside what is
a priori accessible, then mental concepts lack a priori content; they are
utterly opaque ‘blind pointers’. Just as we pick out water as ‘that stuff
whatever it is’ (pointing with our fingers), so we pick out pain as ‘that
state whatever it is’ (pointing introspectively). It turns out, thinks the
physicalist, that in each case ‘that state whatever it is’ is a brain state.
If this kind of view is correct, we do not have a clear and distinct
conception of our mental properties when we think of them under
mental concepts, and premise 2* is false. But such a view of our mental
concepts is utterly implausible. Perhaps the best way to see the implau-
sibility is to return to the second meditation. Recall how one ends up
conceiving of oneself at the end of the second meditation:

... what then am I? A thing that thinks. What is that? A thing that
doubts, understands, affirms, denies, is willing, is unwilling, and
also imagines and has sensory perceptions [by ‘sensory perceptions’
Descartes means conscious experiences as though perceived through
the senses].14

On the semantic externalist model, each of the mental concepts involved
in this description is a blind pointer, revealing nothing about what it is
for something to be in the state denoted. But it is evident when I am in
it that the conception I have of myself at the end of the second medita-
tion, when I have doubted away the physical world, is a rich, substantive
conception of myself. If I know that someone (myself or someone else)
is doubting such and such, or understanding such and such, or believing
or wanting such and such, or that she is having certain sensory experi-
ences as though such and such were the case, I understand a great deal
about that person’s nature. I am not just blindly denoting her qualities.
In one of his objections to Descartes, Gassendi seems to worry that
to conceive of himself as a mental thing is not to have a substantive
conception of his nature:

Who doubts that you are thinking? What we are unclear about, what
we are looking for, is that inner substance of yours whose property is
to think. Your conclusion should be related to this inquiry, and should
tell us not that you are a thinking thing, but what sort of thing this
‘you’ who thinks really is. If we are asking about wine, and looking
for the kind of knowledge which is superior to common knowledge,
it will hardly be enough for you to say ‘wine is a liquid thing, which
is compressed from grapes, white or red, sweet, intoxicating’ and so
on. You will have to attempt to investigate and somehow explain its
internal substance, showing how it can be seen to be manufactured
from spirits, tartar, the distillate, and other ingredients mixed together
in such and such quantities and proportions. Similarly, given that you
are looking for knowledge of yourself which is superior to common
knowledge (that is, the kind of knowledge we have had up till now),
you must see that it is certainly not enough for you to announce that
you are a thing that thinks and doubts and understands, etc. You
should carefully scrutinise yourself and conduct, as it were, a kind
of chemical investigation of yourself, if you are to succeed in uncov-
ering and explaining to us your internal substance.15

Descartes replies, ‘I have never thought that anything more is required
to reveal a substance than its various attributes; thus the more attributes
of a given substance we know, the more perfectly we understand its
nature’.16 I would want to make a slightly more qualified claim: all that
is required to reveal a substance is knowledge of its attributes under
transparent concepts. The description Gassendi offers of wine is formed
of opaque concepts which fail to tell us the real nature of wine: ‘wine
is a liquid thing, which is compressed from grapes, white or red, sweet,
intoxicating’. In such a case, empirical investigation is required to
make progress on understanding the nature of wine.17 Gassendi has
correctly identified some common or garden concepts which happen
to be opaque. But of course it does not follow that all of our common
or garden-variety concepts are opaque. Whether or not mental concepts
are fully transparent I consider shortly. But it is evident that the concep-
tion of myself I have at the end of the second meditation is not entirely
opaque; it reveals to me at least something of my nature.
The second meditation alone, then, provides the resources to refute
both the dominant form of a priori physicalism and the dominant form
of a posteriori physicalism. Once Descartes has guided me to a concep-
tion of myself as a pure and lonely thinker, it is evident that

A. I can coherently suppose that nothing exists other than myself and
my conscious experience, from which I can infer that analytic func-
tionalism is false.
B. I am having a rich and substantive conception of my nature, from
which I can infer that the semantic externalist model of mental
concepts favoured by most contemporary physicalists is false.

It is worth noting that both of these positions can be ruled out
without moving from the epistemic to the metaphysical. These specific
physicalist views make specific claims about mental concepts which are
inconsistent with the fact that the conception I have of myself at the
end of the second meditation is substantive and consistent. However,
if we want to refute a more general conception of physicalism rather
than specific (albeit very widespread) versions of it, we must try to make
that move from conceivability to possibility by justifying premise 2* or
something like it.
Our mental concepts are certainly not opaque, but does it follow that
they are transparent, revealing everything of the nature of the mental
properties they denote? Perhaps, rather, they are translucent, revealing
some but not all of the nature of mental properties. Although in
conversation many philosophers seem attracted to the idea that mental
concepts are translucent, there are not many worked-out versions of the
view. A fully worked out theory of mental concepts as translucent would
have to answer the following question: which aspects of a mental state
such as pain do we transparently understand, and which aspects do we
merely opaquely denote? It’s hard to see what the answer to this question
would be even in the case of a sensory state such as pain, but it’s
even less clear what we could say when it comes to a cognitive state such
as believing that it’s raining.18
However, even if it turns out that mental concepts are translucent,
we could simply shift the focus of the argument from mental states to
those aspects of mental states that are transparently revealed to us; call these
‘mental* properties’. We can thus substitute premise 2** for premise 2*:

Premise 2**: I can clearly and distinctly conceive of my mental*
properties existing in the absence of any neurophysiological or functional
properties.

and premise 3** for premise 3*:

Premise 3**: If my mental* properties could exist independently of
any neurophysiological or functional properties, then my mental*
properties are not identical with any neurophysiological or functional
properties.

It would be sufficient to refute physicalism if my mental* properties
can be shown to be distinct from any functional or neurophysiological
properties.
What about neurophysiological or functional concepts? It seems clear
that functional concepts are transparent; a description specifying a causal
role property in terms of its complete causal role completely specifies the
nature of that property. The matter is slightly less clear with regard to
neurophysiological properties. When a neurologist talks about ‘c-fibres’,
is she talking about a state of the brain whose nature is entirely captured
by what neuroscience has to tell us about that state, or is she talking about
a state picked out by what brain science has to tell us about it but which
may have a nature that goes beyond what brain science can tell us about
it? If the latter then premise 2* is false, as neurophysiological concepts
turn out to be translucent or opaque, and hence brain science cannot
afford us a clear and distinct conception of the properties of the brain.
This dispute seems to me to be largely terminological. No doubt the brain
scientist, with more important matters to attend to than metaphysics,
employs concepts which are indeterminate between these two options. It
is up to us to decide what we mean by our talk of physical brain properties.
The spirit of physicalism would seem to dictate that we define our talk of
physical brain properties such that their nature can be entirely captured by
neuroscience or by neuroscience in conjunction with more basic physical
sciences.19 At any rate, if we can get an argument that refutes the kind of
‘physicalist’ who believes that the nature of mental properties can be fully
revealed by the physical sciences, we have a significant argument. I there-
fore stipulate that by ‘neurophysiological properties’ I mean those proper-
ties which are transparently revealed to us by brain science (or brain science
in conjunction with more basic sciences of matter – see note 19).
We have therefore demonstrated that all the concepts involved in the
conception referred to in premise 2* are transparent. The physicalist
might continue to insist that the conception we reach at the end of
meditation 2 is in some way obscure, confused or incoherent. But she is
obliged to show this, and until she does we are entitled to suppose that
it is clear and distinct, as indeed it seems to be.
I take it that premise 3* is almost entirely uncontroversial, and hence
we have a sound argument, from the resources of the Meditations, for the
falsity of physicalism understood as the view that mental (or mental*)
properties are either (i) identical with properties the nature of which is
entirely revealed to us by neurophysiology or (ii) identical with func-
tional properties that are realized by properties the nature of which is
entirely revealed to us by neurophysiology:

Premise 1: Anything I can clearly and distinctly conceive of is possible.
Premise 2*: I can clearly and distinctly conceive of my mental (or
mental*) properties existing in the absence of any neurophysiological
or functional properties.
Conclusion 1: My mental (or mental*) properties could exist in the
absence of any neurophysiological or functional properties.
Premise 3*: If my mental (or mental*) properties could exist inde-
pendently of any neurophysiological or functional properties, then
my mental properties are not identical with any neurophysiological
or functional properties.
Conclusion 2: My mental (or mental*) properties are not identical
with any neurophysiological or functional properties.

3 The non-physicality of thought

Most antiphysicalist arguments of the past eighty years have tried to
demonstrate that conscious states, defined as being states such that there
is something that it is like to be in them, are distinct from physical or
functional states. It was for a long time generally accepted that a functionalist
account of cognitive states is satisfactory. There has of late been
a growing number of philosophers arguing that cognitive states are in
fact identical with or grounded in conscious states.20 If this is the case
and if we have sound arguments for antiphysicalism about conscious
states, it would seem to follow that we should be antiphysicalists about
cognitive states.
However, it remains extremely difficult to settle the matter of whether
cognitive states, such as thinking that it’s raining, count as states such
that there is ‘something that it’s like’ to be in them. There may even be
no fact of the matter as to whether this phrase from Thomas Nagel,21
which has its paradigm extension in the sensory realm, has application
in the realm of thought.
The Cartesian argument directly supports the non-physicality of cogni-
tive states without relying on the thesis that they are states of conscious-
ness. At the end of the second meditation, when I have doubted the
entire physical world, I am not only conceiving of myself as a thing
with sensory states; I am also conceiving of myself as a thing that thinks,
that doubts, that is willing or unwilling: I am conceiving of myself as
a thing with propositional attitudes. All of these states can be clearly
and distinctly conceived of in the absence of anything physical or func-
tional, and hence the Cartesian argument demonstrates that all of these
states are non-physical.
The Cartesian argument shows, therefore, that we have a ‘hard
problem’ not only of consciousness but of mentality in general. The
mind-body problem just got tougher.

Notes
1. To make the claim slightly more carefully: for each proposition that you can
grasp, you would be able to work out its truth value.
2. Perhaps someone who had never seen red or tasted a lemon would not have
a mental concept of what it’s like to see red or taste a lemon. But it seems
that even if you did have a full concept of, say, what it’s like to taste a lemon,
perhaps gained through tasting lemons whilst blindfolded, you would not be
able to know from a neurophysiological description of a lemon experience
that it satisfied that concept.
3. Armstrong (1968) and Lewis (1966, 1970, 1980, 1994).
4. Descartes classes sensory experiences as a kind of thought.
5. Lewis (1980).
6. Lewis (1980).
7. Descartes (1645, 18).
8. Descartes (1645, 54).
9. Perhaps it is a slight exaggeration to say that there are no modal implications
of the conceivability of water with a chemical composition other than H2O.
The important point is that because of the opacity of the concept ‘water’ we
cannot infer from the conceivability to the possibility of this state of affairs.
10. I outline such a reduction in more detail in Goff and Papineau (forthcoming)
and Goff (n.d.).
11. Descartes (1645, 110–112).
12. Descartes (1645, 109).
13. See Loar (1990), Papineau (2002), Perry (2001) and Tye (1995).
14. Descartes (1645, 19).
15. Descartes (1645, 71).
16. Descartes (1645, 72).
17. In fact, I am inclined to think that even empirical investigation won’t reveal
the essence of wine, as observation reveals only the extrinsic features of
material things. But the important point here is the negative one that our
ordinary concept of wine does not reveal the essence of its referent.
18. Robert Schroer (2010) offers a view according to which concepts of sensory
states reveal the internal structure of those states but opaquely denote the
atomic elements involved in that structure. The view that only structural
aspects of my nature are revealed to me in the conception I have of myself at
the end of the second meditation seems to me not that much more plausible
than the view that no aspects of my nature are revealed to me in the concep-
tion I have of myself at the end of the second meditation. More importantly,
this model cannot be applied to cognitive states.
19. A natural view would be that neuroscience reveals brain states to be essen-
tially constituted of certain more basic physical elements, and that the
essences of those more basic elements are revealed by more basic sciences of
matter.
20. For a good range of essays on both sides of the debate, see Bayne and
Montague (2011) and Kriegel (2013).
21. Nagel (1975).

References
Armstrong, D. (1968) A Materialist Theory of Mind. London: Routledge and Kegan
Paul.
Bayne, T., and Montague, M. (eds) (2011) Cognitive Phenomenology. New York:
Oxford University Press.
Descartes, R. (1645[1996]) Meditations on First Philosophy. Reprinted in Meditations
on First Philosophy, rev. edn, J. Cottingham (ed.), Cambridge: Cambridge
University Press.
Goff, P. (n.d.) ‘Consciousness and Fundamental Reality’. Manuscript.
Goff, P., and Papineau, D. (forthcoming) ‘What’s Wrong with Strong Necessities?’
Philosophical Studies.
Kriegel, U. (2013) Phenomenal Intentionality. New York: Oxford University Press.
Lewis, D. (1966) ‘An Argument for the Identity Theory’. Journal of Philosophy,
63(1), 17–25.
Lewis, D. (1970) ‘How to Define Theoretical Terms’. Journal of Philosophy, 67(13),
427–446.
Lewis, D. (1980) ‘Mad Pain and Martian Pain’. In Readings in the Philosophy of
Psychology, N. Block (ed.), vol. I, Cambridge, MA: Harvard University Press,
216–222.
Lewis, D. (1994) ‘Reduction of Mind’. In Companion to the Philosophy of Mind, S.
Guttenplan (ed.), Oxford: Blackwell, 412–431.
Loar, B. (1990) ‘Phenomenal States’. Philosophical Perspectives, 4, 81–108.
Nagel, T. (1975) ‘What Is It Like to Be a Bat?’ Philosophical Review, 83, 435–450.
Papineau, D. (2002) Thinking about Consciousness. Oxford: Clarendon Press.
Perry, J. (2001) Knowledge, Possibility and Consciousness. Cambridge, MA: MIT
Press.
Schroer, R. (2010) ‘Where’s the Beef? Phenomenal Concepts as Both Demonstrative
and Substantial’. Australasian Journal of Philosophy, 88(3), 505–522.
Tye, M. (1995) Ten Problems of Consciousness: A Representational Theory of the
Phenomenal Mind. Cambridge, MA: MIT Press.
2
A Call for Modesty: A Priori Philosophy and the Mind-Body Problem
Eric Funkhouser

Philosophy has long claimed the mind-body problem, which presses us
to discover and explain the relationship between the mind and body, as
its own. This made sense millennia ago, when natural phenomena of
various kinds were included within the domain of philosophy. But as
many of these topics, including the nature and origin of animal life and
the heavens, have been taken over by the natural sciences, we might
wonder why the mind-body problem persists as a particularly philosoph-
ical problem. Philosophers did not discover that water is H2O (no, not
even Saul Kripke), so why should we think that philosophers, in their
capacity as a priori theorizers, can discover and explain the relationship
between mind and body?
The phrase ‘the mind-body problem’ is ambiguous, and it is likely a
misnomer. There isn’t one problem of mind-body relations; there are
many. My focus here is on the ontological problems: what is mental
stuff, and how does it relate to physical stuff? What kinds of mental
states are there, and how do they relate to physical states? The first ques-
tion is answered by positions like substance dualism and physicalism.
It is the more fundamental ontological question. But given the wide
acceptance of physicalism as an empirical thesis, most contemporary
discussion has focused on the kind question. And kind distinctness –
non-reductive physicalism – is still the dominant position among
philosophers. I examine the appropriateness of philosophical methods
for answering this question, criticizing what I see as an overextension
of a priori tools.
The main thesis I defend is that the relationship between psycho-
logical and physical kinds cannot be known on a priori, even largely
a priori, grounds. Unfortunately, I see our recent philosophical history
as violating this maxim. Consider the two most influential arguments
concerning the mind-body problem from the last couple of decades –
David Chalmers’s zombie argument and Jaegwon Kim’s exclusion argu-
ment. The former, an exclusively a priori argument, concludes that
the varieties of phenomenal consciousness are kind-distinct from the
physical. In fact, phenomenal consciousness is not even metaphysically
necessitated by the physical. The latter argument is not exclusively a
priori – it contains empirical premises (e.g., about the causal complete-
ness of physics) – but its critical premise that rules out systematic over-
determination is a priori. And at least some versions of this argument
conclude that psychological kinds must be identical to physical kinds
(or else they are epiphenomenal).
It would be very unusual if the nature of contingently instanti-
ated kinds that seem suited for scientific study – psychology – could
be known a priori. We no longer think that the elements or heavenly
bodies can be known by armchair speculation. So why the continued a
priori treatment of psychological kinds? I think we should take seriously
the tinge of embarrassment that, I hope, we would experience upon
attempting to explain to our scientific colleagues that we have discov-
ered that phenomenal kinds are not physical kinds because, consistent
with the physical facts, zombies are ideally conceivable.
The alternative conception that I favour endorses a division of labour
between the a priori methods that are the bread and butter of tradi-
tional philosophy and the essential contributions acquired through
empirical methods. A distinctive contribution from each method is
required. Chalmers has made this point to some extent, claiming that
empirical results alone cannot settle many of the long-standing issues in
the philosophy of mind.

Whenever empirical results are brought to bear on the philosophical


questions, the application requires some sort of philosophical premise
to serve as a bridge. And in case of the big philosophical questions
above, in order for this premise to be strong enough that the data
bears directly on the question, the premise is typically so strong that
it is almost as contentious as the philosophical views at issue ...
I take the moral to be that the debates in question may well have a
deeply philosophical core, one that is unlikely to be resolved by the
straightforward application of empirical results. Instead, the core of
the debates may well rest on conceptual, metaphysical, and norma-
tive issues that fall largely within the a priori domain. So philoso-
phers should not feel embarrassed at spending a lot of time working
in a largely non-empirical mode, as most philosophers do. (Grim 2009, 8)

This is true to a large extent. But I caution against going too far in
the other direction. Empirical data alone typically do not settle kind
questions, but neither does a priori speculation alone. In the next two
sections I offer ‘big picture’ – yet, I think, effective – criticisms of the
two major a priori–based arguments from the last couple of decades –
the zombie argument and the exclusion argument.

1 The zombie argument

Philosophy has a long history of a priori speculations about the nature
of psychological kinds (and mentality more generally). In retrospect, we
should see that many of these speculations were misguided. Yet to this
day we see philosophers making very similar speculations, stretching
the limits of a priori reasoning and the significance of our inability to
imagine certain possibilities. Take, for example, Chalmers’s zombie argu-
ment. In this argument let P stand for the physical truths and Q stand
for the truths about phenomenal consciousness:

1. P&~Q is ideally conceivable.
2. If P&~Q is ideally conceivable, then P&~Q is 1-possible.
3. If P&~Q is 1-possible, then P&~Q is 2-possible, or Russellian monism
is true.
4. If P&~Q is 2-possible, materialism is false.
5. Materialism is false, or Russellian monism is true.1

The gist of the argument is as follows. If we had all the physical facts
before us and we were ideally rational, we could conceive of the physical
facts obtaining but not phenomenal consciousness (or, at least, we could
not rule this out). If this is ideally conceivable, it is possible in the same
sense – 1-possibility – in which it is possible that water is not H2O. Namely,
if we were to consider the imagined world actual, ‘water is not H2O’
would be true. But because there is no appearance/reality distinction
for phenomenal consciousness, its primary intension is the same as its
secondary intension. Unlike the case of water, if P&~Q is 1-possible,
then it is also 2-possible. (Or Russellian monism is true. However, I treat
this as more of an appended qualification – the thrust of the argument is
for dualism.) But then the physical truths do not metaphysically necessitate
the truths about phenomenal consciousness, and materialism is false.
The literature on this argument is massive. So my discussion focuses
on big-picture lines of objection that I think are promising. First, why
think that P&~Q is even ideally conceivable? Here is a line of thought:

First, physical descriptions of the world characterize the world in
terms of structure and dynamics. Second, from truths about structure
and dynamics, one can deduce only further truths about structure
and dynamics. Third, truths about consciousness are not truths about
structure and dynamics. (Chalmers 2010, 120)

Inconceivability claims like this are not unprecedented. Take, for
example, Leibniz’s famous mill argument:

Moreover, we must confess that the perception, and what depends on
it, is inexplicable in terms of mechanical reasons, that is, through shapes
and motions. If we imagine that there is a machine whose structure
makes it think, sense, and have perceptions, we could conceive it
enlarged, keeping the same proportions, so that we could enter into
it, as one enters a mill. Assuming that, when inspecting its interior,
we will only find parts that push one another, and we will never find
anything to explain a perception.2

We cannot conceive how thought can arise from purely mechanical
causes – that is, determined purely by the shape, motion and pushing
of bodies. Further, the physical world is mechanical. So thought cannot
arise from the physical world. Chalmers’s reasoning – as he well knows –
is quite similar, basically substituting ‘structural and dynamical’ for
‘mechanical’, though limiting his point to phenomenal consciousness
rather than to all thought.
One could reasonably challenge the claim that P&~Q is ideally conceiv-
able. One way to do this is to be open to the possibility that physical
kinds are not merely structural and dynamical in the way Chalmers
assumes. Chalmers himself takes this possibility seriously, accounting
for it under the option that he describes as Russellian monism.3 Perhaps
there are physical kinds that have an intrinsic nature, unknowable
to us but ideally knowable (e.g., by God), from which truths about
phenomenal consciousness are necessitated. This is one way of denying
that P&~Q is ideally conceivable. Perhaps we should even challenge
Chalmers’s claim that structural and dynamical properties can necessitate
only other structural and dynamical properties. But regardless of where
our emphasis lies, we should be wary of Chalmers’s premise that P&~Q
is ideally conceivable. The reasons he offers are not much different from
those offered by Leibniz hundreds of years ago. Each points out what
he takes to be a general truth about the nature of the physical, and
these supposed general truths are quite similar – that it is exclusively
mechanical and that it is exclusively structural/dynamical. This might
cause some concern when we discover that Leibniz used the same line of
reasoning to argue against gravity!4 The point is not that Chalmers would
have a similar problem accounting for gravity. Rather, our conception
of the physical can radically change (or simply be deficient), such that
what might be inconceivable from one conception of the physical could
become conceivable from an improved conception. Even if it is ideally
conceivable that mechanical, structural/dynamical, or functional facts
obtain without phenomenal consciousness (or gravity or whatever), it
does not follow that phenomenal consciousness is not necessitated by
the physical facts. (Nor does it follow that this is conceivable with the
proper and full understanding of the physical.) Like Leibniz, we simply
might have an impoverished understanding of the physical. This should
be a familiar point, but it bears repeating.
But even if P&~Q is ideally conceivable, we can reasonably deny that
it is 1-possible. For the time being let us grant that the primary and
secondary intensions of phenomenal consciousness are identical and
that premise 3 is true.5 The comparison and contrast to water and H2O
might be helpful here.

A. We can 1-conceive of water but no H2O.
B. It is 1-possible that there is water but no H2O.
C. It is not 2-possible that there is water but no H2O.
D. We can 1-conceive of P but ~Q.
E. But it is not 1-possible that P but ~Q.
F. It is not 2-possible that P but ~Q.

Chalmers has written that it would be a bizarre and unprincipled
exception if the link from 1-conceivability to 1-possibility were broken in
the case of P&~Q, as we do not find such a disconnect between conceiv-
ability and possibility elsewhere and the exception would appear to be
ad hoc.6 But to the contrary, such a disconnect should not come as a
surprise. After all, we could have good empirical grounds for thinking
that P metaphysically necessitates Q even without believing that P
conceptually entails Q. In the parallel case, we could have good reason
for thinking that water metaphysically necessitates H2O (e.g., that water
is identical to H2O, on the assumption that such an identity must be
metaphysically necessary) on largely empirical grounds – combined with
a priori principles of good (scientific) reasoning – and without knowl-
edge of (or optimism for) any conceptual entailment. We then work
backwards – in this special case in which the primary and secondary
intensions are assumed to be identical – to derive a claim about what is
either 1-possible or ideally conceivable. In this case, we accept E because
we accept F and deny an appearance/reality distinction.
If we begin with an empirically informed conviction that P&~Q is not
2-possible and we assume that the primary and secondary intensions of
Q are identical, we ought to conclude that either P&~Q is not 1-possible
or that P&~Q is not ideally conceivable. We just considered the possi-
bility that P&~Q is ideally conceivable but not 1-possible. The important
point here is that if this is the case, we have an explanation of the failure
of conceivability to track possibility. If we begin with a conviction that
P&~Q is not 2-possible, the lack of an appearance/reality distinction
explains why it is not 1-possible either. And when investigating contin-
gent kinds that seem like appropriate objects of scientific investigation,
why not put greater stock in our empirically grounded speculations (e.g.,
that P&~Q is not 2-possible) than we do in our ability to track possibility
with conceivability (or for that matter, in claims about what is ideally
conceivable)? We can ask which is more unlikely – that this is not a
metaphysical necessity or that here conceivability fails to track possi-
bility? We know that exceptions to the conceivability-possibility thesis
would be unexpected in general. But if this is the particular exception,
then there is an explanation as to why it is not 1-possible. Of course, we
might still wonder why P&~Q is ideally conceivable. For this reason, I
prefer going a different route.
Rather than argue that phenomenal consciousness is a principled
exception to conceivability tracking possibility, we could argue that the
sameness of primary and secondary intensions explains the ideal incon-
ceivability of P&~Q. This is an appealing alternative, as it preserves the
connection between ideal conceivability and 1-possibility. As before, we
start with our conviction that P&~Q is not 2-possible. Given the same-
ness of primary and secondary intensions, we conclude that it is not
1-possible either. But we still have a firm belief that ideal conceivability
tracks 1-possibility. We then deny D and conclude that P&~Q must not
be ideally conceivable after all. This is not such a bad conclusion, as
ideal conceivability is something of an epistemic pipe dream anyway –
for example, Goldbach’s conjecture, for all I know, could be ideally
conceivable or ideally inconceivable. Here we at least have some justi-
fication for the claim that P&~Q is not ideally conceivable – the ques-
tion is whether or not it outweighs Chalmers’s reasons for thinking that
P&~Q is ideally conceivable. Again, his reasons are that the physical is
structural and dynamical, and upon ideal reflection we would discover
that there is nothing in the structural and dynamical that necessi-
tates phenomenal consciousness. But we could have better reasons for
thinking that P metaphysically necessitates Q than we do for accepting
Chalmers’s limited conception of the physical. Recall that Leibniz also
argued against physicalist accounts of mentality – and gravity – along
similar lines.
Of course, someone like Chalmers would challenge the claim that we
can justifiably have this antecedent confidence in the 2-impossibility of
P&~Q. This is the critical claim. He would push the line that at most
we could be justified in holding that P nomologically necessitates Q,
with the conceivability of P&~Q undermining its claim to metaphysical
necessity. But philosophers claim various metaphysical necessities, with
confidence and apparent justification, without first considering any
conceivability claim. This prior commitment to metaphysical neces-
sity then undermines the 2-possibility of apparent counterexamples
grounded in conceivability claims. We believe that it is metaphysically
necessary that water is H2O or, say, that material objects have their actual
origins. The grounds for accepting these metaphysical necessities can be
empirical or theoretical, but they are not held on conceivability grounds.
We can conceive of situations that seem like counterexamples – very
close counterparts to water or people, say, that differ in their chemical
constitution or biological origins. Rather than see these as disproving
the claim to metaphysical necessity, however, we hold that they are not
really possible (i.e., 2-possible) after all because – here is the important
point – we have a prior commitment to the metaphysical necessity.
So can there be good grounds for having a prior commitment to the
belief that P metaphysically necessitates Q? I see at least three possible
grounds. First, we should note that the relationship between P and Q
does not appear to be like the causal connections that are prime exam-
ples of nomological necessities. Rather, it appears to be a case of a
synchronic necessitation relation – like realization – much as we say
that watery stuff can be realized in the various chemical constitutions
that metaphysically necessitate it.
Second, realization is very plausibly a relation of metaphysical neces-
sity. Given the possibility that phenomenal consciousness is multiply
realizable in the physical, we might not want to identify Q with P (and
so ground the metaphysical necessity in a claim about identities between
rigid designators). But nor would we have identified water with H2O if
in our world various chemical kinds (H2O, XYZ, etc.) realized watery
stuff. Still, even before we understood the chemical explanations as to
how XYZ (say) gives rise to watery stuff, we would have reason to think
that those various chemical kinds each metaphysically necessitate water.
That is how realization is typically understood. More generally, we can
be confident that a relationship of constitutive dependence obtains
without understanding how this is so. Now think of the various physical
realizers of phenomenal consciousness as analogues to H2O, XYZ and
the like in our imagined world with chemically heterogeneous water.
Third, the simple fact that the physical sciences have had widespread
success at eventually providing similar metaphysical necessities for a
host of other contingent kinds – many of which were mysteriously real-
ized at one time – gives some reason to think that the physical-mental
correlation is best explained as a metaphysical necessity. Here, one could
also make the case that theorizing should be guided by Occam’s razor
and a tendency to favour metaphysically necessary connections (even
if not yet well understood) over arbitrary nomological connections. The
simpler explanation, in this case, is that phenomenal consciousness
occurred because it had to occur, just as the simpler explanation in the
case of heterogeneously realized water is that XYZ produces water (even
if we do not yet know how) because it has to.
Chalmers considers responses somewhat like this one, but he classi-
fies them as inexplicable metaphysical necessities accepted on empirical
grounds although epistemically primitive.

Here, a type-B materialist can suggest that P ⊃ Q may be a Kripkean a
posteriori necessity, like ‘water is H2O’ (though Kripke himself denies
this claim). If so, then we would expect there to be an epistemic gap
since there is no a priori entailment from P to Q, but at the same time
there will be no ontological gap. In this way, Kripke’s work can seem
to be just what the type-B materialist needs. ... One can argue that in
other domains, necessities are not epistemically primitive. The neces-
sary connection between water and H2O may be a posteriori, but it
can itself be deduced from a complete physical description of the
world (one can deduce that water is identical to H2O, from which
it follows that water is necessarily H2O). The same applies to the
other necessities that Kripke discusses. By contrast, the type-B mate-
rialist must hold that the connection between physical states and
consciousness is epistemically primitive in that it cannot be deduced
from the complete physical truth. (Chalmers 2010, 117)

Chalmers is correct that the type-B materialist, by definition, must say
that phenomenal consciousness is epistemically primitive in this sense.
This was my earlier response, which said that our commitment to the
metaphysical necessity could be taken to provide us with a reason to
deny that ideal conceivability tracks 1-possibility. But the objector who
begins with a conviction in P ⊃ Q as a metaphysical necessity need not
hold this. Rather, such an objector can take the metaphysical neces-
sity as a reason to believe that P ⊃ Q is not epistemically primitive – for
example, that P&~Q is not ideally conceivable. That is the objection I
have just offered.
There are two features of phenomenal consciousness that both make
it peculiar and are exploited in conceivability arguments. First, we have
no real understanding of how it does or even can arise from the phys-
ical. Chalmers is right to attend to this epistemic gap; it truly is a deep
scientific mystery. (He has also done a lot to encourage empirical inves-
tigation to bridge this gap, serving as a central figure in promoting a
science of consciousness. His is a naturalistic dualism.) Second, it is often
claimed that there is not an appearance/reality distinction for phenom-
enal consciousness. According to Chalmers, the first point supports the
ideal conceivability of P&~Q, and the second explains why it is meta-
physically possible. But if we start with our reasons for accepting P⊃Q as
a metaphysical necessity, the second point provides a reason for denying
that our ignorance provides good grounds for accepting the ideal
conceivability claim. There is no good reason to privilege our impover-
ished imagination and conception of the physical as a basis for dualism
over instead accepting our empirically informed claims, grounded in
the three reasons just presented, to the metaphysical necessities. It is
immodest to favour the a priori so.

2 The exclusion argument

Next let’s consider exclusion arguments. A common version of these
arguments aims to establish a conclusion quite the opposite of zombie
arguments – namely, that psychological kinds are identical to physical
kinds. These arguments do not focus on phenomenal consciousness,
however. Instead, they attend to those psychological kinds that we are
even more confident are causally efficacious; for example, propositional
attitudes like belief and desire. Such exclusion arguments have this
form:

P1. If psychological kinds are distinct from physical kinds, then
psychological causes are distinct from physical causes.
P2. There is a complete physical cause for every physical effect.
P3. Systematic causal overdetermination cannot occur.7
P4. But mental (psychological) causation of physical effects does
occur.
C. Psychological kinds are not distinct from physical kinds.
Psychological kinds are physical kinds.8

This argument does have empirical premises. The causal completeness of
the physical is most prominent among these, though it is often accepted
by philosophers as something of an article of faith rather than seriously
defended. The argument is critically driven, however, by the a priori
premise that prohibits (or holds as highly unlikely, in slightly different
versions of the argument) systematic causal overdetermination. Though
I think there are other problems with the exclusion argument as well,
here I object by focusing, as with the zombie argument, on what I take
to be an overly ambitious a priori speculation.9
What are the grounds for accepting P3? They are not empirical, and
I am unaware of any concern about overdetermination arising from
within the sciences. Though different sciences sometimes compete with
each other in the quest to solve a problem or causally explain some
phenomenon, each tends to think that its way of tackling the issue is
more likely to succeed than is the other’s, not that they (e.g., neuro-
science and psychology) would exclude one another.
Nor does the concept of causation forbid overdetermination. In fact,
accounts of causation have gone out of their way to accommodate it.
Here think of the Lewisian epicycles devoted to preserving a counter-
factual account in light of causal overdetermination. (Similar epicycles
might be needed for probability-raising accounts of causation.) And
there is nothing that would justify prohibiting (or cautioning against)
distinct causal laws connecting different types of causes to the same
effect either. Nor is there any obstacle to allowing primitive causal
connections between distinct causes and a common effect. There seems
to be nothing about the particular accounts of causation in play, or the
concept of causation more generally, that would justify treating causa-
tion as a zero sum game.10 To the contrary, comparisons to kindred
concepts like reason and responsibility suggest that we should expect
effects to often be overdetermined. Just as there can be multiple reasons
for an outcome or fact and just as multiple parties can be responsible for
an outcome, there can be multiple causes for an effect.
There are situations, however, in which positing overdetermining
causes is unprincipled, if not too coincidental to even tolerate. For
example, it is unprincipled to say that whenever a virus is present so
too is a demon, so both viruses and demons cause some illnesses. And it
would be too coincidental if every fire were started by overdetermining
causes analogous to the haystack that is struck by lightning and simul-
taneously lit by a carelessly tossed cigarette. At least some kinds of over-
determination are problematic and unlikely, and this observation can
ground legitimate objections to some accounts of the mental. Assuming
the causal completeness of physics, isn’t it unprincipled – for example,
a violation of Occam’s razor – to posit a mental cause (demons!) of
this brain activity in addition to the physical cause? Wouldn’t it be a
massive coincidence if every action of mine had a physical cause and
a completely distinct mental cause (lightning and cigarettes)? This is a
decent objection to at least some versions of dualism – that is, those that
deny even nomological connections between the physical and mental.
For it would be coincidental or would call out for some kind of coordina-
tion, like a pre-established harmony, if physical and mental causes were
not connected in any way – neither nomologically nor metaphysically –
yet they systematically generated the same effects (e.g., action).
But Kim primarily uses his exclusion argument against non-reduc-
tive physicalists, those who think that everything is physical while
accepting that psychological kinds do not reduce to physical kinds.
The worries about overdetermination that were plausible when raised
against dualists – that positing such causes is unprincipled and coinci-
dental – do not transfer over to the non-reductive physicalist, however.
According to her, the physical metaphysically necessitates the mental.
Metaphysical necessity is as good a reason as any to posit something,
and the necessitation also explains why the mental and physical causes
co-occur – they had to. Naturalistic dualists like Chalmers can give a
similar response.
For this reason we should distinguish between different types of causal
overdetermination, some of which, if systematic, pose more of a problem
than others. In Funkhouser (2002) I distinguished varieties according to
whether supposedly distinct causes differed in their mechanism (e.g.,
spatio-temporal pathway) or merely in their properties while sharing a
mechanism. The former present us with independent overdetermination,
as in the haystack example. The lightning and cigarette work through
distinct mechanisms and distinct spatial regions. Given the independ-
ence of the mechanisms – with no law-like connections between them –
systematic independent overdetermination would be coincidental (or
coordinated by some pre-established harmony). It is simply unlikely.
But for the non-reductive physicalist mental causation is not like this.
Instead, the mental cause and physical cause share a mechanism or
causal pathway and differ only in their properties within this shared
mechanism. In Funkhouser (2002) I termed this type of overdetermi-
nation incorporating overdetermination, as the mental cause incorporates
the physical cause. Given the necessitation relations that define physi-
calism, this co-occurrence is not coincidental.
Karen Bennett (2003, 2008) offers a similar treatment of exclu-
sion principles. Rather than distinguish between coincidental and
non-coincidental overdetermination by attending to the mechanism/
property distinction, she employs a counterfactual test to distinguish
what she considers genuine overdetermination from mere multiple
causation.
e is overdetermined by c1 and c2 only

(O1) if c1 had happened without c2, e would still have happened: (c1
& ~c2) ☐→ e, and
(O2) if c2 had happened without c1, e would still have happened: (c2
& ~c1) ☐→ e. (Bennett 2003, 476)

This has the result that only what I have called independent overdeter-
mination is overdetermination at all. The rest, such as mental causation
on the non-reductive physicalist’s account, avoids the exclusion argu-
ment. But the counterfactual test, as well as the particularly nuanced
way that she argues we should evaluate these counterfactuals, is simply
tailored to capture the good/bad, non-coincidental/coincidental distinc-
tion. Her test likely gets the results right – like a Lewisian account of
causation it is designed just for that task – but it does not draw our
attention to what is most fundamental. It is better to simply point out
the shared mechanisms and metaphysical necessities that show mental
and physical causal co-occurrence to be unproblematic. This is the real
explanation. The contrived counterfactual test is correct to the extent
that it indicates this more fundamental explanation. It is not because
the non-reductive physicalist’s mental and physical causes fail O1 or O2
that they are not bad overdeterminers.
We should now wonder just what is supposed to be problematic about
mental causation such that it is threatened to be excluded. There is no
reason to think that causal ‘work’ can be done only once, and there is
nothing coincidental about the co-occurrence of mental and physical
causes. When overdetermination is worrisome, it is because it is coin-
cidental. Now there is one last charge along these lines that could be
made. One could think that while individual, co-occurring mental and
physical causes can be explained by relations of metaphysical neces-
sity, the presence of higher-level kinds, patterns, and the like in the first
place is itself a great mystery that calls out for explanation. That is, given
physics, one might think that it is coincidental that there is chemistry,
biology, psychology or any other special science at all.11 But if this is
a problem, it is not one that should be grounded in a priori specula-
tions. In fact, there is an empirical science – complex systems theory –
that, among other things, seeks to explain why we should expect such
patterns. But I am unaware of anyone seeking these explanations out of a
concern for the autonomy of those higher-level sciences. It is immodest
to conclude on a priori grounds that incorporating overdetermination
is problematic.

3 A role for the a priori

I have provided reasons to reject the critical a priori premises of the two recent, dominant arguments on the mind-body problem. But I want
to engender a more general sense of the inappropriateness of a priori
methods for contingent, natural kinds. This is not to say that philoso-
phers shouldn’t venture into empirical territory. We can clear up concep-
tual or foundational confusions and even offer up empirical hypotheses
on largely a priori, brainstorming grounds. But we ought not argue
for substantive empirical theses – for example, identity or distinctness
claims, as opposed to proposing possibilities to explore – concerning
contingent kinds within the domain of the sciences on a priori grounds.
The objection here is to a method, not a profession. Philosophers can
still enter into empirical disputes, and they ought to.
To be sure, a priori speculation has something to contribute to the
investigation of scientific kinds. But let’s be more modest here: it never
settles the issue.12 Off the top of my head, purely a priori, I offer the
following roles for a priori reasoning (supplemented with common-sense
or armchair observations) when it comes to the mind-body problem.
34 Eric Funkhouser

These roles also extend to the philosophical investigation of other scientific kinds. Undoubtedly, it has more to contribute than this:

1. Conceptual Investigations and Phenomenology: I have already cast my lot with an empirical investigation of mental kinds, so I do not favour an
a priori analysis of psychological kinds in the spirit of behaviourism
or analytic functionalism. Psychological essences are ultimately
to be discovered empirically. Still, some conceptual investigation
could delineate the limits of the concepts and corresponding kinds.
Phenomenological approaches to consciousness could contribute
here as well.
2. Derive the Limits and Consequences of Views: Philosophers typically are
good at drawing out the a priori consequences and limits of a view,
both logically and creatively. For example, think of Searle’s Chinese
room as a creative, a priori thought experiment constructed to draw
out the limits of functional analysis. Leibniz, Searle and Chalmers all
attempt to illustrate the limits of mechanism, functional relations and
structure/dynamics, respectively, through creative thought experi-
ments. In addition to proving their a priori point, however, they also
need an empirical premise – for example, that the physical world is
simply mechanical (Leibniz) or structural/dynamical (Chalmers).
3. Hypothesis Formation: While we cannot confirm a mind-body position
on purely a priori grounds, we can certainly generate possible posi-
tions a priori. This is especially true for fundamental issues – think of
Fodor’s LOT and modularity thesis.
4. Reference and Identity: The mind-body problem presents but one
example of natural kinds for which identity or realization relations
likely hold. Philosophers have something to say about the semantics
of natural-kind terms and the logic underlying identity statements
between them. Philosophers can help reveal that some natural-kind
term is logically second-order or that it refers to something with a
mind-independent essence, to give just two examples. If the terms
function as rigid designators, we conclude that the identities are
metaphysically necessary.
5. Kinds: Philosophers contribute not only to our understanding of kind
terms but also to the metaphysical structure of kinds themselves.
Here I do not mean a substantive thesis about the nature of some
particular kind – for example, pain. Rather, we can offer a priori or
metaphysical theories as to the structure of kinds in general. Here I
imagine positions on what it is to be a kind in the first place (and
when to posit one), as well as how kinds are to be individuated. This
bears on the question whether we should even accept mental kinds in the first place, and a method of individuation would obviously
help resolve practical questions of identification.
6. Theoretical Reduction: We would like to have a procedure to determine
when the postulates and generalizations of one theory can be identi-
fied with or explained by another theory. This is the topic of theo-
retical reduction. While it is good to have models that accord with
actual scientific practice, theories of reduction are still largely a priori.
They can then be used to settle questions concerning intertheoretic
relations.
7. Metaphysical Relations: Answers to the mind-body problem depend on
the relations that are taken to hold between the mental and the phys-
ical. We then need theories of relations such as realization, superveni-
ence, nomological necessity and the like. This is largely an a priori
enterprise.

Basically, I am advocating a return to an old-fashioned dichotomy for the roles of the a priori and a posteriori when it comes to theorizing about
contingent kinds that, prima facie, fall within the sciences. The a priori,
largely formal or speculative, cannot ground substantive claims about
contingent reality. It needs a posteriori input. (I would say the converse
for overly empirical treatments of the mind-body problem: they need
a priori input.) The cottage industries that the zombie and exclusion
arguments have generated have been hugely beneficial, by and large, for
their philosophical spin-offs – for example, two-dimensional semantics,
modal rationalism, the nature of the physical, the nature of causation,
the causal relata and the status of the special sciences. But we should be
more than wary of the arguments themselves. Specifically, empirically
based reasons for positing metaphysical necessities should trump our
thoughts about what remote physical outcomes are ideally conceivable
(or whether they reliably track 1-possibility). And we should dismiss an
armchair prohibition against causal overdetermination that is empiri-
cally explicable and metaphysically non-coincidental. I hope that such
immodest uses of a priori reasoning are just an excess of these meta-
physical times.13

Notes
1. Chalmers (2010, 152).
2. This is the version as found in The Monadology, §17 (Leibniz 1991, 70). A very
similar version of this argument is also presented in his New Essays, 66–67.
My understanding of Leibniz’s argument has greatly benefited from reading Duncan (2012).
3. This is his ‘Type-F Monism’. See Chalmers (2010, 133–137).
4. Leibniz (1996, 66).
5. I am not convinced that there is no appearance/reality distinction for
phenomenal consciousness. But I want to grant this critical premise to
Chalmers and use it to undermine the zombie argument by running the argu-
ment in the opposite direction. More of this below.
6. Chalmers (1996, 137–138).
7. By ‘overdetermination’ I mean multiple numerically distinct sufficient
causes – not steps in a common causal chain – for a common effect.
8. Versions of this argument are found, most prominently, in Kim (1993, 1998,
2005). Sometimes exclusion arguments are used to reach a dilemma – either
psychological kinds are identical to physical kinds or they are epiphenom-
enal. I am concerned with arguments that assume the reality of mental
causation.
9. Even if we were to accept the causal-exclusion principle, there is a host of
objections that deal with individuating the causal relata and articulating the
causal relation itself. In addition to denying the exclusion principle (P3), I
favour a dual explananda approach which answers the exclusion challenge
by holding that the very same considerations that show the mental and
physical causes to be distinct – multiple realizability – also show their effects
to be distinct. So there is no causal overdetermination of a common effect.
10. Sider (2003) offers objections along each of these lines – i.e., the specific
analyses of causation do not rule out overdetermination and causation is not
a limited quantity – as well.
11. I consider this objection in Funkhouser (2002, 344–346), as does Fodor (1997,
161).
12. Scientific kinds contrast with necessary kinds, such as those found in math-
ematics and metaphysics proper, which are more intimately connected with
the a priori. Perhaps normative kinds – like moral obligation and epistemic
justification – that we already accept as philosophical and (typically) not
scientific also are more open to a priori investigation.
13. Thanks go to Mark Sprevak, Jesper Kallestrup, Graeme Forbes, Ted Parent,
Philip Goff, Bence Nanay and Justin Fisher for helpful comments on an
earlier draft of this chapter.

References
Bennett, K. (2003) ‘Why the Exclusion Problem Seems Intractable, and How, Just
Maybe, to Tract It’. Nous, 37(3), 471–497.
Bennett, K. (2008) ‘Exclusion Again’. In Being Reduced: New Essays on Reduction,
Explanation, Causation. Kallestrup and Hohwy (eds), New York: Oxford
University Press.
Chalmers, D. (1996) The Conscious Mind. New York: Oxford University Press.
Chalmers, D. (2010) The Character of Consciousness. New York: Oxford University
Press.
Duncan, S. (2012) ‘Leibniz’s Mill Arguments Against Materialism’. Philosophical Quarterly, 62(247), 250–272.
Fodor, J. (1997) ‘Special Sciences: Still Autonomous after All These Years’.
Philosophical Perspectives, vol. 11, James Tomberlin (ed.), 149–163.
Funkhouser, E. (2002) ‘Three Varieties of Causal Overdetermination’. Pacific
Philosophical Quarterly, 83(4), 335–351.
Grim, P. (2009) Mind and Consciousness: 5 Questions. Automatic Press.
Kim, J. (1993) Supervenience and Mind. New York: Cambridge University Press.
Kim, J. (1998) Mind in a Physical World. Cambridge, MA: MIT Press.
Kim, J. (2005) Physicalism, or Something Near Enough. Princeton, NJ: Princeton
University Press.
Leibniz, G. W. (1991) Discourse on Metaphysics and Other Essays, trans. by Daniel
Garber and Roger Ariew. Indianapolis: Hackett.
Leibniz, G. W. (1996) New Essays on Human Understanding, Peter Remnant and
Jonathan Bennett (eds), New York: Cambridge University Press.
Sider, T. (2003) ‘What’s So Bad about Overdetermination?’ Philosophy and
Phenomenological Research, 67, 719–726.
3
Verbs and Minds
Carrie Figdor

In this chapter I introduce and defend verbialism, a metaphysical framework appropriate for accommodating the mind within the natural
sciences and the mechanistic model of explanation that ties the natural
sciences together. In a mechanistic explanation, the behaviour and
features of a whole are explained in terms of their organized parts and
the organized activities they engage in, and explaining the mind is
explaining how it is composed out of brain parts and their activities
(Bechtel 2005, 2008). Verbialism is the view that mental phenomena
belong in the basic ontological category of activities (a term I use to
refer to any type of occurrent).1 The name verbialism derives from the
fact that activities are the referents of verbs and their linguistic forms or
relatives (e.g., gerunds, nominals, and verbed nouns, such as to google
or to hood). By intention it also brings to mind adverbialism, a theory of
perceptual content that originally aimed to explain illusory perception.
But verbialism is not a theory of perceptual content; it is not a theory
of content at all. It is a metaphysics that prescribes that our theories
of perceptual and cognitive content alike be consistent with the fact
that mental phenomena are activities.2 If minds are what brains do,
explaining the mind is explaining how it occurs (Anderson 2007), and
the ontology of mind is verbialist.
At least, it ought to be. Here I motivate verbialism by revealing a kind
of inattentional blindness philosophers of mind have shown when it
comes to conceiving of their explanandum as a kind of complex activity.
I will also show how the project of naturalizing the mind is altered
when we correct for this inattention. By ‘naturalizing’ the mind, I mean
providing an explanation of mental phenomena that does not involve
supernatural elements or interventions and that ties the mind with some
degree of modal strength to the physical world and its laws, entities and activities.3 But I also distinguish two distinct tasks within the project of
naturalization. One is to explain content using only naturalistic ingre-
dients (e.g., Dretske 1988, 1995); another is to articulate the ontological
scaffolding under any naturalistic theory of content (and consciousness,
if it is not already accounted for within the semantic task). The semantic
task has been treated as the only task once physicalism of some sort is
accepted. But physicalism in general has not been fully articulated in
a critical sense, to the detriment of our theories of content; the meta-
physical task is to complete the job.
In the first section, I show how neglecting the metaphysical task has
hampered theorizing about the mind. In the second section, I show how
the verbialist answer alters our approach to the semantic task. In the
third section, I sketch a method for addressing the semantic task within
the verbialist framework.

1 Inattentional blindness in philosophizing about the mind

To clarify what the metaphysical task involves, consider Descartes’ answer to it. He classified the mind in the ontological category of particular substance or thing. This answer does not preclude physicalism: to
say the mind is a substance is not to say it is distinct from physical
phenomena of any ontological category. Rejecting substance dualism
can mean rejecting the relation to the physical it asserts or the onto-
logical category to which it assigns the mind. An activity dualist would
reject only the second disjunct, saying that Descartes made this kind of
category mistake, not the other one.
Physicalists reject both disjuncts. The type-type identity theory, for
example, does not identify the mind with the brain or neurons (nor with
an abstract object; I take this for granted in what follows). The contem-
porary ontological picture is one in which there are physical objects and
mental and physical properties and relations corresponding to n-place
mental or physical predicates true of these objects. Physicalism then
asserts a modal relation between the kinds of properties or relations.
In this picture, anything that is not a particular object becomes a prop-
erty of one or relation between at least two objects. This impoverished
ontology is not entailed by the formalism of modern logic, but rather
stems from a philosophical tradition of ontological parsimony coupled
with an object bias. Being heavy, being blue, rotating, and diffusing
through a membrane are treated equally as properties of objects even
though, as Davidson (1967, 1970) argued, activities have just as much
call to be concrete particulars as the objects that engage in them (see also Machamer, Darden and Craver 2000, Machamer 2004). I defend
this particularist position for activities elsewhere based on metaphysical
and scientific considerations.4 The important point here is that when
we reject the substance category for the mind, our metaphysical work is
not done. Ryle (1949) ridiculed Descartes for characterizing the mind as
non-physical substance. But merely characterizing the mind as physical
non-substance is equally inadequate.
To see one result, consider pain experiences. Such experiences strike
us naturally as both monadic (non-relational) and as activities – as
nominalizations of to experience, an activity we engage in on our own.
Empirical facts aside, the stock example of the Identity Theory – ‘pain =
c-fibers firing’ – posits an object (c-fiber) engaging in an activity (firing).
But a c-fiber is not a mental kind of object, and firing is not a mental
kind of activity. So the theory must also claim that this kind of object
engaging in this kind of activity in this way, or that this kind of activity
done by this kind of object in this way, yield a token of a mental kind –
pain-experiencing.5 Either way, it is easy to discern two independent
dimensions to the multiple realization objection to the theory. First,
can other entities besides c-fibers realize pain-experiencing? Second, can
other activities besides firing realize pain-experiencing? Only the first
question has gotten much attention. But for even this most straightfor-
ward of physicalist theories, we need to ask not just about the kinds of
entities but also the kinds of activities that realize the mind.
This attention deficit has theoretically damaging results beyond mere
neglect when we consider the received view of propositional attitudes,
which are often stand-ins for any cognitive state. They are relations to
mental representations, which in turn are symbols with content (Field
1978). This ontology is also used to elaborate the computational idea of
cognition as information-processing. But while in theory the symbols
can belong to any ontological category, a popular metaphor describes
the package in terms of items that are put in boxes, itself an object meta-
phor for roles in a cognitive economy. The same mental symbol put in
one box (moved around in one way) is a belief and put in another box
(moved in another way) a desire. Note how odd it sounds to conceive of
pain experiences in terms of an object with a pain property that is put
in an experience box.
The reason this model in fact requires object-like symbols is because
it becomes non-explanatory when we try to articulate it in terms of
mental symbols as activities – as mental-representings. The model
entails that attitudes and propositions are independently individuated;
the point of holding a relational view of the attitudes, after all, is that
these elements can be freely recombined without changing their iden-
tity qua attitude or proposition. But a relational view requires relata,
and if mental symbols are not objects then we need particular activi-
ties as relata that are manipulated (e.g., believed, desired and so on for
each role in one’s mental economy). But to manipulate an activity is to
modify it – to change how, when, or where it occurs – and these changes
are typically type-relevant. For example, modifying the flow of water by
opening or closing a valve, or by changing the temperature, results at
the very least in a distinct species (or determinate) of flowing from what
was occurring before. These differences are marked in language in ways
that include but are not limited to adverbial modifiers. So if attitudes
are activities – for example, if believing is a complex doing, a functional
role – and mental symbols are also activities, then the attitude/propo-
sitional-attitude relation is not adequately seen as involving independ-
ently recombinable elements. Assuming that manipulating an activity
doesn’t invariably yield a different kind of occurrence altogether – that
is, assuming activity kinds are sufficiently robust to allow for some
continuity through change – the relation plausibly is, or often is, one of
genera/species (or determinable/determinate), such that individuating
the attitude is necessarily part of individuating the propositional-atti-
tude.6 Thus, one can have a relational view of the attitudes if mental
symbols are object-like, or a non-relational view if they are activities.
But what one cannot do is pretend that a relational view of the attitudes
is neutral regarding the ontological category of mental representations.7
Activities are not objects.
Standard naturalistic theories of content do treat mental symbols as
object-like – they assume the continuity of type through change that
is normal for objects but not for activities. The vehicles of content are
often described as neural firing patterns that occur, in easy cases and
initially spontaneously, in response to real external objects or states-of-
affairs. But when the firing pattern is embedded in a circuit of internal
activities leading to behaviour, the new connections to other activities
cannot alter the firing pattern’s response profile to external objects in
any type-relevant way. The semantic theory must maintain that the
result of embedding an activity into other activities makes no semantic
difference: once an O-indicator, always an O-indicator. This presumed
identity of spontaneous-indication activity type from indication-in-a-
control-circuit activity type makes it possible to derive the semantic
value assigned to the latter from the indication value assigned to the
former. This is plausible if one thinks of firing patterns as object-like in
terms of their persistence through change, but it cannot be taken for granted if one takes seriously their nature as activities: changing the
circumstances in which an activity occurs typically affects how it occurs,
and such changes can affect its type at least by changing its species (i.e.,
from one determinate to another). What is needed, and missing, is justi-
fication of the implicit assumption that these likely differences in indi-
cation activity type make no semantic difference.
Theorizing about the attitudes themselves, as functional roles, has also
suffered due to inattention to the metaphysical task. Functional indi-
viduation is individuation in terms of activities, given that a functional
role just is a complex of activities. This idea is invariably illustrated with
object artefacts and often involves emphasizing the difference between
what something does and what it is made of. But the theory and the
explanation of the theory are obviously inadequate when we consider
how to apply it to activities. For one thing, it raises the question of how
activity kinds are individuated in the first place. Moreover, an activity
just is what it does – it is not ‘made of’ anything in any usual sense. So
if this stock distinction between what something is and what it does is
to make any sense when applying functionalism to activity individua-
tion, it must be reinterpreted as the claim that the fundamental kind to
which a token activity belongs changes once it plays a role in a system of
activities. (Note how this is inconsistent with the undefended assump-
tion of sameness of kind for indicators, both spontaneous and when
harnessed in control circuits, described in the previous paragraph.) For
example, the theory of functional individuation entails that a token of
transmitting is a token of one kind of activity when done in one context
and another when done in another. This also gets the individuation of
activities wrong: in many cases activities do remain tokens of the same
general type but are different species (or distinct determinates) in each
context. A mousetrap is not a species of wood, but neurotransmission is
a species of transmission. Activities do play roles in systems and do have
functional-role descriptions, but they typically play the roles they do in
virtue of the general kind of activity they already are and remain even
if their species changes. Again: if this were not the case, embedding an
O-indicating neural activity pattern into a new circuit of activities would
entail that one would not have an O-indicator anymore.
Much to their credit, adverbialists did reject the act-object model,
hence an object-like-object model, at least for perceptual states. To
explain non-veridical perception, a felt need for an intentional object
led to the positing of sense data to serve as relata of a mental act given
the absence of real external objects. The term ‘object’ here again does
double duty: an intentional object is a target of an intentional act, but the rejection of the act-object model was also motivated by the idea that
these intentional objects were weird objects in an ontological sense. To
get rid of them, adverbialists claimed that there was act(ivity) and modi-
fications of act(ivity), not act(ivity) and objects.
As usually interpreted, they claimed that to experience was an intran-
sitive rather than a transitive verb: one experiences-in-a-certain-way
(e.g., hallucinates-orangely) just as a top rotates-in-a-certain-way (e.g.,
rotates-wobblingly). They may have been wrong in specifics, but their
enduring ontological insight was the idea that content is not a prop-
erty of objects but of activities. They interpreted Brentano’s mark of the
mental as content-modification of activity kinds. Crudely put, brains
(or parts of them) engage in mental-representing or -experiencing just
as c-fibers engage in firing. Eliminating sense-data entailed an alterna-
tive ontological framework that practically precluded conceiving of the
activities as object-like.
Unfortunately, the main argument against adverbialism – that it
could not explain the fine-grained structure of content, in particular
propositional content – failed to engage this ontological insight. To
borrow Kriegel’s (2012) example, if ‘I am thinking of a green dragon
and a purple butterfly’ is paraphrased adverbially as ‘I am thinking
green-dragon-wise and purple-butterfly-wise’, one could not infer from
‘I am thinking green-dragon-wise’ to ‘I am thinking dragon-wise’ any
more than one could infer from ‘I am thinking catalog-wise’ to ‘I am
thinking cat-wise’. But natural language expressions of valid inferences
do not entail that the mental states expressed are structured in just that
manner. Different possible paraphrases may reflect different ways in
which the same inferences can be implemented by a cognitive system,
or may not provide any clue as to how they are implemented. In effect,
the objection ignores the verbialism underlying adverbialism. Activities
come in kinds and are organized in hierarchies. On the adverbial view,
some of the specific ways of doing activities we call ‘content’. So if the
relation between thinking green-dragon-wise and thinking dragon-
wise is genera/species (or, as Kriegel also suggests, determinate/deter-
minable), we have activities that are modified in specific ways. There is
much we don’t know about the individuation of activities, but it is no
objection to assert that no method of individuating them is sufficiently
fine-grained for content.
One final area worth noting that has been negatively affected by inat-
tention to the nature of mind as activity is the contemporary debate about
disjunctivism. Disjunctivism claims that veridical and non-veridical
perception are distinct kinds of mental states; on an adverbial view, they are distinct content-modified kinds of activities (e.g., hearing-trumpetly
and hallucinating-trumpetly). But the critical issue is whether the differ-
ence between veridical and non-veridical perception entails a difference
in activity kind at the level of type individuation of interest. Without
specifying this level (and thus whether there are two kinds of activi-
ties, as the disjunctivist will want to say, or just one), the debate about
disjunctivism is ill-defined.

2 Distinguishing (verbial) metaphysics from (adverbial) semantics

Verbialism says mental kinds – pains, perceptions and propositional attitudes alike – are activity kinds. Adverbialism got the basic ontology
right but did not sufficiently distinguish its implicit answer to the meta-
physical task from the semantic task. Its ontology directed us to explore
the individuation of activity kinds, the nature of complex activities, and
their taxonomic hierarchies (Levin 1993). These are the general theo-
retical questions of which content-modifications of activity kinds, the
classification of some activity kinds as mental, and the relation between
mental activities, such as perceiving-veridically and perceiving-non-ve-
ridically, are specific cases.
For whatever reason, we did not take the hint.8 If naturalistic theo-
ries of cognitive content had started out trying to explain green-dragon
thoughts, more philosophers might have questioned the standard rela-
tional model of the attitudes and its implicit ontology. Instead, they
focus on types of real physical objects – e.g., cats, and hence how
cat-thoughts are distinguished from those of dogs or mammals or
undetached-cat-parts or cats-or-dogs – that are the relata of unspecified
‘causal relations’. But verbialism insists where adverbialism only hinted.
If we want to explain the mind naturalistically, we need to consider the
nature of activities in general in order to explain mental activities. This
is a broader project than can be pursued here, but I can indicate two
ways in which it will affect how we think about the semantic task.
First, different kinds of activities have different occurrence condi-
tions. Some are such that entities can engage in them independently of
anything else, although they may require triggering by an external item
to occur. Rotating is not the general kind of activity it is in virtue of the
existence of anything other than the rotater. Others, such as hitting,
cannot occur without two objects (although one relatum can be a part of
the other). Yet others, such as hurricanes, are not performed by objects
(are ‘unowned’) at all, and are related to them in some other way. These
distinct occurrence conditions are distinguished in ordinary language
when verbs are used transitively (direct object: The farmer smells the
rotten corn), intransitively (no direct object: The rotten corn smells) or
in a linking manner (subject complement: The corn smells fresh). The
same verbs can be used to pick out activities with different occurrence
conditions on different occasions of use.
These different occurrence conditions entail that the ontological
category of activities cross-cuts the available categories of first-order
predicate logic (variable and n-place predicate, usually taken to range
over objects and properties/relations respectively). This logical diaspora
does not entail that the ontological category is illegitimate (and the fact
that we do not use predicates like ‘is-pegasizing’ shows that the onto-
logical category of object is implicitly legitimate). To the contrary, in the
sciences, and in mechanistic explanation in particular, the category of
activity usefully groups phenomena that play explanatory roles at least
as essential as those of objects. This justifies favouring unity of explana-
tory role over unity of formal logical role, especially for the purpose of
naturalization.
Verbialism does not restrict the types of occurrence conditions that
mental activity kinds may involve. For example, think may not have one
kind of occurrence condition, and perceive or experience may pick out activ-
ities like rotate or hit depending on context.9 It does claim that general
kinds of activities (e.g., perceiving) can be modified in various cognitively
relevant ways, which adverbialists posited as special cases of general kinds
(e.g., perceiving-redly). These content-modified special cases may involve
implicit or explicit reference to external objects if the general kind allows
them (e.g., thinking) or requires them (seeing, used transitively) but not
if the general kind precludes them (e.g., hallucinating). This occurrence-
condition neutrality of verbialism explains why it is consistent with
adverbialism that being appeared-to F-ly could be a way of representing
that something is F (Siegel 2013; although the adverbialist would call it
a way of representing-F-ly). If content is accuracy conditions, where to be
accurate is to correctly represent some aspect of the external world, there
is no reason why adverbial content might not be determined by causal
relations to external objects: for from the verbialist perspective it depends
on the nature of the general activity kinds and how they are modified. For
the same reason, verbialism is neutral regarding disjunctivism, semantic
internalism and externalism, and the bounds of cognition.
Second, verbialism directs us to examine which modifications of
general activities yield contentful special cases – that is, the mental
46 Carrie Figdor

activity kinds we call perceiving-redly, thinking-green-dragon-wise, and so
on. This is also not the place for a theory of content-modifications – an
adverbial theory – although I suggest how we might develop one in the
next section. But I will note a fruitful new way of understanding what is
required for successful naturalization.
It is consistent with the naturalistic goal of explaining mental activi-
ties in terms of non-mental activities that a specific kind of activity may
be essentially mental even while its general type is not. Moreover, some
general kinds of mental activities (e.g., perceiving, recognizing, etc.)
may be classified as mental not because the general kind is mental but
because all cases we consider paradigmatic of that activity kind involve
human-realized tokens of content-modified specific types. For example,
we may understand perceiving by acquaintance with or inference from
human performances associated with perceiving-P-ly, but it does not
follow that perceiving as a general kind is mental. Perceiving-P-ly may be
the special, mental, case we get when a non-mental activity is modified
in a particular way. For the same reason, propositional attitudes may be
mental even if attitudes are not, given that the former are special cases
of the latter.
As a result, while a naturalistic explanation of mind may only avail
itself of explanantia found in the material world, from a verbialist
perspective it may do so by extending familiar mental concepts to what
are usually considered non-mental domains. The question of whether
an activity kind picked out by a mental concept is essentially mental,
or whether a so-called mental concept is essentially mental, may thus
have a negative answer. In the next section I will sketch how this
might come about.

3 Doing semantics within a verbial framework

Verbialism directs our attention to the question of how activities may
be individuated to as fine-grained a degree as contentful mental states
appear to be and which modifications are content-modifications. In this
section I show how we can take advantage of some interesting features
of activities to pursue the semantic task in a new way. I will use Kriegel’s
(op. cit., 2007) ‘anchoring instance’ model of natural-kind concept-
fixing to show how we might go about specifying content-modifica-
tions and thus what a theory of perceptual or cognitive content within
a verbialist framework might look like.10
Kriegel’s original model is a resemblance-to-cluster-of-paradigms
account. For example, we use anchoring instances of mammals, such
as horses and cats, to fix the category mammal. We posit a common
underlying nature that non-anchoring instances can be found to have
and that some anchoring instances can be found not to have (and thus
ejected from the category). Anchoring instances have a special epistemo-
logical status, not a special metaphysical status. Our goal may be a meta-
physical category, but our method does not and need not ensure that
we get the contours of the category correct right from the beginning. In
scientific theory development, this is par for the course.
This account fits not just the object-kind concepts Kriegel uses to illus-
trate the view, but also activity-kind concepts. Anchoring instances of
activity kinds would include those of the sort exemplified by ‘See Spot
run!’: Spot’s running is an anchoring instance of running, and our seeing
Spot run is an anchoring instance of seeing. Similarly, a rotating top is an
anchoring instance of rotating, and a person recognizing an object is an
anchoring instance of recognizing. But just as the discovery that whales
are mammals shows that the category was open to new non-anchoring
instances, the discovery that alpha-helixes rotate shows that the concept
of rotating was open to its instances being done by alpha-helixes, even
though these could not be anchoring instances.
Importantly, the new uses can be literal. Whales are not prototypical
mammals, but the term ‘mammal’ is not used metaphorically when
applied to whales. Likewise, when we discovered rotating alpha-helix
chains in neural cell membranes, the term ‘rotating’ was not used metaphorically.
In both cases the terms are found to have wider-than-initially-realized
domains. With mammal, we find that the category includes new species.
With rotation, we find that the same activity is performed by different
kinds of entities. The extension also opens up the possibility of indi-
viduating a subordinate level of the activity kind – e.g., a species of
rotating individuated in part by a kind of entity that performs it. For
some specific activity kinds – neurotransmission, batting – do involve
explicit or implicit reference to object-kinds in their individuation, even
if the general kind does not.
It follows that part-whole (or micro-macro) relations do not determine
basicness for activities even if they do for objects. Otherwise put, the fact
that a kind of activity appears in an explanation in a particular science
does not make it essentially of that science. There may well be a periodic
table of activities, but what appears in the table is not determined by the
kinds of objects that perform them. Vibrating is vibrating whether it’s
done by an atomic nucleus or a piano string. Even if electron-bonding is
a special case of bonding performed by entities in basic physics, bonding
simpliciter isn’t a basic-physical activity since many things bond that
are not basic-physical entities. The explanation of how non-basic objects
bond is a distinct issue; given that they do bond, the status of bonding
simpliciter as basic or not is not determined by a science of one type of
entity that does it.
These general considerations about activities are relevant to any
ontology of mind in which mental kinds are activity kinds. Many if
not all anchoring instances of mental activities – perceiving, recog-
nizing, etc. – are of these activities as they are performed in their
content-specified human cases. But it does not follow that perceiving or
recognizing are activities that only humans perform, even if it turns out
(which it may not) that their content-modified species – the activities we
pick out with propositional-attitude or other content-specified descrip-
tions – are restricted to being performed by and ascribed to humans. In
traditional terms, mental ascriptions can employ verbs that create inten-
sional contexts but are not essentially mental. And even if all anchoring
instances of a mental kind are conscious instances it doesn’t follow that
being experienced consciously is essential to that kind of activity.
Moreover, just because a new instance is not an anchoring instance
does not mean that the use of the term for a new case must be meta-
phorical. When any activity we are acquainted with in everyday life –
running, rotating, breaking, seeing, recognizing, experiencing, and so
on – is found in a new domain, the uses of old terms to pick them out
can be literal extensions to non-anchoring cases. Even if all anchoring
instances of spinning or recognizing are tops spinning and people recog-
nizing, alpha helixes can really spin and antibodies can really recog-
nize, even if they may not recognize antigens in the same specific way
a person recognizes a friend or recognizes that she is being rude.11 We
may classify an activity as mental, cognitive or psychological because
the anchoring cases are those performed by human beings, but this does
not entail that the general kind of activity is mental even if the special
case is.
An example might help clarify the above points. I will use the language
of adverbialism and borrow an example from Kriegel (op. cit.). Suppose
Emorie hallucinates a triangle. The adverbialist says Emorie is experi-
encing-trianglishly – in verbialist terms, doing-something-in-a-content-
modifying-way.12 Her semantic account of Emorie’s experience might
go like this. First, she hypothesizes that Emorie’s experience is a token
of a content-modified activity kind. For she does not name the token
experience, she types it. Second, she relies on her knowledge of object
kinds to invent a term to pick out this occurrence as a token of a hypoth-
esized content-modification of the general activity of experiencing. She
extends her object-kind concept of triangle from anchoring instances
of triangles by adverbing the concept, such that experiencing-triangl-
ishly is to triangles what googling (better: searching-googlishly) is to
Google. This method not only makes her hypothesis intelligible; it also
may be true. In a similar way, representing-catly or experiencing-redly
are content-modified species of activities individuated by relying on our
knowledge (respectively) of cats or red things and adverbing the object-
types.
But there are many things her use of the new type label does not
determine. She is not (yet) picking out an instance of experiencing-tri-
anglishly because she does not know that this type occurs in our world.
But suppose it does. It is still an open possibility that experiencing-redly
is multiply realized by different activity kinds, or that experiencing
simpliciter is multiply realized even if experiencing-redly is not. It is
also possible that the activity type denoted by her hypothesis (in either
the general or specific case) may be as constitutively dependent for its
occurrence on external states of affairs as most cases of hitting. For her
underlying verbialism leaves open whether experiencing-trianglishly is
constitutively connected to external instantiations of triangularity, and
if it is, how it is. As noted, being an anchoring case is an epistemic status,
not a metaphysical one. Non-veridical and veridical experiencings-trianglishly
can share the epistemic fact that we pick them out with the
same adverbed nouns without fixing the metaphysics either way.
The adverbialist’s method is familiar throughout science when positing
a new phenomenon, whether it be an object or an activity (or anything
else). It does no metaphysical harm but it does plenty of epistemic good. If
we do not already understand what a verb refers to, we can be shown an
example. If we explain a piano’s vibrating partly in terms of a vibrating
string, the string’s vibrating is literal and explanatory of the whole’s
activity. It is a separate issue that neither explanandum nor explanans
will be intelligible to someone who does not understand what vibrating
is. If you know enough to ask the question, the answer is intelligible,
and it also may be true. There is no failure of explanation in either sense
if it is true. But if you don’t know enough to ask the question, no explan-
atory failure of either type can arise.
It follows that from a verbialist perspective naturalizing the mind is
not constrained to show reduction to (or supervenience on) non-mental
activity kinds. The assumption behind this constraint is that activity
kinds are essentially individuated at a particular level of performers,
such that when verbs that pick them out are used for new occurrences
by other performers at other levels of material composition the uses are
non-literal. Call this assumption the Unique Literal Performers (ULP)
thesis. If ULP is true, mental activities are performed literally only at
one level of material complexity (humans, and maybe some other
sufficiently complex animals) and a naturalistic explanation of them
requires us to replace or redescribe organism-level mental activity kinds
in terms of non-mental kinds of activities literally performed by their
parts. But there is no reason to think ULP is true for mental activities if it
is false in general for activities. Mental activities are special, but this may
be because human beings perform them in special ways, not because
the activities we anchor in cases of human performances are unique to
humans.

4 Concluding remarks

Saying what the mind is not, as physicalists have done, does not suffice
for a firm ontological framework for naturalization. We also need to say
what it is. Verbialism provides this positive account. Moreover, once
we pay attention to the nature of activities, we discover problems in
our current theories of mind and new ways to approach the semantic
task. Verbialism leaves to further empirical and philosophical investiga-
tion many of the details of understanding activity kinds and mental
activity kinds. But the success of this research depends on getting the
basic metaphysics right.13

Notes
1. The term ‘occurrent’ (Simons 1987, 2000) is familiar in metaphysics, but in
this context I follow Machamer, Darden and Craver (2000) and use ‘activity’
to refer to activities, events, processes, performances, actions and any other
item in the general metaphysical category whose standard contrast class is
that of continuants or objects. (I use the term ‘process’ or ‘event’ in the same
general sense for variety.) This chapter is neutral on whether four-dimen-
sionalism is true for objects; in any case, I agree with MDC that mechanistic
explanations quantify over both objects and activities.
2. The term ‘state’ does not denote a stable ontological category (Steward 1997;
Marcus 2006, 2009). But while it has a connotation of ‘unchanging’ or ‘undy-
namic’, it is used in the sciences to denote equilibria, which require main-
tenance in open systems, or idealized momentary time slices of dynamic
phenomena. I use ‘state’ when customary usage demands it but only as an
ontologically neutral term equivalent to ‘phenomenon’.
3. The language here is intended to be neutral among all views which attempt to
naturalize the mind by linking the mental to the physical with some degree
of modal strength. What is ruled out is any view in which the mind has
no necessary connection of any sort to the physical. I define non-reductive
physicalism as the denial of reductive physicalism (which I identify with the
Identity Theory) and consider property dualism, like functionalism, a special
case of non-reductive physicalism. On the main functionalist views, mental
states are either identical to physical states which happen to be picked out
by functional-role descriptions or are identical to the functional-roles picked
out by the functional-role descriptions (and in accordance with physicalism,
are realized by physical states).
4. Thus, my defense of verbialism is only partial in this paper. The whole
position claims that (a) minds are complex activities and (b) activities are
concrete (non-abstract) particulars. Here I defend only (a). One might equally
distinguish weak and strong forms of verbialism, depending on whether one
supports (a) or both (a) and (b).
5. If all the world is states of affairs (Armstrong), then mental phenomena will
also be states of affairs. But these include activities as components.
6. For clarity of discussion, I here take the species/genera or determinable/deter-
minate relations as common relationships between activity types (including
mental activities).
7. A standard argument against a monadic view of the attitudes is that a rela-
tional view is required if natural languages have compositional truth-theo-
retic semantics (Schiffer 1981), such that certain acceptable inferences come
out valid (A and D both believe that P; therefore there is something they
both believe). Thus, ‘believes’ maintains its semantic value in ‘believes that
P’ and ‘believes that Q’. The verbialist ontological (and adverbial semantic)
response is that the relation does differ but that if A and D both believe that
P, there is indeed something they both do: believe that P.
8. However, the tide is changing. For example, Kriegel (op. cit.) extends adver-
bialism to all conscious mental states, and Crane (2011) indicates that the
failure of existential generalization for cognitive states (how do we explain
how A is thinking about P even though we cannot infer from this that there
is, in physical reality, something A is thinking about?) and the problem of
intentional inexistence for perceptual states both motivate adverbial expla-
nations of mental representation.
9. From this perspective, the debate between the two dominant views of percep-
tion – the relational view, or naive realism, in which to perceive is to stand
in a primitive relation of awareness or acquaintance with the world; and
the representational or content view, in which to perceive is to represent
the world in a certain way (Campbell 2002) – may rest on distinct, equally
legitimate, ways of using the same verb to pick out different activities. I must
set aside here the important questions of whether mental activity kinds are
natural kinds and which, if any, are natural kinds.
10. Kriegel argues for experiential intentionality (i.e., phenomenal states) as the
basis for all intentionality by arguing that all anchoring instances of inten-
tionality are of experiential intentionality. My use of his model is neutral on
this issue. Also, since perceptual and cognitive states both may have expe-
riential aspects, what distinguishes them are the kinds of content they are
thought to have – although even here the distinction between perceptual
content (e.g., of a visual percept) and cognitive content (e.g. of a belief) is no
longer clear. It is in verbialism’s favour that it provides a unified ontological
framework for both kinds of states.
11. Lakoff and Johnson (1980) claim that many folk explanations utilize a
limited set of primitives, including activity primitives derived from bodily
and cognitive activities. Their work is obviously relevant here, but I deny
their assumption that the extensions are metaphorical. Scientific explana-
tions and hypotheses obey similar intelligibility constraints, but the meta-
physics of activities shows that they can also be literally true.
12. Although here I discuss only kinds or types, naturalization is often discussed
in terms of tokens: Does a mental token fall under an F-ly representational
type because it stands in a causal relation to F-things (to instantiations of
F-ness)? On my view, it is one thing to individuate a kind or type and another
to determine how a token may meet the conditions for falling under the
type. (One can become an American citizen in more than one way.) Thus,
what experiencing and experiencing-trumpetly are as kinds and what it is
for a token occurrence to be of the experiencing-trumpetly type are separate
issues.
13. Many thanks to Mark Sprevak, Justin Fisher, Phil Woodward, Colin Klein,
Uriah Kriegel and Mazviita Chirimuuta for comments on and responses to
drafts of this chapter.

References
Anderson, J. R. (2007) How Can the Human Mind Occur in the Physical Universe?
Oxford: Oxford University Press.
Bechtel, W. (2005) ‘The Challenge of Characterizing Operations in the Mechanisms
Underlying Behavior’. Journal of the Experimental Analysis of Behavior, 84,
313–325.
Bechtel, W. (2008) Mental Mechanisms. New York: Erlbaum.
Campbell, J. (2002) Reference and Consciousness. Oxford: Clarendon Press.
Crane, T. (2011) ‘The Problem of Perception’. In The Stanford Encyclopedia of
Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2011/
entries/perception-problem/.
Davidson, D. (1967) ‘The Logical Form of Action Sentences’. In The Logic of Decision
and Action, N. Rescher (ed.), Pittsburgh: University of Pittsburgh Press.
Davidson, D. (1970) ‘Events as Particulars’. Noûs, 4(1), 25–32.
Dretske, F. (1988) Explaining Behavior. Cambridge, MA: MIT Press.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Figdor, C. (n.d.) Making Physicalism Matter. Manuscript.
Field, H. (1978) ‘Mental Representation’. Erkenntnis, 13(1), 9–61. Reprinted in vol.
1 of Readings in the Philosophy of Psychology, N. Block (ed.), Cambridge, MA:
Harvard University Press, 1980.
Jackson, F. (1977) Perception: A Representative Theory. Cambridge: Cambridge
University Press.
Kriegel, U. (2007) ‘The Dispensability of (Merely) Intentional Objects’. Philosophical
Studies, 141, 79–95.
Kriegel, U. (2012) The Sources of Intentionality. New York: Oxford University
Press.
Lakoff, G., and M. Johnson. (1980) Metaphors We Live By. Chicago: University of
Chicago Press.
Levin, B. (1993) English Verb Classes and Alternations. Chicago: University of
Chicago Press.
Machamer, P. (2004) ‘Activities and Causation: The Metaphysics and Epistemology
of Mechanisms’. International Studies in the Philosophy of Science, 18(1), 27–39.
Machamer, P., L. Darden and C. F. Craver. (2000) ‘Thinking about Mechanisms’.
Philosophy of Science, 67(1), 1–25.
Marcus, E. (2006) ‘Events, Sortals and the Mind-Body Problem’. Synthese, 150,
99–129.
Marcus, E. (2009) ‘Why There Are No Token States’. Journal of Philosophical
Research, 34, 215–241.
Ryle, G. (1949) The Concept of Mind. Chicago: University of Chicago Press.
Schiffer, S. (1981) ‘Truth and the Theory of Content’. In Meaning and Understanding,
H. Parret and J. Bouveresse (eds), Berlin: de Gruyter, 204–222.
Siegel, S. (2013) ‘The Contents of Perception’. In The Stanford Encyclopedia of
Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2013/
entries/perception-contents/.
Simons, P. J. (1987) Parts: A Study in Ontology. New York: Oxford University
Press.
Simons, P. J. (2000) ‘Continuants and Occurrents: I’. Aristotelian Society Suppl.,
74(1), 59–75.
Smart, J. J. C. (1959) ‘Sensations and Brain Processes’. Philosophical Review, 68,
141–156.
Steward, H. (1997) The Ontology of Mind: Events, Processes and States. Oxford:
Oxford University Press.
4
Meanings and Methodologies
Justin C. Fisher

One new wave in the philosophy of mind involves connecting recent
work in philosophy of mind to our ‘metaphilosophical’ understanding
of methodologies for doing philosophy. This paper charts relations
between (a) views in philosophy of mind and language regarding the
correct application conditions, or ‘meanings’, of our words and concepts
and (b) methodologies that people have proposed for doing philosophy,
especially methodologies that have aimed to uncover the meanings of
philosophical concepts like knowledge, freedom and justice. I iden-
tify three broad classes of theories of concept meaning. Two of these,
descriptivist and causal/informational classes of theories, correspond
closely to familiar philosophical methodologies – intuitive conceptual
analysis and ‘naturalized’ analysis. A third, the teleo/pragmatic class,
has many adherents in philosophy of mind but does not yet have a
well-known corresponding philosophical methodology. To fill this
gap, I describe a general methodology that I call Pragmatic Conceptual
Analysis. I offer some examples of this methodology and argue that this
methodology enjoys distinct advantages over more familiar philosoph-
ical methodologies.
I first lay out a space of possible methodologies and then link some
of these to theories of concept meaning. Each of these methodologies
is a guide to conceptual engineering, a guide that helps determine what
concepts1 to employ, exactly when and how we will apply these concepts,
and what beliefs to form employing these concepts. We
might naturally think of each such methodology as involving two steps.
In the first step, we articulate various desiderata regarding the concep-
tual framework we are attempting to engineer. Most plausible method-
ologies will agree on some very general desiderata – for example, that
(ceteris paribus) our theories should be simple and self-consistent. But
different methodologies may disagree upon how we might go about
articulating further desiderata – for example, whether we should demand
that proposed explications match pre-theoretic intuitions surrounding
a concept. In the second step we then try to explicate the relevant
concepts in a way that does well to meet the desiderata articulated in
the first step. Different methodologies might propose different ways of
resolving the conflicts that arise when different desiderata make incom-
patible demands.
There is a wide variety of possible conceptual engineering methodolo-
gies corresponding to the many possible desiderata we might impose
upon our theory construction and to the many ways we might resolve
tensions between conflicting desiderata. It is useful to categorize these
competing methodologies on the basis of their answers to two ques-
tions (illustrated in Figure 4.1): Does the methodology aim to explicate pre-
existing concepts? And if so, which features of these concepts do we wish our
explications to preserve? We consider each question in turn.

Does the methodology aim to explicate pre-existing concepts?
  No: Free-Standing Theory Construction.
  Yes: Conceptually Anchored Theory Construction, which asks a further
  question: which features of concepts are to be preserved?
    Intuitions about hypothetical cases: Intuitive Conceptual Analysis
    (Descriptivist Semantic Theories).
    Paradigm instances: Naturalized Conceptual Analysis
    (Causal/Informational Semantic Theories).
    Beneficial uses: Pragmatic Conceptual Analysis
    (Teleo/Pragmatic Semantic Theories).

Figure 4.1 Methodologies for conceptual engineering


1 Free-standing theory construction

We may begin with the question of whether a methodology’s goal is to
explicate pre-existing concepts. When a theorist engages in conceptu-
ally anchored theory construction, she aims to propose clear statements
of application conditions which in some appropriate sense capture in
a more rigorous way what certain pre-existing concepts were supposed
to have captured all along. Conceptually anchored theory builders are
constrained by the desideratum that their explications should preserve
important features of pre-existing concepts. (In a moment we ask which
features are worth preserving in this way.)
In contrast, the theorist who engages in free-standing theory construc-
tion may construct her theory from scratch and claim that as a whole,
it is well worth considering, even if its various theoretical notions don’t
explicate any pre-existing concepts.2,3 Free-standing theory builders are
constrained only by general desiderata regarding the conceptual frame-
works they propose. For example, one might hope that free-standing
theories will be simple, self-consistent, generally applicable, explanato-
rily powerful, predictively accurate and pragmatically useful. Different
versions of free-standing theory construction might give these desid-
erata different weights, and some versions might impose additional
desiderata.
Most philosophical work in the English-speaking ‘analytic’ tradition
has been conceptually anchored. It has aimed to clarify what is supposed
to have been captured by various ‘folk’ concepts – knowledge, justice,
moral goodness, intentions, responsibility and the like. However, this
tradition does contain some examples of free-standing theory construc-
tion. One clear example is Ruth Millikan’s (1984) introduction of theo-
retical concepts like ‘proper function’ to help account for how people
and animals get around in their environments:

‘Proper function’ is intended as a technical term. It is of interest
because it can be used to unravel certain problems, not because it
does or doesn’t accord with common notions such as ‘purpose’ or
the ordinary notion of ‘function’. My program is far removed from
conceptual analysis; I need a term that will do a certain job, and so I
must fashion one. (Millikan 1984, 18)

Other examples include eliminativism of folk psychology in favour
of contemporary neuroscience (e.g., Churchland 1981), prescriptive
Bayesian accounts in epistemology (e.g., Bovens and Hartmann 2003)
and a great deal of work in pure logic.
There is a deep and useful connection between my distinction
between conceptually anchored and free-standing theory construction
and Thomas Kuhn’s (1962) distinction between ‘normal’ and ‘revo-
lutionary’ scientific practice. For Kuhn, normal science is work that is
anchored to (partly inchoate) concepts commonly employed within an
accepted paradigm, while scientific revolutions are periods in which many
theorists engage in free-standing theory construction, each attempting
to propose a new paradigm to replace an existing one and to avoid the
anomalies that plagued it.
Just as it is important that at least some people attempt revolutionary
science, it is important that at least some philosophers engage in revo-
lutionary or free-standing theory construction. For this approach some-
times produces good new ways of conceiving of problems – ways that
may eventually take a seat within mainstream philosophical, scientific
or even ‘folk’ conceptions of these problems. My commitment to the
worthwhileness of free-standing theory construction is evidenced by the
fact that I consider the present chapter (especially my articulation of
Pragmatic Conceptual Analysis) to be largely an exercise in free-standing
theory construction.
Unfortunately, free-standing theory construction is very difficult,
precisely because it is so free of conceptual anchors to familiar and
well-tested grounds. Free-standing theories must usually be very large
and so offer only limited returns to people who aren’t prepared
to immerse themselves completely in a new framework. In light of
these drawbacks, there is much to be said for work that is conceptually
anchored, to the concepts either of ordinary life or of some theory already
quite familiar to at least some people. Conceptually anchored theory
building is comparatively easy, yields comparatively high marginal
returns and requires comparatively little departure from entrenched
ways of conceiving of issues.

2 Conceptually anchored theory construction

By definition, conceptually anchored approaches aim to provide an
important sort of continuity with pre-existing concepts. However,
such approaches also typically demand that we depart at least a little
from our antecedent understanding of our concepts, if only because
such approaches seek explications that are formal and explicit, while
pre-theoretic understandings are typically informal and implicit. But a
proposed explication can’t depart too far, lest it lose its claim to capture
the same thing the pre-existing concept was supposed to capture. This
opens a question: which features of a pre-existing concept must a proposed
explication retain if it is to count as a good explication of that concept?
One tempting answer is to say that an explication should retain the
extension of the explicated concept – the set of things to which that
concept could be correctly applied. In what follows I often abbreviate
‘correct application conditions’ for a concept to the ‘meaning’ of that
concept. However, it is worth emphasizing that this chapter focuses
upon purely extensional aspects of concept meaning. Some people think
there are finer-grained, or ‘intensional’, aspects of concept meaning – for
example, aspects upon which HESPERUS and PHOSPHORUS differ in meaning.
Such ‘intensional’ aspects are beyond the scope of this paper.4 It will be
hard enough to determine how various methodologies relate to exten-
sional aspects of meaning without worrying about further intensional
aspects.
Unfortunately, there are at least two compelling reasons not just to
say that the goal of explication is to preserve the extension or meaning
of explicated concepts. First, it is controversial which theory of concept
meaning is correct. Different semantic theories say that the meanings
of our concepts depend on different factors, and hence these theories
would identify different features of our concepts as the ones that would
need to be preserved if our explications are to preserve concept meaning.
Second, even after we adopt a particular theory of concept meaning, the
question why we should want to seek explications that preserve this
sort of meaning will remain. Might it not be the case that we should
sometimes adopt semantically revisionary explications – explications that
are worth adopting despite the fact that they require us to change what
some of our concepts mean?
We can forestall these worries by considering specific proposals
regarding which features should be preserved in explicating a concept.
Then we can ask which (if any) of these specific proposals preserve the
features that are determinative of concept meaning. (Our answer will
depend, of course, upon which semantic theory we think is correct.) And
we can ask which (if any) of these specific proposals provide explications
that are worth adopting, even if (at least according to some semantic
theories) doing so requires semantic revision. I give special attention to
three specific proposals:

1. Pragmatic Conceptual Analysis aims to preserve the ways in which
usage of the concept in question has regularly delivered benefits.
Meanings and Methodologies 59

2. Intuitive Conceptual Analysis aims to preserve the truth of various intuitions surrounding that concept.
3. Naturalized Conceptual Analysis aims to preserve what we would count
as paradigm instances of that concept.

Readers sometimes balk at my using the label ‘conceptual analysis’
for anything other than Intuitive Conceptual Analysis. I use this label
because these conceptually anchored methodologies all share the
general goal of traditional conceptual analysis: carefully exploring the
relevant features of pre-existing concepts to state the application condi-
tions of those concepts in other terms. Admittedly, these methodologies
don’t involve ‘analysis’ in the old-fashioned sense of breaking concepts
apart into component parts. But virtually everyone is now convinced
by psychological research5 that few, if any, concepts have the neat defi-
nitional structure that would be required for them to be ‘broken apart’
in this way. So even contemporary practitioners of Intuitive Conceptual
Analysis think ‘analysis’ involves, not breaking concepts into compo-
nent parts, but instead examining the roles concepts play in broader
cognition. Naturalized and Pragmatic Conceptual Analysis hold that the
scope of analysis – including the roles concepts play in interaction with
external objects – is a bit broader, but that doesn’t make this examination any less of an analysis.
These three answers embody three general methodologies for concep-
tual engineering. But they also correspond to three general theories of
concept meaning, where each methodology yields explications that
preserve the corresponding sort of concept meaning. In what follows,
I continue to take methodologies as the primary locus of discussion. As
I discuss each methodology, I note the links between it and the corre-
sponding theories of concept meaning (these links were also noted at
the bottom of Figure 4.1).
In considering these three methodologies (and the corresponding
theories of concept meaning), it will help to also consider how each
might apply to a particular example: the familiar philosophical debate
regarding whether free action is compatible with determinism.

3 Pragmatic Conceptual Analysis

I begin with the methodology I call Pragmatic Conceptual Analysis, as I
think it enjoys important advantages over the more familiar methodolo-
gies considered below. Pragmatic Conceptual Analysis proposes that we
‘reverse-engineer’ an existing conceptual scheme to determine how it
works as well as it does so that we might then modify it to more consistently deliver benefits in these ways. Hence, the key desideratum imposed
by Pragmatic Conceptual Analysis is that our explications preserve the
ways in which our applications of pre-existing concepts have regularly
delivered benefits.
While Pragmatic Conceptual Analysis has not yet received as much
attention as the other methodologies I consider, there have been a few
articulations of this general sort of methodology.6 One quite good artic-
ulation is given by the epistemologist Sally Haslanger:

[T]he best way of going about a project of normative epistemology is
first to consider what the point is in having a concept of knowledge:
what work does it, or (better) could it, do for us? And second, to
consider what concept would best accomplish this work. (Haslanger
1999, 467)

I say a great deal elsewhere about the various particular ways in which
one could and should develop this sort of methodology. But we can get
a good sense of the general way in which Pragmatic Conceptual Analysis
works without going into excessive detail.
As an example let us consider a classic philosophical question: is free
action compatible with determinism? The answer to this question depends,
of course, upon which possible actions are to be counted as falling under
our concept FREE ACTION. This is something that Pragmatic Conceptual
Analysis can help determine.
As with other methodologies, Pragmatic Conceptual Analysis may
naturally be divided into two steps. In the first step the desiderata that
will constrain our choice of explications are articulated. For Pragmatic
Conceptual Analysis, the primary desideratum embodies a job description
that outlines the ways in which our use of a concept has regularly delivered benefits. For present purposes, we may define ‘benefit’
as anything the person using the concept in question has practical
reason to pursue.7 I leave it to other philosophers to determine exactly
what the pursuitworthy benefits are, but I presume that there are at
least some uncontroversial examples of these, including many instances
of achieving happiness or satisfaction and many instances of avoiding
pain, injury or death. It is an empirical question how our concepts have
delivered such benefits, and hence it is an empirical question what job
description will be delivered by a reverse engineering analysis of our use
of a shared concept.
For the purpose of illustration, I stipulate an answer to this question
in the case of FREE ACTION (an answer that is a fairly plausible empirical
hypothesis anyway), and we may consider what explication to adopt
if this stipulation turns out to be true.8 Let us suppose that it will turn
out that determinism is true and hence that each human action has a
complete cause in the distant past. But let us also suppose that it will
turn out that there are systematic patterns in what is caused by our
categorization of some actions as ‘free’ and others as ‘not free’. In partic-
ular, let us suppose that the regular way in which categorizing actions
as ‘free’ has yielded benefits is by leading us to reward or punish the
people who perform such actions, which in turn has encouraged people
with well-functioning deliberative systems to perform beneficial actions
and discouraged them from performing harmful actions. And let us
suppose that the regular way in which categorizing actions as ‘not free’
has yielded benefits has been by getting us to seclude and give medical
treatment to people who have a tendency to perform harmful actions
not under the control of well-functioning deliberative systems.
This pattern of benefit delivery embodies a job description that one
might reasonably hope any good explication of FREE ACTION would be
able to perform. To allow us to continue reaping benefits in these ways,
an explication must count as ‘free’ those sorts of actions whose perform-
ance will be well regulated by standard practices of offering praise and
blame (or punishments and rewards), and it must categorize as ‘not free’
those actions whose performance is more effectively regulated by seclu-
sion and medical treatment.
The second step of Pragmatic Conceptual Analysis takes a job descrip-
tion like this and seeks to find an explication that will do optimally well
at fulfilling it. Since we are presuming that determinism is true, this
notion will need to be compatibilist – it must count some actions as free
even if they have complete causes in the distant past. This compatibilist
conclusion does not depend much upon the particular ways in which
use of FREE ACTION delivers benefits. So long as determinism is true (or
close enough to true) and categorizations of actions as ‘free’ and ‘not
free’ are beneficial in regular ways, only compatibilist explications of
FREE ACTION will allow us to continue to achieve benefits in these ways.
Given particular empirical details about how such categorizations
yield benefits, Pragmatic Conceptual Analysis can lead to adoption of a
particular sort of compatibilism. For example, given my empirical stipu-
lations above, the notion that will best fulfil the identified job descrip-
tion might be something like ‘action susceptible to causal control by
the normal human deliberative processes that normally are sensitive to
considerations of rewards and punishments.’
We may define the pragmatic meaning of a concept as the explication
that Pragmatic Conceptual Analysis would deliver for that concept.
(Since there are different ways of working out the details of Pragmatic
Conceptual Analysis, there will likewise be different but closely related
technical notions of pragmatic meaning.) Pragmatic meaning is closely
related to teleo/pragmatic semantic theories that have been defended by
various authors, including William James (1906), Frank Ramsey (1927),
Ruth Millikan (1984), Anthony Appiah (1986), David Papineau (1987),
Fred Dretske (1988), Jamie Whyte (1990) and Simon Blackburn (2005).
In addition, I argue elsewhere (Fisher 2006; n.d. 2, chs. 4–6) that we
should accept success-linked theories of content in general and that,
in particular, we should think of pragmatic meaning as capturing ‘the
meanings’ of our concepts.
Note too how worthwhile it is to use Pragmatic Conceptual Analysis
to identify the pragmatic meaning of a concept. The key desideratum
placed by Pragmatic Conceptual Analysis is that we find explications
that will deliver pursuitworthy benefits if employed. Because of this,
there is a guarantee that Pragmatic Conceptual Analysis will deliver
explications worth employing.9 While some people may resist thinking
that the meaning of a given concept is its pragmatic meaning, these
people still must acknowledge that there is a strong case for stipulating
that, henceforward, we will use a given concept in accord with its prag-
matic meaning.10
My deepest objection to competing conceptually anchored method-
ologies is that they offer no similar guarantees of beneficial results. I
drive home this general concern in Section 7 below. But first let us see
what these other methodologies are.

4 Intuitive Conceptual Analysis

I contrast Pragmatic Conceptual Analysis with two other conceptually
anchored methodologies. I call the first Intuitive Conceptual Analysis. This
familiar philosophical methodology aims to deliver explications that
preserve the truth of intuitions surrounding the concepts being expli-
cated. One staunch defender of this methodology is Frank Jackson:

[H]ow should we identify our ordinary conception? The only possible
answer, I think, is by appeal to what seems to us to be most obvious
and central about free action, determinism, belief, or whatever,
as revealed by our intuitions about possible cases. [ ... ] Intuitions
about how various cases, including various merely possible cases,
are correctly described in terms of free action [ ... ] are precisely what
reveal our ordinary conceptions of free action. (Jackson 1998, 31)

There are different versions of Intuitive Conceptual Analysis corresponding to the different ways in which one might use intuitions to
constrain one’s choice of explication.
One potential point of difference involves the sort of intuitions
to be appealed to. In the quote above, Jackson calls upon intuitions
about whether a concept would apply in various possible cases. But
other versions of Intuitive Conceptual Analysis might also give weight
to intuitions as to which general claims should hold true regarding a
concept. (For example, if we have the strong intuition that all bach-
elors are men, one might take this generalization itself to be something
worth preserving in our explication of BACHELOR.) And some versions of
Intuitive Conceptual Analysis might give special weight to strong intui-
tions that certain things definitely are instances of a given concept. (For
example, one might find it intuitively obvious that this rock is solid and
might take preserving the truth of this intuition as an important desid-
eratum for any analysis of SOLIDITY.)
A second potential point of difference involves the question of how
much weight to give to the intuitions of different people. Some versions
of Intuitive Conceptual Analysis would favour the ‘pre-theoretic’ intui-
tions of ordinary people not in the grip of any theory, while other
versions might favour the intuitions of people who are empirically well
informed, who have significant practical experience or who have spent
years reflecting carefully upon a topic.
Third, different versions of Intuitive Conceptual Analysis might
endorse different ways of resolving conflicts between intuitions. For
example, some versions might allow us to disregard intuitions that can
be ‘explained away’ as stemming from unreliable sources. Some might
also propose ways of giving different amounts of weight to intuitions
that are felt to have different degrees of ‘strength’ or confidence.
Intuitive Conceptual Analysis is a mainstay of recent analytic philoso-
phy.11 This is evidenced by the fantastic menagerie of ‘intuition pumps’
with which all analytic philosophers are painfully familiar, including
trolley problems, Gettier cases, the Chinese Room, Swampman and
Twin Earth.
Intuitive Conceptual Analysis is also closely related to traditional
descriptivist theories of meaning, which hold that the meaning of a person’s
concept is determined by the description(s) that that person (perhaps
tacitly) associates with that concept (Frege 1892, Russell 1905, Strawson
1950). Insofar as we expect intuitions to do a good job of revealing tacit
descriptions, we can expect that some version of Intuitive Conceptual
Analysis would do well at revealing meanings, in that term’s descriptivist
sense.
Most people working in philosophy of mind and language have aban-
doned descriptivism and moved on to either causal/informational or
teleo/pragmatic theories, according to which meaning is determined in
large part by patterns of causal interaction between us and things in
the world.12 The mass exodus from descriptivism bodes poorly for the
hope that Intuitive Conceptual Analysis will be a good way to reveal the
meanings of concepts. Granted, some intuitions may track the semanti-
cally relevant external causal relations. Indeed, insofar as our intuitions
are shaped by practical experience (via either learning or evolution), it
may be especially likely that we will end up having quite good intui-
tions regarding ordinary practically relevant cases. However, Intuitive
Conceptual Analysis has been characterized by its consideration of cases
that are far from ordinary: Swampman, the Chinese Room, TrueTemp,
Frankfurt’s neuroscientist. For such extraordinary cases, there is no
reason to think intuitions will do especially well at tracking truths about
the referents we have latched onto via the sorts of causal interaction
that are deemed semantically relevant by most contemporary semantic
theories.
Furthermore, we now know that folk intuitions often do fail when
extrapolated to extraordinary cases. Despite intuitions to the contrary,
solid objects aren’t impermeable at small scales, water isn’t an indivis-
ible element, species don’t have immutable essences, and space-time
isn’t Cartesian at small scales or high speeds. It would take great hubris
to expect folk philosophical intuitions to fare better than folk intui-
tions surrounding other concepts. Descriptivism offered one potential
basis for such hubris: intuitions would be trustworthy if they reflected
tacit descriptions that determined reference. But once we reject descrip-
tivism, we lose grounds for optimism about the reliability of intuitions
regarding extraordinary cases, and hence we lose grounds for trusting
Intuitive Conceptual Analysis to correctly reveal the meanings of our
concepts.
Even while our best semantic theories give us strong reason to doubt
our intuitions, a great deal of work in analytic philosophy still proceeds
by Intuitive Conceptual Analysis. This is an unstable situation. Sooner
or later some sort of correction will need to be made to resolve it: (a)
philosophers of mind and language will need to reincarnate something
much like descriptivism,13 or (b) analytic philosophers can keep pumping
intuitions but give up their goal of revealing the correct application
conditions of philosophical concepts, or else (c) we will need to shift to
methodologies that better fit our state-of-the-art semantic theories.
Further challenges for Intuitive Conceptual Analysis will become
apparent when we consider how this methodology would apply to the
example of FREE ACTION. Frank Jackson, a leading defender of Intuitive
Conceptual Analysis, consults his own intuitions surrounding FREE
ACTION and concludes,

Speaking for my part, my pre-analytic conception of free action is
one that clashes with determinism. (Jackson 1998, 44)

One might wonder whether Jackson’s incompatibilist intuitions actually match those of whichever people’s intuitions count as relevant on a given version of Intuitive Conceptual
Analysis.14 However, let us suppose for the sake of illustration that
Jackson has applied Intuitive Conceptual Analysis correctly. Even if
many real people’s intuitions differ from Jackson’s, they might just as
easily have been the same as his, and this possibility is enough to illus-
trate how Intuitive Conceptual Analysis may fail to be useful.
Given that determinism is true (or close enough to true), Jackson’s
incompatibilist analysis of FREE ACTION does indeed fail to be useful, for
it lumps all our actions together in a single category: unfree. This fails to
offer many of the useful distinctions that we ordinarily use the concept
FREE ACTION to make – for example, the distinction between the acts of
ordinary vandals and the acts of sleepwalkers.
Jackson, himself recognizing this failing of his incompatibilist anal-
ysis, writes:

[It is] only sensible to seek a different but ‘nearby’ conception that
does, or does near enough, the job we give [to the concept being
analyzed] in governing what we care about, our personal relations,
our social institutions of reward and punishment, and the like, and
which is realized in the world. (Jackson 1998, 45)

Here Jackson admits that Intuitive Conceptual Analysis may lead us
to explications that aren’t very useful and that when this happens,
it is ‘only sensible’ to employ a fallback strategy instead. Jackson’s
fallback strategy is to seek an explication capable of doing the jobs
we give that concept in governing the things we care about. This is a
fairly good characterization of Pragmatic Conceptual Analysis. Hence,
Jackson admits that when push comes to shove – when Intuitive
Conceptual Analysis and Pragmatic Conceptual Analysis disagree – it
is ‘only sensible’ to embrace the guidance of Pragmatic Conceptual
Analysis.15,16
This point is really quite general. We saw above that (given a few
reasonable caveats) Pragmatic Conceptual Analysis guarantees that its
explications will be useful. This is a guarantee that Intuitive Conceptual
Analysis cannot match. Our intuitions have been shaped by (indi-
vidual, cultural, and evolutionary) experience in various situations.
These experiences have probably tended to shape our intuitions so that
they fit fairly well with what has been beneficial in common cases and/
or important ones. But even in these cases our intuitions remain a fallible guide to what is beneficial; they become very fallible indeed
as we attempt to apply them to cases that differ significantly from
the ordinary high-stakes cases they have been most strongly shaped
to deal with.
Since our intuitions probably correlate somewhat well with the ways
in which our concepts are actually useful, our intuitions are a reason-
able starting point for generating plausible hypotheses regarding what
sorts of useful work our concepts are doing. But additional empirical
evidence often reveals that our intuitions have overlooked some ways in
which our concepts have regularly delivered benefits and that they are
significantly mistaken about others. When faced with such discoveries,
Pragmatic Conceptual Analysis and Intuitive Conceptual Analysis point
us in opposite directions. Intuitive Conceptual Analysis proposes that
we enshrine intuitions that we have discovered to be poor guides to
what is beneficial, while Pragmatic Conceptual Analysis proposes that
we abandon these misguided intuitions and instead continue to reap
benefits in the regular ways we have been reaping them.
At such a point, Intuitive Conceptual Analysis calls upon us to forgo
the sorts of benefits that our use of a concept has regularly delivered.
What does it offer in exchange? Only the preservation of pre-existing
intuitions that we now know fail to track what is beneficial. When faced
with a choice between achieving tangible benefits and catering to intui-
tive prejudice, there is strong reason to follow the tangible benefits. As
Jackson himself admitted, when Pragmatic Conceptual Analysis and
Intuitive Conceptual Analysis diverge, it is ‘only sensible’ to follow
Pragmatic Conceptual Analysis.
I have just argued that we should not use our intuitions surrounding
a concept as the final arbiter in determining what explication of that
concept to adopt. But it is worth noting that there are at least three
limited ways in which intuitions might play a central role in philosoph-
ical analysis, even on my view.
First, as just noted, our intuitions surrounding a concept may serve
as a good source for initial hypotheses regarding the usefulness of
that concept, hence as a good starting point for Pragmatic Conceptual
Analysis. But they are just a starting point. When we discover that our
intuitions are mistaken regarding the useful work a concept has been
doing, then, on my view, we no longer have any reason to allow these
intuitions to continue to constrain our theorization.
Second, if we are to have reason to embrace the explications produced
by Pragmatic Conceptual Analysis, we will want to use a version of this
methodology which is defined in terms of a notion of benefit that we
have reason to pursue. For all I know, it may be that intuitions about
what sorts of things are worth pursuing will play a large role in helping
to choose this notion of pursuitworthy benefit. But notice that these
are intuitions about what consequences are worth pursuing rather than
about the concept(s) being analysed (e.g., FREE ACTION). This fact sets this
approach apart from Intuitive Conceptual Analysis.
Third, Pragmatic Conceptual Analysis draws upon empirical accounts
of how various concepts have regularly delivered benefits. In producing
such empirical accounts, we might draw upon the sorts of intuitions
scientists use to determine which empirical generalizations are supported
by empirical evidence. These are quite different from the intuitions
called upon by familiar forms of Intuitive Conceptual Analysis.
Hence, I stake out a fairly moderate position regarding the use of
intuitions in philosophy and allow for at least three limited ways in
which intuitions might be quite useful. However, I maintain that
Intuitive Conceptual Analysis is at best a starting point and that strong
empirical evidence about the actual usefulness of our concepts should
trump our intuitions surrounding those concepts. It is quite reason-
able to suspect that contemporary analytic philosophers have pushed
our intuitions far beyond the limits of their usefulness. Rather than
continue to pump intuitions about fantasy swampmen, esoteric trolley
problems or strange new variants of the Gettier problem, philosophers
would do well instead to seek good empirical evidence regarding the
sorts of useful work that our concepts have actually been doing for us
and then to seek explications that would allow our concepts to do this
work more efficiently.
5 Naturalized Conceptual Analysis

Let us move on to consider another conceptually anchored methodology, one that takes as its conceptual anchor the various ‘paradigm
instances’ of a given concept. The job of the philosopher or scientist is
to go out and discover empirical facts about what (if anything) unifies
these paradigm instances as a single sort of natural phenomenon. One
strong advocate of this methodology is Hilary Kornblith:

We begin, often enough, with obvious cases, even if we do not yet
understand what provides the theoretical unity to the kind we wish
to examine. Understanding what that theoretical unity is is the object
of our study, and it is to be found by careful examination of the
phenomenon, that is, something outside of us, not our concept of
the phenomenon, something inside of us. (Kornblith 2002, 10–11)

I call this methodology Naturalized Conceptual Analysis. Philosophers
commonly call such approaches ‘naturalized’ because they draw heavily
upon empirical findings from natural sciences. Such approaches are
also ‘conceptual analyses’ in that they take pre-existing concepts as
their launching point and aim to arrive at good explications of those
concepts.
Advocates of Naturalized Conceptual Analysis take this methodology
to be a mainstay of empirical science,17 which clearly has delivered
useful analyses for ordinary concepts like FISH, WATER, PLANET, and LIGHT-
NING. Naturalized Conceptual Analysis also enjoys a growing following
in philosophy, including proposed analyses of KNOWLEDGE (Quine 1969;
Kornblith 2002), EMOTION (Griffiths 1997), COLOUR (e.g., Hilbert 1987),
CONSCIOUSNESS (Dennett 1991) and MORAL GOODNESS (e.g., Boyd 1988).
Naturalized Conceptual Analysis pairs naturally with causal and infor-
mational theories of reference, like those proposed by Saul Kripke (1972),
Gareth Evans (1973), Richard Boyd (1988), Jerry Fodor (1990) and Robert
Rupert (1999). Corresponding to the many different particular ways
that one might spell out such theories of reference, there are different
versions of Naturalized Conceptual Analysis.
Different versions of Naturalized Conceptual Analysis might disagree
about what makes something count as a paradigm instance of a concept.
In the passage above, Kornblith suggests that we start with ‘obvious’
cases. This suggests that the paradigm instances are those which intui-
tively strike us as especially clear instances of a given concept.18 Other
proponents of Naturalized Conceptual Analysis take the paradigm
instances to be things that were present at the initial baptism of a
concept or term (Kripke 1972). Others take paradigm instances to be the
things that are a source of information we associate with the concept
(Evans 1973, Boyd 1988). Still others take paradigm instances to be the
things which cause us to token the concept most consistently (Rupert
1999) or most robustly (Fodor 1990).
Different versions of Naturalized Conceptual Analysis might also disa-
gree about what sort of theoretical unity to seek among the paradigm
instances. For any given set of paradigm instances, there will likely be
numerous more-or-less natural kinds which unify most of the instances.
For example, let’s grant for the sake of argument that our shared concept
FREE ACTION somehow designates a set of paradigm instances. The problem
is that these instances will have all kinds of features in common. How
will we know which sort of commonality to look for? We need some
clear specification of what sort of commonality is relevant even to get
in the ballpark of a good analysis. (Devitt and Sterelny [1987] call this
the ‘qua problem’.)
There are several potential ways to deal with this problem. Adding a
set of paradigm counterinstances might rule out many candidate ways
of unifying the paradigm instances. Counting some ways of unifying
paradigm instances as ‘more natural’ than others (cf. Lewis 1983) might
give us a ‘natural’ way to prefer some candidates to others. Or we might
prefer those (‘natural’) commonalities that include as few further instances as possible beyond the given paradigm instances. However, none
of these proposals seems to capture the plausible fact that on the basis
of a given set of paradigm instances, a person might form a concept of
water (i.e., the chemical compound H2O) or a concept of ice (i.e., the
solid phase of H2O) or might even form both these concepts. Perhaps for
this reason, many advocates of Naturalized Conceptual Analysis have
taken it that a given concept provides conceptual anchors not only to a
set of paradigm instances but also to a sortal that specifies what sort or
kind these instances are all supposed to belong to. Two concepts (like
WATER and ICE) with the same paradigm instances might refer to different
natural kinds in virtue of the fact that the two concepts are somehow
associated with different sortals.
The need to associate concepts with sortals presses the advocate of
Naturalized Conceptual Analysis to lean upon one of the above method-
ologies. Should we determine the sortal by consulting someone’s intui-
tions surrounding the given concept? This move would lead to many of
the disadvantages of Intuitive Conceptual Analysis mentioned above.
Or should the sortals be determined in some other way – perhaps even
by something like Pragmatic Conceptual Analysis? Doing this would


help to ensure that Naturalized Conceptual Analysis will be useful but
at the cost of conceding that Pragmatic Conceptual Analysis is a good
guide after all.
There is a deeper problem here, paralleling the deep problem recog-
nized above in Intuitive Conceptual Analysis. Even when we can find a
way to carry Naturalized Conceptual Analysis through to a determinate
conclusion, there is no guarantee that this conclusion will be all that
useful – there is no guarantee that the analysis will latch on to a natural
kind that it is worthwhile to use a shared concept to track. For example,
a great deal of work in epistemology has taken the paradigm instances
of KNOWLEDGE to be abstract cases like ‘I think, therefore I am’ and ‘2 +
3 = 5’. But there is no guarantee that the best account of what these
paradigm cases have in common will be at all useful when applied to our
ordinary beliefs regarding important empirical matters.
Pragmatic Conceptual Analysis does give us a guarantee like this: if
a shared concept delivers regular benefits that make it worth having,
then Pragmatic Conceptual Analysis will at least preserve this usefulness
and often extend it. By contrast, Naturalized Conceptual Analysis might
sometimes take what had been a quite useful concept and have us focus
the application of that concept upon some natural kind that it is not
nearly so useful for us to track.
Jackson’s comments about what is ‘sensible’ apply here, too. When
push comes to shove – when one is forced to choose between preserving
beneficial usage and preserving paradigm instances – it is sensible to
abandon the so-called paradigm instances and instead to accept the
guidance of Pragmatic Conceptual Analysis.

6 Other methodologies

I have considered four general conceptual engineering strategies. Free-
standing theory construction aims only to produce a useful concep-
tual framework, and is not concerned about explicating pre-existing
concepts. Pragmatic Conceptual Analysis aims to preserve beneficial
uses of our pre-existing concepts. Intuitive Conceptual Analysis aims to
preserve intuitions surrounding our concepts. Naturalized Conceptual
Analysis aims to preserve the paradigm cases and sortals associated with
our concepts.
Some philosophical projects clearly employ just one of these meth-
odologies. But more often, philosophers employ some combina-
tion of them. Conceptually anchored projects often introduce some
Meanings and Methodologies 71
free-standing technical terminology to help articulate their analyses.
Naturalized approaches often lean heavily upon intuitive approaches to
specify sortals. Sophisticated advocates of intuition-driven approaches
often allow that some intuitions might license empirical findings to
supplant other intuitions (Lewis 1984; Jackson 1998; Chalmers 2002);
some advocates are also willing to turn to pragmatic approaches when
intuitions fail to be useful (e.g., Jackson 1998; Schmidtz 2006).
In addition to the desiderata placed by these four methodologies and
their hybrids, there might be other desiderata that one might think
conceptual engineers should heed. In particular, there might be other
aspects of pre-existing concepts that one might hope a proposed concep-
tual analysis would preserve. I briefly present the two most plausible
candidates I have encountered.
First, some recent philosophical arguments have attempted to use
various subtle linguistic findings to help guide selection of philosoph-
ical analyses. For example, Jason Stanley (2005) argues that the word
‘know’ fails various subtle linguistic tests for context sensitivity. Stanley
takes this linguistic evidence to militate against accepting a contextu-
alist analysis of KNOWLEDGE.
Second, people are often quite reluctant to accept a proposed explica-
tion which fails to preserve the metaphors they had previously associ-
ated with a concept (e.g., Niyogi 2005). For example, we now know that
oxidizable materials are actually responsible for much of what phlogiston
was once thought to do (e.g., enabling burning and rusting). However,
phlogiston theorists were strongly attached to the metaphor of phlo-
giston as a fundamental sort of fluid released from objects when they
burn or rust. In contrast, oxidizable materials encompass a diverse
assortment of substances that are not released through oxidation but are
instead transformed into oxidized materials. Our contemporary under-
standing of oxidizable materials fits quite poorly with the old meta-
phors for phlogiston, and this might explain some people’s reluctance
to embrace oxidizable material as an explication of PHLOGISTON.
One might take it as a stand-alone desideratum that conceptual
analyses should deliver explications that preserve subtle linguistic
features of analysed concepts or that allow us to continue using familiar
metaphors for our concepts. However, my own inclination is to grant
that, like intuitions and paradigm cases, linguistic cues and associated
metaphors are a mere source of defeasible evidence regarding the ways
in which a concept is successfully used. I have no problem with using
any of these as sources for initial empirical hypotheses regarding the
usefulness of concepts. However, once we get strong empirical evidence
that a concept has regularly been delivering benefits in some other way
than what these cues initially suggest, we will have strong reason to
prefer explications that preserve these beneficial uses to explications
that sacrifice benefits to preserve these other cues. When we discover
that our metaphors or our linguistic practices are in tension with the
continued beneficial usage of our concepts, we should abandon the old
metaphors or linguistic practices, not the beneficial uses.
Similar considerations apply regarding any conceptual anchor one
might dream up. Suppose we are considering a version of Pragmatic
Conceptual Analysis defined in terms of a notion of benefits that we
have reason to pursue and suppose we are comparing this against some
other conceptually anchored methodology – call it M – which includes
other conceptual anchors besides pursuitworthy benefits and, hence,
sometimes offers explications that differ from those of Pragmatic
Conceptual Analysis. By definition, Pragmatic Conceptual Analysis
yields explications that will do optimally well to preserve the regular
ways in which analysed concepts have delivered pursuitworthy benefits.
Since M sometimes offers other explications, M must sometimes call
upon us to sacrifice regular ways of achieving pursuitworthy benefits to
preserve whatever features M takes to determine concept meaning (intu-
itions, metaphors or whatever). But when push comes to shove like this,
Jackson’s advice clearly applies: it is ‘only sensible’ to employ Pragmatic
Conceptual Analysis rather than a competing methodology that would
ask us to sacrifice pursuitworthy benefits in order to achieve something
less worthy of pursuit.19

Notes
1. I think of concepts as mental particulars that play a folder-like information-
coordinating role in cognition. In this, I follow a rich tradition in cognitive
psychology and the philosophy of cognitive science. For a good introduction
to this tradition, see Margolis and Laurence (1999); for a view of concepts
very similar to my own, see Millikan (1998, 2000). This tradition may be
contrasted with an equally rich philosophical tradition that takes concepts
to be abstract entities.
2. My distinction between free-standing and conceptually anchored theory
construction closely mirrors Clark Glymour’s (2004) distinction between
Euclidean and Socratic approaches.
3. It may be difficult to draw a principled line between what is conceptually
anchored and what is free-standing. Even the most ‘free-standing’ of theory
builders usually intends her theories to be linked in some way to pre-existing
concepts, if only to those of observational evidence and simple logical rela-
tions. Such linkages may be taken to shed at least some light upon those
pre-theoretical concepts. On the other side, most ‘conceptually anchored’
theorists are willing to wriggle free of pre-theoretic expectations when doing
so yields enough theoretical usefulness. Even if there are some hard-to-call
borderline cases, many are easy to call, and so this distinction is still a useful
one.
4. For my own views on ‘intensional’ aspects of meaning, see Fisher (2006, n.d.
2, esp. §§1.2, 7.6).
5. For an overview of relevant research, see Laurence and Margolis (1999).
6. Closely related methodologies have been proposed in epistemology by
Edward Craig (1990) and in philosophy of science by James Woodward
(2004). I offer a much more detailed articulation and defence of Pragmatic
Conceptual Analysis in Fisher (2006, n.d. 2) and also survey the relations
between it and other somewhat similar approaches, including William James
(1906) and Rudolf Carnap (1950).
7. Elsewhere I consider various detailed understandings of ‘benefit’ that might
be employed by different versions of Pragmatic Conceptual Analysis and the
relative merits of each. See Fisher (2006, n.d. 2).
8. In other works, I consider ways in which research by experimental philos-
ophers (and other experimentalists) might provide a better-justified job
description for this concept; see Fisher (forthcoming, n.d. 2, ch. 5).
9. This guarantee is subject to several caveats. E.g., it may be that some
concepts – including perhaps PHLOGISTON – just don’t merit continued use,
even after we use Pragmatic Conceptual Analysis to tweak them. When a
concept turns out not to be all that useful, it may be reasonable to abandon it
and adopt another conceptual framework – e.g., that of modern chemistry –
in its place. Such caveats do not weigh against the claim that Pragmatic
Conceptual Analysis generally yields explications worth adopting and gener-
ally does markedly better at this than rival methodologies.
10. I expand on this idea in Fisher (2006, ch. 4, n.d. 1, n.d. 2, ch. 4), where I
argue that Pragmatic Conceptual Analysis has ‘normative authority’ in that
we generally have both practical and epistemic reason to embrace the explica-
tions it delivers.
11. For an alternative point of view, see Cappelen (2012), which argues that
philosophers have relied upon intuitions much less often than many have
thought. I remain unconvinced but will not argue the point here. The primary
purpose here is just to consider potential advantages and disadvantages of an
intuition-driven methodology, regardless of how many people have actually
used it.
12. The exodus from descriptivism was spurred in part by classic examples by
Kripke (1972), Putnam (1973) and Burge (1979). These include examples
where people don’t have in mind descriptions sufficient to pick out their
referents uniquely (e.g., Kripke’s Feynman case, Putnam’s Twin Earth), and
examples where the descriptions people do have in mind don’t even fit their
actual referents (e.g., Kripke’s Gödel/Schmidt case, Burge’s arthritis case).
13. Some contemporary philosophers aim to do just that, including Lewis (1984)
and Chalmers (2002, 2004).
14. For example, many experimental subjects have intuitions that disagree
with Jackson’s (Nahmias et al. 2005, 2006). I discuss the significance of this
research in Fisher (forthcoming, n.d. 2, ch. 5).
15. To be fair, Jackson insists that this application of Pragmatic Conceptual
Analysis will be a revisionary departure from the concept’s original meaning.
Whether Jackson is right depends upon which semantic theory is correct. I
have argued elsewhere (Fisher 2006, ch. 4, n.d. 1, n.d. 2, ch. 4) that we should
embrace a teleo-semantic theory upon which Pragmatic Conceptual Analysis
is semantically conservative, not revisionary.
16. Even while ‘sensibly’ embracing a compatibilist explication of FREE WILL for
ordinary usage, one might still want to coin other concepts (e.g., perhaps
LIBERTARIAN FREE WILL) for other more esoteric uses. I encourage introducing
as many technical concepts as you like into the marketplace of ideas and
seeing how they fare. Concepts explicated by Pragmatic Conceptual Analysis
will generally be robust competitors in this marketplace as they will have
been honed to continue yielding pursuitworthy benefits. But it’s possible
that some alternative concepts without this promising pedigree might find
a way to earn their keep as well. Indeed, I’ve already acknowledged this as a
worthy goal of free-standing theory construction.
17. I think that we would actually do better to consider much normal scientific
practice as involving Pragmatic Conceptual Analysis rather than Naturalized
Conceptual Analysis, but I leave those arguments for elsewhere.
18. Such an appeal to intuition would blur the line between Naturalized
Conceptual Analysis and Intuitive Conceptual Analysis, as many proponents
of sophisticated versions of Intuitive Conceptual Analysis (e.g., Jackson 1998;
Chalmers 2002) are inclined to do.
19. This chapter overlaps heavily with ch. 2 of my book manuscript (Fisher n.d.
2), and both draw upon work in my doctoral dissertation (Fisher 2006). I
am indebted to more people than I can name for helpful comments but
especially Terry Horgan, David Chalmers, Joseph Tolliver, Chris Maloney and
my colleagues at SMU, as well as audiences at the AAPA, the eastern APA,
the SPP, the SSPP, the WCPA, many other combinations of letters, Florida
International University and the Universities of Arizona, British Columbia
and St Andrews. I am also grateful for helpful comments from other contrib-
utors to this New Waves volume.

References
Appiah, A. (1986) ‘Truth Conditions: A Causal Theory’. In Language, Mind and
Logic, Thyssen Seminar Volume, Jeremy Butterfield (ed.), Cambridge: Cambridge
University Press, 25–45.
Blackburn, S. (2005) ‘Success Semantics’. In Ramsey’s Legacy, H. Lillehammer and
D. H. Mellor (eds.), New York: Oxford University Press.
Bovens, L. and Hartmann, S. (2003) Bayesian Epistemology. Oxford: Clarendon
Press.
Boyd, R. (1988) ‘How to be a Moral Realist’. In Essays on Moral Realism, G.
Sayre-McCord (ed.), Ithaca, NY: Cornell University Press, 181–228.
Burge, T. (1979) ‘Individualism and the Mental’. In Studies in Metaphysics,
P. French, T. Uehling and H. Wettstein (eds.), Minneapolis: University of
Minnesota Press.
Cappelen, H. (2012) Philosophy without Intuitions. Oxford: Oxford University
Press.
Carnap, R. (1950) Logical Foundations of Probability. Chicago: University of
Chicago Press.
Chalmers, D. (2002) ‘The Components of Content’. In Philosophy of Mind:
Classical and Contemporary Readings, David J. Chalmers (ed.), New York: Oxford
University Press, 608–633.
Chalmers, D. (2004) ‘The Foundations of Two-Dimensional Semantics’. In
Two-Dimensional Semantics: Foundations and Applications, M. Garcia-Carpintero
and J. Macia (eds.), Oxford: Oxford University Press.
Churchland, P. (1981) ‘Eliminative Materialism and the Propositional Attitudes’.
Journal of Philosophy 78: 67–90.
Craig, E. (1990) Knowledge and the State of Nature. Oxford: Oxford University
Press.
Dennett, D. (1991) Consciousness Explained. Boston: Little, Brown.
Devitt, M. and Sterelny, K. (1987) Language and Reality. Cambridge, MA: MIT
Press.
Dretske, F. (1988) Explaining Behavior. Cambridge, MA: MIT Press.
Evans, G. (1973) ‘The Causal Theory of Names’. In The Philosophy of Language, A.
P. Martinich (ed.), New York: Oxford University Press, 1996.
Fisher, J. (2006) ‘Pragmatic Conceptual Analysis’. Ph.D. diss., University of
Arizona.
Fisher, J. (forthcoming) ‘Pragmatic Experimental Philosophy’. Philosophical
Psychology.
Fisher, J. (n.d. 1) ‘The Authority of Pragmatic Conceptual Analysis.’ In
preparation.
Fisher, J. (n.d. 2) ‘Pragmatic Conceptual Analysis’. Under review.
Fodor, J. (1990) A Theory of Content and Other Essays. Cambridge, MA: MIT Press/
Bradford.
Frege, G. (1892) ‘On Sense and Nominatum’. In The Philosophy of Language, 3rd
edn, A. P. Martinich (ed.), Oxford: Oxford University Press, 1996, 186–198.
Griffiths, P. (1997) What Emotions Really Are: The Problem of Psychological Categories.
Chicago: University of Chicago Press.
Haslanger, S. (1999) ‘What Knowledge Is and What It Ought to Be: Feminist
Values and Normative Epistemology’. Philosophical Perspectives, 13, 459–480.
Hilbert, D. (1987) Color and Color Perception. Stanford, CA: CSLI Publications.
Jackson, F. (1998) From Metaphysics to Ethics: A Defense of Conceptual Analysis. New
York: Oxford University Press.
James, W. (1906) ‘What Pragmatism Means’. www.marxists.org/reference/subject/
philosophy/works/us/james.htm.
Kornblith, H. (2002) Knowledge and Its Place in Nature. Oxford: Oxford University
Press.
Kripke, S. (1972) Naming and Necessity. Cambridge, MA: Harvard University
Press.
Kuhn, T. (1962) The Structure of Scientific Revolutions. Chicago: University of
Chicago Press.
Lewis, D. (1983) ‘New Work for a Theory of Universals’. Australasian Journal of
Philosophy, 61, 343–377.
Lewis, D. (1984) ‘Putnam’s Paradox’. Australasian Journal of Philosophy, 62,
221–236.
Margolis, E., and S. Laurence (eds) (1999) Concepts: Core Readings. Cambridge,
MA: MIT Press.
Millikan, R. (1984) Language, Thought, and Other Biological Categories. Cambridge,
MA: MIT Press.
Millikan, R. (1998) ‘A Common Structure for Concepts of Individuals, Stuffs, and
Basic Kinds: More Mama, More Milk and More Mouse’. Behavioral and Brain
Sciences, 22(1), 55–65.
Millikan, R. (2000) On Clear and Confused Ideas. Cambridge: Cambridge University
Press.
Nahmias, E., S. Morris, T. Nadelhoffer and J. Turner (2005) ‘Surveying Freedom:
Folk Intuitions about Free Will and Moral Responsibility’. Philosophical
Psychology, 18, 561–584.
Nahmias, E., S. Morris, T. Nadelhoffer and J. Turner (2006) ‘Is Incompatibilism
Intuitive?’. Philosophy and Phenomenological Research, 73, 28–53.
Niyogi, S. (2005) ‘Aspects of the logical structure of conceptual analysis.’
Proceedings of the 27th Annual Meeting of the Cognitive Science Society.
Papineau, D. (1987) Reality and Representation. Oxford: Blackwell.
Putnam, H. (1973) ‘Meaning and Reference’. In The Philosophy of Language, A. P.
Martinich (ed.), New York: Oxford University Press, 1996.
Quine, W. V. O. (1969) ‘Epistemology Naturalized’. In Ontological Relativity and
Other Essays. New York: Columbia University Press.
Ramsey, F. (1927) ‘Facts and Propositions’. In The Foundations of Mathematics, and
Other Logical Essays, R. B. Braithwaite (ed.), London: Routledge and Kegan Paul,
1931, 138–155.
Rupert, R. (1999) ‘The Best Test Theory of Extension: First Principle(s)’. Mind &
Language, 14, 321–355.
Russell, B. (1905) ‘On Denoting’. In The Philosophy of Language, 3rd edn, A. P.
Martinich (ed.), Oxford: Oxford University Press, 1996, 199–207.
Schmidtz, D. (2006). Elements of Justice. Cambridge: Cambridge University Press.
Stanley, J. (2005). Knowledge and Practical Interests. Oxford: Oxford University
Press.
Strawson, P. F. (1950). ‘On Referring’. In The Philosophy of Language, 3rd edn, A. P.
Martinich (ed.), Oxford: Oxford University Press, 1996, 215–230.
Whyte, J. (1990) ‘Success Semantics’. Analysis, 50, 149–157.
Woodward, J. (2004) Making Things Happen: A Theory of Causal Explanation. New
York: Oxford University Press.
5
Entangled Externalisms
Mark Sprevak and Jesper Kallestrup

The debate between internalists and externalists is multifaceted, strad-
dling vexed issues in contemporary philosophy. This chapter focuses on
the distinction between content and vehicle as it pertains to the inter-
nalism/externalism debate in philosophy of mind and cognitive science.
Whereas content internalism/externalism seeks to give an account of
what makes mental states have the contents they have rather than some
other contents or no contents at all, vehicle internalism/externalism
seeks to give an account of the processes or mechanisms that enable
mental states with contents to play a causal role in, for example, guiding
behaviour.1 In general, we understand externalism as the negation of
internalism.
What then is content internalism (CI)? The basic idea of CI is that
the contents of mental states are narrow in the sense of supervening on
internal features of individuals who are in those states. By ‘individual’
we henceforth understand a cognitive system, capable of being in
content-bearing mental states. In the following, we are only concerned
with the contents of beliefs and other propositional attitudes. We also
assume throughout that physicalism is true of our world such that those
internal features are physical. We later try to make precise the relevant
notion of supervenience. In contrast, content externalism (CE) is the
view that the contents of mental states are wide in that they fail to
supervene on internal physical features of individuals.
CE is typically motivated by Twin Earth-style cases. Suppose I utter
the sentence ‘that apple is wholesome’ while pointing at the apple – call
it apple1 – in front of me. What I have said is true iff apple1 is whole-
some. Now suppose an internal physical duplicate of me also utters
the sentence ‘that apple is wholesome’ while pointing at a distinct yet
superficially indistinguishable apple – call it apple2 – in front of him.
What my duplicate said is true iff apple2 is wholesome. Given that
the truth-conditional contents of utterances of the same sentence by
internal physical duplicates differ, those contents fail to supervene
on internal physical features. On the assumption that the contents of
beliefs are given by the sentences (and accompanying demonstrative
identifications) used to correctly report those beliefs, then the corre-
sponding mental contents also fail to supervene on such features. In
response, proponents of CI might try to factor out a narrow component,
shared by internal physical duplicates. Perhaps what my duplicate and I
have in common is the belief that the demonstratively identified apple
is wholesome, where the description ‘the demonstratively identified
apple’ picks out different apples in different contexts of utterance. The
dispute between CE and CI is not merely over the content of sentences
containing demonstrative expressions. Friends of CE also hold that the
content of sentences containing natural kind, or artefactual, terms fails
to supervene on internal physical features. Instead, such content is wide
in virtue of being partially individuated by environmental features to do
with the microstructure of the relevant natural kinds or sociolinguistic
facts about language use and speakers’ deferential dispositions. The same
is supposedly true of mental content. Friends of CI have devised various
strategies for resisting both claims, which will not detain us here.2
How about vehicle internalism (VI)? The vehicle of content is the
physical item that has, or expresses, that content – for example, a
sentence, if we talk about linguistic content, or some piece of cognitive
architecture, if we talk about mental content. The basic idea of VI is that
an individual’s mental processing is brain- or at least body-bound; cogni-
tive processes and mental states are located inside the skin and skull of
individuals. One can get an intuitive grip on VI by thinking of the mind,
roughly speaking, as a sensation-cognition-action sandwich.3 Cognition
is the ‘filling’ of this sandwich: cognition takes place after sensory input
and before motor output. Since sensation and motor activity occur at
bodily interfaces and cognition occurs between sensation and motor
activity, it appears that cognition must occur inside the body or, rather,
the physical processes that correspond to cognition must lie inside the
body. In contrast, vehicle externalism (VE) claims that human cogni-
tion is neither brain- nor body-bound: our cognitive processes ‘extend’
outside the human body to include objects and processes in the external
environment.
VE is motivated by a range of arguments. One argument, the extend-
ed-functionalism argument, begins with the widely held claim that
functional structure is the essential feature of cognition. What makes a
physical process a cognitive process – say, of deductive reasoning, induc-
tive reasoning or word association – is its informational states and the
way in which those states are manipulated in the process. If a system
has a mechanism with the right informational states and the right
functional structure, that system counts as having the relevant cogni-
tive process. This holds no matter what the states are made out of or
where they occur. Extended functionalists argue that the requirements
for cognition to occur are met, not only by human neural activity (as
functionalists have claimed since the 1960s), but also by the conjunc-
tion of that neural activity and the use of external resources.4
To fix ideas, consider the well-trodden example of Otto. Imagine that
Otto has a mild form of Alzheimer’s and he always carries a notebook
with him. When Otto needs to store new information, he always writes
it down in his notebook, and when he needs to recall information, he
always looks it up in his notebook. Advocates of extended functionalism
argue that Otto’s notebook, if used in a sufficiently reliable way, plays
the same functional role in Otto’s mental life as neural memory does for
Otto’s healthy counterpart, Inga. In Inga’s case, the functional require-
ments of memory are fulfilled by her brain activity alone; in Otto’s case,
those requirements are fulfilled by the joint operation of Otto’s brain,
body and notebook. Otto’s storage and recall of information from his
notebook is, by functionalist lights, a case of extended cognition. In
response, proponents of VI object that there are functional differences
between Otto and Inga that show that Otto and his notebook do not
fulfil the functionalist requirements for memory. Typically, advocates
of VI draw attention to fine-grained functional differences, such as the
precise shape of Inga’s reaction times when recalling information. They
argue that these differences involve essential, rather than accidental,
properties of memory. Defenders of VE respond that making these fine-
grained properties essential to cognitive status commits us to a form
of chauvinism about mental life that functionalism was designed to
avoid.5
In this chapter, we are not concerned to pronounce judgment on the
merits of VI versus VE or CI versus CE considered in isolation. Rather, we
are interested in whether, on the one hand, taking sides in the dispute
over CI and CE implies a commitment to VI or VE, respectively, and
on the other hand, whether taking sides in the dispute over VI and VE
implies a commitment to CI or CE, respectively. Our primary target is
the principle:

INDEPENDENCE
CE and VE are distinct claims that can be accepted or rejected
independently.
INDEPENDENCE has generally been assumed to be true by all parties in the
disputes above. For example, Rupert, a prominent critic of VE, writes,
‘Content externalism and [VE] are distinct, though mutually consistent,
theses: neither [VE] nor its negation follows from content externalism,
and [VE] does not entail content externalism. ... I treat [VE] independ-
ently of the sort of issues normally addressed in discussions of content
externalism’ (Rupert 2004, 397). Clark, one of the main proponents of
VE, agrees: ‘In [our original] paper, we showed ... why [VE] was orthog-
onal to the more familiar Putnam-Burge style externalism’ (Clark 2008,
78).6 In this chapter, we challenge this received view by arguing that
INDEPENDENCE is not straightforwardly true. The relationship between,
on the one hand, CI and CE and, on the other, VI and VE is more
complex than previously suspected. We gestured to VE above, but it
turns out to be far from clear how to state VE precisely. Depending on
how VE is cashed out, INDEPENDENCE may be either true or false. In the
following, we explore some of the intriguing dependencies between VE
and CE.
Bear in mind that any branch of externalism is defined as the nega-
tion of internalism. It thus follows that if CE and VE are independent,
then CI and VI are also independent, and vice versa. Correspondingly,
if CI and VI are not independent, then neither are CE and VE, and vice
versa. Note, for the record, that we henceforth consider VI/VE and CI/
CE claims that could be variously made about one’s cognitive, conscious
and/or mental life. We also consider them claims that could be vari-
ously made about states or processes. A final point to note is that we do
not take a stand on precisely where the boundary between the internal
and external lies for either VI/VE or CI/CE. We are neutral, for example,
about whether ‘internal’ includes the body, all the nervous system, only
the central nervous system or only the brain. Our question is, given a
choice for drawing the boundary, what is the relationship between VI/
VE and CI/CE?

1 Content externalism

Content internalism (CI) and content externalism (CE) make incom-
patible claims about the individuation of those mental properties or
states which carry content. Individuation is about what makes some-
thing what it is. The basic idea of CI is that such contentful properties
or states are individuated narrowly, whereas CE takes them to be indi-
viduated widely. Consider being a footprint. This is a wide property,
because a certain indentation in the sand is a footprint only if caused
by a foot. In contrast, the property of being a foot-shaped imprint is
narrow in that any intrinsic duplicate of a footprint is a foot-shaped
imprint, even if not caused by a foot. Note that both are properties of
the sand (or configurations of grains of sand). In particular, the fact that
some properties are individuated in virtue of their causal origin does not
mean they are properties of those causes. In the case of mental prop-
erties or states with wide content, individuation is about patterns of
causal relationships. According to CE, an individual can be in a mental
state with content only if she sustains appropriate causal relations with
her external physical or social environment. The claim is not that every
single occurrence of a wide-content mental state has to be caused by
certain environmental features. That would be to confuse causation
with individuation. Some philosophers use a slightly different termi-
nology to draw essentially the same distinction. Thus, Burge (2010)
defines what he calls ‘anti-individualism’ as the view that ‘the natures
of many mental states constitutively depend on relations between a
subject matter beyond the individual and the individual that has the
mental states’ (61). That should be distinguished from the claim that
the occurrence of a particular mental state causally depends on a subject
matter beyond the individual. CI denies that individuals need sustain
particular causal relations with their social or physical environment in
order to be in a mental state with content. According to CI, such states
are individuated solely in terms of features that do not extend into the
external environment.
Claims about narrow and wide individuation of contentful mental
states are typically cashed out in terms of supervenience. Thus, CI is
the claim that the contents of mental states are narrow in the sense
of supervening on internal features of individuals who are in those
states. Assuming that such states are individuated by their contents, CI
amounts to the claim that those states themselves supervene on internal
features of individuals. So any two internally identical individuals will
also be in the same content-bearing mental states. In contrast, CE is
the view that content-bearing mental states are wide in that they fail to
supervene on internal features of individuals. Instead, they supervene
on the conjunction of such internal features and external features of the
individual’s social or physical environment. So any two internally iden-
tical individuals need not be in the same content-bearing mental states
if those environmental features are relevantly different.
So far, content internalism (CI) has been understood to be making a
claim about the individuation of content-bearing mental states in terms
of internal physical features of individuals who are in those states or,
alternatively, about the constitutive dependency of such states on such
internal features. Content externalism (CE) would then be the negation
of those claims. However, the two notions of individuation and consti-
tutive dependency are not exhaustive. For instance, as we see later, CI
and CE can also be characterized in terms of distinct notions of (wide)
realization. However, what they all have in common is a commitment
to specific supervenience claims. We can thus understand such claims
as respective minimal definitions of CI and CE. Consider, then, the
following way of cashing out CI in terms of supervenience relations:

SUPERVENIENCE
Content-bearing mental states supervene on internal physical features
of individuals who are in those states.

But how exactly should the key notions of ‘supervenience’ and ‘internal’
be understood? Stalnaker (1989) and Jackson and Pettit (1993) emphasize
that narrow content should be understood as content shared between
internal physical duplicates who occupy the same world: an intrinsic
physical duplicate of me need not share my mental contents if located
in a world with deviant laws of nature or linguistic practices. To use the
analogy above, an intrinsic physical duplicate of a foot-shaped imprint
in our world is not itself a foot-shaped imprint if located in a possible
world where feet have abnormal shapes. If there is a viable notion of
narrow content, it had better be intraworld narrow rather than interworld
narrow. So the pertinent notion of supervenience should be weak (indi-
vidual) supervenience, roughly:

SUPERVENIENCE*
States with mental content weakly supervene on internal physical
features P iff necessarily, if individual I1 is in a state S with content M,
then there are some P such that I1 has P and every other individual I2
that has P is in state S with M.

Here, internal physical duplicates I1 and I2 are content duplicates only if
located in the same possible world. Consequently, CE should be under-
stood in terms of failure of such a weak (individual) supervenience claim.
Put more positively, CE has it that the supervenience base for states with
mental content includes external physical features.
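The weak (individual) supervenience claim of SUPERVENIENCE* can be put in standard modal notation. The following is a sketch in our own shorthand, not the authors': S(x, M) abbreviates 'x is in state S with content M', and Int(P) abbreviates 'P is an internal physical feature'.

```latex
\Box\,\forall I_1 \Bigl[\, S(I_1, M) \;\rightarrow\;
  \exists P \bigl(\, \mathrm{Int}(P) \,\wedge\, P(I_1) \,\wedge\,
    \forall I_2 \bigl(\, P(I_2) \rightarrow S(I_2, M) \,\bigr) \bigr) \Bigr]
```

Because both individual variables are bound within the scope of a single necessity operator, I1 and I2 are always evaluated at the same world; this is what makes the supervenience weak (intraworld) rather than strong (interworld).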
Entangled Externalisms 83

2 Vehicle externalism

Vehicle externalism (VE) appears, on its face, to make a clear and
surprising claim about the nature of the mind: the mechanisms of
human cognition extend outside the brain and head. On reflection,
however, it is not clear exactly what is meant by this claim or whether
it really involves a departure from traditional thinking about the mind.7
We need a precise formulation of VE. Below, we review four ways of
formulating VE. As will be seen, these are not equivalent; they interact
in different ways with CI/CE, and they result in different truth values for
INDEPENDENCE. The four versions of VE we consider are not meant to be
exhaustive, but they do represent some of the principal ways in which
VE has been understood to date.
The first proposal for stating VE takes its cue from the original descrip-
tion by Clark and Chalmers (1998) of VE as ‘active externalism’:

ACTIVE
VE is true iff an external resource is active: the resource is coupled
to the agent by a two-way causal loop such that it plays an action-
guiding role for the agent in the here and now.

ACTIVE cashes out VE in terms of the presence of a two-way causal loop
between the agent and the external resource, and characteristic behav-
ioural consequences for the agent with changes to that external resource.
Let us look at these conditions more closely.
The first condition requires that the agent’s internal states not be
mere causal subjects of the external resource; the agent should be able
to modify the resource by causal means. This provides a first contrast
between VE and CE. CE, unlike VE, permits external resources to be mere
causes for agents. An external resource relevant to CE may lie beyond
the agent’s power to control; for example, the distal and historical water
samples described by Putnam (1975) affect the content of an agent’s
‘water’ thoughts even though the agent cannot change or causally affect
those samples.
Second, VE requires that the external resource guide the agent’s action
in the here and now. The relevant sense of ‘action’ is non-intentional;
‘action’ means something like bodily movement. This provides a second
contrast between VE and CE. A characteristic of CE is that changes
to an external resource – for instance, swapping H2O for XYZ – may
change an agent’s intentional content but do not change an agent’s
(non-intentionally described) action; the agent would undergo exactly
the same bodily movements in both situations.8 In contrast, VE requires
that changes to, or interventions on, an external resource produce char-
acteristic changes in the agent’s (non-intentionally described) behav-
iour. The effect of these external interventions in a case of VE should be
patterned on the effects in a case of neural intervention: ‘If we remove
the external component the system’s behavioural competence will drop,
just as it would if we removed part of its brain’ (Clark and Chalmers
1998, 8–9).
The second formulation specifies VE in terms of our explanatory
commitments:

EXPLANATORY
VE is true iff an external resource is explanatorily ineliminable: one is
unable to explain the existence or character of one’s mental state/
process without making reference to that resource.

Noë uses this formulation of VE. Noë’s particular concern is human
perceptual experience. He claims that the character of our perceptual
experience cannot be adequately explained by neural activity alone; one
has also to consider how the brain interacts with the world via bodily
knowledge of sensorimotor contingencies. Noë claims that the brain,
body and world feature in an explanation of perceptual experience:

I argue that we have reason to believe that the substrates of experi-
ence – whatever they are, wherever they are – must be explanatory
substrates; I argue that the substrates of experience are extended
because it is only in terms of non-neural features that we can explain
how experience has the character that it does. (Noë 2007, 459)

Even if one disagrees with Noë’s claim about perceptual experience, one
may nevertheless find EXPLANATORY appealing as a way of stating VE.
EXPLANATORY suggests that the fortunes of VE are tied to the success or
failure of various explanations of mental phenomena. If our explana-
tion of the mind turns out to appeal to extraneural elements, VE is true;
otherwise, VE is false. EXPLANATORY is different from ACTIVE. Suppose, for
the sake of argument, that reference to an external resource is inelimi-
nable from the explanation of the character of an agent’s mental life;
that is no guarantee that the same resource also plays an ‘active’ role for
the agent concerned. The resource may be a cause rather than an effect
for the agent, and intervening on the resource may fail to change the
agent’s behaviour in the here and now.
A third formulation of VE, suggested by Block (2005), uses the notion
of a minimal supervenience base:

MIN-SUPERVENIENCE
VE is true iff an external resource is part of the minimal supervenience
base for that mental state/process.

The minimal supervenience base of a mental state/process is the minimal
physical activity needed for that mental state/process to occur. Brain acti-
vation of some sort is part of the minimal supervenience base of all our
mental processes/states. If there were no brain activation, there would
be no mental processes or states. The question concerning VE is what
more, if anything, than brain activation is required for human mental
states/processes to occur. MIN-SUPERVENIENCE identifies VE with the claim
that external resources feature in this minimal supervenience base. Note
that the ‘minimal’ condition is necessary; otherwise VE would be trivi-
ally true. If brain activity alone were sufficient to produce one’s mental
states, then brain activity plus activity in an external resource would
also be sufficient. For VE to be true, the external resource must play a
non-redundant role in the relevant supervenience base. MIN-SUPERVENI-
ENCE differs from EXPLANATORY. There is no reason why explanation of the
existence or character of a mental state/process should make inelimi-
nable reference to everything in its minimal supervenience base; indeed,
such an explanation is likely to be too detailed to be informative. MIN-
SUPERVENIENCE also differs from ACTIVE. Even if an external resource lies
inside the minimal supervenience base, that does not guarantee that the
resource plays a suitably active role for the agent; many of the neural
elements of an agent fail to satisfy ACTIVE’s conditions despite being in
the minimal supervenience base.
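The non-redundancy point turns on the monotonicity of sufficiency: a sufficient condition remains sufficient under arbitrary conjunction, so without the minimality requirement any external addition to a sufficient internal base would trivially verify VE. In our shorthand (not the authors'), with B ⇒ S read as 'B is metaphysically sufficient for mental state/process S':

```latex
\text{If } B \Rightarrow S \text{, then } (B \wedge E) \Rightarrow S \ \text{for any } E.
\qquad
B \text{ is minimal for } S \;\text{iff}\; B \Rightarrow S \ \text{and}\
  \neg\exists B' \subsetneq B\, (B' \Rightarrow S).
```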
A fourth way of cashing out VE uses the notion of realization:

REALIZATION
VE is true iff a mental state/process of an agent is realized by the
conjunction of the agent’s neural activity and an external resource.

REALIZATION identifies VE with the claim that human mental states/proc-
esses are realized, not by brain activity alone, but by the conjunction
of brain activity and activity in some external environmental resource.
REALIZATION has obvious affinity with MIN-SUPERVENIENCE, but it makes
a stronger claim. REALIZATION requires not only supervenience on the
external resource but also that a particular relation obtain between the
mental state/process and that resource – namely, realization. Precisely
what this amounts to depends on one’s theory of realization. REALIZATION
appears particularly apt as a way of understanding VE if one favours
extended functionalist arguments concerning VE, since those argu-
ments issue directly in conclusions about the realizers of mental states.
REALIZATION differs from EXPLANATORY. The explanation of the existence
or character of a mental state/process need not appeal to all, or indeed
appeal to only, the realizers of that mental state/process. REALIZATION also
differs from ACTIVE. A realizer need not play a suitably active role in the
agent’s mental life: it need not be subject to the agent’s causal control,
and interventions on the realizer may not change the agent’s behaviour
in the here and now.
Now that CE and a number of versions of VE are in focus, let us assess
whether CE and VE are independent.

3 Assessing INDEPENDENCE

Let us consider how INDEPENDENCE fares under the different formulations


of VE.
First, ACTIVE. As we saw above, ACTIVE provides two contrasts between
CE and VE: an external resource should be under the agent’s causal
control for VE but not CE, and changes to an external resource should
produce characteristic behavioural changes in the agent in the here and
now for VE but not CE. Either of these two conditions entails that a case
of CE need not be a case of VE. The converse claim – that VE fails to
entail CE – also appears to be true under ACTIVE. Even if VE’s conditions
are met, mental content may still weakly supervene on internal features
of the agent. VE does not entail the counterfactual conditional that if
one were to keep the agent’s internal state fixed and change the external
resource in appropriate ways, the agent’s mental content would change.
Indeed, the only counterfactuals concerning changes in the external
resource entailed by VE concern cases in which the internal physical
state of the agent is not kept fixed, since the agent’s internal physical
state must change if her bodily behaviour changes. Hence, VE does not
entail CE. This is sufficient to establish INDEPENDENCE. An ACTIVE reading
of VE is almost certainly behind the claim of Clark and Chalmers (1998)
that VE and CE are logically distinct forms of externalism.
ACTIVE allows us to assert that CE and VE are logically independent, but
there is a problem with ACTIVE as a general strategy for defending INDE-
PENDENCE. The problem is that ACTIVE is widely regarded as an inadequate
way of formulating VE. As we saw in Section 2, proponents of other
formulations of VE hold that ACTIVE places overly demanding conditions
on VE. There seems
no reason why an external resource cannot be part of a cognitive mecha-
nism even if that resource only plays the role of a cause, or if changes to
that resource fail to produce behavioural changes in the here and now for
the agent. Many of our neural resources fail to satisfy ACTIVE’s stringent
conditions despite playing a role in our cognitive mechanisms. The Parity
Principle – a key claim in many arguments for VE – forbids that external
and internal resources be judged by different standards when deciding their
cognitive status.9 If one accepts the Parity Principle, ACTIVE cannot be a
viable formulation of VE. ACTIVE also suffers from the problem of being
too weak as a formulation of VE. Many external resources that advocates
of VE do not intend as instances of VE satisfy ACTIVE’s conditions. For
example, the current state of an agent’s clothes – whether they are dirty,
clean, warm, cold, dry, wet and so on – stands in a two-way causal rela-
tion with the agent – the agent will change the state of her clothes if they
are in undesirable condition – and interventions that affect the agent’s
clothes will guide the agent’s behaviour in the here and now: spilling
ink on the clothes will produce an immediate behavioural response. But
just because the agent stands in a two-way causal relation to her clothes
which guides her behaviour in the here and now does not mean that one
has a case of extended cognition.
Second, EXPLANATORY. According to EXPLANATORY, VE is true just in case
one is unable to explain the existence or character of a mental state/
process without making reference to an external resource. EXPLANATORY
supports one half of INDEPENDENCE by blocking entailment from VE to
CE. That reference to an external resource is ineliminable from our
explanation does not entail that CE is satisfied, as it does not guarantee
that an agent’s mental content fails to weakly supervene on her internal
features. Suppose that reference to an external resource, E, is inelimi-
nable from the explanation of the character of agent A’s mental life.
It could be that E plays no role in individuating A’s mental content. E
may be ineliminable to explain some other, non-content-individuating,
aspect of the character of A’s mental life, such as the way in which A
processes states with content. Two internal duplicates, A and A*, could
have the same mental content and differ in other aspects of their mental
lives. Therefore, it is possible for VE to be true without CE being true.
This would only be threatened if one had independent reason to think
that the identity of an agent’s mental content depends on E, or on the
aspect of the agent’s mental life to which VE pertains. This may happen
if, for example, one understands VE as pertaining to how the agent proc-
esses mental content and combines this with a version of inferential role
semantics.
However, EXPLANATORY fails to block entailment in the other direction.
An agent’s mental content is an important part of her mental life. If two
agents differ in mental content, the character of their mental lives should
be explained in different ways. Suppose that agent A has a belief about
water. According to CE, in order for A to have this belief, and in order
for that belief to be about water rather than twater, A needs to stand in
an appropriate causal relation to external instances of water. An internal
duplicate of A who lacks these causal relations would have a mental life
with different mental content, or no equivalent mental content at all.10
In order to explain the particular character of A’s mental life, one needs
to make reference to resources outside A – namely, to external water
samples. But if reference to an external resource is ineliminable to the
explanation of the character of an agent’s mental state, then VE is true.
Under EXPLANATORY, CE entails VE, and INDEPENDENCE fails.
Third, MIN-SUPERVENIENCE. According to MIN-SUPERVENIENCE, VE is true
just in case the minimal supervenience base of an agent’s mental life
includes an external resource. Like EXPLANATORY, MIN-SUPERVENIENCE
supports INDEPENDENCE in one direction but not the other. MIN-SUPERVENI-
ENCE blocks entailment from VE to CE. If an external resource, E, is part
of the minimal supervenience base, that does not entail that E plays a
role in individuating the agent’s mental content. E may be required in
the supervenience base to fix other aspects of the agent’s mental life,
such as how she processes her mental content. Mental content can
weakly supervene on internal features even if other aspects of mental
life do not. Therefore, VE does not entail CE. This would only be threat-
ened if one had reason to think that an agent’s mental content is indi-
viduated by E, or by the aspect of the agent’s mental life to which VE
pertains. Certain forms of inferential role semantics may offer this. A
more serious problem is that MIN-SUPERVENIENCE fails to block entailment
from CE to VE. CE is understood as failure of mental content to weakly
supervene on internal features of the agent. Mental content is one aspect
of the agent’s mental life. If CE is true, then an external resource is in the
minimal supervenience base of the agent’s mental life. Hence, VE is true.
Therefore, INDEPENDENCE fails.
Can one somehow disentangle VE and CE in the supervenience base
and save INDEPENDENCE? Intuitively speaking, VE and CE seem to make
claims about distinct aspects of the supervenience base. CE is a claim
about the supervenience base for content; VE is a claim about the super-
venience base for the vehicles that represent that content. The problem,
as we saw above, is that SUPERVENIENCE* and MIN-SUPERVENIENCE fail to
reproduce this distinction. The challenge is to distinguish between those
physical features of the supervenience base that constitute the external
vehicle and those that individuate content. We call this the Demarcation
Problem.
In response to the Demarcation Problem, it might be suggested that
we can separate the two aspects of the supervenience base if the notion
of minimality in MIN-SUPERVENIENCE is understood in terms of being a
metaphysically necessary part of a metaphysically sufficient condi-
tion. For example, Block (2005) suggests that a minimal supervenience
base should be understood as a Mackie-style INUS condition.11 But an
INUS rendition of VE represents no progress in solving the Demarcation
Problem. The reason is that CE is cashed out in much the same way.
So far, we have emphasized that content externalists include external
features as part of the supervenience base for states with mental content.
But typically they do not exclude from that supervenience base all
internal features. So according to content externalists, a state S with
mental content M weakly supervenes on a conjunction of internal
physical features Pint and external physical features Pext. On their view, it
follows that the conjunction Pint & Pext is a sufficient (but unnecessary)
condition for state S with content M to occur. That is exactly what the
second part of SUPERVENIENCE* says. In fact, since Pint & Pext cannot be a
causal condition on the obtaining of state S, it is natural to view that
conjunction as a metaphysical condition. Further, since such conjunc-
tions have their conjuncts essentially, hence with metaphysical neces-
sity, Pext will then be a metaphysically necessary part of a metaphysically
sufficient condition for S.
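Schematically, then, the content externalist's supervenience claim already exhibits INUS structure. In our notation (not the authors'), with ⇒ again read as metaphysical sufficiency and Pint and Pext as in the text:

```latex
(P_{\mathrm{int}} \wedge P_{\mathrm{ext}}) \Rightarrow S, \qquad
P_{\mathrm{int}} \not\Rightarrow S, \qquad
P_{\mathrm{ext}} \not\Rightarrow S, \qquad
S \not\Rightarrow (P_{\mathrm{int}} \wedge P_{\mathrm{ext}})
```

The first and last conjuncts make the conjunction a sufficient but unnecessary condition for S; the middle two make Pext a non-redundant but individually insufficient part of it. Since Block's INUS rendition of VE has exactly the same logical form, it supplies no criterion for telling the two apart.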
The upshot is that if the vehicle externalist opts for Block’s INUS-style
formulation of VE, she faces the problem that CE is also formulated in
INUS-style terms. The Demarcation Problem arises again: how to distin-
guish those features that constitute the external vehicle from those that
individuate wide content. The INUS-style proposal gives no resources to
draw this distinction. In particular, whatever external physical features
form a metaphysically necessary part of a metaphysically sufficient
condition for a state with mental content to exist are bound to include
features that form a metaphysically necessary part of a metaphysically
sufficient condition for the external vehicle that carries the content of
that state. For if the features necessary for the agent to have that content
are missing, then the agent’s processing of vehicles with that content
would simply not occur.
We do not pursue this particular line any further. Instead, we argue
that, more surprisingly, the Demarcation Problem afflicts someone who
endorses both CI and VE. At first blush, the Demarcation Problem does
not seem to arise for this particular combination of views. After all, if
we draw the internal/external distinction around the skin and skull,
it looks as if the skin/skull boundary could do useful work in distin-
guishing the relevant features of the supervenience base. We said above
that CI involves the claim that mental content weakly supervenes on
internal physical features. To say that a physical feature is internal to
some individual is to say that it is located inside the skin and skull of
that individual. In contrast, VE says that an external resource is part of
the minimal supervenience base for the mental state/process in ques-
tion. To say that a resource is external to some individual is to say that
it resides outside the boundary of the individual’s skin and skull. So
it looks like a friend of CI and VE can avail herself of the skin/skull
boundary to solve the Demarcation Problem: the physical features that
play a role in individuating the content of mental states are internal to
the individual, but physical features pertaining to the vehicles of those
mental states include features external to the individual.
While the foregoing looks initially promising, a problem arises.
According to vehicle internalism (VI), the internal/external boundary can safely be drawn
around the skin and skull, but once VE is accepted, the boundary
between the cognitive system and the external environment may be
revised to include whatever external resource is in play – notebook, iPhone or
what have you – as an integral part of the cognitive system. Importantly,
this consequence is explicitly accepted by content internalists. Here are
two illustrative passages from Chalmers and Jackson, who also both
endorse at least the possibility of cognitive or mental extension:

It may even be that in certain cases, epistemic [narrow] content can itself
be constituted by an organism’s proximal environment, in cases where
the proximal environment is regarded as part of the cognitive system: if a
subject’s notebook is taken to be part of a subject’s memory, for example
(see Clark and Chalmers 1998). Here, epistemic content remains internal
to a cognitive system; it is just that the skin is not a God-given boundary
of a cognitive system. (Chalmers 2002, fn 22)
... the live issue, and the issue on the table here, is whether or not
duplicates from the skin, doppelgangers, in our world might differ in
belief by virtue of a difference in their environment. In worlds where
people think with major assistance from machines that they plug
their brains into, doppelgangers will differ in what they believe (the
skin will not be the pertinent boundary). (Jackson 2003, 57)

The point is that once the internal/external distinction is redrawn to
reflect the cognitive or mental extension of the original skin-and-skull-
bound individual, the friend of CI and VE can no longer claim that
the physical features that play a role in individuating the content of
mental states are distinctively internal to the individual who is in those
states or that the physical features that pertain to vehicle externalism are
distinctively external to the individual who is in those states. The skin
and skull no longer constitute the pertinent boundary. CI will instead
assert that the content of mental states supervenes on physical features
of individuals that are inside the extended cognitive system – the system
comprising the biological organism plus whatever augmenting tech-
nological devices serve to extend the mind. But thus understood, the
physical features that pertain to the vehicles and the physical features
that pertain to the content will both count as internal to the individual
whose states they are. Consequently, the Demarcation Problem reap-
pears as a concern about how to separate the internal physical features
that play a role in individuating the content of mental states from those
internal features on which the vehicles supervene. Once the internal/
external distinction is no longer of any avail, it looks as if CI and VE are
no better placed than CE and VE when it comes to responding to the
Demarcation Problem.12
Finally, let us assess INDEPENDENCE under the REALIZATION formulation of
VE. According to REALIZATION, VE is true just in case one’s mental states/
processes are realized by the conjunction of one’s neural activity and
an environmental resource. As mentioned in Section 2, the distinctive
content of REALIZATION hinges on one’s notion of realization. If REALIZA-
TION is to do better than MIN-SUPERVENIENCE in establishing INDEPENDENCE,
it must impose more constraints than MIN-SUPERVENIENCE. What might
those be? One obvious strategy is to employ Shoemaker’s (1984; 2007)
distinction between core and total realizers and its
subsequent elaboration by Wilson (2001) into narrow, wide and radically
wide realization. Shoemaker draws the initial distinction as follows:

A total realizer of a property will be a property whose instantiation is
sufficient for the instantiation of that property. A core realizer will be
a property whose instantiation is a salient part of a total instantiation
of it. (Shoemaker 2007, 21)

Wilson operates with a trichotomy of core realizations, non-core parts
of total realizations, and background conditions. A total realization of a
realized property is constituted by a core realization plus the non-core part
of the total realization. The core realizer of a property is the specific part
of the physical system most readily identified as playing a causal role in
producing or sustaining the realized property – the role which defines
that property. The non-core realizer is the part of the system which needs
to be activated if the core realizer is to play the causal role in question.
The background conditions pertain to general features beyond the system
necessary for its existence and functioning. The total realizer of a prop-
erty is then a property of the system, containing any given core realiza-
tion as a proper part, that is metaphysically sufficient for that property.
While the realized property is one that an individual has, the system need
not be identical to that individual. So we must distinguish between the
bearer of the realized property and the system whose complete states
comprise the total realizer.
Wilson proceeds to define a narrow realization as a total realization
whose non-core part is located within the individual who has the real-
ized property. In contrast, a wide realization is a total realization whose
non-core part is not located entirely within the individual who has the
realized property. Finally, a radically wide realization is a wide realization
whose core part is not located entirely within the individual who has
the realized property.
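Wilson's trichotomy can be summarized as follows, in our own shorthand (not Wilson's): C is the core part and N the non-core part of the total realization, and ⊑ I reads 'located entirely within the individual I who has the realized property'.

```latex
\begin{aligned}
\text{narrow realization:} &\quad N \sqsubseteq I\\
\text{wide realization:} &\quad N \not\sqsubseteq I\\
\text{radically wide realization:} &\quad N \not\sqsubseteq I \ \text{and}\ C \not\sqsubseteq I
\end{aligned}
```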
How are we to understand the contrast between CE and VE using
this framework? On the one hand, CE appears to be best characterized
as a case of wide realization. This fits neatly with the definition of CE
offered in terms of weak supervenience. A case of wide realization is
one in which weak supervenience on internal features of an individual
fails. Another way in which weak supervenience could fail, of course, is
by radically wide realization. And we saw above that both CE and VE
violate weak supervenience. What we wanted was to distinguish between
different ways in which weak supervenience could fail. This appears to
be precisely what the contrast between wide and radically wide realiza-
tion provides. Background conditions appear to offer yet another way
in which weak supervenience could fail. However, as seen in Section
1, although background conditions violate supervenience on the indi-
vidual, they do not violate weak supervenience. An intrinsic duplicate of
an individual who is in some narrow content mental state need not be
a narrow content duplicate if she is in a possible world where different
background conditions obtain. So we have:

REALIZATION-CE
CE is true iff the property of having a content-bearing mental state/
process is widely realized by the individual’s neural activity and an
external resource.

VE is naturally characterized as a case of radically wide realization. The
vehicle is the core part that most saliently plays the causal role in ques-
tion. So we have:

REALIZATION-VE
VE is true iff the property of being in a content-bearing mental state/
process is radically widely realized by the individual’s neural activity
and an external resource.

The REALIZATION proposal equates the contrast between VE and CE with
the contrast between wide realization and radically wide realization.
Although initially promising, this proposal faces a number of chal-
lenges. First and foremost is that the distinction between core and
non-core realizers is not of a kind that univocally supports INDEPENDENCE.
As both Wilson and Shoemaker emphasize, the core/non-core realizer
distinction is interest relative. A core realizer is defined as the part of
total realizer that plays a salient causal role in producing or sustaining
the realized property. But what makes a contribution salient? And salient
to whom? In certain contexts, some physical resources are salient, in
other contexts those elements fade into the (non-core realizer) back-
ground. Whether a given physical resource is a salient contributor to the
total realizer depends on one’s explanatory, descriptive, predictive and
other interests. Consider a spring-loaded-bar system, which is a total
realizer of the kind mousetrap because it fulfils the causal role of catching
and killing mice. Suppose one’s explanatory interests lie in how this
physical system kills mice. Certain features of the spring-loaded bar
system will stand out as salient and therefore as core realizers of the
mousetrap: the high-tension spring, the rigid bar, the sensitive trigger.
Other elements will be demoted to non-core realizers. Suppose now that
one’s explanatory interests change and one wishes to explain how the
instance of mousetrap attracts mice. Other physical features will stand
out as core realizers: the accessible open-air platform, the ripeness of the
cheese, the high-friction wooden base. In this explanatory context, the
spring, bar and trigger will be demoted to non-core realizers. As Wilson
emphasizes, the core/non-core realizer distinction is not an objective
and mind-independent distinction but a malleable boundary that is
reshaped as a function of our interests.
This complicates the contrast between VE and CE. On the face of it,
the core realizers of cognition are the vehicles of cognition with which
VE is concerned. In some contexts, this seems to be true. For example,
if we wish to explain how Otto is able to recall that there is water on
Mars but not on the Moon, the most salient causal contributor to Otto’s
realizing this property is the physical state of Otto’s notebook and his
related neural and bodily processes. Similarly, the most salient causal
contributors to Inga realizing the same cognitive property are her neural
states. In this explanatory context, external content-fixing facts – such
as distal samples of water – fade into being non-core realizers. But if our
explanatory interests change and we wish to explain how Otto or Inga
is able to remember facts about water rather than twater, then according
to CE, environmental features do play a salient causal role in producing
and sustaining the relevant cognitive ability. In this context, content-
fixing facts are part of the core realizer. Therefore, in this explanatory
context, we also have an instance of radically wide realization. Hence,
CE entails VE. INDEPENDENCE is not true or false simpliciter but indexed to
our explanatory, descriptive, predictive and other interests. INDEPENDENCE
may flip-flop from true to false as those interests change.
A second problem with REALIZATION is that it fails to block entailment
in the other direction, from VE to CE. Paradigmatic cases of VE involve
external resources that include representational states as a salient causal
player (e.g., inscriptions written in Otto’s notebook). If VE is correct,
then these external representational states are among the core real-
izers of some of Otto’s mental states/processes. But these external states
cannot play this causal role alone; they need help from other instances
of properties in the external environment. For example, in order for the
inscriptions in Otto’s notebook to play their role in Otto’s mental life,
other external supports need to be in place: Otto needs pockets to carry
the notebook, Otto needs functioning arms and fingers to use the note-
book, Otto may need spectacles to read the inscriptions, and Otto may
need a pen to correct entries. In any given case of VE, there is a nexus
of additional external property instances that need to be in place for the
extended core realizer to play its causal role. These external property
instances will be among the non-core realizers of Otto’s content-bearing
mental state. But if any non-core realizers of Otto’s content-bearing
mental states extend, then CE is true too.

4 Conclusion

Both advocates and critics of VE have assumed that CE and VE are logi-
cally independent. We have found this assumption to be problematic.
The relationship between the views is more complex than it first appears.
We have seen that the primary reason for this entanglement is variation
in how VE is stated. We have examined four ways of stating VE and
found that none offer straightforward grounds to accept INDEPENDENCE.
We wish to propose that a priority for future work on VE is the formula-
tion of an agreed statement of the view that can be used for evaluating
its place in the philosophical landscape.

Notes
1. The terminology owes much to Hurley (2010).
2. For more details, see Kallestrup (2011).
3. Hurley (1998) calls this the ‘Input-Output Picture’.
4. For the functionalist argument for VE, see Clark (2008), Sprevak (2009),
Wheeler (2010).
5. For responses along this line, see Adams and Aizawa (2007), Rupert (2004),
Sprevak (2009).
6. See also Chalmers (2002).
7. For a worry along these lines, see Fodor (2009) and Ladyman and Ross
(2010).
8. As the so-called slow-switching cases (Burge 1988) illustrate, environmental
changes will not immediately result in intentional changes. If you were to
be unwittingly transported to Twin Earth, you would begin to think twater
thoughts only after you sustain enough causal connections to XYZ (or to
other speakers who have interacted with XYZ). Your wide intentional behav-
iour would then change accordingly, e.g. you would reach for twater, where
on Earth, when you were thinking water thoughts, you would have reached
for water. Still, the physical movements of your arm would remain the
same.
9. See Clark and Chalmers (1998) and Sprevak (2009).
10. For duplicates with no mental content, see Putnam (1981, ch. 1).
11. Mackie (1965) proposed that talk of causes involves INUS conditions: insuf-
ficient but necessary parts of a condition which is itself unnecessary but suffi-
cient for the occurrence of the effect. The condition Block has in mind is
different in that a minimal supervenience base for a mental state is distinct
from whatever caused that state.
12. There are reasons independent of the debate over VE for thinking that the
internal/external distinction should not be drawn around the skin/skull.
Take the meningitis example of Farkas (2003). You and I both have symp-
toms typical of meningitis, but whereas mine are caused by meningitis, yours
are caused by a different bacterium. So while we are physically distinct from
the skin in, we nonetheless inhabit identical physical environments. Without our
knowledge, our token sentences containing ‘meningitis’ express distinct
propositions.

References
Adams, F., and K. Aizawa (2007) The Bounds of Cognition. Oxford: Blackwell.
Block, N. (2005) ‘Review of Alva Noë’s Action in Perception’. Journal of Philosophy,
102, 259–272.
Burge, T. (1988) ‘Individualism and Self-Knowledge’. Journal of Philosophy, 85,
649–663.
Burge, T. (2010) Origins of Objectivity. Oxford: Oxford University Press.
Chalmers, D. J. (2002) ‘The Components of Content’. In Philosophy of Mind:
Classical and Contemporary Readings, edited by D. J. Chalmers. Oxford: Oxford
University Press, 608–633.
Clark, A. (2008) Supersizing the Mind. Oxford: Oxford University Press.
Clark, A., and D. J. Chalmers (1998) ‘The Extended Mind’. Analysis, 58, 7–19.
Farkas, K. (2003) ‘What Is Externalism?’ Philosophical Studies, 112, 187–208.
Fodor, J. A. (2009) ‘Where Is My Mind?’ London Review of Books, 31, 13–15.
Hurley, S. (1998) Consciousness in Action. Cambridge, MA: Harvard University
Press.
Hurley, S. (2010) ‘Varieties of Externalism’. In The Extended Mind, edited by R.
Menary. Cambridge, MA: MIT Press, 101–153.
Jackson, F. (2003) ‘Narrow Content and Representationalism, or Twin Earth
Revisited’. Proceedings and Addresses of the American Philosophical Association,
77, 55–70.
Jackson, F., and P. Pettit (1993) ‘Some Content Is Narrow’. In Mental Causation,
edited by J. Heil and A. Mele. Oxford: Oxford University Press, 259–282.
Kallestrup, J. (2011) Semantic Externalism. London: Routledge.
Ladyman, J., and D. Ross (2010) ‘The Alleged Coupling-Constitution Fallacy and
the Mature Sciences’. In The Extended Mind, edited by R. Menary. Cambridge,
MA: MIT Press, 155–165.
Mackie, J. L. (1965) ‘Causes and Conditions’. American Philosophical Quarterly, 2,
245–264.
Noë, A. (2007) ‘Magic Realism and the Limits of Intelligibility: What Makes Us
Conscious?’ Philosophical Perspectives, 21, 457–474.
Putnam, H. (1975) ‘The Meaning of “Meaning”’. In Mind, Language and Reality,
Philosophical Papers, vol. 2. Cambridge: Cambridge University Press, 215–271.
Putnam, H. (1981) Reason, Truth and History. Cambridge: Cambridge University
Press.
Rupert, R. D. (2004) ‘Challenges to the Hypothesis of Extended Cognition’.
Journal of Philosophy, 101, 389–428.
Shoemaker, S. (1984) ‘Some Varieties of Functionalism’. In Identity, Cause and
Mind. Cambridge: Cambridge University Press, 261–286.
Shoemaker, S. (2007) Physical Realization. Oxford: Clarendon Press.
Sprevak, M. (2009) ‘Extended Cognition and Functionalism’. Journal of Philosophy,
106, 503–527.
Stalnaker, R. (1989) ‘On What’s in the Head’. Philosophical Perspectives: Philosophy
of Mind and Action Theory, 3, 287–316.
Wheeler, M. (2010) ‘In Defense of Extended Functionalism’. In The Extended
Mind, edited by R. Menary. Cambridge, MA: MIT Press, 245–270.
Wilson, R. A. (2001) ‘Two Views of Realization’. Philosophical Studies, 104, 1–30.
6
The Phenomenal Basis of Epistemic Justification
Declan Smithies

For much of the last century, phenomenal consciousness occupied a
curious status within philosophy of mind: it was central in some ways
and yet peripheral in others. On the one hand, this topic attracted a
significant amount of philosophical interest owing to metaphysical
puzzlement about the nature of phenomenal consciousness and its
place in the physical world. On the other hand, this metaphysical
puzzlement also provided much of the impetus for a research program
of understanding the mind as far as possible without making reference
to phenomenal consciousness.
One defining characteristic of this research program was the idea that
the ‘hard problem’ of explaining phenomenal consciousness could be
divorced from the comparatively ‘easy problems’ of explaining mental
representation and our knowledge of the external world.1 For instance,
one of the central projects in late-twentieth-century philosophy of mind
was to explain mental representation in terms of causal connections
between the mind and the external world specified without appealing
to phenomenal consciousness.2 At the same time, one of the central
projects in epistemology was to explain knowledge and justified belief
in terms of causal or counterfactual connections between the mind and
the external world – again, specified without reference to phenomenal
consciousness.3
One of the new waves in philosophy of mind over the last couple of
decades has been a growing recognition of the importance of phenom-
enal consciousness and its centrality in our understanding of the mind.
In philosophy of mind, it has become increasingly common to argue
that phenomenal consciousness is the basis of mental representation and
hence that the problem of explaining mental representation cannot be
divorced from the problem of explaining phenomenal consciousness.4

This chapter argues for a related thesis in epistemology – namely, that
phenomenal consciousness is the basis of epistemic justification and
hence that the problem of explaining epistemic justification cannot be
divorced from the problem of explaining phenomenal consciousness.5
These two claims about the role of phenomenal consciousness are
related in ways that are symptomatic of the more general interactions
between issues in epistemology and philosophy of mind. If phenom-
enal consciousness is the basis of epistemic justification, as I argue, then
we can ask what it must be like in order to play this epistemic role.
Arguably, phenomenal consciousness cannot play this epistemic role if
it is constituted by brute, non-representational sensations, or ‘qualia’.
On the contrary, the role of phenomenal consciousness in grounding
epistemic justification depends upon its role in grounding mental repre-
sentation. More specifically, we do not need the strong thesis that all
mental representation has its source in phenomenal consciousness but
only the weaker thesis that some mental representation has its source in
phenomenal consciousness – namely, the kind of mental representation
that plays an epistemic role.6
This chapter is primarily concerned with arguing for the epistemic
role of phenomenal consciousness rather than its role in grounding
mental representation, although these issues are interconnected in ways
that will emerge. The aim of the chapter is to give a synoptic overview
of a larger research project; so many details are left for development
elsewhere. In the first three sections, I motivate the connection between
phenomenal consciousness and epistemic justification by appealing to
various thought experiments and defending it against objections. In the
final section, I sketch the more theoretical line of argument that the
connection between phenomenal consciousness and epistemic justifica-
tion best explains the independently motivated thesis of access inter-
nalism. The result is a theory of epistemic justification that is designed
to bring intuition and theory into reflective equilibrium.

1 The basis of epistemic justification

What is the basis of epistemic justification? A presupposition of the
question is that facts about epistemic justification are not without some
basis: they are not brute facts. All epistemic facts are determined by non-
epistemic facts, in the sense that there can be no epistemic differences
without some corresponding non-epistemic differences in virtue of
which those epistemic differences obtain. Determination, unlike super-
venience, is an asymmetric relation that captures an order of explanatory
priority: epistemic facts supervene on their non-epistemic determinants
and vice versa, but epistemic facts are determined by their non-epis-
temic determinants and not vice versa. In a slogan, the determinants of
epistemic facts are epistemic difference makers.
This chapter is concerned to address the following question about the
determinants of epistemic justification:

The Question: What are the non-epistemic facts that determine the
epistemic facts about which doxastic attitudes one has justification
to hold?

To clarify, the target question is exclusively concerned with epistemic
justification rather than practical justification – that is, the kind of justifi-
cation that attaches to beliefs and other doxastic attitudes as opposed to
actions. Moreover, it is concerned with epistemic justification as distinct
from any other epistemic properties that may be necessary for knowl-
edge, such as reliability, safety, sensitivity and so on. And finally, it is
concerned with epistemic justification in the propositional sense rather
than the doxastic sense – that is, the sense in which one has justification
to hold certain doxastic attitudes regardless of the way in which one
holds them or, indeed, whether one holds them at all.
Reliabilism is one mainstream account of the determinants of epis-
temic justification. According to reliabilism, epistemic facts about which
doxastic attitudes one has justification to hold are determined by non-
epistemic facts about the reliability or unreliability of one’s doxastic
dispositions:

Reliabilism: The reliability of one’s doxastic dispositions determines
which doxastic attitudes one has justification to hold.

On a simple version of reliabilism, one has justification to hold a belief if
and only if one has a disposition to hold the belief that is sufficiently reli-
able, in the sense that it generates a sufficiently high ratio of true beliefs
to false beliefs in sufficiently similar counterfactual circumstances.7
Reliabilism is subject to well-known counterexamples which illus-
trate that differences in the reliability of one’s doxastic dispositions are
neither necessary nor sufficient to make a difference with respect to
epistemic justification:

Envatment: My envatted mental duplicate has justification to form
beliefs on the basis of perceptual experience, memory, testimony
and so on, although forming beliefs in this way is unreliable in the
circumstances.8
Clairvoyance: My clairvoyant mental duplicate lacks justifica-
tion to believe on the basis of blind hunches, wishful thinking
and so on, although forming beliefs in this way is reliable in the
circumstances.9

These cases have a common structure: in each case, we vary the facts
about the reliability of the subject’s doxastic dispositions, but we do
not thereby vary the ways in which the subject has justification to form
beliefs so long as we hold fixed the facts about the subject’s mental states.
Moreover, this common structure suggests a common explanation:
namely, that epistemic justification is not determined by facts about the
reliability of the connections between the subject’s mental states and
the external world but rather by facts about the subject’s mental states
themselves.
Mentalism is a prominent alternative to reliabilism, according to
which epistemic facts about which doxastic attitudes one has justifica-
tion to hold are determined by non-epistemic facts about one’s mental
states:

Mentalism: One’s mental states determine which doxastic attitudes
one has justification to hold.10

Mentalism implies that mental duplicates are also duplicates with
respect to which doxastic attitudes they have justification to hold. For
instance, if I have justification to form beliefs on the basis of percep-
tual experience, memory, testimony and so on, then so does any mental
duplicate of mine, even if forming beliefs in that way is unreliable in the
circumstances. Similarly, if I lack justification to form beliefs on the basis
of blind hunches, wishful thinking and so on, then so does any mental
duplicate of mine, even if forming beliefs in that way is reliable in the
circumstances. In this way, mentalism provides a common explanation
of intuitive verdicts about envatment and clairvoyance alike.
The problem with mentalism as formulated here is that not all mental
states are justificational difference makers. For instance, my envatted
duplicate does not share all my factive mental states, such as seeing that
there is a cup on the table, so we need a restriction to non-factive mental
states in order to explain why he has justification to adopt all the same
doxastic attitudes.11 Indeed, we need to impose further restrictions, since
not all of one’s non-factive mental states are justificational difference
makers. Consider the subdoxastic mental representations that figure in
computational explanations in cognitive science, such as Chomsky’s
(1965) tacit knowledge of syntax and Marr’s (1982) primal, 2.5D and 3D
sketch: these subdoxastic mental representations do not play any epis-
temic role in determining which doxastic attitudes one has epistemic
justification to hold.12
Therefore, proponents of mentalism need to address the following
question:

The Generalization Question: Which mental states play an epis-
temic role in determining which doxastic attitudes one has justifica-
tion to hold?

It is tempting to answer this question by invoking Dennett’s (1969)
distinction between personal and subpersonal levels. Subdoxastic mental
representations are not states of the person but rather states of parts of
the person – namely, their computational subsystems. And it is natural
to suppose that epistemic justification is determined solely by personal-
level mental states that figure within the person’s subjective perspective
or point of view on the world. But this is no more than a promissory
note in the absence of a further account of which mental states are prop-
erly attributed to the person, as opposed to the person’s subsystems, and
so figure within the person’s subjective perspective on the world.
Broadly speaking, there are two options for explicating the sense in
which epistemic justification depends upon the subject’s perspective:
one can appeal either to phenomenal consciousness or to functional
role. In the next section, I decide between these options by exploiting a
series of imaginary variations on the empirical phenomenon of blind-
sight. I argue that there is an epistemic asymmetry between conscious
sight and blindsight, which is best explained by appealing to phenom-
enal differences, rather than functional differences, between them. The
general strategy is to argue that however much we complicate the func-
tional role of blindsight, the epistemic asymmetry with conscious sight
remains so long as there is a corresponding phenomenal asymmetry.
I conclude that phenomenal consciousness plays a crucial role in the
determination of epistemic justification.

2 The epistemic role of perceptual experience

My starting point is that conscious perceptual experience plays a foun-
dational epistemic role in providing a source of justification for beliefs
about the external world. This is not to prejudge questions about the
structure of the justification that perceptual experience provides. On one
view, perceptual experience provides immediate, non-inferential justifi-
cation for beliefs about the external world (Pryor 2000). On another
view, perceptual experience provides justification for beliefs about the
external world in a way that is inferentially mediated by justification for
beliefs about the reliability of perceptual experience (Wright 2004). But
even on the second view, perceptual experience plays a foundational
role in providing immediate, non-inferential justification for beliefs
about which perceptual experiences one is having at any given time.
So on either view, one’s justification for beliefs about the external world
has its source in their relations to perceptual experience and not solely
in their relations to other beliefs.
Moreover, it is extremely plausible that perceptual experience plays
this foundational epistemic role in virtue of its phenomenal character.
It is because perceptual experience has the phenomenal character of
confronting one with objects and properties in the world around one that
it justifies forming beliefs about those objects and properties. This point
is most vividly illustrated by reflecting on cases in which the phenom-
enal character of perceptual experience goes missing – most notably in
the empirical phenomenon of blindsight (Weiskrantz 1997).
Patients with blindsight lose conscious visual experience in ‘blind’
regions of the visual field owing to lesions in the visual cortex. As a result,
they do not initiate spontaneous reasoning, action or verbal reports
directed towards stimuli in the blind field, but they are nevertheless
reliable in discriminating stimuli in the blind field under forced choice
conditions. For example, when asked to guess whether a presented item
is an X or an O, patients are able to report correctly in a high propor-
tion of trials. What explains this reliability is the fact that perceptual
information from stimuli in the blind field is represented and processed,
although it does not surface in conscious experience.
Does unconscious perceptual information in blindsight provide a source
of justification for beliefs about stimuli in the blind field? Intuitively, it
does not. After all, blindsighted subjects are not at all disposed to use
unconscious perceptual information in forming beliefs about stimuli
in the blind field. Instead, they tend to regard their reports in forced
choice tasks as mere guesswork and express surprise when informed of
their reliability. Moreover, this seems perfectly reasonable. Blindsight
is not plausibly regarded as a cognitive deficit in which subjects are in
possession of perceptual evidence that justifies forming beliefs about
the blind field, although they are cognitively disabled from using it in
forming justified beliefs. On the contrary, it is more plausibly regarded
as a perceptual deficit in which subjects lack the perceptual evidence
that is needed to justify forming beliefs about the blind field in the first
place. Intuitively, subjects with blindsight have no more justification to
form beliefs on the basis of unconscious perceptual information than to
form beliefs on the basis of blind guesswork.
These questions about the epistemic status of blindsight cannot be
settled by appealing to facts about its reliability, since clairvoyance
shows that reliability is not sufficient for epistemic justification. Indeed,
subjects with blindsight are in much the same epistemic predicament as
clairvoyant subjects. They have a reliable perceptual mechanism, which
enables them to make accurate guesses on the basis of representation
and processing of unconscious perceptual information about stimuli in
the blind field. However, they have no justification to believe that they
have this reliable mechanism, since the relevant perceptual information
is represented and processed unconsciously. Intuitively, then, blindsight
is no more a source of justification than clairvoyance.
Of course, subjects with blindsight may eventually learn of their own
reliability through induction or testimony and so acquire inferentially
mediated justification for beliefs about the blind field. So, for instance,
they might be justified in believing that their guesses are likely to be true
on the grounds that they were true in the past. In that case, however,
their justification does not have its source in unconscious perceptual
information but solely in background beliefs that are independently
justified. In normally sighted subjects, by contrast, justification for
beliefs about the external world has its source in the phenomenal character
of perceptual experience and not solely in independently justified back-
ground beliefs.
The epistemic asymmetry between conscious sight and blindsight
seems best explained by the corresponding phenomenal asymmetry
between them. This suggests a version of mentalism on which a mental
state plays an epistemic role in determining epistemic justification if
and only if it is phenomenally conscious:

Phenomenal Mentalism: One’s phenomenally conscious mental
states determine which doxastic attitudes one has justification to
hold.

Nevertheless, there are functional differences, as well as phenomenal
differences, between conscious sight and blindsight. We should there-
fore consider whether the epistemic asymmetry between conscious sight
and blindsight can be explained in terms of functional differences rather
than phenomenal differences.
Block (1997) suggests that our ordinary concept of consciousness is
a ‘mongrel concept’ that conflates a phenomenal concept with certain
functionally defined concepts, including access consciousness and
metacognitive consciousness. According to Block’s definition, ‘A state
is access conscious if it is poised for direct control of thought and action’
(1997, 382). Meanwhile, a metacognitively conscious state is defined
as ‘a state accompanied by a thought to the effect that one is in that
state ... arrived at nonobservationally and noninferentially’ (1997, 390).
Now, conscious sight is conscious in all of these senses, whereas blind-
sight is conscious in none of these senses. So why explain the epistemic
asymmetry between conscious sight and blindsight in terms of phenom-
enal consciousness rather than access consciousness or metacognitive
consciousness?
Block claims, in my view plausibly, that the phenomenal concept of
consciousness is distinct from any functionally defined concept and
that there are conceptually possible cases in which they come apart. For
instance, it is conceptually possible that a functional zombie has states
that are conscious in any functionally defined sense, although not in
the phenomenal sense. One might reasonably object that there is no
intuitive sense in which the states of a functional zombie are conscious
as opposed to merely ersatz functional substitutes for consciousness.
For current purposes, though, we can set this issue aside. The key ques-
tion is whether we can explain the epistemic asymmetry between
conscious sight and blindsight in terms of functional differences rather
than phenomenal differences. For these purposes, we can follow Block
in assuming that the functional properties of access and metacogni-
tion are neither conceptually necessary nor sufficient for phenomenal
consciousness.
One of the most striking functional differences between blindsight
and conscious sight is that unconscious perceptual information in
blindsight is not access conscious in the sense that it is poised for use in
the direct control of action, reasoning and verbal report. So perhaps we
can explain the epistemic asymmetry between blindsight and conscious
sight by appealing to a version of mentalism on which a mental state
plays an epistemic role if and only if it is access conscious:

Access Mentalism: One’s access conscious mental states determine
which doxastic attitudes one has justification to hold.
This proposal fails, however, since access consciousness is neither neces-
sary nor sufficient for a mental state to play an epistemic role.
Against necessity: consider perceptual experience in the absence of
attention. The functional role of attention is, roughly, to make infor-
mation access conscious in the sense that it is poised for use in the
direct control of thought and action. Information that is represented in
perceptual experience in the absence of attention is not access conscious
and so cannot play an epistemic role in providing subjects with doxasti-
cally justified beliefs. Arguably, however, it can play an epistemic role in
providing subjects with propositional justification to form beliefs, even
if they cannot use this in forming doxastically justified beliefs in the
absence of conscious attention. On this view, there is an important epis-
temic contrast between blindsight on the one hand and inattentional
blindness on the other.13
Against sufficiency: consider Block’s hypothetical case of super-
blindsight, which is just like ordinary blindsight except that the
subject is disposed to use unconscious perceptual information in
the direct and spontaneous control of thought and action without
any need for prompting.14 That is to say, perceptual information in
super-blindsight is access conscious but not phenomenally conscious.
Notwithstanding these functional differences, however, the super-
blindsighter is in the same epistemic predicament as the blindsighter.
The only relevant difference is that the super-blindsighter is disposed
to form beliefs about objects in the blind field automatically and
with confidence, whereas the ordinary blindsighter is disposed to
make tentative guesses under conditions of prompting. However, the
mere feeling of confidence is not sufficient to justify forming beliefs –
justification is not that easy to come by! In effect, the only relevant
difference between blindsight and super-blindsight is the addition of
a reliable doxastic disposition, but as the clairvoyance case illustrates,
the mere fact that beliefs are formed in a reliable way is not sufficient
to make them justified.
Another striking functional difference between blindsight and
conscious sight is that unconscious perceptual information in blindsight
is not metacognitively conscious in the sense that it is accompanied by
higher-order thoughts that are formed in a non-inferential and non-
observational way. So perhaps we can explain the epistemic asymmetry
between blindsight and conscious sight by appealing to a version of
mentalism on which a mental state plays an epistemic role if and only if
it is metacognitively conscious:
Metacognitive Mentalism: One’s metacognitively conscious mental
states determine which doxastic attitudes one has justification to
hold.

Once again, however, this proposal fails, since metacognitive conscious-
ness is neither necessary nor sufficient for a mental state to play an epis-
temic role.
Against necessity: consider perceptual experiences of unreflective
creatures, including young children and higher animals, who can form
justified beliefs about the world but cannot form beliefs about their own
experience. Evidence from developmental psychology suggests that
three-year-old children do not have the conceptual resources required
to understand questions about whether their beliefs are formed on the
basis of perception, inference or testimony (Gopnik and Graf 1988).
But it would be an overintellectualization to maintain that their beliefs
about the world cannot be justified on the basis of perceptual experience
unless they can also form higher-order beliefs about the experiences on
which their beliefs are based.
Against sufficiency: consider the hypothetical case of hyper-blind-
sight, which is just like super-blindsight except that the subject has a
reliable disposition to form higher-order thoughts about unconscious
perceptual information in a non-inferential and non-observational
way. That is to say, perceptual information in hyper-blindsight is both
access conscious and metacognitively conscious but not phenomenally
conscious. Notwithstanding these functional differences, however,
the hyper-blindsighter is in much the same epistemic predicament as
the super-blindsighter. The only relevant difference is that the hyper-
blindsighter is reliable not only about stimuli in his blind field but also
about the unconscious perceptual representations that carry informa-
tion about those stimuli. Once again, however, the mere addition of
a reliable disposition is not sufficient to make a justificational differ-
ence from the original blindsight case. If adding a reliable first-order
doxastic disposition is not sufficient to justify first-order beliefs about
the external world, then why should adding a reliable second-order
doxastic disposition be sufficient to justify higher-order beliefs about
the internal world? Intuitively, the hyper-blindsighter’s higher-order
beliefs about his unconscious perceptual states are no more justified
than the super-blindsighter’s beliefs about objects in the blind field. And
we cannot turn unjustified beliefs into justified beliefs by adding more
unjustified beliefs!
108 Declan Smithies

The moral to be drawn from the discussion so far is that conscious
perceptual experience justifies belief in virtue of its phenomenal char-
acter rather than its functional role. No matter how much we compli-
cate its functional role, unconscious perceptual information cannot
play the epistemic role of conscious perceptual experience. A functional
zombie has unconscious states that exactly duplicate the causal role of
conscious perceptual experiences, but they do not thereby provide justi-
fication to form beliefs about the world. Therefore, we cannot explain
the epistemic asymmetry between conscious sight and blindsight except
in terms of the epistemic role of phenomenal consciousness.

3 The epistemic role of belief

The appeal to phenomenal consciousness as the basis of epistemic
justification needs to be handled carefully. In this section, I argue that
phenomenal consciousness is neither necessary nor sufficient for a
mental state to play a role in determining epistemic justification, and so
the following version of phenomenal mentalism is false:

Phenomenal Mentalism: One’s phenomenally conscious mental
states determine which doxastic attitudes one has justification to
hold.

Nevertheless, I argue that these problems can be avoided by a revised
version of phenomenal mentalism on which the mental states that play
a role in determining epistemic justification are phenomenally individu-
ated in a sense to be explained. I therefore propose the following revised
version of phenomenal mentalism:

Phenomenal Mentalism (revised version): One’s phenomenally
individuated mental states determine which doxastic attitudes one
has justification to hold.

The first problem is that phenomenal consciousness is not sufficient for
a mental state to play a role in determining epistemic justification. As
we have already seen, factive mental states, such as seeing that there is
a cup on the desk, do not play this kind of epistemic role. For instance,
my envatted phenomenal duplicate has as much justification as I do for
believing that there is a cup on the desk, although he does not share
my factive mental state of seeing that there is a cup on the desk. Yet
my factive mental state is phenomenally conscious, in the sense that
there is something it is like for me to see that there is a cup on the desk.
To solve this problem, we need the notion of a phenomenally individu-
ated mental state – that is, a type of mental state that is individuated
by its phenomenal character in the sense that all and only tokens of
that type have the same phenomenal character. Factive mental states
are phenomenally conscious, but they are not phenomenally individu-
ated, since not all mental states with the same phenomenal character
are factive mental states.15
The second problem is that phenomenal consciousness is not neces-
sary for a mental state to play a role in determining epistemic justifica-
tion. After all, beliefs play an epistemic role in justifying other beliefs.
Indeed, Davidson (1986) went as far as to claim that nothing can justify
a belief except another belief. This is surely an overreaction, since beliefs
can also be justified by perceptual experiences, which are distinct from
the beliefs they justify. Yet it is surely an overreaction in the opposite
direction to claim that beliefs can never be justified by other beliefs.
Nevertheless, beliefs are not phenomenally conscious states: they are
disposed to cause phenomenally conscious states of judgment, but these
dispositions need not be manifested for beliefs to play an epistemic role.
To illustrate the point, suppose you observe that the streets are wet and
infer that it has been raining. Your justification to draw this conclu-
sion depends on all sorts of background beliefs about the relative prob-
ability of various hypotheses conditional on the streets being wet. More
generally, which conclusions one has inductive justification to draw
from observed evidence is a matter that depends upon vast amounts of
background information that is represented unconsciously in the belief
system; not all of this can be brought into consciousness in the process
of drawing a conclusion.
Any plausible answer to the generalization question must therefore be
permissive enough to include beliefs while also being restrictive enough
to exclude subdoxastic mental representations, such as unconscious
perceptual information in blindsight. What is needed is an account of
what beliefs and experiences have in common in virtue of which they
play their epistemic role. However, many philosophers are pessimistic
about the prospects for giving a unified account of the mental that
includes beliefs as well as experiences. Thus, Rorty (1979, 22) writes,
‘The attempt to hitch pains and beliefs together seems ad hoc – they
don’t seem to have anything in common except our refusal to call them
“physical”.’
In my view, however, this pessimism can be resisted. The key is to
recognize two distinct but related senses in which a mental state can be
phenomenally individuated. A mental state type is phenomenally indi-
viduated in the primary sense if and only if it is individuated wholly by
phenomenal character – that is, all and only tokens of that type have the
same phenomenal character. In contrast, a mental state type is phenom-
enally individuated in the derivative sense if and only if it is individu-
ated wholly by phenomenal dispositions – that is, all and only tokens
of that type have the same dispositions to cause mental states that are
phenomenally individuated in the primary sense.
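The two senses can be stated schematically. The notation below is my own shorthand, not the author’s: Tx abbreviates ‘x is a token of state type T’, phen(x) is the phenomenal character of x, and D(x, φ) abbreviates ‘x is disposed to cause mental states whose phenomenal character is φ’.

```latex
% My shorthand, not the author's:
%   Tx            = "x is a token of state type T"
%   phen(x)       = the phenomenal character of x
%   D(x, \varphi) = "x is disposed to cause states with phenomenal character \varphi"
\text{Primary:}\qquad    \forall x\,\bigl(\,Tx \leftrightarrow \mathrm{phen}(x) = \varphi_T\,\bigr) \\
\text{Derivative:}\qquad \forall x\,\bigl(\,Tx \leftrightarrow D(x, \varphi_T)\,\bigr)
```

On this rendering, conscious judgments satisfy the primary schema, beliefs satisfy the derivative schema, and subdoxastic representations, which are individuated partly by their computational role, satisfy neither.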
Beliefs are not phenomenally conscious experiences, but they are
disposed to cause phenomenally conscious experiences of judgment.
These phenomenally conscious experiences of judgment are individu-
ated wholly by their phenomenal character, in the sense that all and
only judgments of the same kind have the same phenomenal character.
Moreover, beliefs are individuated wholly by their phenomenal disposi-
tions, in the sense that all and only beliefs of the same kind have the
same phenomenal dispositions. So phenomenally conscious experi-
ences of judgment and unconscious states of belief are both phenom-
enally individuated but in different ways: judgments are individuated
by their phenomenal character, whereas beliefs are individuated by their
phenomenal dispositions.16
Subdoxastic mental representations, unlike beliefs, are not individu-
ated wholly by their phenomenal dispositions. On the contrary, they
are individuated at least in part by their dispositions to play a role in
unconscious computational processes. To illustrate the point, consider
Davies’s (1989) hypothetical example of states of tacit knowledge of
language that are disposed to cause phenomenally conscious itches or
tickles. Presumably, what makes it the case that these states embody
tacit knowledge of language is not their disposition to cause itches and
tickles but rather their roles in linguistic processing.
A similar point emerges from reflection on Quine’s (1970) challenge
to Chomsky’s (1965) notion of tacit knowledge. The challenge is to
explain what constitutes tacit knowledge of a rule if it is less demanding
than explicit knowledge of the rule but more demanding than merely
exhibiting linguistic behaviour that conforms to the rule. The standard
account (Evans 1981; Davies 1987) is that tacit knowledge of a rule is a
matter of having the right kind of causal structure in the psychological
processing that underpins one’s linguistic behaviour. More specifically,
one has tacit knowledge of a rule if and only if the causal structure of
one’s psychology mirrors the logical structure of a theory that includes
that rule. There could be two subjects that exhibit the same linguistic
behaviour, although their behaviour is explained by psychological
processes that embody tacit knowledge of different linguistic rules.
Therefore, tacit knowledge is individuated not merely by its disposition
to cause linguistic behaviour but also by its role in unconscious psycho-
logical processes.
This point can be generalized to other subdoxastic mental representa-
tions, including those involved in vision. There could be two subjects
that have the same visual experiences, although their visual experiences
are explained by different kinds of visual processing involving different
representations and rules. Thus, visual representations and rules are
individuated not just by their role in explaining conscious experience
but also by their role in psychological processing that occurs beneath
the level of phenomenal consciousness.
In this way, we can answer the generalization question: beliefs, unlike
subdoxastic mental representations, play an epistemic role in determining
epistemic justification because they are individuated wholly by their
phenomenal dispositions. This proposal
relies on various commitments in the philosophy of mind that I cannot
defend in this chapter, although they have been defended in detail else-
where.17 My view is that each of these commitments can be defended
on its own merits; in addition, the arguments of this chapter provide
further theoretical support for these commitments insofar as they are
indispensable for making sense of the epistemic role
of phenomenal consciousness.
The first commitment is intentionalism: all phenomenal properties are
identical with intentional properties.18 A consequence of intentionalism
is that experiences have intentional properties just by virtue of having
phenomenal properties. Intentionalism is needed for avoiding the
objection that mental states are individuated by their intentional prop-
erties rather than their phenomenal properties. If intentionalism is true,
then we need not choose between these ways of individuating mental
states, since their phenomenal properties are identical with intentional
properties.
The second commitment is the thesis that intentionalism can be
extended from perception to cognition in the following sense: both
perceptual and cognitive experiences have intentional properties that
are identical with their phenomenal properties. This extended version
of intentionalism is needed in order to avoid the objection that the
phenomenal properties of judgment are not specific enough to indi-
viduate their intentional contents and attitude types. On the extended
version of intentionalism, the phenomenal properties of judgment are
content-specific and attitude-specific.19
The third commitment is antireductionism: not all phenomenal prop-
erties are identical with low-level properties of sensory perception.
Antireductionism is needed to block the objection that the phenomenal
properties of judgment are identical with low-level properties of sensory
perception that underdetermine the intentional properties of judgment.
This objection can be avoided if the phenomenal properties of judgment
are either sui generis, non-sensory properties or high-level sensory prop-
erties that correspond to the experience of semantic content.20
The fourth commitment is narrow intentionalism: some intentional
properties are narrow (i.e. intrinsic) properties of the subject. This is a
consequence of intentionalism together with the plausible assumption
that all phenomenal properties are narrow properties.21 Narrow inten-
tionalism does not imply that all intentional properties are narrow prop-
erties. On the contrary, it is consistent with the plausible claim that
some intentional properties are wide, extrinsic properties that depend
upon the subject’s relations to the external world.22
The fifth commitment is a consequence of narrow intentionalism
combined with phenomenal mentalism: namely, that mental states play
a role in determining epistemic justification in virtue of their narrow
intentional properties rather than their wide intentional properties.
On this view, which intentional contents one believes depends on one’s
relations to the external world, but which intentional contents one has
justification to believe depends only upon one’s narrow, intrinsic prop-
erties. Thus, Oscar on Earth and Toscar on Twin Earth have justifica-
tion to believe all the same intentional contents, although they believe
different intentional contents in virtue of their different relations to the
external world.23
The final commitment is that beliefs are individuated by their
phenomenal dispositions as opposed to their behavioural dispositions.
Much of the resistance to this proposal can be undercut by defending the
commitments mentioned above. But one might accept that beliefs are
disposed to cause judgments that are individuated by their phenomenal
character while denying that beliefs are individuated wholly by these
dispositions. So the question arises, why privilege phenomenal disposi-
tions over behavioural dispositions in the individuation of belief?
The main argument of this section is that the phenomenal individu-
ation of belief is indispensable for explaining the epistemic asymmetry
between beliefs and subdoxastic mental representations. This argument
is not conclusive, but it does raise a challenge for opponents to explain
the epistemic asymmetry between beliefs and subdoxastic mental repre-
sentations in some other way. Moreover, the main argument of the next
section is that beliefs and other mental states play an epistemic role only
if they are introspectively accessible and hence phenomenally individu-
ated. This provides another more theoretical line of argument for the
phenomenal individuation of belief.

4 From access internalism to phenomenal mentalism

Consider the following pair of questions:

The Generalization Question: Which mental states play an epis-
temic role in determining which doxastic attitudes one has justifica-
tion to hold?
The Explanatory Question: Why do some mental states rather
than others play an epistemic role in determining which doxastic
attitudes one has justification to hold?

In response to the generalization question, I have argued for a version
of phenomenal mentalism on which one’s phenomenally individu-
ated mental states determine which doxastic attitudes one has justifi-
cation to hold. But the explanatory question remains. Why is it only
one’s phenomenally individuated mental states that play an epistemic
role? In this section, I answer this question by arguing that all mental
states that determine epistemic justification are introspectively acces-
sible and all introspectively accessible mental states are phenomenally
individuated.
It is important to distinguish between ambitious and modest strat-
egies for answering the explanatory question. The ambitious strategy
seeks to derive the connection between phenomenal consciousness and
epistemic justification from more fundamental facts that do not presup-
pose it at all. In my view, the ambitious strategy cannot succeed, since
the connection between phenomenal consciousness and epistemic justi-
fication is fundamental and so cannot be derived from anything else.
Instead, I pursue the more modest strategy of arguing that we can acquire
some reflective understanding of the connection between phenomenal
consciousness and epistemic justification by recognizing how it explains
the independently motivated thesis of access internalism.
Access internalism is the thesis that epistemic facts about which
doxastic attitudes one has justification to hold are accessible to one by
introspection and a priori reflection alone. A condition is accessible just
in case one has justification to believe that it obtains when and only
when it obtains. Access internalism can now be defined as follows:
Access Internalism: One has justification for some doxastic attitude
if and only if one has justification for believing on the basis of intro-
spection and a priori reflection alone that one has justification for
that doxastic attitude.

Notice that access internalism is formulated here as a thesis about propo-
sitional justification rather than doxastic justification, and so there is no
commitment to the claim that having justified beliefs requires holding –
or even having the capacity to hold – justified higher-order beliefs. In
other work (see Smithies 2012b, 2014), I have argued for access inter-
nalism and defended it against objections, but for reasons of space, I
do not rehearse those arguments here. Instead, I argue for the condi-
tional claim that if access internalism can be independently motivated,
as I believe it can, we can use it in explaining the connection between
phenomenal consciousness and epistemic justification.
If access internalism is true, an account of the determinants of epis-
temic justification must explain why it is true. Reliabilism, for instance,
cannot explain why access internalism is true, since the non-epistemic
facts about the reliability of one’s doxastic dispositions are not acces-
sible by introspection and reflection alone. This point is illustrated by
the examples of envatment and clairvoyance with which we began: my
envatted duplicate has unreliable doxastic dispositions, but he has justi-
fication to believe that they are reliable, whereas my clairvoyant dupli-
cate has reliable doxastic dispositions, but he does not have justification
to believe that they are reliable.
What explains why epistemic facts about which doxastic attitudes one
has justification to hold are accessible on the basis of introspection and
a priori reflection alone? As far as I can see, there is only one plausible
candidate for such an explanation. First, these epistemic facts must be
determined by non-epistemic facts about one’s mental states that are
introspectively accessible, in the sense that M obtains if and only if one has
introspective justification to believe that M obtains. And second, these
epistemic facts must be determined in a way that is a priori accessible, in
the sense that M determines E if and only if one has a priori justification
to believe that M determines E. More precisely, for every accessible epis-
temic fact E, there must be some non-epistemic fact about one’s mental
states, M, such that it is introspectively accessible that M and a priori
accessible that M determines E. Access internalism is explained by the
introspective accessibility of the mental states that determine epistemic
justification together with the a priori accessibility of the way in which
epistemic justification is determined.
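The proposed explanation can be displayed schematically. The notation is my own shorthand, not the author’s: Ji(p) for ‘one has introspective justification to believe p’, Ja(p) for ‘one has a priori justification to believe p’, and M ⇒ E for ‘M determines E’.

```latex
% My shorthand: J_i = introspective justification, J_a = a priori
% justification, M \Rightarrow E = "M determines E".
\text{(i)}\;\;  M \leftrightarrow J_i(M)
    \qquad \text{(introspective accessibility of the mental fact)} \\
\text{(ii)}\;\; (M \Rightarrow E) \leftrightarrow J_a(M \Rightarrow E)
    \qquad \text{(a priori accessibility of the determination)} \\
\text{Hence, for every accessible epistemic fact } E:\quad
    \exists M\,\bigl[\,(M \Rightarrow E) \wedge J_i(M) \wedge J_a(M \Rightarrow E)\,\bigr]
```

Read this way, the accessibility of E is inherited from the accessibility of M together with the a priori accessibility of the determination relation, which is exactly the two-part structure the paragraph above describes.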
Access internalism therefore provides the rationale for a version of
introspective mentalism on which the mental states that determine
epistemic justification are one’s introspectively accessible mental states:

Introspective Mentalism: One’s introspectively accessible mental
states determine which doxastic attitudes one has justification to
hold.

But this version of introspective mentalism raises two further questions.
First, what explains how it is that any mental states are introspectively
accessible at all? And second, if some mental states are introspectively
accessible, then which ones?
Let us begin with the first question. Reliabilism cannot explain the
fact that some mental states are introspectively accessible. According to
reliabilism, one has introspective justification to believe that one has
a certain kind of mental state if and only if one has an introspective
mechanism that disposes one to believe that one has a mental state of
that kind. On this view, however, one’s mental states are not introspec-
tively accessible in the sense defined unless one’s introspective mecha-
nisms are perfectly reliable, in the sense that one is disposed to believe
that one is in a certain kind of mental state if and only if one is in a
mental state of that kind. However, standard forms of reliabilism do not
make it a requirement for justification that one’s doxastic dispositions
are perfectly reliable but only that they are sufficiently reliable to meet
some less demanding threshold.
Elsewhere, I have argued for a simple theory of introspection on which
introspective justification is a primitive and sui generis kind of justifica-
tion that cannot be assimilated to any more general theory of justifica-
tion that includes perceptual, inferential or any other kind of justification
(Smithies 2012c). According to the simple theory, introspective justifi-
cation is a distinctive kind of justification for believing that one is in a
certain kind of mental state, which has its source in the fact that one is
in a mental state of that very kind. A consequence of the simple theory
is that any mental state that is a source of introspective justification is
introspectively accessible in the sense defined above.
Why should we accept the simple theory of introspection? The simple
theory is motivated in part by reflection on examples: for instance, if
I am in pain, I have introspective justification to believe that I am in
pain just by virtue of the fact that I am in pain; and similarly, if I am
thinking about rhubarb, I have introspective justification to believe that
I am thinking about rhubarb just by virtue of the fact that I am thinking
about rhubarb. But the strongest theoretical motivation for the simple
theory of introspection is that it is needed in order to explain the truth
of access internalism.
Assuming that some mental states are introspectively accessible, the
question arises, which ones? This is, in effect, a generalization question
about introspection:

A Generalization Question about Introspection: Which mental
states are introspectively accessible in the sense that one has intro-
spective justification to believe that one is in mental state M if and
only if one is in M?

Given introspective mentalism, we can use our answer to this question
about which mental states are introspectively accessible in constraining
our answer to the question of which mental states play an epistemic role
in determining epistemic justification.
Not all mental states are introspectively accessible. Here again, we can
appeal to the subdoxastic mental representations that figure in compu-
tational explanations in cognitive science, such as Chomsky’s (1965)
tacit knowledge of syntax and Marr’s (1982) primal, 2.5D and 3D sketch.
After all, our justification to believe that we have these mental states
derives from scientific theory rather than introspection. Thus, we need
some restriction on which mental states are introspectively accessible.
An initially promising criterion is that a mental state is introspectively
accessible if and only if it is phenomenally conscious. However, this
criterion is too restrictive, since it excludes not only subdoxastic mental
states but also beliefs. Beliefs are standing states that persist through time
without making any ongoing contribution to phenomenal conscious-
ness. For instance, my belief that Canberra is the capital of Australia
persists whether or not I am consciously considering the matter, and
so does my second-order belief that I believe this. As we have already
seen, there is a problem in explaining the source of my justification for
these beliefs, since there may be nothing in my stream of phenomenal
consciousness that makes them justified at any given time. Moreover, in
many cases, it is not plausible that my beliefs are inferentially justified
by their relations to other beliefs, since I may be unable to remember
anything that is relevant to their justification. In the case of second-
order beliefs, though, it is plausible to suppose that they are justified
by the presence of the corresponding first-order beliefs, regardless of
whether those first-order beliefs are justified and if so, how. So just as I
have introspective justification to believe that I am in pain just by virtue
of being in pain, so I have introspective justification to believe that I
believe that Canberra is the capital of Australia just by virtue of the fact
that I believe it.
As before, we need an answer to the generalization question that
is permissive enough to include beliefs but also restrictive enough to
exclude subdoxastic states. What we need, then, is a criterion that
explains what beliefs, unlike subdoxastic states, have in common with
phenomenally conscious states in virtue of which they are introspec-
tively accessible. One strategy is to answer the generalization question
by appealing to some broadly functionalist criterion on which a mental
state is introspectively accessible if and only if it plays a certain functional
role. The challenge for proponents of this strategy is to identify some
functional property that beliefs have in common with phenomenally
conscious experiences but not with subdoxastic states.
For instance, one might propose that a mental state is introspec-
tively accessible if and only if it is access conscious in the sense that
it is poised for use in the direct control of thought and action (see
Zimmerman 2006; also Shoemaker 2009). After all, beliefs are typically
access conscious, whereas subdoxastic mental representations are typi-
cally not. Nevertheless, we can generate counterexamples by imagining
subdoxastic mental representations that are access conscious but not
phenomenally conscious, such as Block’s example of super-blindsight
in which unconscious perceptual information is poised for use in the
direct control of thought and action. Intuitively, the super-blindsighter
does not have introspective justification to form beliefs about what
is represented in her visual system any more than the regular blind-
sighter does. At best, she has justification to make inferences about what
is represented in her visual system from observational data about her
own spontaneous verbal and non-verbal behaviour. Therefore, access
consciousness is not sufficient for introspective accessibility.
One might respond by imposing a more demanding functional crite-
rion, on which a mental state is introspectively accessible if and only if
it is metacognitively conscious in the sense that it is accompanied by a
higher-order thought that is arrived at in the right way. Once again, we
can generate counterexamples by imagining subdoxastic mental repre-
sentations that are metacognitively conscious but not phenomenally
conscious, such as the case of hyper-blindsight in which unconscious
perceptual representations are reliably disposed to cause higher-order
thoughts of the right kind. Intuitively, the hyper-blindsighter has no
more introspective justification to form beliefs about her unconscious
perceptual representations than the super-blindsighter does. Certainly,
she has a reliable disposition to form true beliefs about her unconscious
visual representations, but this is not sufficient to make her beliefs
introspectively justified. By analogy, the super-blindsighter has a reli-
able disposition to form true beliefs about stimuli in the blind field, but
this is not sufficient to make them perceptually justified. So why should
we suppose that the hyper-blindsighter’s beliefs about her unconscious
visual representations are any more justified than the super-blindsight-
er’s beliefs about objects in the blind field? Therefore, metacognitive
consciousness is not sufficient for introspective accessibility.
In response to the generalization question, I propose the following
thesis:

The Phenomenal Introspection Thesis: One’s mental states are
introspectively accessible if and only if they are phenomenally
individuated.

On this proposal, phenomenal experiences and beliefs, unlike subdox-
astic states, are introspectively accessible in virtue of being individu-
ated by their relations to phenomenal consciousness. Phenomenal
experiences of judgment are introspectively accessible because they are
individuated by their phenomenal character, whereas beliefs are intro-
spectively accessible because they are individuated by their dispositions
to cause phenomenal experiences of judgment that are also introspec-
tively accessible. This is not to say that one’s introspective justification
for second-order beliefs about one’s beliefs has its source in phenom-
enally conscious judgments. On the contrary, one’s introspective justifi-
cation for second-order beliefs about one’s beliefs has its source in one’s
first-order beliefs themselves. These first-order beliefs are individuated
by their dispositions to cause phenomenally conscious judgments, but
these dispositions need not be manifested in order to have introspective
justification for second-order beliefs or to use it in holding introspec-
tively justified second-order beliefs.
Now, the following three claims form a coherent and mutually rein-
forcing package:

1. Introspective Mentalism: One’s introspectively accessible mental
states determine which doxastic attitudes one has justification to
hold.
2. The Phenomenal Introspection Thesis: One’s introspectively acces-
sible mental states are just one’s phenomenally individuated mental
states.
3. Phenomenal Mentalism: One’s phenomenally individuated mental
states determine which doxastic attitudes one has justification to
hold.

So we can argue for phenomenal mentalism by appealing to the phenom-
enal introspection thesis together with introspective mentalism, which
is itself a consequence of the independently motivated thesis of access
internalism.
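The argument just stated has the form of a simple substitution inference. As a sketch in my own notation (not the author’s): let D(S) abbreviate ‘states of kind S determine which doxastic attitudes one has justification to hold’, let I be the class of introspectively accessible mental states, and let P be the class of phenomenally individuated mental states.

```latex
\begin{array}{lll}
(1) & D(I)             & \text{Introspective Mentalism} \\
(2) & I = P            & \text{Phenomenal Introspection Thesis} \\
(3) & \therefore\ D(P) & \text{Phenomenal Mentalism, from (1), (2) by substitution}
\end{array}
```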
Can we further explain the connection between introspection and
phenomenal consciousness? Horgan and Kriegel (2007) explain this
connection by appealing to a more fundamental claim about the nature
of phenomenal consciousness – namely, that phenomenal conscious-
ness is self-presenting, in the sense that phenomenally conscious states
essentially represent themselves. In my view, however, this reverses
the correct order of explanation by assimilating the epistemic role of
phenomenal consciousness in introspection to a more general model
that applies also in the case of perception. My own view is that the
epistemic role of phenomenal consciousness in introspection is more
fundamental and explains its epistemic role in perception, memory and
testimony. That is why my explanatory strategy is modest rather than
ambitious. We can motivate the general connection between phenom-
enal consciousness and epistemic justification by appealing to a more
specific connection between phenomenal consciousness and its epis-
temic role in introspection, but we cannot motivate the connection in a
way that does not presuppose it at all.
Nevertheless, we can acquire some reflective understanding of the
connection between epistemic justification and phenomenal conscious-
ness by recognizing how it explains the truth of access internalism.
Moreover, we can use this reflective understanding to explain and justify
the intuitive judgments about cases that we began with. Given access
internalism, we can infer the conclusion that my envatted duplicate has
justification to form beliefs on the basis of perceptual experience from
the premise that he has justification to believe on the basis of intro-
spection and a priori reflection alone that he has justification to form
beliefs in this way. Similarly, we can infer the conclusion that my clair-
voyant duplicate lacks justification to form beliefs on the basis of blind
hunches or wishful thinking from the premise that he lacks justification
to believe that he has justification to form beliefs in this way. Likewise
for blindsighters, super-blindsighters and hyper-blindsighters.
These judgments are not simply brute deliverances of intuition but
can be regarded as consequences of an independently motivated theory
of justification. In this way, intuition and theory can be brought into
reflective equilibrium.24

Notes
1. The distinction between hard and easy problems was introduced by Chalmers
(1995), but the program of understanding the mind and our knowledge of
the external world without reference to phenomenal consciousness goes
back at least as far as Ryle (1949).
2. This is perhaps most clearly evident in the attempts to ‘naturalize’ intention-
ality in the work of Dretske (1981), Stalnaker (1984), Millikan (1984) and
Fodor (1987).
3. This is a defining feature of the ‘reliabilist’ tradition in epistemology that
includes the work of Armstrong (1968), Goldman (1979), Dretske (1981) and
Nozick (1981).
4. This is one consequence of ‘representationalism’ or ‘intentionalism’ in philos-
ophy of mind. Reductive representationalists, including Tye (1995), Dretske
(1995) and Lycan (1996), claim that the problem of explaining phenom-
enal consciousness is made easier by its connections with mental repre-
sentation, while non-reductive representationalists, including Horgan and
Tienson (2002) and Chalmers (2004), claim that the problem of explaining
mental representation is made harder by its connections with phenomenal
consciousness.
5. Other epistemologists who emphasize the role of perceptual experience in
explaining our knowledge of the external world include McDowell (1994),
Brewer (1999) and Pryor (2000).
6. See Smithies (2012a) for further discussion.
7. Reliabilist theories of justification are proposed by Goldman (1979), Sosa
(2003) and Bergmann (2006), although I cannot discuss the specific details
of their views here.
8. This is a variation on Cohen’s (1984) ‘new evil demon problem’.
9. The clairvoyance cases were originally proposed by BonJour (1980).
10. Proponents of mentalism include Conee and Feldman (2001) and Wedgwood
(2002). I focus on ‘current time-slice’ versions of mentalism rather than
‘historical’ versions: i.e., one’s mental states at a time determine which
doxastic attitudes one has justification to hold at that time.
11. Williamson (2000) endorses a factive version of mentalism on which one’s
evidence – and so which doxastic attitudes one has justification to hold – is
determined by one’s knowledge, which he claims to be the most general kind
of factive mental state.
12. Stich defines subdoxastic states as ‘psychological states that play a role in the
proximate causal history of beliefs, though they are not beliefs themselves’
(1978, 499). We can add the further stipulation that no subdoxastic states are
phenomenally conscious states.
13. See Smithies (2011a, 2011b) and Siegel and Silins (2014).
14. According to Block, the super-blindsighter is ‘trained to prompt himself at
will, guessing without being told to guess’ (1997, 385), but let us suppose
instead that he forms beliefs spontaneously without any need for self-
prompting.
15. Object-involving mental states raise similar problems and can be treated in
much the same way as factive mental states. I plan to discuss this in more
detail in future.
16. See Smithies (2012a, 2013) for a more detailed discussion and defence of this
account of the individuation of belief and judgment.
17. See Smithies (2013) for an overview and further references.
18. Proponents of intentionalism include Dretske (1995), Tye (1995), Lycan
(1996), Siewert (1998), Horgan and Tienson (2002) and Chalmers (2004).
19. The terminology of ‘content-specific’ and ‘attitude-specific’ phenomenal
properties is borrowed from Ole Koksvik (2011); see also Horgan and Tienson
(2002) for a related distinction between the phenomenology of intentional
content and the phenomenology of attitude type.
20. See Strawson (1994), Peacocke (1998), Siewert (1998), Horgan and Tienson
(2002) and Pitt (2004).
21. See Pautz (Ch. 8 in this volume) for discussion. I should note that while I
find this assumption plausible, I am not independently committed to it, so
my commitment to narrow intentionalism is conditional on the truth of this
assumption.
22. See Horgan and Tienson (2002) and Chalmers (2004) for versions of narrow
intentionalism on which some intentional properties are wide and Farkas
(2008) for a more uncompromising view on which all intentional properties
are narrow.
23. Compare Audi (2001) for a related proposal and Williamson (2007) for crit-
ical discussion. I plan to discuss this proposal in more detail elsewhere.
24. This chapter reworks some of the central ideas in my Ph.D. dissertation
(Smithies 2006) and draws on themes that I have developed in a series of
papers and plan to bring together in a monograph for Oxford University
Press with the provisional title ‘The Epistemic Role of Consciousness’. I have
presented these ideas at several venues over the past few years, including
ANU, Dubrovnik, Harvard, Melbourne, Ohio State, Fribourg, Northwestern,
MIT and the Pacific APA, as well as the Online Philosophy Conference for New
Waves in Philosophy of Mind. I am grateful for feedback on all of those occa-
sions and especially to John Campbell, David Chalmers, Elijah Chudnoff,
Terry Horgan, Geoff Lee, Susanna Siegel, Charles Siewert, Nico Silins and
Daniel Stoljar.

References
Armstrong, David (1968) A Materialist Theory of the Mind. London: Routledge.
Audi, Robert (2001) ‘An Internalist Theory of Normative Grounds’. Philosophical
Topics, 23, 31–45.
Bergmann, Michael (2006) Justification without Awareness: A Defense of Epistemic
Externalism. New York: Oxford University Press.
Block, Ned (1997) ‘On a Confusion about a Function of Consciousness’. In The
Nature of Consciousness: Philosophical Debates, edited by N. Block, O. Flanagan
and G. Guzeldere. Cambridge, MA: MIT Press.
BonJour, Laurence (1980) ‘Externalist Theories of Empirical Knowledge’. Midwest
Studies in Philosophy, 5(1), 53–74.
Brewer, Bill (1999) Perception and Reason. Oxford: Oxford University Press.
Chalmers, David (1995) ‘Facing Up to the Problem of Consciousness’. Journal of
Consciousness Studies, 2(3), 200–219.
Chalmers, David (2004) ‘The Representational Character of Experience’. In The
Future for Philosophy, edited by B. Leiter. New York: Oxford University Press.
Chomsky, Noam (1965) Aspects of the Theory of Syntax. Cambridge, MA: MIT
Press.
Cohen, Stewart (1984) ‘Justification and Truth’. Philosophical Studies, 46,
279–295.
Conee, Earl, and Richard Feldman (2001) ‘Internalism Defended’. American
Philosophical Quarterly, 38(1), 1–18.
Davidson, Donald (1986) ‘A Coherence Theory of Truth and Knowledge’. In Truth
and Interpretation: Perspectives on the Philosophy of Donald Davidson, edited by E.
LePore. Oxford: Blackwell.
Davies, Martin (1987) ‘Tacit Knowledge and Semantic Theory: Does a Five Percent
Difference Matter?’ Mind, 96, 441–462.
Davies, Martin (1989) ‘Tacit Knowledge and Subdoxastic States’. In Reflections on
Chomsky, edited by A. George. Oxford: Blackwell.
Dennett, Daniel (1969) Content and Consciousness. New York: Routledge.
Dretske, Fred (1981) Knowledge and the Flow of Information. Stanford, CA: CSLI.
Dretske, Fred (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Evans, Gareth (1981) ‘Semantic Structure and Tacit Knowledge’. In Wittgenstein:
To Follow a Rule, edited by S. Holtzmann and C. Leich. London: Routledge and
Kegan Paul.
Farkas, Katalin (2008) ‘Phenomenal Intentionality without Compromise’. Monist,
91(2), 273–293.
Fodor, Jerry (1987) Psychosemantics. Cambridge, MA: MIT Press.
Goldman, Alvin (1979) ‘What Is Justified Belief?’. In Justification and Knowledge,
edited by G. Pappas. Dordrecht: Reidel.
Gopnik, Alison, and Peter Graf (1988) ‘Knowing How You Know: Children’s
Understanding of the Sources of Their Knowledge’. Child Development, 59,
1366–1371.
Horgan, Terry, and John Tienson (2002) ‘The Intentionality of Phenomenology
and the Phenomenology of Intentionality’. In Philosophy of Mind: Classical and
Contemporary Readings, edited by D. Chalmers. New York: Oxford University
Press.
Horgan, Terry, and Uriah Kriegel (2007) ‘Phenomenal Epistemology: What
Is Consciousness That We May Know It So Well?’ Philosophical Issues, 17,
123–144.
Koksvik, Ole (2011) ‘Intuition’. Ph.D. diss., Australian National University.
Lycan, William (1996) Consciousness and Experience. Cambridge, MA: MIT Press.
Marr, David (1982) Vision: A Computational Investigation into the Human
Representation and Processing of Visual Information. New York: Freeman.
McDowell, John (1994) Mind and World. Cambridge, MA: Harvard University
Press.
Millikan, Ruth (1984) Language, Thought, and Other Biological Categories: New
Foundations for Realism. Cambridge, MA: MIT Press.
Nozick, Robert (1981) Philosophical Explanations. Cambridge, MA: Harvard
University Press.
Peacocke, Christopher (1998) ‘Conscious Attitudes, Attention, and Self-
Knowledge’. In Knowing Our Own Minds, edited by C. Wright, B. Smith, and C.
MacDonald. New York: Oxford University Press.
Pitt, David (2004) ‘The Phenomenology of Cognition; or, What Is It Like to Think
That P?’ Philosophy and Phenomenological Research, 69, 1–36.
Pryor, James (2000) ‘The Skeptic and the Dogmatist’. Nous, 34, 517–549.
Quine, Willard (1970) ‘Methodological Reflections on Current Linguistic Theory’.
Synthese 21(3–4), 386–398.
Rorty, Richard (1979) Philosophy and the Mirror of Nature. Princeton, NJ: Princeton
University Press.
Ryle, Gilbert (1949) The Concept of Mind. Chicago: Chicago University Press.
Shoemaker, Sydney (2009) ‘Self-Intimation and Second-Order Belief’. Erkenntnis,
71, 35–51.
Siegel, Susanna, and Nicholas Silins (2014) ‘Consciousness, Attention, and
Justification’. In Contemporary Perspectives on Skepticism and Perceptual Justification,
edited by E. Zardini and D. Dodd. Oxford: Oxford University Press.
Siewert, Charles (1998) The Significance of Consciousness. Princeton, NJ: Princeton
University Press.
Smithies, Declan (2006) ‘Rationality and the Subject’s Point of View’. Ph.D. thesis,
New York University.
Smithies, Declan (2011a) ‘What Is the Role of Consciousness in Demonstrative
Thought?’ Journal of Philosophy, 108(1), 5–34.
Smithies, Declan (2011b) ‘Attention Is Rational-Access Consciousness’. In
Attention: Philosophical and Psychological Essays, edited by C. Mole, D. Smithies
and W. Wu. New York: Oxford University Press.
Smithies, Declan (2012a) ‘The Mental Lives of Zombies’. Philosophical Perspectives,
26, 343–372.
Smithies, Declan (2012b) ‘Moore’s Paradox and the Accessibility of Justification’.
Philosophy and Phenomenological Research, 85(2), 273–300.
Smithies, Declan (2012c) ‘A Simple Theory of Introspection’. In Introspection and
Consciousness, edited by D. Smithies and D. Stoljar. New York: Oxford University
Press.
Smithies, Declan (2013) ‘The Nature of Cognitive Phenomenology’. Philosophy
Compass, 8(8), 744–754.
Smithies, Declan (2014) ‘Why Justification Matters’. In Epistemic Evaluation: Point
and Purpose in Epistemology, edited by D. Henderson and J. Greco. New York:
Oxford University Press.
Sosa, Ernest (2003) ‘Beyond Internal Foundations to External Virtues’. In Epistemic
Justification: Internalism vs. Externalism, Foundations vs. Virtues, edited by L.
BonJour and E. Sosa. Oxford: Blackwell.
Stalnaker, Robert (1984) Inquiry. Cambridge, MA: MIT Press.
Stich, Stephen (1978) ‘Beliefs and Subdoxastic States’. Philosophy of Science, 45,
499–518.
Strawson, Galen (1994) Mental Reality. Cambridge, MA: MIT Press.
Tye, Michael (1995) Ten Problems of Consciousness. Cambridge, MA: MIT Press.
Wedgwood, Ralph (2002) ‘Internalism Explained’. Philosophy and Phenomenological
Research, 65, 349–369.
Weiskrantz, Lawrence (1997) Consciousness Lost and Found: A Neuropsychological
Exploration. New York: Oxford University Press.
Williamson, Timothy (2000) Knowledge and Its Limits. New York: Oxford University
Press.
Williamson, Timothy (2007) ‘On Being Justified in One’s Head’. In Rationality and
the Good: Critical Essays on the Ethics and Epistemology of Robert Audi, edited by
M. Timmons, J. Greco and A. Mele. New York: Oxford University Press.
Wright, Crispin (2004) ‘Warrant for Nothing (and Foundations for Free)?’
Aristotelian Society Supplementary Volume, 78(1), 167–212.
Zimmerman, Aaron (2006) ‘Basic Self-Knowledge: Answering Peacocke’s Criticisms
of Constitutivism’. Philosophical Studies, 128, 337–379.
7
The Metaphysics of Mind and the Multiple Sources of Multiple Realizability
Gualtiero Piccinini and Corey J. Maley

Different structures can have the same function. The wings and feet
of insects, birds and bats have different structural properties, yet they
perform the same functions. Many important concepts and explana-
tions in the special sciences depend on the idea that the same func-
tion can be performed by different structures. For instance, in biology,
although both homologous and analogous structures of a given type
have the same function, only homologous structures of that type have
a common evolutionary history. These observations undergird the
concepts of homologous and analogous structures and the distinc-
tion between them: we cannot make sense of this important biological
distinction in any other way. Similar considerations seem to be true of
psychology. Different structures might well have the same psychological
function, particularly across species. The eye of an octopus might be
quite different from the eye of a human being, although both have the
same function.
Making sense of these phenomena is central to the discussion of
Multiple Realizability (MR). Originally, philosophical attention to MR
was focused on issues in the philosophy of mind; more recently, philos-
ophers have realized that MR is an important issue in the metaphysics
of science, particularly in the special sciences.
To a first approximation, a property P is multiply realizable if and only
if there are multiple properties P1, ... , Pn, each one of which can realize P,
and where P, P1, P2, ... , Pn are all distinct from one another. The idea that
mental properties are multiply realizable was introduced in the philos-
ophy of mind in the early functionalist writings of Putnam and Fodor
(Fodor 1968; Putnam 1960, 1967a). Since then, MR has been an impor-
tant consideration in favour of antireductionism in psychology and
other special sciences (e.g., Fodor 1974, 1997). Initially, the reductionist
resistance accepted MR, limiting itself to searching for ways to maintain
reductionism in the face of MR (e.g., Kim 1992). More recently, the tide
has turned.
Critics pointed out that MR had neither been clearly analysed nor
cogently defended. It became fashionable to deny MR, either about
mental properties or about any properties at all (Bechtel and Mundale
1999; Bickle 2003; Couch 2005; Keeley 2000; Klein 2008, 2013; Shagrir
1998; Shapiro 2000, 2004; Polger 2009; Zangwill 1992). But this recoil
is no more warranted than was the original intuitive appeal to MR.
One goal of this paper is to examine MR carefully enough to determine
which, if any, versions of MR occur.
Clarifying MR has several benefits. It contributes to the philosophy
of science and metaphysics in its own right. It sheds light on one of the
central issues in the philosophy of the special sciences. Finally, it helps
clarify one of the central issues of the mind-body problem.
In this essay, we analyse MR in terms of different mechanisms for
the same capacity, explore two different sources of MR and how they
can be combined, and conclude that both traditional reductionism and
traditional antireductionism should be abandoned in favour of an inte-
grationist perspective.

1 Troubles with multiple realizability

Discussions of MR are usually centred on intuitions about certain cases.
The prototypical example is the same software running on different
types of hardware (Putnam 1960). Another prominent class of examples
includes mental states, such as pain, which are supposed to be either real-
ized or realizable in very different types of creature, such as non-human
animals, Martians, robots and even angels (Putnam 1967a, 1967b).1 A
third class of examples includes artefacts such as engines and clocks,
which are supposed to be realizable by different mechanisms (Fodor
1968). A fourth class of examples includes biological traits, which may
be thought to be realized in different ways in different species (Block and
Fodor 1972). A final class of examples includes certain physical prop-
erties, such as being a metal, which are allegedly realized by different
physical substances (Lycan 1981).
On a first pass, it may seem that in all these examples the same high-
level property (being a certain piece of software, being a pain, etc.) is
realized by different lower-level properties (running on different types
of hardware, having different physiological states, etc.). For these intui-
tions to prove correct, at least two conditions must be satisfied. First,
the property that is putatively realized by the realizers must be the same
property, or else there wouldn’t be any one property that is multiply
realized. Second, the putative realizers must be relevantly different from
one another, or else they wouldn’t constitute multiple realizations of the
same property.
Unfortunately, there is little consensus on what counts as realizing
the same property or what counts as being relevantly different realizers
of that same property (cf. Sullivan 2008; Weiskopf 2011, among others).
To make matters worse, supporters of MR often talk about MR of func-
tional properties without clarifying which notion of function is in play
and whether the many disparate examples of putative MR are instances
of the same phenomenon or different phenomena. Thus, the notion of
function is often vague, and there is a tacit assumption that there is one
variety of MR. As a consequence, critics of MR have put pressure on the
canonical examples of MR and the intuitions behind them.
Three observations will help motivate our account. First, the proper-
ties (causal powers) of objects can be described with higher or lower
resolution – in other words, properties can be described in more specific
or more general ways (cf. Bechtel and Mundale 1999). When using a
higher-level description with high enough resolution, the set of causal
powers picked out by that description may be so specific that it has
only one lower-level realizer, so it may not be multiply realizable. When
using a higher-level description with lower resolution, the set of causal
powers picked out by that description might have many lower-level real-
izers but perhaps only trivially so; its multiple realizability might be an
artefact of a higher-level description construed so broadly that it has no
useful role to play in scientific taxonomy or explanation.
Consider keeping time. What counts as a clock? By what mechanism
does it keep time? How precise does it have to be? If we answer these
questions liberally enough, almost anything counts as a clock. The
‘property’ of keeping time might be multiply realizable but trivially so.
If we answer these questions more restrictively, however, the property
we pick out might be realized by only a specific kind of clock or perhaps
only one particular clock. Then, the property of keeping time would not
be multiply realizable. Can properties be specified so that they turn out
to be multiply realizable in a non-trivial way?
A second important observation is that things are similar and different
in many ways, not all of which are relevant to MR. For two things to
realize the same property in different ways, it is not enough that they
are different in just some respect or other. The way they are different
might be irrelevant: they might realize the same high-level property
by possessing the same relevant lower-level properties while possessing
different lower-level properties that contribute nothing to the high-level
property (cf. Shapiro 2000). This is especially evident in the case of prop-
erties realized by entities made of different kinds of material. Two other-
wise identical chairs may be made of different metals, woods or plastics.
Yet these two chairs may be instances of the same realizer of the chair
type, because the different materials may contribute the same relevant
properties (such as rigidity) to realization. The same point applies to the
property of being a metal. There are many kinds of metal, and this has
suggested to some that the property of being a metal is multiply realiz-
able. But this is so only if the property of being a metal is realized in
relevantly different ways. If it turns out that all metals are such in virtue
of the same realizing properties, then despite appearances, the property
of being a metal is not multiply realizable after all. A similar case is
differently coloured objects: two hammers made of the same material in
the same organization but differing only in their colour do not count as
different realizations of a hammer. More generally, given two things A
and B, which are different in some respects but realize the same property
P, it does not follow that A and B are different realizations of P and that P
is therefore multiply realizable.
A third observation is that the realization relation may be stricter or
looser. A heavily discussed example is that of computer programs. If all
it takes to realize a computer program is some mapping from the states
of the program while it runs once to the states of a putative realizer, then
most programs are realized by most systems (Putnam 1988). This result
is stronger than MR: it entails MR at the cost of trivializing it, because
MR is now a consequence of an intuitively unattractive result. Is there a
way of constraining the realization relation so that MR comes out true
and non-trivial? Is it desirable to find one?2
The easy way out of this conundrum is to deny MR. For example,
Shapiro (2000) argues that the notion of MR is so confused that nothing
can be said to be multiply realizable in any meaningful sense. For
Shapiro, a high-level property can be related to its realizers in one of two
ways. On the one hand, a property can be realized by entities with the
same relevant properties, as in chairs made of different metals. But these
do not count as multiple realizations of the property, because the differ-
ences between the realizers are irrelevant (as Shapiro puts it, the rele-
vant high-level law can be reduced to a lower-level law). On the other
hand, a high-level property can be realized by entities with different
relevant lower-level properties, as in corkscrews that operate by different
causal mechanisms. But then, Shapiro contends, the different causal
mechanisms are the genuine kinds, whereas the putative property that
they realize needs to be eliminated in favour of those different kinds. For
example, one should eliminate the general kind corkscrew in favour of,
say, the more specific kinds winged corkscrew and waiter’s corkscrew.

2 Multiple realizability regained

Rejecting MR is tempting but premature: a satisfactory account of MR
would be useful both in metaphysics and in the philosophy of the special
sciences (more on this below). We must specify three things: the proper-
ties to be realized, the properties to serve as realizers and the realization
relation. Furthermore, this must all be done in a way that allows proper-
ties to be multiply realizable without begging any questions.
We start with a minimal notion of a functional property as a capacity
or disposition or set of causal powers; this is to be explained by a minimal
notion of mechanistic explanation as explanation in terms of compo-
nents, their capacities and their organization.3 Notice that, unlike others
(e.g., Shoemaker 2007), we are not merely saying that properties are
individuated by causal powers or that properties ‘bestow’ causal powers;
for present purposes, we identify properties with sets of causal powers.4
A property to be realized is generally a functional property; that is, a
capacity or disposition or set of causal powers. Something’s causal powers
may be specified more finely or more coarsely. At one extreme, where
objects are described in a maximally specific way, no two things are func-
tionally equivalent; at the other extreme, where objects are described in
a maximally general way, everything is functionally equivalent to every-
thing else. Special sciences tend to find some useful middle ground,
where capacities and functional organizations are specified in such a
way that some things (but not all) count as functionally equivalent in
the sense of having the same capacity. The grain that is most relevant to
a certain level of organization appears to be the grain that picks out sets
of causal powers that suffice to produce the relevant phenomena (e.g.,
how much blood pumping is enough to keep an organism alive under
normal conditions, as opposed to the exact amount of pumping that is
performed by a given heart). We assume that this practice is warranted
and take it for granted.
A system’s functional properties are explained mechanistically in
terms of the system’s components, their functional properties and their
organization. The explaining mechanism is the set of components,
capacities and their organization that produce the capacity (disposition,
set of causal powers) in question. The same explanatory strategy iterates
for the functional properties of the components.
Claims of realization, multiple or not, presuppose an appropriate spec-
ification of the property to be realized and a mechanism for that prop-
erty. If the kind corkscrew is defined as a device with a part that screws
into corks and pulls them out, then winged corkscrews and waiter’s
corkscrews count as different realizations (because they employ different
lifting mechanisms). If it is defined more generally as something with a
part that only pulls corks out of bottles, then two-pronged ‘corkscrews’
(which have no screw at all but instead have two blades that slide on
opposite sides of the cork) also count as another realizer of the kind. If
it is defined even more generally as something that simply takes corks
out of bottles, then air pump ‘corkscrews’ count as yet another realizer
of the kind. Whether something counts as a realization of a property
depends in part on whether the property is defined more generally or
more specifically.
Someone might object that this seems to introduce a bothersome
element of observer relativity to functional descriptions. By contrast,
the objector says, the identification ‘water = H2O’ does not seem to have
the same kind of observer relativity. But this is a confusion. Functional
descriptions are neither more nor less observer-dependent than descrip-
tions of substances. Whether more fine- or coarse-grained, if they are
true, they are objectively true. It is both true of (some) corkscrews that
they pull corks out of bottles and that they pull corks out of bottles by
having one of their parts screwed into the cork: there is no observer
relativity to either of these facts. In fact, non-functional descriptions
can also be specified with higher or lower resolution. You can give more
or fewer decimals in a measurement, or you can get more or less specific
about the impurities present in a substance such as water. None of this
impugns the observer independence of the descriptions.
Multiple realization depends on mechanisms. If the same capacity of
two systems is explained by two relevantly different mechanisms, the
two systems count as different realizations. Shapiro (2000, 647) objects
that if there are different causal mechanisms, then the different proper-
ties of those causal mechanisms and not the putatively realized property
are the only real properties at work. A similar objection is voiced by
Kim (1992) and Heil (2003), who ask, what more is there to an object’s
possessing a given higher-level property beyond the object’s possessing
its lower-level realizing property?5
Our answer is that there is something less, not more, to an object’s
possessing a higher-level property. The worry that higher-level properties
are redundant disappears when we realize that higher-level properties
are subtractions of being, as opposed to additions to being, from lower-
level properties. Multiple realizability is simply the relation that obtains
when there are relevantly different kinds of lower-level properties that
realize the same higher-level property.
Lower-level properties may realize a lot of higher-level ones, none of
which are identical to any lower-level property. And different lower-level
properties may realize the same higher-level property without being
identical to it. For instance, storing a ‘1’ (as opposed to a ‘0’) within a
computer circuit is a high-level property of a memory cell that may be
realized by a large number of voltages (all of which must fall within a
narrow range; e.g., 4 ± 0.1 volts). Each particular voltage, in turn, may be
realized by an enormous variety of charge distributions within a capac-
itor. Each particular distribution of charges that corresponds to a ‘1’ is
a very specific property: we obtain a particular voltage by abstracting
away from the details of the charge distribution, and we obtain a ‘1’
by abstracting away from the particular value of the voltage. Thus, a
higher-level property is a (partial) aspect of a lower-level property.
That’s not to say that different charge distributions amount to multiple
realizations of a given voltage (in the present sense) or that different volt-
ages within the relevant range amount to multiple realizations of a ‘1’. On
the contrary, these are cases of differences in the realizers of a property
that do not amount to multiple realizations of that property but merely
variant realizers of that property. Many lower-level differences are irrel-
evant to whether a higher-level property is multiply realized. Multiple
realization is more than mere differences at the lower level. As we soon
argue, multiple realization of a property requires relevant differences in
causal mechanisms for that property.
The objection, à la Shapiro (2000), that the realizing properties, rather
than the realized properties, are doing all the work depends on a hierar-
chical ontology in which parts are prior to (i.e., more fundamental than)
wholes and therefore the properties of parts are prior to the properties of
wholes. This hierarchy may be reversed in favour of the view that wholes
are prior to parts (e.g., Schaffer 2010) and therefore the properties of wholes
are prior to the properties of parts. We reject both kinds of hierarchical
ontologies in favour of a neglected third option: an egalitarian ontology.
According to our egalitarian assumption, neither parts nor wholes are prior
to one another, and therefore neither the properties of parts nor the prop-
erties of wholes are more fundamental than one another.6
Our egalitarian assumption allows us to cut through the debate about
realization. According to the flat view (Polger 2007; Polger and Shapiro
132 Gualtiero Piccinini and Corey J. Maley
2008; Shoemaker 2007, 12), realization is a relation between two different
properties of a whole in which a subset of the realizing properties of that
whole constitute the causal powers individuative of the realized prop-
erty. By contrast, according to the dimensioned view (Gillett 2003, 594),
realization is a relation between a property of a whole and a distinct set
of properties/relations possessed (either by the whole or) by its parts,
such that the causal powers individuative of the realized property are
possessed by the whole ‘in virtue of’ (either it or) its parts possessing the
causal powers individuative of the realizing properties/relations.
There is something appealing about both the flat and the dimensioned
views of realization. The flat view makes multiple realization non-trivial
and provides a clear explanation of the relation between the realized
and realizing properties (the former’s powers are a subset of the latter).
This accounts for the intuitive idea that realizations of, say, corkscrews
are themselves corkscrews, even though corkscrews have properties irrel-
evant to their being corkscrews (e.g., their mass, colour or temperature).
At the same time, the dimensioned view connects different mechanistic
levels and thus fits well with multilevel mechanistic explanation, the
prevailing view of explanation about the things that motivated talk of
realization and MR in the first place. The firing of a single neuron is real-
ized (in part) by ions flowing through ion channels; clearly, the proper-
ties of the ions and the channels they move through are not properties
of the whole neuron, although they are constitutive of a property of the
whole neuron.
There is also something unattractive about both the flat and the
dimensioned views. The flat view makes it sound like realized proper-
ties are superfluous, since the realizing properties are enough to do all
the causal work. We might as well eliminate the realized property. On
the other hand, the dimensioned view is somewhat murky on how the
realized property relates to its realizers and suggests that (almost) any
difference in the realizers of a property is a case of multiple realization.
For example, as Gillett (2003, 600) argues, the dimensioned view entails
that two corkscrews whose only difference is being made of aluminum
versus steel count as different realizations, even though, as Shapiro
(2000) notes, the aluminum and the steel make the same causal contri-
bution to lifting corks out of bottles. Shapiro is right that if the differ-
ences between putative realizations of a property make no difference
to the ways in which they realize that property, then this is not MR.
Thus, the dimensioned view makes it too easy to find cases of multiple
realization. But if there are different causal mechanisms that realize a
property in different ways, then this is indeed genuine MR. On both
The Metaphysics of Mind 133
views, there are worries about causal or explanatory exclusion, whereby
causes or explanations at a lower level render causes or explanations at
a higher level superfluous (Kim 1998). On the flat view, what is left to
cause or explain if realizing properties provide causal explanations of
interest? And on the dimensioned view, what could the properties of
wholes cause or explain if the realizing properties of their parts consti-
tute the mechanism of interest?
Our egalitarian ontology allows us to accommodate what is appealing
about both the flat and dimensioned views without inheriting their
unattractive features:

Property P of object O is realized by properties and relations Q+R if
and only if Q+R belong to O’s components and P is a proper subset of
the causal powers of Q+R.

According to our egalitarian account of realization, realization is a
relation between a property (i.e., a set of causal powers) of a whole and the
properties and relations (i.e., a set of causal powers) of its component
parts (per dimensioned view). The relations between the components
are what give them organization; hence, realization is a relation between
a property of a whole and the properties of its component parts in an
organization. The realized property is nothing but a proper subset of the
causal powers possessed by the organized parts (per flat view, modulo
the appeal to parts and their organization).
The relation between realized property and realizing properties is clear:
it’s the proper subset relation (as per the flat view). The account fits like
a glove with the kind of mechanistic explanation that motivated this
dialectic in the first place (as per the dimensioned view). Whether there
is multiple realization remains a non-trivial matter, because it depends
on whether different mechanisms generate the relevant proper subsets
of their causal powers in relevantly different ways (more on this below).
And yet realized properties are not superfluous, because they are not
posterior to (nor are they prior to) the properties of their parts. They are
simply proper subsets of them.
There may appear to be a problem in the way we combine the subset
relation between different levels and an egalitarian ontology.7 Given
that, on the subset account, lower-level properties can do everything that
higher-level properties can do but not vice versa, it seems that they aren’t
on equal footing at all. The lower-level properties are more powerful, as
it were, than the higher-level ones. Hence, the lower-level properties
appear to be more fundamental. But consisting of fewer powers does
not entail being ontologically less fundamental. Higher-level properties
are just (proper) subsets of lower-level properties. A (proper) subset is
neither more nor less fundamental than its superset. It’s just a partial
aspect of its superset, as it were.
A proponent of the dimensioned view of realization might object to
the subset relation between realizing and realized property. According to
this objection, entities at different levels of organization are qualitatively
distinct because they have different kinds of properties and relations
that contribute different powers to them (Gillett 2002, 2010). Carbon
atoms do not scratch glass, though diamonds do; corkscrew handles do
not lift corks, though whole corkscrews do; and so on. If the lower-level
mechanisms are different from the higher-level ones, then the powers
are different; so we do not have a subset relation between powers but
qualitatively different powers at the different levels of organization.
This objection is a non sequitur. Sure, an individual carbon atom taken
in isolation cannot scratch. Nor can a bunch of individual carbon atoms
taken separately from one another. But large enough arrays of carbon
atoms held together by appropriate covalent bonds into an appropriate
crystalline structure do scratch: on our view, under appropriate condi-
tions, a (proper) subset of the properties of such an organized structure
of carbon atoms just is the scratching power of a diamond. This organ-
ized collection of atoms does many other things besides scratching –
including, say, maintaining the bonds between the individual atoms,
having a certain mass, reflecting and refracting electromagnetic radia-
tion and so on. By the same token, a single corkscrew lever, taken in
isolation, cannot lift corks. But corkscrew levers that are attached to
other corkscrew components in an appropriately organized structure, in
cooperation with those other components, do lift corks: under appro-
priate conditions, a (proper) subset of the properties of that organized
structure just is the property of lifting corks out of bottles. A whole’s
parts and their properties, when appropriately organized, do what the
whole and its properties do, and they do much more besides. Hence,
what the whole and its properties do is a (proper) subset of what the
parts and their properties do (when appropriately organized and taken
together).
Here our hypothetical proponent of the dimensioned view might
reply that we make it sound like there are (i) parts with their properties,
(ii) wholes with their properties and then (iii) a further object/property
hybrid, parts in an organization. But why believe in (iii)? An ontology
that includes (iii) is clearly profligate and therefore should be rejected
(cf. Gillett 2010).
This objection leads us squarely into the metaphysics of composition.
We have room only for a brief sketch. A whole can be considered in two
ways: as consisting of all its parts organized together or in abstraction
from its organized parts. Of course, the parts of a whole may change
over time. Nevertheless, when a whole is considered as consisting of all
of its parts organized together, at any time instant the whole is identical
to its organized parts. Hence, at any time instant, the organized parts are
nothing over and above the whole, and the whole is nothing over and
above the organized parts. The organized parts are no addition of being
over the whole (and vice versa). Since the whole and its organized parts
are the same thing, an ontology that includes organized parts is no more
profligate than an ontology that includes wholes.
But a whole can also be considered in abstraction from its organized
parts, such that the whole remains ‘the same’ through the addition,
subtraction and substitution of its parts. When a whole is considered in
abstraction from its organized parts, the whole is an invariant over loss,
addition or substitution of its (organized) parts. That is, a whole is that
aspect of its organized parts that remains constant when a part is lost,
added or replaced with another part (within limits). Thus, even when
a whole is considered in abstraction from its organized parts, a whole
is nothing over and above its organized parts. Rather, a whole is one
aspect of its organized parts – a subtraction of being from its organized
parts. Since wholes considered in abstraction from their organized parts
are less than their organized parts, positing wholes as well as organized
parts is not profligate.8
Take the example of a winged corkscrew. The property of lifting corks
out of bottles is a property of the whole object, and it is realized by the
parts (the worm, the lever arms, rack, pinions, etc.) in a particular organ-
ization (the rack connected to the worm, the pinions connected to the
lever arms, etc.). Those parts in that organization lift corks out of bottles,
and because lifting corks out of bottles is not, in any sense, a property over
and above what those parts in that organization do, we can also say
that those parts in that organization are a realization of a corkscrew. In
our egalitarian account, the existence of higher-level properties does not
entail that they are properties over and above their realizers. They are
aspects of their realizers – that is, (proper) subsets of the causal powers of
their realizers – that are worth singling out and focusing on. This notion
of a higher-level property, as well as the related notion of MR, is useful
for several reasons.
First, higher-level properties allow us to individuate a phenomenon of
interest, such as removing corks from bottles, which might be difficult or
impossible to individuate on the basis of lower-level properties of
corkscrews. Shapiro points out that in the case of corkscrews made of different
metals, rigidity screens off composition (Shapiro 2000). True enough.
And for something to be a functional corkscrew, it’s not enough that it
be rigid. It must be hard enough to penetrate cork but not so brittle that
it will break when the cork is lifted. Many different substances can be
used to build corkscrews, but their only insightful, predictive, explana-
tory, non-wildly-disjunctive specification is that they must lift corks out
of bottles (or do so in such-and-such a way).
Second, this notion of a higher-level property allows for the explanation
of the phenomenon in question in terms of a relevant property; for example,
we explain the removal of corks from bottles in terms of corkscrews’
power to remove corks from bottles rather than any of the other proper-
ties of corkscrews. And they explain what systems with that property
can do when organized with other systems (i.e., in a higher-level func-
tional context). Higher-level properties have non-trivial consequences –
for example, about what a system can and cannot do. To take just one
example, results from computability theory specify precise limits about
what can and cannot be computed by those things that realize various
kinds of automata (and Turing’s original universality and uncomput-
ability results themselves were the foundations of the field). Two things
that realize the same finite state automaton – even if they realize that
automaton in very different ways – will have the same computational
limits in virtue of their sharing that higher-level property.
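The shared-limits point can be sketched in code. The following is our illustration, not the authors': two structurally different Python realizations of the same two-state automaton (one table-driven, one a bare parity bit) behave identically on every input, and so share the same computational limits.

```python
# Our illustration (not from the text): two structurally different
# realizations of the same two-state automaton, which accepts exactly the
# binary strings containing an even number of 1s.

def even_ones_table(s):
    # Realization 1: an explicit transition table, consulted step by step.
    delta = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for ch in s:
        state = delta[(state, ch)]
    return state == "even"

def even_ones_parity(s):
    # Realization 2: no table at all, just a parity bit flipped in place.
    parity = 0
    for ch in s:
        parity ^= (ch == "1")
    return parity == 0

# Different mechanisms, identical computational behaviour and limits:
assert all(even_ones_table(s) == even_ones_parity(s)
           for s in ["", "0", "11", "1010", "111", "100110"])
```

Nothing about either implementation's inner workings shows up in the accept/reject behaviour; that behaviour is the shared higher-level property.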
Third, this notion of a higher-level property supports an informa-
tive taxonomy of systems that differ in their lower-level properties.
What these systems have in common, which is not revealed by listing
their lower-level properties, is a higher-level property. In other words,
although the lower-level properties that realize the higher-level prop-
erty in these different systems are different, they also have something
in common, and what they have in common is the aspect of the lower-
level properties that we call the higher-level property. In other words,
different lower-level properties realize the same higher-level property
when the different sets of causal powers that make them up have a
common subset of causal powers. The same subset (higher-level prop-
erty) may be embedded in different supersets (lower-level properties), as
illustrated in Figure 7.1.
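The subset picture can be rendered with Python sets; this is only a sketch, and the labelled causal powers below are invented for illustration.

```python
# A sketch of the subset account using Python frozensets of labelled
# causal powers. The power names are invented for illustration only.

# Two lower-level properties: supersets of causal powers.
Q_M = frozenset({"lift corks", "conduct heat", "resist rust", "gleam"})
Q_N = frozenset({"lift corks", "conduct heat", "float", "splinter"})

# The higher-level property T is the subset the supersets share.
T = Q_M & Q_N

# T is a proper subset of each superset: neither prior nor posterior to it.
assert T < Q_M and T < Q_N
print(sorted(T))  # ['conduct heat', 'lift corks']
```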
Finally, a higher-level property calls for its explanation to be provided
in terms of appropriate combinations of lower-level properties (i.e., by
mechanistic explanation). The dual role of higher-level properties as
explanantia of higher-level phenomena as well as explananda in terms
of lower-level properties adds to our understanding of a system.

Figure 7.1 A property T is multiply realized when different supersets of causal
powers Q(M) and Q(N) (lower-level properties) share the same subset (higher-level
property)

3 Sources of multiple realizability

MR has at least two sources. Each source is sufficient to give rise to MR,
but the two sources may also combine to yield a composite form of
MR.

3.1 Multiple realizability1: multiple organizations


The first kind of MR results when the same components exhibit the same
capacities but the organization of those components differs. Suppose
that a certain set of components, S1, when organized in a certain way,
O1, form a whole that exhibits a certain capacity. It may be that those
same components (S1) can be organized in different ways (O2, O3, ... )
and still form wholes that exhibit the same capacity. If so, then the
property is multiply realizable.
A simple example is what you can do with a round tabletop and
three straight bars of equal length. If the three bars are arranged so as to
support the tabletop in three different spots far enough from the table-
top’s centre, the result is a table – that is, something with the properties
of a table. Alternatively, two of the legs may be arranged to form a cross,
and the remaining leg may be used to connect the centre of the cross
to the centre of the tabletop. The result of this different organization is
still a table.
For this proposal to have bite, we need to say something about when
two organizations of the same components are different. Relevant
differences include spatial, temporal, operational and causal differ-
ences. Spatial differences are differences in the way the components
are spatially arranged. Temporal differences are differences in the way
the components’ operations are sequenced. Operational differences
are differences in the operations needed to exhibit a capacity. Finally,
causal differences are differences in the components’ causal powers
that contribute to the capacity and the way such causal powers affect
one another. Two organizations are relevantly different just in case
they include some combination of the following: components spatially
arranged in different ways, performing different operations or the same
operations in different orders, such that the way the causal powers
that contribute to the capacity or the way the causal powers affect one
another are different.
MR1 is ubiquitous in computer science. Consider two programs
(running on the same computer) for multiplying very large integers,
stored as arrays of bits. The first program uses a simple algorithm, such
as what children learn in school, and the second uses a more sophisti-
cated (and faster) algorithm, such as the Fast Fourier Transform. These
programs compute the same function (i.e., they multiply two integers)
using the same hardware components (memory registers, processor,
etc.). But the temporal organization of the components mandated by
the two programs differs considerably: many children could understand
the first, but understanding the second requires non-trivial mathemat-
ical training. Thus, the processes generated by the two programs count
as two different realizations of the operation of multiplication.
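This kind of MR1 can be sketched in code. The sketch below is our illustration; for brevity it uses Karatsuba's divide-and-conquer method as the sophisticated algorithm, standing in for the FFT-based method mentioned above.

```python
# Our illustration of MR1 in software: the same function (integer
# multiplication) computed by two differently organized algorithms.
# Karatsuba's method stands in for the faster FFT-based algorithm.

def schoolbook(x, y):
    # Realization 1: the shift-and-add method children learn in school.
    total, shift = 0, 0
    while y > 0:
        digit = y % 10
        total += digit * x * (10 ** shift)
        y //= 10
        shift += 1
    return total

def karatsuba(x, y):
    # Realization 2: divide and conquer, trading one recursive
    # multiplication for extra additions.
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    m = 10 ** n
    a, b = divmod(x, m)
    c, d = divmod(y, m)
    ac, bd = karatsuba(a, c), karatsuba(b, d)
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * m * m + cross * m + bd

# Same function, same hardware components, different temporal organization:
assert schoolbook(12345, 6789) == karatsuba(12345, 6789) == 12345 * 6789
```

The two functions compute the same product on the same machine; only the sequencing of operations over the components differs.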
The notion of MR1 allows us to make one of Shapiro’s conclusions
more precise. Shapiro (2000) is right that components made of different
materials (e.g., aluminum vs steel) need not count as different realiza-
tions of the same property (e.g., lifting corks out of bottles) because they
contribute the same property (e.g., rigidity) that effectively screens off
the difference in materials. But this is true only if the different sets of
components are organized in the same way. It is important to realize
that if two sets of components that are made of different materials (or
even the same material) give rise to the same functional property by
contributing the same properties through different functional organiza-
tions, those are multiple realizations of the same property.
3.2 Multiple realizability2: multiple component types


The second kind of MR results when sets of different component types
are organized in the same way to exhibit the same capacity. By different
components, we mean components with different capacities or func-
tional properties (i.e., components of different kinds). Again, suppose
that a certain set of components, S1, when organized in a certain way,
O1, form a whole that exhibits a certain property. It may be that a set of
different components, S2, can be organized in the same way O1 and yet
still form a whole that exhibits the same property.
As in the case of different organizations mentioned above, we need
to say something about when two components belong to different
kinds. Two components are different in kind just in case they contribute
different causal powers to the performance of their capacity. Here is a
simple test for when two components contribute the same or different
causal powers: to a first approximation, if two components of similar size
can be substituted for one another in their respective systems without
losing the capacity in either system, then these components contribute
the same causal powers. If not, then they contribute different causal
powers.9
For example, consider two pendulum clocks with the same organiza-
tion, where one has gears made of metal and the other has gears made
of wood. We could take a gear from the wooden clock and its counter-
part from the metal one and switch them. Assuming the gears match in a few
incidental details (their sizes are the same, the number of teeth in each gear is
the same, etc.), both clocks would work as before, and thus these gears
contribute the same causal powers. But of course this would not work if
we were to swap a quartz crystal from a digital clock with the pendulum
of a cuckoo clock: these components contribute different causal powers.
Neither of these is a case of MR2. The pair of pendulum clocks is not a
case of MR at all, whereas the pendulum and quartz pair is a case of MR3
(see below).
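The substitution test can be given a toy rendering in code. This is our construction, not the authors': each component is tagged with the causal contribution it makes, and a clock works only if its driver supplies the contribution the rest of the mechanism consumes.

```python
# A toy rendering of the substitution test (our invention). The "drive"
# labels stand in for whatever causal contribution a component makes.

def clock_works(driver_output, mechanism_input):
    # A clock keeps time iff its driver supplies what the mechanism needs.
    return driver_output == mechanism_input

metal_gear_drive = "periodic mechanical push"
wood_gear_drive = "periodic mechanical push"   # same causal contribution
quartz_drive = "periodic electrical pulse"     # different contribution

# Swapping the wooden and metal gears leaves both clocks working ...
assert clock_works(wood_gear_drive, "periodic mechanical push")
assert clock_works(metal_gear_drive, "periodic mechanical push")
# ... but a quartz crystal cannot stand in for the pendulum-driven train:
assert not clock_works(quartz_drive, "periodic mechanical push")
```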
An example of MR2 is a system of signs for communicating messages,
such as Morse code. A longer and a shorter sound may be used as Morse
signalling units, and so may a louder and a quieter sound or sounds
of different pitches. Other physical media can be used as well, such as
various kinds of electromagnetic radiation or a sequence of wooden
rods of two distinct lengths. In each of these cases, the components that
realize the system are such that the recombination technique mentioned
above would not work.
Computer science once again provides numerous examples of MR2.
Consider computer organization, which is concerned with designing
processors (and processor components) to implement a given instruc-
tion-set architecture (the low-level, basic instructions a processor
is capable of performing). This is all done at the level of digital logic
design, in which the most basic (or atomic) components of the design
are individual logic gates. Logic gates can be realized in different ways,
meaning that a particular computer organization is not specific to how
the components are realized. For example, a logic gate might be real-
ized by a silicon circuit in one computer, a gallium-arsenide circuit in
another, relays in a third, mechanical gears in a fourth and so forth. And
in each of these cases, our test for differences in causal powers is passed
(or failed, as it were): replacing a mechanical gear with a silicon circuit
will result in a non-working computer.
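The MR2 pattern here can be sketched with an invented half-adder example: the gate-level organization (sum = XOR of the inputs, carry = AND of the inputs) is held fixed, while the gates themselves are realized by two different 'technologies'.

```python
# A sketch of MR2 (our example): one circuit organization, two different
# realizations of its component gates.

def half_adder(a, b, AND, XOR):
    # The organization: which gates are wired to which inputs and outputs.
    return XOR(a, b), AND(a, b)

# Technology 1: gates as Boolean operations.
bool_and = lambda a, b: int(a and b)
bool_xor = lambda a, b: int(a != b)

# Technology 2: gates as threshold arithmetic, crudely mimicking circuits.
arith_and = lambda a, b: int(a + b >= 2)
arith_xor = lambda a, b: int(a + b == 1)

# Same organization, different component realizations, same capacity:
for a in (0, 1):
    for b in (0, 1):
        assert half_adder(a, b, bool_and, bool_xor) == \
               half_adder(a, b, arith_and, arith_xor)
```

The wiring (`half_adder`) never changes; only what realizes the gates does, which is exactly the MR2 pattern.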
A biological case of MR2 is the phenomenon of circadian rhythms. A
diverse set of organisms exhibit circadian rhythms, and the organiza-
tion of the systems responsible for generating the oscillations constitu-
tive of circadian rhythms is the same.10 For a large class of organisms,
circadian rhythms are driven by a transcriptional/translational feedback
loop (TTFL). As described by Dunlap (1999, 273), ‘circadian oscillators
use loops that close within cells ... and that rely on positive and negative
elements in oscillators in which transcription of clock genes yields clock
proteins (negative elements) which act in some way to block the action
of positive element(s) whose role is to activate the clock gene(s)’. Some
organisms violate this generalization, but the generalization remains
true of many kinds of organisms, including plants, animals and fungi
(see Buhr and Takahashi [2013] for a recent review).11 Thus, we have
the same organization, even though the clock proteins and clock genes
are different in different species. And as before, our causal-difference
test yields the correct result: replacing a gene in the circadian rhythm
mechanism from one species with a gene in a different species will
almost certainly result in an organism without a circadian rhythm, even
though the two genes play the same role in their respective organisms.

3.3 Multiple realizability3: multiple component types in multiple organizations

A third kind of MR is the composition of MR1 and MR2: different compo-
nents with different organizations. As before, suppose that a certain set of
components, S1, when organized in a certain way, O1, form a whole that
exhibits a certain property. It may be that a set of different components,
S2, can be organized in another way, O2, different from O1, and yet still
form a whole that exhibits the same property.
MR3 is probably the typical case of MR and the one that applies to
most of the standard examples described by philosophers. One much-
discussed example that we’ve already mentioned is the corkscrew.
Proponents of both the flat and dimensioned views usually agree that a
waiter’s corkscrew and a wing corkscrew are multiple realizations of the
kind corkscrew. The components of the two corkscrews are different, as is
the organization of those components: a waiter’s corkscrew has a folding
piece of metal that serves as a fulcrum, which is sometimes hinged, and
often doubles as a bottle opener; a winged corkscrew has a rack and
pinion connecting its levers to the shaft of the screw (or worm). So both
the components differ and their organizations differ.
Eyes – another much-discussed case – follow a similar pattern. Several
authors have noted that there are many different ways in which compo-
nents can be organized to form eyes (e.g., Shapiro 2000), and many
different kinds of components that can be so organized (e.g., Aizawa and
Gillett 2011). The same pattern can be found in many other examples,
such as engines, mousetraps, rifles, sanders and so on. We’ll mention
just one more.
Computer science provides examples of systems that exhibit MR3.
In some computers, the logic circuits are all realized using only NAND
gates, which, because they are functionally complete, can implement
all other gates. In other computers, the logic circuits are realized using
only NOR gates. In still other computers, AND, OR and NOT gates might
realize the logic circuits. The same computer design can be realized by
different technologies and can be organized to perform the same compu-
tations in different ways. Computers are anomalous because their satis-
factions of MR1 and MR2 are independent of one another (more on this
below), whereas in most cases, the two are mixed inextricably because
the different sets of components that form two different realizations can
exhibit the same capacity only by being organized in different ways.
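The NAND-versus-NOR point can be made concrete. Below is a sketch (using the standard four-NAND and five-NOR constructions) of the same logical function, XOR, built from NAND gates only and from NOR gates only: different component types in different arrangements, yet the same higher-level behaviour.

```python
# Two realizations of XOR from functionally complete gate sets.
nand = lambda a, b: int(not (a and b))
nor = lambda a, b: int(not (a or b))

def xor_from_nand(a, b):
    # Classic four-gate NAND construction.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def xor_from_nor(a, b):
    # Classic five-gate NOR construction.
    g1 = nor(a, b)
    g2 = nor(a, g1)
    g3 = nor(b, g1)
    g4 = nor(g2, g3)
    return nor(g4, g4)  # final NOR wired as an inverter

# Different components, different organization, same behaviour (MR3):
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nand(a, b) == xor_from_nor(a, b) == (a ^ b)
```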

3.4 An egalitarian account of multiple realizability


What we have said so far should be enough to answer Shapiro’s (2000)
challenge to complete the sentence ‘N and M are distinct realizations
of T when and only when ___.’ Let N be the set of properties Q(N) and
organizational relations R(N) that realize T in object O1, and let M be the
set of properties Q(M) and organizational relations R(M) that realize T in
object O2. That is to say, Q(N) + R(N) mechanistically explains T in O1,
whereas Q(M) + R(M) mechanistically explains T in O2. Then, N and M are
distinct realizations of T when and only when

1. Q(N) + R(N) and Q(M) + R(M) are at the mechanistic level immediately
below T;
2. One of the following is satisfied:
a. [MR1] Q(N) = Q(M) but R(N) ≠ R(M)
b. [MR2] Q(N) ≠ Q(M) but R(N) = R(M)
c. [MR3] Q(N) ≠ Q(M) and R(N) ≠ R(M).

In other words, a property is multiply realized just in case there are
at least two different causal mechanisms realizing it at the immediately
lower mechanistic level. To find out whether a property is multiply real-
ized, proceed as follows: Fix the higher-level property in question by
identifying a relevant set of causal powers. Find the immediately lower-
level mechanisms realizing different instances of the property. Figure
out whether the lower-level mechanisms have different components or
different organizations (or both). If and only if they do, you have a case
of MR.
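The decision procedure just described can be summarized as a small function. This is only a sketch: Q stands for the set of component kinds and R for the set of organizational relations at the mechanistic level immediately below the property T, and the example values are invented.

```python
# The classification in conditions (1)-(2) above, sketched as code.

def mr_kind(Q_N, R_N, Q_M, R_M):
    """Classify two realizers of the same property T (both assumed to sit
    at the mechanistic level immediately below T)."""
    if Q_N == Q_M and R_N != R_M:
        return "MR1"   # same components, different organization
    if Q_N != Q_M and R_N == R_M:
        return "MR2"   # different components, same organization
    if Q_N != Q_M and R_N != R_M:
        return "MR3"   # both differ
    return None        # same mechanism: not multiple realization

# Stools with different numbers of legs: same kinds, same organization.
assert mr_kind({"leg", "seat"}, {"legs support seat"},
               {"leg", "seat"}, {"legs support seat"}) is None
# Waiter's vs winged corkscrew: components and organization both differ.
assert mr_kind({"worm", "hinged fulcrum"}, {"lever against bottle lip"},
               {"worm", "rack", "pinion"}, {"rack and pinion drive worm"}) == "MR3"
```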
This account of MR bears similarities to Aizawa and Gillett’s (2009,
2011) account but with two crucial differences. First, we rely on our egal-
itarian account of realization rather than Aizawa and Gillett’s dimen-
sioned view. Second, not all lower-level differences count as cases of
MR, even if such differences occur between realizer properties at the
same level; the lower-level differences must amount to differences in the
component kinds or in the way the components are organized (or both)
at the mechanistic level immediately below T. Thus we rule out many
‘easy’ cases of MR that Aizawa and Gillett accept.
For example, corkscrews of different colours are not a case of MR
because colour is mechanistically irrelevant to lifting corks. Here Aizawa
and Gillett agree with us. But contra Aizawa and Gillett, on our view a
steel corkscrew and an aluminum corkscrew are not multiple realiza-
tions of the kind corkscrew because steel versus aluminum composition
is not the mechanistic level immediately below lifting corks; that is,
being made of one metal or another is not a relevant aspect of the level
that explains how the corks are lifted. But rigidity is a relevant aspect of
that level, and many different kinds of material are rigid. Here we agree
with Shapiro: rigidity screens off composition.
Equally important is that changes in the number of components or
their rate of functioning or their arrangement (while preserving compo-
nent and organization type) may not amount to MR. Consider a three-
legged stool versus a four-legged stool versus a five-legged stool ... versus
an n-legged stool. Are these cases of MR? For Aizawa and Gillett, they are.
But there is no independent reason, besides Aizawa and Gillett’s account
of MR, to think so. Sure, there are differences between stools with a
different number of legs. But they do not require significantly different
mechanistic explanations, because they all rely on the same kinds of
components organized in (roughly) the same way: an n-legged stool has
approximately 1/nth of its weight supported by each leg, supposing the
legs are distributed equidistantly on the outside edge of the seat. Thus,
this is not a case of MR, and our account of MR accommodates this
fact. By the same token, so-called MR by compensatory adjustments
(Aizawa 2012), in which quantitative changes in the properties of some
components are compensated by quantitative changes in the properties
of other components, is not MR properly so called.
While some lower-level differences are clear cases of MR, other lower-
level differences are clearly not cases of MR. As it often happens, between
the clear cases at the two extremes of a continuum there is a grey area.
Our account works well for the clear cases in which MR occurs or fails to
occur, and that’s all that we hoped to accomplish.

4 Multiple realizability and levels of organization

MR can iterate through a system’s levels of organization. To illustrate,
start with a pump, a prototypical example of a functionally defined
artefact. One way to realize a pump is to have a device with chambers
and moving parts to fill and empty the chambers. A chamber is, among
other things, something that won’t leak too much; to make one, you
need materials with a certain permeability. The materials need other
properties, such as a degree of elasticity. Many materials have these
properties, but that is beside the point. The point is that there are certain
properties of materials that are relevant to their being impermeable to
certain liquids and elastic enough to move properly without breaking.
Presumably this requires molecules with certain functional properties,
organized together in appropriate ways. Perhaps there are many relevant
properties of molecules that can be exploited for this purpose (using the
same organizing principle); this is MR2. And perhaps there are many
kinds of organizations that can be exploited for this purpose (organizing
the same property of molecules); this is MR1. And perhaps the two kinds
of MR can be combined.
The process can iterate as follows. The properties of molecules may be
realized by atoms with different properties organized in one way (MR2)
or by atoms with the same properties organized in different ways (MR1)
or both (MR3). The properties of atoms may be realized by subatomic
particles with different properties organized in one way (MR2) or by
particles with the same properties organized in different ways (MR1) or
both (MR3).
Another example is a neuron’s capacity to fire, or generate an action
potential. Is this multiply realized? No: in every case, neural firing is
explained by the same mechanism, the movement of ions into and out
of the axon through ion channels. Are ion channels multiply realized?
Yes, at least in some cases. For
instance, there are two kinds of potassium ion channels, the voltage-
gated channels and the calcium-activated channels, and the capacity
to selectively allow ions to pass through the channel is explained by
different mechanisms in each case. Because they have different compo-
nents (one has a voltage ‘sensor’, the other has a calcium ‘sensor’)
organized in different ways, this is a case of MR3. Now take the voltage-
gated potassium ion channels. Are they multiply realized? It seems not,
although there are variations: there are differences in the molecules that
constitute the structure allowing these channels to inactivate, but they
seem to operate in the same way (although research into these structures
is a hot topic in neuroscience as of this writing; see Jensen et al. [2012]
for just one example).
Where does MR stop? Either at a level where there is no MR, because
there is only one kind of component and one kind of organization that
realizes a certain property, or at the level of the smallest physical compo-
nents (if there is one).

5 Multiple realizability and computation

Computing systems are an especially good case study for MR. Computers
exhibit MR of computed function by algorithm, of algorithm by program
(using different programming languages), of program by memory loca-
tions, of memory locations (and processing of programs) by architec-
ture, of architecture by technology.
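The first of these layers, MR of a computed function by algorithm, can be made concrete with a minimal sketch (an illustration under our own choice of function, not an example from the text): the same function, addition over non-negative integers, computed by two different algorithms with identical input-output behaviour.

```python
# Two realizations of the same computed function (addition over
# non-negative integers) by different algorithms: a successor-style
# loop and a bitwise ripple-carry procedure. The lower-level
# organization differs; the computed function is identical.

def add_successor(a: int, b: int) -> int:
    """Compute a + b as b iterated increments of a."""
    result = a
    for _ in range(b):
        result += 1
    return result

def add_bitwise(a: int, b: int) -> int:
    """Compute a + b using XOR for sum bits and AND for carry bits."""
    while b != 0:
        carry = (a & b) << 1   # positions where a carry is generated
        a = a ^ b              # bitwise sum, ignoring carries
        b = carry              # propagate the carries
    return a
```

Either procedure realizes the addition function; which algorithm does so (and, a level down, which programming language, memory layout, architecture or technology realizes the algorithm) makes no difference to the function computed.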
As we pointed out above, computing systems exhibit all forms of
MR. You can realize the same computation type using the same compo-
nent types arranged in different ways (MR1), different component types
arranged in the same way (MR2), as well as different component types
arranged in different ways (MR3). When most systems exhibit MR3,
it’s because different components can only exhibit the same capacity
by being organized in different ways. By contrast, computing systems
are such that the same components can be organized in different ways
The Metaphysics of Mind 145

to exhibit the same computational capacity, different components can
be organized in the same way to exhibit the same computational
capacity and, as a consequence, different components can be organized
in different ways to exhibit the same computational capacity. In other
words, in computing systems the two aspects of MR3 are independent of
one another. Why is that? Is there something special about computing
systems?
Computing systems are special because their capacities can be speci-
fied solely in terms of relations between portions of the vehicles (inputs,
outputs, internal states) they manipulate along certain dimensions of
variation, without specifying any further constraints on or properties
of the vehicles. In other words, the capacities of computing systems are
independent of the physical medium in which the computations are
realized (Piccinini and Scarantino 2011).
This medium independence of computing systems explains why they
exhibit a stronger form of MR than other systems do. In the case of
ordinary systems, their capacity puts specific physical constraints on the
components and organizations that can be used to fulfil that capacity.
For instance, although there are many ways to build a mousetrap, there
is a specific physical constraint on any system of organized components
that is purported to be a mousetrap: it must catch mice.12 By contrast,
a computing system – for example, a digital adder – need not have
any specific physical effect. All it needs to do is possess components
with enough degrees of freedom to exhibit the relevant computational
states and be organized in one of the many ways that yields sums out
of addends.

6 Multiple realizability and laws

Shapiro (2000, 2004) claims that functional properties do not enter into
laws other than dull, analytic laws like ‘all corkscrews have the function
of removing corks’. But worrying about laws may be beside the point.
Laws do not seem to be of primary (or even of any) importance to many
special sciences; hence the move from models of scientific explanation
such as the deductive-nomological and inductive-nomological models
to models of explanation specific to the special sciences (Craver 2007;
Salmon 1989). The domains of the special sciences abound in functional
properties, however, so it is important to see where (and how) they fit.
First, as Shapiro notes, functional properties do enter into explana-
tions; in particular, they often enter into non-analytic generalizations.
For human artefacts, we start with a goal, and we try to build something
that accomplishes it. So our functional generalizations are a consequence
of their being about artefacts. But for biological systems, functions are
empirically discovered. It was a discovery – a momentous one – that
hearts have the capacity to pump. So the generalization that hearts have
the capacity to pump or that eyes have the capacity to convert scattered
ambient light into coherent neural images is not at all analytic, and
neither are generalizations about how hearts and eyes manage to work
in virtue of their functional organizations.
Second, these generalizations are not dull. On the contrary,
they unify lower-level phenomena (e.g., ‘Do all these morphologically
different structures have anything in common? Yes, they have the
capacity to pump.’) and explain higher-level phenomena (e.g., ‘How
does the blood circulate in the body? It is pumped from the veins into
the arteries by the heart.’).
Third, we are concerned with functional ascriptions, which cannot be
turned into laws without having exceptions and ceteris paribus clauses.
As Rosenberg notes, even if there are few or no laws involving func-
tional properties, it doesn’t follow that these properties are not explana-
tory, unless we add the premise that laws are necessary for explanation
(Rosenberg 2001). But there is consensus that this premise is false, and
the relevant alternative is mechanistic explanation. The reason why
certain vehicles have the ability to move is that they are driven by
engines, and the reason why they do not move under certain conditions
is that their engines are broken. This is true regardless of which kind of
engine is driving any particular vehicle and whether there are any laws
that apply to all engines (which of course there are, thermodynamic and
gravitational laws at the very least; but this is beside the point). Other
(or more detailed) mechanistic explanations will invoke the functional
organization of the components.

7 Consequences for reductionism and antireductionism

One original motivation for focusing on MR was to rule out reductionism,
thereby ensuring the autonomy of the special sciences (Fodor
1974). Reductionists retorted that either MR fails to occur or, if it does
occur, it fails to undermine reductionism. While we have argued that
MR is a genuine phenomenon and articulated its sources, we will now
draw what is perhaps the most surprising moral of our story: neither
reductionism nor autonomy (in most of their traditional guises) holds.
To be sure, some forms of reduction and autonomy are sufficiently
weak that they are compatible with one another and relatively
uncontroversial. On the reductionist side, we agree that every concrete
object is made out of physical components and the organized activities
of a system’s components explain the activities of the whole. On the
autonomist side, we agree that higher-level sciences can choose (i) which
phenomena to explain, (ii) which observational and experimental tech-
niques to use, (iii) which vocabulary to adopt and (iv) some of the ways
in which evidence from other fields constrains its explanations (Aizawa
and Gillett 2011). The genuine controversy between reductionism and
antireductionism lies elsewhere, and that’s what we turn to now.
The form of reductionism most commonly discussed in this context
is the identity of higher-level properties with their lower-level realizers. As
we’ve argued, when MR occurs, there is no such identity. Rather, higher-level
properties are proper subsets of the causal powers of their lower-level realizers.
Other traditional forms of reductionism include the derivation of
higher-level theories from lower-level theories and of higher-level laws
from lower-level laws. As we’ve also seen, the special sciences rarely, if
ever, produce the kinds of theories or discover laws that lend themselves
to derivation of a higher-level one from a lower-level one. MR may well
be a reason for that. Contrary to what both reductionists and many
antireductionists maintain, laws play little, if any, role in the special
sciences, and their being reducible or irreducible is of little importance
(although there are a few defenders of the role of laws, e.g., Mitchell
2002). As we said above, discovering and articulating mechanisms, rather
than discovering and articulating laws, is the most valuable model of
explanation for the biological and psychological sciences and perhaps
the special sciences in general (Craver 2007; Machamer, Darden, and
Craver 2000).
But the failure of traditional reductionism is no solace to the tradi-
tional antireductionist. For traditional antireductionism is predicated
on the autonomy of higher-level sciences and their explanations from
lower-level ones. MR, properly understood, provides no support to what
is traditionally understood as autonomy.
The most common form of autonomy discussed in this literature is the
distinctness of higher-level properties from lower-level ones. As we have
seen, higher-level properties are not identical to their lower-level realizers
(contra reductionism). But that doesn’t mean higher-level properties are
distinct from their lower-level realizers in the sense of being ‘additions
of being’ to them, as distinctness is usually understood. Rather, higher-
level properties are subtractions of being, as it were, from their lower-
level realizers. Higher-level properties are proper subsets of the causal
powers of their lower-level realizers. What remains after subtracting an
appropriate subset of causal powers from lower-level (organized sets of)
properties is the higher-level properties they realize.13 (This is not to
say that lower-level properties are ontologically more fundamental than
higher-level ones; they are ontologically on a par.)
Another popular form of autonomy is the underivability of higher-
level theories or laws from lower-level ones. While we agree that there is
no such derivability, the reason is simply that the relevant kind of law
and theory play little or no role in the special sciences. Thus, we reject
this kind of autonomy as irrelevant. What we propose instead is the
integration of explanations at different mechanistic levels into multi-
level mechanistic explanations (Piccinini and Craver 2011).14

8 Conclusion

We have argued that multiple realizability is worth taking seriously,
despite some of the difficulties it has encountered. Reframing multiple
realizability within an egalitarian ontology in a way that comports
well with mechanistic explanation offers a more fruitful way forward
than recent debates about whether realization is best understood as
dimensioned or flat. We have offered an egalitarian account of realiza-
tion that captures the best features of both views while avoiding their
difficulties.
In our view, parts are neither prior to nor posterior to wholes; and
properties of parts are neither prior to nor posterior to properties of
wholes. Rather, properties are sets of causal powers; properties of wholes
are proper subsets of the causal powers of the properties of the organ-
ized parts. Realization is a relation between a property of a whole and
the properties and relations of its parts; thus, the realized property is
nothing but a proper subset of the causal powers possessed by the organ-
ized parts.
Multiple realizability is significant in helping us identify the correct
account of interlevel property relations; namely, the version of the
subset view that is sketched in our paper. What MR shows is that the
same set of causal powers (i.e., higher-level property) can be a subset of
different supersets (i.e., lower-level properties).
According to our account, the multiple realizability of a property can
occur in three ways: the components responsible for that property can
differ while the organization of the components remains the same, the
components can remain the same while the organization differs, or both
the components and the organization can differ. By adopting an egali-
tarian ontology, our account captures the kinds of multiple realizability
of interest to the special sciences without metaphysically prioritizing
either parts or wholes. Our view is ontologically serious without being
metaphysically mysterious, and it fits well with mechanistic explana-
tion in the special sciences.15

Notes
1. We are going to assume that everything is physical and leave angels aside.
2. The thesis that most programs are realized by most physical systems is
discussed and refuted in Piccinini (2010).
3. For an account of functions properly so called – teleological functions – and
their relations to capacities or causal powers, see Garson and Piccinini (2013)
and Maley and Piccinini (forthcoming).
4. There may well be more to properties than causal powers – e.g., properties
may have a qualitative aspect (Heil 2003) – but we ignore this possibility
here.
5. Heil argues that MR is a phenomenon pertaining to high-level predicates in
virtue of different underlying properties of the entities to which the predi-
cates apply. Strictly speaking, for Heil there are no high-level properties and
a fortiori no high-level properties that are multiply realizable.
6. Unfortunately we lack the space to further articulate and defend our egali-
tarian ontology. We hope to do so in future work.
7. Thanks to Eric Funkhouser for raising this potential objection.
8. We hope to articulate this sketch of an ontology of composition at greater
length in future work.
9. What about components of different size, such as two levers from two winged
corkscrews, one of which is twice the size of the other? In this case, we may
have to adjust the scale of the components before recombining them. See
Wimsatt (2002) for a detailed discussion of functional equivalence, isomor-
phism and similarity.
10. Thanks to Sarah K. Robins for suggesting this example.
11. Thanks to Bill Bechtel for helpful comments on this point.
12. What about malfunctioning mousetraps? They belong to a type that catches
mice. What about putative mousetraps that are so ill designed that they can
never catch any mice? They are not truly mousetraps. For more on this kind
of case, see Maley and Piccinini (forthcoming).
13. Some of these points are made independently by Alyssa Ney (2010).
14. Yet another form of autonomy is the lack of direct constraints between high-
er-level explanations and lower-level ones. As one of us argued elsewhere
(Piccinini and Craver 2011), this kind of autonomy fails, too.
15. Some of the ideas behind this paper were sketched by Piccinini around 2005,
mostly in reaction to reading recent work by Shapiro (2000, 2004). Thanks
to Mark Sprevak for inviting us to submit this paper and organizing the New
Waves in Philosophy of Mind Online Conference. Thanks to Bill Bechtel,
Erik Funkhouser, Carl Gillett, Alyssa Ney, Tom Polger, Sarah K. Robins, Kevin
Ryan, Paul Schweizer, Larry Shapiro, Dan Weiskopf, our audience at the 2013
SSPP meeting and especially Ken Aizawa for helpful comments on a previous
version.

References
Aizawa, K. (2012) ‘Multiple Realization by Compensatory Differences’. European
Journal for Philosophy of Science, 3(1), 1–18.
Aizawa, K., and C. Gillett (2009) ‘The (Multiple) Realization of Psychological
and Other Properties in the Sciences’. Mind and Language, 24(2), 181–208.
doi:10.1111/j.1468-0017.2008.01359.x.
Aizawa, K., and C. Gillett (2011) ‘The Autonomy of Psychology in the Age of
Neuroscience’. In Causality in the Sciences, eds, P. M. Illari, F. Russo and J.
Williamson, 202–223. Oxford University Press.
Bechtel, W., and J. Mundale (1999) ‘Multiple Realizability Revisited: Linking
Cognitive and Neural States’. Philosophy of Science, 66(2), 175–207.
Bickle, J. (2003) Philosophy and Neuroscience: A Ruthlessly Reductive Approach.
Dordrecht: Kluwer.
Block, N. J., and J. A. Fodor (1972) ‘What Psychological States Are Not’.
Philosophical Review, 81(2), 159–181.
Buhr, E. D., and J. S. Takahashi (2013) ‘Molecular Components of the
Mammalian Circadian Clock’. In Circadian Clocks, vol. 217, eds, A. Kramer
and M. Merrow, 3–27. Berlin, Heidelberg: Springer Berlin Heidelberg.
doi:10.1007/978-3-642-25950-0_1.
Couch, M. (2005) ‘Functional Properties and Convergence in Biology’. Philosophy
of Science, 72, 1041–1051.
Craver, C. (2007) Explaining the Brain. Oxford: Oxford University Press.
Fodor, J. A. (1968) Psychological Explanation. New York: Random House.
Fodor, J. A. (1974) ‘Special Sciences (or, The disunity of science as a working
hypothesis)’. Synthese, 28(2), 97–115, doi:10.1007/BF00485230.
Fodor, J. A. (1997) ‘Special Sciences: Still Autonomous After All These Years’. Noûs,
31 (suppl.: Philosophical Perspectives, 11), 149–163.
Garson, J., and G. Piccinini (2013) ‘Functions Must Be Performed at Appropriate
Rates in Appropriate Situations’. British Journal for the Philosophy of Science. doi:
10.1093/bjps/axs041.
Gillett, C. (2002) ‘The Dimensions of Realization: A Critique of the Standard
View’. Analysis, 62, 316–323.
Gillett, C. (2003) ‘The Metaphysics of Realization, Multiple Realizability, and the
Special Sciences’. Journal of Philosophy, 100(11), 591–603.
Gillett, C. (2010) ‘Moving beyond the Subset Model of Realization: The Problem
of Qualitative Distinctness in the Metaphysics of Science’. Synthese, 177,
165–192.
Heil, J. (2003) From an Ontological Point of View. Oxford: Oxford University Press.
Jensen, M. O., V. Jogini, D. W. Borhani, A. E. Leffler, R. O. Dror, and D. E. Shaw
(2012) ‘Mechanism of Voltage Gating in Potassium Channels’. Science,
336(6078), 229, doi:10.1126/science.1216533.
Keeley, B. L. (2000) ‘Shocking Lessons from Electric Fish: The Theory and Practice
of Multiple Realization’. Philosophy of Science, 67(3), 444–465.
Kim, J. (1992) ‘Multiple Realization and the Metaphysics of Reduction’. Philosophy
and Phenomenological Research, 52(1), 1–26.
Kim, J. (1998) Mind in a Physical World. Cambridge, MA: MIT Press.
Klein, C. (2008) ‘An Ideal Solution to Disputes about Multiply Realized Kinds’.
Philosophical Studies, 140(2), 161–177.
Klein, C. (2013) ‘Multiple Realizability and the Semantic View of Theories’.
Philosophical Studies, 163(3), 683–695.
Lycan, W. G. (1981) ‘Form, Function, and Feel’. Journal of Philosophy, 78(1),
24–50.
Machamer, P., L. Darden, and C. F. Craver (2000) ‘Thinking about Mechanisms’.
Philosophy of Science, 67(1), 1–25.
Maley, C. J., and G. Piccinini (forthcoming) ‘The Ontology of Functional
Mechanisms’. In Integrating Neuroscience and Psychology, D. M. Kaplan (ed.),
Oxford University Press.
Mitchell, S. D. (2002) ‘Ceteris Paribus: An Inadequate Representation for Biological
Contingency’. Erkenntnis, 57(3), 329–350.
Ney, A. (2010) ‘Convergence on the Problem of Mental Causation: Shoemaker’s
Strategy for (Nonreductive?) Physicalists’. Philosophical Issues, 20(1), 438–445.
Piccinini, G. (2010) ‘Computation in Physical Systems’. In The Stanford
Encyclopedia of Philosophy (Fall edn), Edward N. Zalta (ed.),
http://plato.stanford.edu/archives/fall2010/entries/computation-physicalsystems/.
Piccinini, G., and C. Craver (2011) ‘Integrating Psychology and Neuroscience:
Functional Analyses as Mechanism Sketches’. Synthese, 183(3), 283–311.
Piccinini, G., and A. Scarantino (2011) ‘Information Processing, Computation,
and Cognition’. Journal of Biological Physics, 37(1), 1–38.
Polger, T. W. (2007) ‘Realization and the Metaphysics of Mind’. Australasian
Journal of Philosophy, 85(2), 233–259, doi:10.1080/00048400701343085.
Polger, T. W. (2009) ‘Evaluating the Evidence for Multiple Realization’. Synthese,
167(3), 457–472.
Polger, T. W., and L. A. Shapiro (2008) ‘Understanding the Dimensions of
Realization’. Journal of Philosophy, 105(4), 213–222.
Putnam, H. (1960) ‘Minds and Machines’. In Dimensions of Mind, S. Hook (ed.),
New York University Press.
Putnam, H. (1967a) ‘The Mental Life of Some Machines’. In Intentionality, Minds,
and Perception, H.-N. Castañeda (ed.), Detroit: Wayne State University Press.
Putnam, H. (1967b) ‘Psychological Predicates’. In Art, Mind, and Religion, W. H.
Capitan (ed.), Pittsburgh: University of Pittsburgh Press.
Putnam, H. (1988) Representation and Reality. Cambridge, MA: MIT Press.
Rosenberg, A. (2001) ‘On Multiple Realization and the Special Sciences’. Journal
of Philosophy, 98(7), 365–373.
Salmon, W. C. (1989) Four Decades of Scientific Explanation. Pittsburgh: University
of Pittsburgh Press.
Schaffer, J. (2010) ‘Monism: The Priority of the Whole’. Philosophical Review,
119(1), 31–76.
Shagrir, O. (1998) ‘Multiple Realization, Computation and the Taxonomy
of Psychological States’. Synthese, 114(3), 445–461, doi:10.1023/A:1005072701509.
Shapiro, L. A. (2000) ‘Multiple Realizations’. Journal of Philosophy, 97(12),
635–654.
Shapiro, L. A. (2004) The Mind Incarnate. Cambridge, MA: MIT Press.
Shoemaker, S. (2007) Physical Realization. Oxford: Oxford University Press.
Sullivan, J. A. (2008) ‘Memory Consolidation, Multiple Realizations, and Modest
Reductions’. Philosophy of Science, 75(5), 501–513.
Weiskopf, D. A. (2011) ‘The Functional Unity of Special Science Kinds’. British
Journal for the Philosophy of Science, 63, 233–258.
Wimsatt, W. (2002) ‘Functional Organization, Analogy, and Inference’. In
Functions: New Essays in the Philosophy of Psychology and Biology, A. Ariew, R.
Cummins and M. Perlman (eds), 173–221. Oxford: Oxford University Press.
Zangwill, N. (1992) ‘Variable Realization: Not Proved’. Philosophical Quarterly,
42(167), 214–219.
8
The Real Trouble with Armchair Arguments against Phenomenal Externalism
Adam Pautz

The intrinsicness of phenomenology is self-evident to reflective introspection.
—Terry Horgan and John Tienson

Every argument has its intuitive bedrock.
—John Hawthorne

According to reductive externalist theories of phenomenal consciousness,
the sensible qualities (colours, sounds, tastes, smells) reduce to physical
properties out there in the external environment, contrary to the seven-
teenth-century Galilean view that they are somehow only in the mind-
brain. Further, if we are to be conscious of these external qualities, we
must be appropriately physically connected to them (e.g., our sensory
systems must causally detect or ‘track’ them). This leads to phenomenal
externalism: intrinsically identical subjects can differ phenomenally
due to external differences. Examples of reductive externalism include
‘tracking intentionalism’ (Dretske, Lycan, Tye), active externalism (Noë
and O’Regan) and reductive versions of naive realism (Fish). Many think
such externalist theories represent our best shot at reductively explaining
phenomenal consciousness.1
The standard arguments against such theories invoke armchair intui-
tions about far-out cases such as brains in vats, swampmen and inverts
(Block, Chalmers, Hawthorne, Levine, Shoemaker). They often presup-
pose phenomenal internalism: phenomenology supervenes on intrinsic
character. They play a crucial role in the burgeoning phenomenal inten-
tionality program (Horgan, Tienson, Kriegel). A central plank in this
program is that reductive externalism fails because armchair reflection
establishes phenomenal internalism. Indeed, phenomenal intentionality


theorists not only reject reductive externalist theories; they often reject
all reductive approaches, adopting instead what I call a consciousness
first approach: phenomenal consciousness is not something that must
be reductively explained in other terms (e.g., tracking plus cognitive/
rational accessibility) but rather a starting point from which to explain
other things (e.g., cognition, rationality, value).
My sympathies lie with phenomenal internalism and the phenom-
enal intentionality program.2 In particular, I defend an internalist, neo-
Galilean (or ‘Edenic’) version of ‘intentionalism’ (Pautz 2006; Chalmers
2006). But here my aim is negative. I criticize three main armchair
arguments against rival reductive externalist theories. Externalists like
Dretske, Lycan and Tye have raised their own objections to such argu-
ments. But I show that they do not quite hit the nail on the head; I iden-
tify what I take to be the real problems, arguing that the much-discussed
armchair arguments are in fact without merit. The moral: the case for
phenomenal internalism must depend on empirical arguments.

1 Preliminary: why take phenomenal externalism seriously?

Before considering arguments against reductive externalist theories of
consciousness, I want to briefly explain why such theories should be
taken seriously in the first place. I develop what I consider the best argu-
ment. On the face of it, the best explanation for the perception of spatial
features invokes interactions with things in external space.
Consider an example, which I use throughout this chapter. Let R
be the phenomenal property you have when you view a tomato on a
certain occasion. R is essentially externally directed; it necessarily exhibits
a rich spatial intentionality. Necessarily, if you have R, then – even if you
are hallucinating – you ostensibly experience phenomenal redness and
roundness as bound together at a certain viewer-relative place. So you
are in a state that ‘matches the world’ only if a round object is present
there. Call this the spatial datum about R.
Now, against nominalism, I assume the existence of properties. Then
the spatial datum implies that having R entails bearing a certain relation
to the spatial property being round: roughly, the relation x is in a state
that matches the world only if property y is instantiated. Call this relation
the phenomenal representation relation. So if you have R while halluci-
nating, then the general property of being round exists, and you bear
the phenomenal representation relation to it, even if you do not see an
existing object that instantiates this property.
In my view, the best argument for externalist theories of phenomenal
consciousness is that they might provide the best explanation of spatial
experiential intentionality. I focus throughout this essay on tracking
intentionalism (Dretske, Tye, others).
Tracking intentionalists accept strong intentionalism: phenomenology
is fully constituted by the phenomenal representation of complexes of
properties. Then they explain phenomenal representation in two steps.
First, against internalists who often locate the sensible qualities in the
brain (and so have trouble explaining the spatial datum), they main-
tain that sensible properties are really instantiated in external space. For
instance, the apparent redness of the tomato (redness-as-we-see-it) is a
‘light-reflectance property’ of the tomato’s surface. Second, you repre-
sent such properties because under optimal conditions your neural states
suitably causally co-vary with (‘track’) their external instantiation and in
turn lead to appropriate behaviour in space (e.g., behaviour appropriate
to a round thing at p). For instance, in having R, you represent a certain
phenomenal colour and shape as co-instantiated out there in space, by
virtue of the fact that your neural state has the function of normally
tracking their co-instantiation in space. You can represent these proper-
ties while hallucinating, because your brain state retains the function
of tracking them. So the phenomenal representation relation just is a
tracking relation – a kind of thermometer model of consciousness. How
else might we explain the representation of qualities in space?3
The basic idea comes in different versions. Dretske (1995) reduces
the phenomenal representation relation to this relation: x is in a state
that satisfies a certain cognitive-rational access condition and that has
the systemic biological function of tracking property y. The relevant
cognitive access condition is what is supposed to turn mere subpersonal
representation into genuine conscious, ‘phenomenal’ representation (it
will not play a role here). The relevant notion of biological function is
explained in historical-evolutionary terms (more on this in Section 6).
Recently, Tye (2012) converted to a similar historical theory.
Tracking intentionalism is externalist. Phenomenology isn’t fixed
by subjects’ intrinsic properties but by what external properties their
sensory systems track. To illustrate, consider an accidental, lone, lifelong
brain in a vat (a bad-off BIV) that is an intrinsic duplicate of your brain as
you see a tomato. It lacks an evolutionary history. Its brain states, unlike
your brain states, lack the function of tracking any external shapes (etc.)
in any population. So on tracking intentionalism, it cannot represent
roundness. But given the spatial datum, R is inseparable from repre-
senting roundness. So it cannot have R.
Tracking intentionalism is also reductive. Reduction is attractive.
Everyone accepts the following dependence claim: total (narrow and wide)
physical duplicates must (as a matter of at least nomological necessity) bear
the phenomenal representation relation to all of the same properties (shapes,
orientations, phenomenal colours). Because it is reductive, tracking inten-
tionalism nicely explains this as follows: (i) total physical duplicates trivi-
ally bear the tracking relation to the same properties (since it is a physical
relation), and (ii) the phenomenal representation relation just is the
tracking relation. This explanation for phenomenal-physical depend-
ence bottoms out in a phenomenal-physical identity (‘real definition’).
This is appealing, because identities do not ‘cry out for’ further expla-
nation. Indeed, identities are explanation stoppers: they don’t admit of
further explanation. What would it be to explain any identity?
Turn now to phenomenal internalism. I favour it in the end, but I
admit it faces puzzles. For one thing, I think it requires a radically non-
reductive account of experiential intentionality. Phenomenal internal-
ists (including phenomenal intentionalists such as Horgan and Tienson)
have overlooked the point. True, some phenomenal internalists (Block,
Kriegel, McLaughlin, Prinz) incline towards reductive type-type neural
identity theory, holding that monadic phenomenal properties such as R
are identical with monadic internal neurofunctional properties.4 But as
Field (2001, 69–72) has argued, philosophers cannot rest content with
reducing monadic mental properties to monadic neural or functional
properties: they must also say something about dyadic mental relations
between subjects and external items, such as the phenomenal represen-
tation relation. Tracking intentionalists reduce it to an externally deter-
mined tracking relation. Can phenomenal internalists reduce it?
The following argument (Pautz 2010b, §7; Tye [forthcoming a])
suggests not: phenomenal internalists (even if they accept type-type
identity for monadic phenomenal properties) must accept primitivism
about the dyadic phenomenal representation relation:

1. Given phenomenal internalism, the aforementioned bad-off BIV does
have tomato-like experience R.
2. Having R is inseparable from having an experience of a round thing,
an experience that ‘matches the world’ only if something is present
that is round. (Spatial datum.)
3. So in having R, the BIV bears the dyadic phenomenal representation
relation to the property being round – even if R is intrinsic, it entails
standing in this relation to a particular shape (Horgan et al. 2004,
The Real Trouble with Armchair Arguments 157

304–305). (As we saw, this follows from the spatial datum and the
existence of properties.)
4. But, ex hypothesi, the BIV bears no dyadic physical-functional (e.g.,
tracking) relation to this property.
5. Conclusion: phenomenal internalism implies that the phenomenal
representation relation is a non-physical and (presumably) primi-
tive relation between individuals and being round and other sensible
properties.

I think this argument is a threat to reductive materialism no less serious than the standard 'explanatory gap' arguments.
The case for premise 4 is simply that, while the BIV is conscious of
roundness and other properties, it bears no actual or dispositional phys-
ical relations uniquely to those properties.5 So phenomenal internalists
must apparently say that the phenomenal representation relation is a
non-physical relation. Phenomenal internalism goes naturally with non-
reductive, internalist intentionalism (Chalmers 2006; Pautz 2006) in direct
opposition to tracking intentionalism.
Some phenomenal internalists object to the move from premise 2
to 3. For instance, since they reject abstract objects, Kriegel (2011) and
Mendelovici (2010) deny that having R essentially involves standing
in any real relation to the general property, roundness, understood as a
necessarily existing abstract object. If there is no such relation, internal-
ists don’t have to worry about reducing it. My reply is that the argument
doesn’t require that R is essentially relational.6 For even if R is not essen-
tially relational, having R certainly at least contingently implies standing
in certain mental relations to properties (or tropes) and concrete types
of things in scenarios where those properties and types exist. For instance,
suppose that in the BIV scenario there happens to be a round tomato
in front of the BIV as it undergoes its tomato-like hallucination. Then,
even if the property of being round exists only in scenarios where it
is instantiated (an ‘immanent’ conception of properties), the property
being round exists in the scenario, and in having R, the BIV stands in
the phenomenal representation relation to it. Further, the BIV bears the following relation to the concrete tomato that exists in the scenario: x has an experience that is accurate with respect to y. Since the BIV bears no
physical-functional relations to such things, phenomenal internalism
apparently entails that these mind-world relations are primitive.7
The main drawback of primitivism is that it cannot provide any very
appealing explanation of the aforementioned dependence claim: total
158 Adam Pautz

(narrow and wide) physical duplicates bear the phenomenal representation relation to all of the same properties (shapes, orientations, colours) and types
of concrete things.
One option for internalist primitivists is a dualist explanation: there
is a swarm of basic, contingent phenomenal-physical laws linking being
in certain physical states (e.g., neural states) with bearing the primitive
phenomenal representation relation to certain shapes, phenomenal
colours, positions and so on. They would be ‘nomological danglers’,
additional to the basic laws of physics. This is unappealing. The tracking
intentionalist's explanation bottoms out in an identity, which is an 'explanation stopper'. By contrast, the dualist's laws cry out for further explanation. Why do these laws obtain? We must admit some such basic modal
truths, but we should keep them to a minimum.
Another option for internalist primitivists would be an emergent mate-
rialist explanation of phenomenal-physical dependence (Rosen 2010,
132). I argued that internalism implies that the phenomenal represen-
tation relation is primitive and distinct from the physical, in the sense
that there is no reduction (real definition, metaphysical analysis) of it in
physical-functional terms (no general completion of the schema: what it is for x to stand in the phenomenal representation relation to y just is for x to bear
physical-functional relation R to y). Emergent materialists nevertheless claim
that this primitive relation is always grounded in (instantiated by virtue
of) the physical and hence depends on the physical with metaphysical
necessity. (Grounding has received a great deal of contemporary interest;
see Rosen 2010.) The only difference with dualism is modal: on dualism,
the dependence is contingent. So on emergent materialism, there is a
huge swarm of phenomenal-physical ‘grounding laws’ of this form: being
in internal neural state N grounds bearing the primitive phenomenal
representation relation to X. All these phenomenal-physical grounding
laws are basic. I don’t just mean that they are deeply a posteriori rather
than a priori (most materialists would not object to a posteriori neces-
sities). I mean they are metaphysically basic ‘grounding danglers’: they
don’t follow from any more basic truths. Emergent materialism, then, is
analogous to a Moorean metaethical view on which goodness is primitive
but always grounded in natural properties.
The innumerable grounding danglers of emergent materialism are just
as unappealing as the nomological danglers of dualism. They cry out for
explanation. If the phenomenal representation relation is distinct from
all physical-functional relations, why is it necessarily connected with
(and grounded in) the physical-functional facts? (Analogy: if an emer-
gentist said that we are simple souls yet somehow necessarily grounded
in wholly distinct bodies, we would be mystified.) Just as many used to say 'supervenience' cries out for explanation, I suggest grounding – its
contemporary replacement – cries out for explanation as well. Reductive
explanations are better.
We phenomenal internalists face another problem. It appears totally
arbitrary that a mere pattern of neural firing B should result in the
phenomenal representation of roundness rather than some other shape,
even in the BIV scenario in which it is not causally linked to any shape at
all. (Also, why should it result in the experience of phenomenal redness
at this particular position in the visual field?) Phenomenal externalists,
such as tracking intentionalists, might avoid this arbitrariness (even if
they cannot avoid the explanatory gap). On their externalist view, the
physical ground of phenomenally representing roundness is not the mere
neural state B but a wider, environment-involving state that is specified
in terms of that very spatial property: the state of having some internal
state or other that has the function of tracking roundness and causing
behaviour appropriate to round things.
So phenomenal externalists might nicely explain the spatial inten-
tionality of perceptual experience. By contrast, those of us favouring
phenomenal internalism must grapple with some serious puzzles.
Therefore our topic is important: can simple armchair arguments under-
mine reductive externalism and establish phenomenal internalism? My
answer is no.

2 The argument from the internalist intuition

Many philosophers (Block, Chalmers, Burge, Hawthorne, Kriegel, Levine and Speaks) espouse the first argument I examine against reductive
externalist theories. It has two steps.
Roughly, an intrinsic property of an individual is one whose instantia-
tion by that individual does not constitutively depend on contingent
items wholly distinct from that individual. We have empirically defea-
sible, pre-theoretical justification for believing some properties to be
intrinsic. Consider, for instance, shapes. The first step of the argument
claims that we likewise have a strong pre-theoretical justification for
believing that phenomenal properties such as R and having a headache are
intrinsic properties of subjects, so that

Phenomenal internalism: all experiences supervene on subjects' intrinsic properties; total intrinsic duplicates of you must (as a matter
of metaphysical necessity) be phenomenal duplicates of you.

This is the internalist intuition. Hawthorne (2004, 352) calls it intuitive bedrock, and Horgan and Tienson (2002, n. 23) call a similar claim 'self-evident'.8 How could individuals differ concerning whether they have a headache or see red, unless they differ intrinsically?9
The second step uses thought experiments to show that certain reduc-
tive theories of consciousness violate phenomenal internalism. Consider
again ‘tracking intentionalism’ (Section 1).
1. The bad-off BIV: We already saw that tracking intentionalism entails
that the bad-off BIV cannot support phenomenal consciousness. This
violates phenomenal internalism, since it is an intrinsic duplicate of
your brain.
2. Swampman: Harold is an ordinary person. A total intrinsic dupli-
cate of Harold materializes by chance in a swamp. This system, Swamp
Harold, has no evolutionary history. As noted in Section 1, Dretske’s
and Tye’s versions of tracking intentionalism entail that only systems
with a selection history can represent the world. Since standard forms
of phenomenology are inseparable from intentionality (e.g., standard
visual experiences necessarily exhibit spatial intentionality), it follows
that although Harold and Swamp Harold are total intrinsic duplicates,
they differ phenomenally: in particular, Swamp Harold simply has no
interesting experiences at all, contrary to phenomenal internalism.
3. Inverted Earth: On Earth, the sky is blue. When Harold looks at it,
he gets receptor activity A on his retina and downstream neural state
S. On Earth, among humans, S has the biological function of tracking
external blueness.
Suppose that, on Inverted Earth, there evolved a species intrinsically
identical to Homo sapiens. But objects there have inverted colours; for
instance, the sky is yellow rather than blue. However, the ambient light
is weird and has always been weird throughout the evolutionary process,
so that yellow objects give off 'blue' light. So when Twin Harold looks at the yellow sky on Inverted Earth, he gets receptor activity A on his retina and downstream neural state S, the same neural state Harold gets when he looks at the blue sky on Earth. (In another version of the case, twin humans naturally evolved with colour-inverting lenses in front of their eyes, so that yellow light is transformed into blue light.) Indeed, on viewing the sky, Twin Harold is a complete intrinsic duplicate of Harold on Earth. But whereas in Harold S has the biological function of tracking
surface blue, in Twin Harold it has the biological function of tracking
surface yellow.10
On tracking intentionalism, this means that even though Harold and
Twin Harold occupy the same total internal state, this internal state
enables them to phenomenally represent different external colours. Again, tracking intentionalism violates the internalist intuition.

3 Problem: no armchair support for phenomenal internalism

I think that the argument from the internalist intuition is a non-starter. However, before explaining why, I criticize standard externalist
objections.
Externalists have been concessive. Dretske (1995, 151) concedes that
phenomenal internalism seems ‘obvious’ and ‘powerful’. Tye (2000,
120) admits that rejecting phenomenal internalism is ‘deeply counterin-
tuitive’. Their objections lie elsewhere. But I think armchair enthusiasts
might offer somewhat convincing replies to those objections.
1. Dretske’s main objection involves a rebutting defeater (1995, 151).
Apparently, the best reductive materialist theories of consciousness and
its intentionality violate phenomenal internalism, as I admitted myself
(Section 1). So there is a strong theoretical argument against it. Maybe
this beats the intuitive argument for it.
Possible reply: Maybe phenomenal internalism is compatible with
reductive materialism after all; maybe my Section 1 argument fails. If
not, perhaps the internalist intuition is so compelling that we must
accept it despite its problematic consequences.
2. Dretske (1995, 148) and Tye (2012) produce undercutting defeaters.
Offhand, you might have thought that every intrinsic duplicate of
a heart is a heart or that every intrinsic duplicate of a gas gauge is a
gas gauge. These intuitions are wrong, because the relevant properties
depend on historical, extrinsic factors. Given the bad track record of
internalist intuitions, maybe we are equally wrong about the intrinsic-
ness of phenomenal character.
Possible Reply: Even if our initial, unreflective opinions about the intrin-
sicness of some quite different properties are wrong (as we see ourselves
realize after a moment’s thought), this provides little reason to doubt our
persisting intuition about the intrinsicness of phenomenal character.
3. Lycan (2001, 24) says using the internalist intuition ‘simply begs
the question of [phenomenal] externalism in favor of internalism’.
Possible reply: Hawthorne (2004, n. 4) notes that this is a confused use
of ‘begging the question’: it is not question begging to use pre-theoretical
intuitions against a theory just because they conflict with that theory.
So I think externalists’ objections to the argument from the internalist
intuition fail. What then is the real problem?
Externalists like Tye and Dretske have been much too concessive in
granting that phenomenal internalism enjoys pre-theoretical support at
all. The real problem is that this is not so. A little reflection, indeed the
whole history of human thought on perception, shows that phenom-
enal externalism is not absurd at all; indeed, if anything, it is pre-theo-
retically quite plausible.
For instance, many have accepted naive realism. On this view, the
sensible qualities, or ‘qualia’, are really out in the world: colours, sound
qualities, tastes and so on. For instance, when you view a tomato and
have phenomenal property R, the red quality you are directly acquainted
with is really ‘spread out’ on its surface. The distinctive claim of naive
realism is that, at least in 'veridical' cases, some ordinary (non-blurry, etc.)
phenomenal properties such as R are grounded in nothing but standing
in a relation of direct acquaintance to a concrete state, or condition,
involving the instantiation of sensible properties (colours, shapes) by
a mind-independent physical object. (These sensible properties include
viewpoint-relative properties such as being elliptical from here – ‘objective
looks’.)
To handle hallucination, naive realists might appeal to non-normal
objects such as sense data or Meinongian objects, so that R is always
grounded in acquaintance with objects. Alternatively, they might accept
a more extreme ‘disjunctivism’ (I think the best version is ‘primitivist
disjunctivism’, discussed in Pautz 2010a, 275). So hallucination doesn’t
undermine naive realism.
Naive realism is an example of a relational (act-object) theory of phenom-
enology: some phenomenal properties are, in some cases, grounded in
direct acquaintance with concrete objects and states wholly distinct
from perceivers.
Naive realists have taken different views on the physical basis of
acquaintance with the world. Pre-modern thinkers such as Plato, Euclid
and Ptolemy accepted the extromission theory: we become acquainted
with the world by way of rays emanating from the eye (perhaps with
infinite velocity, like gravity in Newton’s theory). Thus in his Optika
Euclid wrote:

Rays [proceed] from the eye [and] those things are seen upon which
the visual rays fall and those things are not seen upon which the
visual rays do not fall.

This entails phenomenal externalism. The phenomenal difference between your seeing a round thing and your seeing a square thing is not
an intrinsic difference in your head or soul. Indeed, your head might be exactly the same in both cases. The phenomenal difference is an
extrinsic, relational difference in what shapes you are acquainted with
out in the world via the ray; it is entirely ‘outside the head’. And it is this
merely relational difference that causally explains your different beliefs
and behaviour in the two cases.
Today we accept intromission theory: vision is a causal process leading
via light from the object to the brain. This might encourage acceptance of
phenomenal internalism (see the ‘simple empirical argument’ below).
But contemporary naive realists (e.g., Fish 2009, 137) combine it with
naive realism. Their idea is that the long causal process going from
external states to appropriate neural processing is the supervenience base
of the mind’s ‘reaching out’ and getting directly acquainted with those
external states.
This view is also externalist, entailing that intrinsic duplicates can differ
phenomenally. To see this, consider again Harold on Earth where the
sky is blue and Twin Harold on Inverted Earth where the sky is yellow
(Section 2). Their neural processes as they view the sky are intrinsically
identical. Nevertheless, on naive realism, they enable Harold and Twin
Harold to be acquainted with different external concrete colour states
(‘tropes’) involving the sky, because they are appropriately caused by
those different external colours.
Traditional sense datum theory is another relational theory of phenomenology. Indeed, I argue that although it is generally considered a paradigmatic version of phenomenal internalism, it is another version of phenomenal externalism.
By ‘the sense datum theory’ I mean the view that phenomenal prop-
erties are grounded in an acquaintance relation to a concrete state
involving a sense datum wholly distinct from the subject, where a sense
datum is an object generally having the properties external things appear
to have. On a standard elaboration, there are contingent psychophysical
laws whereby the subject’s brain states cause certain sense data to come
into existence for a short period of time and simultaneously cause the
subject to stand in the acquaintance relation to those sense data.
I suspect many take sense datum theory to be internalist because they
mistakenly think sense data are parts of subjects (souls or brains), so that
differences in sense data would indeed be intrinsic differences. This is
not the view I have in mind. When in hallucination you are acquainted
with a literally round sense datum, that round thing is of course not part
of your brain. And if you are a simple soul without any parts rather than
a brain, then again the sense datum is not part of you. Then where are
sense data? On one version, sense data occupy a separate, private two-
dimensional mental space. Even on this version they are wholly distinct
from the subject (soul or brain) that observes them. On the different
version defended by Price (1954, vii–viii) and Jackson (1977, 102), they
are three-dimensional objects in public physical space alongside physical
objects. Think of them as projections of the brain. On this view, even
though a sense datum exists in public space, only the subject of the brain
that causes it to come into existence there can be acquainted with it. So
the sense datum theory is a relational theory like naive realism, with
sense data as simulacra for physical objects.
Since sense data are wholly distinct from subjects, this theory is also
externalist. Suppose as before that Harold has an ‘intrinsic duplicate’,
Twin Harold, on Inverted Earth. Exactly what that means depends on
the correct ontology of the human subject. If Harold and Twin Harold
are physical things like brains or bodies (even if they bear non-physical
acquaintance relations to non-physical sense data), then they are
intrinsic duplicates in that these brains or bodies share all of their
intrinsic properties. Alternatively, if they are simple, non-physical souls,
they are intrinsic duplicates because these souls have exactly the same
intrinsic properties, like being happy or thinking.
Either way, just like naive realism and tracking intentionalism, the sense
datum theory implies that Harold and Twin Harold can differ phenom-
enally, contrary to phenomenal internalism. For suppose that Harold
and Twin Harold live under different (‘inverted’) psychophysical laws
connecting brain states with acquaintance with sense data, so that while
Harold is acquainted with a blue sense datum on looking at the sky, Twin
Harold is acquainted with a yellow one. Alternatively, suppose that they
live under the same psychophysical laws but these laws are probabilistic,
so that even though Harold and Twin Harold undergo an intrinsically
identical brain state, this same brain state happens to cause them to be
acquainted with these qualitatively different sense data.
Then although Harold and Twin Harold are intrinsic duplicates, they
differ phenomenally. On the sense datum theory, just as on naive realism
and tracking intentionalism, the phenomenal difference between these
two brains or souls is not an intrinsic difference. Instead, it is a purely
relational, extrinsic difference: they bear the acquaintance relation to
different sense data wholly distinct from them. It is like the difference
between sitting next to Mary and sitting next to Jane. Indeed, the whole
point of the sense datum theory is that phenomenal differences are rela-
tional differences, contrary to rival ‘non-relational’ views such as ‘adver-
bialism’. On the sense datum theory, this purely relational difference
between Harold and Twin Harold results in their having different perceptual and introspective beliefs and perhaps different behavioural
dispositions. Hence, the sense datum theory is a version of phenom-
enal externalism no less than tracking intentionalism and naive realism.
Experience isn’t fixed by how you intrinsically are; it’s fixed by your rela-
tions to things distinct from you.
Or again, suppose you have an experience of a tomato and consider
a soul or a brain – call it BIV – that is an intrinsic duplicate of you in
a vat. If phenomenal internalism is ‘intuitive bedrock’ (Hawthorne) or
‘self-evident’ (Horgan and Tienson), then BIV must have the very same
experience. Sense datum theorists disagree. To see this, suppose that
for some reason this brain cannot produce sense data according to the
usual psychophysical laws. H. H. Price noted in a famous passage (1954,
3) that, on the sense datum theory, having a tomato-like experience
(even in hallucination) essentially requires accompaniment: it requires
‘that there exists a red patch of a round and somewhat bulgy shape’
distinct from oneself and that this patch is present to one’s conscious-
ness. Price said this is ‘self-evident’. If this is right, then BIV simply
cannot have a tomato-like experience, because BIV is unaccompanied
by a suitable sensible object. So no less than naive realism and tracking
intentionalism, sense datum theory is externalist, entailing that an
intrinsic BIV duplicate of a conscious subject might fail to be a phenom-
enal duplicate. Against internalism, awareness of sense data is a rela-
tional affair that doesn’t supervene on a subject’s intrinsic state with
‘metaphysical’ necessity (contingent psychophysical laws secure only
‘nomic’ supervenience).
Now for my main point. If phenomenal internalism were really self-
evident or intuitive bedrock, we should be able to immediately rule out
the basic relational conception of visual phenomenology (including
extromission theory, naive realism and sense datum theory) simply
because it is externalist. But we cannot. For instance, pre-theoretically, it
is simply not counterintuitive that different visual experiences should
be grounded in a purely extrinsic, relational difference involving what
the subjects are acquainted with. So the argument from the internalist
intuition against contemporary reductive externalist theories (e.g.,
tracking intentionalism and active externalism) is a non-starter. If we
cannot rule out relational theories just because they violate phenomenal
internalism, we also cannot rule out reductive externalist theories for
this reason.
So my criticism of the argument from the internalist intuition is that
pre-theoretical reflection fails to support phenomenal internalism.
Although my criticism does not require it, I also believe something stronger: given what experience is like, pre-theoretical reflection supports
rejecting phenomenal internalism and accepting phenomenal externalism
(even though ultimately I accept internalism). The armchair enthusiasts
have it backwards.
True, maybe having a headache seems like an intrinsic, non-relational
property of oneself. It was such sensations that motivated ‘adver-
bialism’. (Lycan [2001, 28] notes that even reductive externalists
might accommodate the intuition that some phenomenal properties
involving one’s own body, like having a headache, are indeed intrinsic.)
But suppose you look at a tomato and have visual phenomenal prop-
erty R on a particular occasion. Bracket all of your detailed empirical
knowledge about the role of the brain, the possibility of hallucination,
the case for materialism and so on. What theory would you accept if
you had only phenomenology to go on? Intuitively, the basic rela-
tional theory about R is correct. Intuitively, there exists a red and round
object wholly distinct from you, and your having the phenomenal prop-
erty R is grounded in your bearing an acquaintance relation to the
concrete state of this object’s being red and round rather than any
intrinsic property. The character of your conscious state is grounded
in the object’s possessing these perceptible properties, together with
the fact that you stand in a relation of acquaintance or direct aware-
ness to this state.
Likewise, if you see a balloon changing shape while deflating, it is not
pre-theoretically plausible that the phenomenal change in your experi-
ence is an intrinsic change inside of you (even if empirical investiga-
tion has shown it is accompanied by one), as the phenomenal internalists
insist. Rather, what is pre-theoretically plausible is that it consists in your
being acquainted with the concrete instantiation of a new shape by an
object distinct from you (akin to the difference between sitting next to
Mary and sitting next to Jane).
Indeed, scores of philosophers (sense datum theorists and naive real-
ists) share the same basic relationality intuition for visual experience (Fish
2009, 20). And as we have seen, the relational theory entails phenom-
enal externalism, because intrinsic duplicates, occupying different envi-
ronments or operating under different psychophysical laws, could be
acquainted with different (mental or physical) items.
The armchair enthusiasts might offer a response to my more modest
point that armchair reflection at least fails to support phenomenal inter-
nalism. Hawthorne (2004, 352) and Horgan, Tienson and Graham
(2004, 302) emphasize that the internalist conviction is ‘compelling’,
‘widespread’ and ‘persistent’. Maybe the best explanation is that we do in fact have some pre-theoretical justification for accepting phenomenal
internalism in general, contrary to what I have suggested.
But this response fails. First, are most ordinary people really inclined
to accept phenomenal internalism? This is an empirical question – a
question for ‘experimental philosophy’ – but I doubt it. As I have just
said, throughout history many have been inclined to instead accept
externalist naive realism. And recent studies (Winer et al. 2002) suggest
that even today many accept the primitive extromission theory of vision,
which (as we saw) entails phenomenal externalism.
Second, maybe some other people, including philosophers and scien-
tists, are inclined to accept phenomenal internalism. But the reason
cannot be that it is pre-theoretically self-evident or intuitive bedrock; I
have shown it is not. I offer an alternative explanation: the real source
of their internalist conviction is not pre-theoretical reflection but what I
will call the simple empirical argument, which has a long history (Russell
1927, 320ff.). Here’s a recent statement:

It is only inner states that matter for experience [phenomenal internalism], not anything relational. [Phenomenal externalism] flies in
the face of the scientific evidence correlating experiences with neural
responses: for every measurable change in experience, there is some
measurable change in the nervous system. (Prinz 2012, 19)

Likewise, Kriegel declares that ‘everything we know about the laws of neurophysiology suggests that a lifelong envatted brain with the same
sensory-stimulation history of my brain would undergo the same expe-
riential life as mine’ (2011, 137). And Horgan and Tienson note that
‘distal environmental causes generate experiential effects only by gener-
ating more immediate [neural] links in the causal chains between them-
selves and experience’. The simple empirical argument concludes that
it is metaphysically impossible to make changes in experience except by
making changes in intrinsic neural properties.
So in view of what our experience of the world is like, what is
pre-theoretically plausible, before we learn any science, are externalist
theories, such as naive realism: at least for visual experience, phenom-
enal differences do not necessarily require intrinsic differences inside
the head. Granted, today many philosophers (Kriegel, Horgan, Tienson,
Hawthorne) vehemently reject phenomenal externalism and find
phenomenal internalism ‘obvious’ across the board. But the explanation,
I conjecture, is that they have for most of their lives known the basic
scientific facts about the role of the brain in enabling conscious experi-
ence. Because of the seductive simple empirical argument, they have
become totally convinced of phenomenal internalism: phenomenal
differences do require intrinsic differences inside the head. Because their
confident belief in phenomenal internalism has become so ingrained,
they mistakenly take it to be something that is obvious or self-evident
on a moment’s reflection. But really it is just a high-level empirical belief,
one that became widely accepted in the history of human thought only
after detailed empirical investigation.
Now the phenomenal internalist might naturally respond, ‘OK,
the armchair argument from the internalist intuition fails, but why
can’t I just directly rely on the simple empirical argument to undermine
reductive externalist theories like tracking intentionalism and naive
realism?’
My focus here is on armchair arguments, but let me address this ques-
tion. I favour certain empirical arguments (see Section 8), but I think
that this very simple empirical argument fails. The quick way to see this
is to note that it is equally true that ‘for every change in thoughts about
natural kinds, there is a measurable change in the brain’ (to appropriate
Prinz’s language). But this does not entail that natural kind thoughts are
fixed by intrinsic neural properties: content externalism means this is
not the case. The inference is equally fallacious concerning phenomenal
states.11
The fallacy is obvious: the mere fact that it is nomically necessary
in actual humans that phenomenal differences are correlated with
intrinsic neural differences doesn’t mean that this is metaphysically
necessary. What Kriegel calls ‘the laws of neurophysiology’ might
(like typical special science laws) obtain only relative to a background
condition, one not satisfied in the case of the BIV. (For instance, a
brain state might result in an experience of round only if it normally
tracks round objects.) Indeed, Prinz is wrong that the simple correla-
tional data even raise the probability of phenomenal internalism over
phenomenal externalism, since all phenomenal changes are correlated
with both changes in intrinsic neural states and changes in externally-
determined content (e.g., when you go from seeing yellow to seeing
blue, you go from a neural state that normally tracks yellow objects to
a neural state that normally tracks blue objects). So the simple correla-
tional data alone are entirely neutral between phenomenal internalism
and phenomenal externalism (naive realism, tracking intentionalism,
active externalism).
The Real Trouble with Armchair Arguments 169

4 The argument from possibility intuitions

So the argument from the internalist intuition fails. But perhaps all is
not lost for the armchair enthusiasts. Another argument is available:
the argument from possibility intuitions. The argument specifically targets
reductive materialist externalist theories, like tracking intentionalism. (It
doesn’t work against dualist externalist theories; see note 13.) Chalmers,
Loar, Shoemaker and Levine suggest the argument but do not clearly
distinguish it from the argument from the internalist intuition.12 So let
me explain the difference.
Again, consider tracking intentionalism. On tracking intentionalism,
having R entails the obtaining of a certain wide (non-intrinsic) physical
condition: having a state that under biologically normal conditions
tracks – and thereby represents – the instantiation of redness (on this
view, a reflectance property) and roundness in the external world.
Having the experience in the absence of the wide physical condition is
metaphysically impossible.
Therefore, to refute such a reductive externalist theory, it would be
enough to establish from the armchair the mere possibility of having R in
the absence of the relevant wide physical condition. For instance, it would
be enough to show that in some possible world a BIV intrinsic duplicate of
oneself has R. It would also be enough to show that certain spectrum inver-
sion scenarios are possible (more on this below). The argument from possi-
bility intuitions merely relies on such possibility claims. Thus it differs
from the argument from the internalist intuition, which by contrast relies
on a much stronger necessitation claim to the effect that every possible
intrinsic duplicate of oneself (e.g., every possible brain in a vat duplicate)
in every possible world is a phenomenal duplicate of oneself.13
For example:

The intuition [that a BIV with experiences is possible] supports the
view that my [experiences] are constituted independently of my
actual situation in the world. (Loar 2003, 230)
Focusing on ... color, I say ‘THIS is supposed to be a reflectance prop-
erty of the surface of ... a cloud of fundamental particles’ ... . Reflection
on the disparity between the manifest and the scientific image makes
inescapable the conclusion that the phenomenal character we are
confronted with in color experience is due not simply to what there
is in our environment ... it seems intelligible [possible] that there are
creatures who, in any given objective situation, are confronted with a
very different phenomenal character than we would be in that same
situation. (Shoemaker 1994, 293–294)
It seems intuitively plausible that states with different qualitative
character could nevertheless represent [track] the very same distal
feature. (Levine 1997, 109)

On tracking intentionalism, having R consists in being in a state that
normally tracks the colour red, which is identical with a certain surface
reflectance F. But, Shoemaker notes, there is an explanatory gap. Why
should tracking this reflectance F constitute a reddish experience as
opposed to (say) a greenish experience? Therefore, against tracking
intentionalism, it is intuitively possible that two individuals should
normally track F and yet be spectrum inverted: while one has a reddish
experience, the other has (say) a greenish experience. So tracking inten-
tionalism is false.
In general, intuitively, phenomenology is modally independent of
wide physical conditions, contrary to reductive externalism.

5 Problem: the argument is unavailable to materialists

My objection to the argument from the ‘internalist intuition’ was simply
that we lack pre-theoretical justification for accepting phenomenal inter-
nalism. By contrast, I grant that, contrary to reductive externalist theories,
it is intuitive that technicolor phenomenology is modally independent
of wide physical conditions, such as tracking a particular reflectance prop-
erty (as opposed to wide non-physical conditions, such as standing in a
primitive acquaintance relation to a primitive external colour). This is
just an instance of our more general antimaterialist intuitions. So the
relevant scenarios are conceivable. They cannot be ruled out a priori.
So what’s wrong with the argument from possibility intuitions? Tye’s
response is that conceivability does not entail possibility.14 But this is
not a strong criticism, because conceivability nevertheless provides some
defeasible evidence for possibility, hence against reductive externalist
(e.g., tracking) theories.
I think that the real problem is that most philosophers are materi-
alists, including Loar, Shoemaker and Levine. And materialists cannot
consistently invoke possibility intuitions against reductive externalist
theories.
There are only two possible forms of materialism: internalist materi-
alism (type-type identity theory, internalist functionalism) and externalist
materialism (tracking intentionalism, active externalism). Our possibility
intuitions count equally against both, since we have general antimateri-
alist intuitions to the effect that experience is modally independent of all
physical conditions (internal and external). Call this the parity problem.
To illustrate, consider Shoemaker. Shoemaker notes the explana-
tory gap between tracking a reflectance F and having a reddish expe-
rience. The connection looks contingent. So it is intuitively possible
that tracking the reflectance F could be associated with having a
greenish experience rather than a reddish experience, as in spectrum
inversion.
But it is strange that Shoemaker uses this argument against the
externalist materialism of Dretske and Tye. Equally robust possibility
intuitions would undermine Shoemaker’s own internalist materialism.
For on Shoemaker’s internalist materialism, having a reddish experi-
ence is constituted by some neural-functional state N involving soggy
grey matter. And the explanatory gap between having N and having a
reddish experience is just as wide as the explanatory gap between tracking
reflectance F and having a reddish experience. The connection between
neural-functional state N and the colour experience seems just as contin-
gent as the connection between reflectance F and the colour experience.
Consequently, contrary to Shoemaker’s internalist materialism, it is intu-
itively possible that N should be associated with a reddish experience in
humans and with a greenish experience in another population: there
intuitively could be spectrum inversion among individuals with the same
narrow neural-functional states, just as there intuitively could be spectrum
inversion among individuals with the same wide physical states of the
form normally tracking reflectance F.
Likewise, as Loar (quoted above) implies, intuitively, a bad-off BIV
could have a reddish experience while not tracking reflectance F,
contrary to reductive externalist theories, such as tracking intention-
alism. But it is equally intuitively possible that an individual (say an
alien or a robot) should have a reddish experience while lacking neural-
functional property N, contrary to the internalist materialism Loar
himself accepts.
Our possibility intuitions against externalist materialism are not any
‘stronger than’ our possibility intuitions against internalist materi-
alism. So materialists like Loar, Shoemaker and Levine would need some
other considerations or arguments (e.g., the empirical arguments to be
mentioned in Section 8) in order to justify accepting our possibility intui-
tions against externalist materialism while ignoring our equally strong
possibility intuitions against their own internalist materialist theories.
But then these other arguments would be doing all the justificatory
work.
Materialists cannot use possibility intuitions against externalist mate-
rialism for another reason. I call it the bad lot problem. Consider an
analogy: if you believe that the weather man is wrong in his predic-
tions about wind conditions half the time (these predictions form a ‘bad
lot’), you should put hardly any stock in any of his predictions about
wind conditions. But the materialist believes that our antimaterialist
possibility intuitions about the relationship between the phenomenal
and the physical also form a ‘bad lot’: whatever version of materialism
turns out to be true, intuitions in this group must generally be false (e.g.,
if internalist materialism is true, all contrary possibility intuitions are
false). So if you accept materialism, you must say that not only do they
provide equal justification against internalist materialism and exter-
nalist materialism (the parity problem); they are also not to be trusted at
all (the bad lot problem).

6 The argument from phenomenal localism

Previously, I criticized the argument from the internalist intuition
against reductive externalist theories of consciousness (Section 3). For
instance, it is simply not pre-theoretically intuitive that tomato-like
experience R is intrinsic. But, I concede, it is pre-theoretically intui-
tive that your having R for a period is temporally local: that is, totally
modally independent of everything outside the total state of the universe
during that period. Thus it differs, for instance, from the property of being
a traffic signal that means stop, whose instantiation now constitutively
depends on past conventions (e.g., to stop when the light turns red).
More generally,

Phenomenal localism: Necessarily for any phenomenal property P, if a
subject instantiates P for temporal period p, and proposition C speci-
fies all of the intrinsic properties and relations instantiated in the
whole world during period p, then, for any world W at which C is true
for period p, the subject also has phenomenal property P in W during
period p, no matter what W is like before and after period p.

Roughly, whereas phenomenal internalism is the claim that having a
certain experience for a time supervenes on the intrinsic properties of
the subject alone during that time, phenomenal localism is the weaker
claim that it at least supervenes on the intrinsic properties and relations
instantiated in the whole universe during that time. To appreciate the
difference, consider naive realism. On this view, the character of your
experience is fixed by your standing in a primitive acquaintance relation
to a state of the external world, so phenomenal internalism is false. But
the holding of that relation at a time might be modally independent of
the past and future (ignoring the time-lag argument), so that phenom-
enal localism is true. Likewise, on the sense datum theory, phenomenal
internalism is false (as we saw), but phenomenal localism is true.
Phenomenal localism provides a promising argument against a subset
of reductive externalist theories; namely, historical externalist theories
that violate phenomenal localism.
Consider the tracking intentionalism of Dretske and Tye. Suppose you
have reddish experience R for ten seconds. And suppose tracking inten-
tionalism is true. On tracking intentionalism, you have R because you
have a brain state, B, which has the biological function of tracking the red
reflectance.
Now consider a world W that is intrinsically like the actual world for
the ten-second period but in which everything came into existence ex
nihilo at the start of the ten-second period (there is no past at all). In this
world, since you have no evolutionary history, your brain state does not
have the biological function of tracking the red reflectance. So according
to Dretske and Tye, in W, even though the total state of the universe for
the ten-second period is intrinsically the same as in the actual world, you
don’t have R for that period, because your brain state represents nothing
at all.
Or consider a world Z in which only our evolutionary history is
different in such a way that your current brain state B now counts as
having the function of tracking the green reflectance. (Compare: had only
the past been appropriately different, the stoplight turning red in the
present might have meant go rather than stop.) On tracking intention-
alism, in Z, even though the total state of the universe for the ten-second
period might be intrinsically the same, you have a greenish experience
rather than a reddish one for that period.
Scores of philosophers, appealing to BIVs and swampmen, argue that
such externalist theories are absurd because they violate phenomenal
internalism. We have seen that this standard argument fails: phenomenal
internalism is simply not a self-evident truth. Many perfectly coherent
theories – extramission theory, naive realism and sense datum theory –
violate phenomenal internalism. So I think philosophers shouldn’t have
focused on phenomenal internalism. Instead they should have focused
on phenomenal localism. What is truly new and unusual about some
contemporary externalist theories is not that they violate phenom-
enal internalism but that they also violate phenomenal localism. Some
past theories of phenomenal consciousness (sense datum theory, naive
realism) violated phenomenal internalism but none violated phenom-
enal localism.

7 Problems with the argument from phenomenal localism

Nevertheless, I think that even the argument from phenomenal localism
fails.
First, it undermines only a subset of reductive externalist theories:
namely, those violating phenomenal localism, for instance the tracking
intentionalism of Dretske and Tye. Other reductive externalist theo-
ries might accommodate phenomenal localism, even if they violate
phenomenal internalism.
Here are some examples. (i) While Dretske’s and Tye’s versions of
tracking intentionalism violate phenomenal localism because they
appeal to history, maybe other possible versions accommodate phenom-
enal localism. True, devising such a theory might be difficult – since all
standard theories of representation appeal to historical facts or forward-
looking facts to help settle what external features our inner states have
the ‘biological function’ of tracking or track under ‘optimal conditions’
in the present – but maybe not impossible. (ii) Likewise, maybe naive
realists could reduce the acquaintance relation to a complex mind-world
causal relation. And maybe, contrary to Humeanism about causation,
causal facts themselves are local facts that do not depend on regularities
in the past and future (Hawthorne 2004). The result would be a reduc-
tive externalism that accommodates phenomenal localism. (iii) Maybe
‘active externalism’ and other output-based versions of phenomenal
externalism can accommodate phenomenal localism, if the relevant
action-oriented facts do not depend on the past or future.
Historical externalists like Dretske and Tye might pursue a less concili-
atory response: phenomenal localism is simply false, even if compelling.
To soften the blow, they might say the following.
First, we also have locality intuitions about x causes y and thinking
about water. But Hume and Putnam have convinced many that these
intuitions are false. This bad track record might undercut somewhat our
confidence in phenomenal localism.
Second, materialists need a theory of how we might be justified a
priori in believing that a property is local and a theory of how such
beliefs might be generally reliable. But that is hard to come by. (A mate-
rialist cannot comfortably accept ‘revelation’: that we ‘immediately
grasp’ the full essential nature of phenomenal properties just by being
acquainted with them and can tell that those essential natures don’t
involve the past or future.) Absent such a theory, maybe we should be
sceptical about our intuition favouring phenomenal localism.
Third, Tye and Dretske might explain away our localist ‘intuition’ as
follows: since we do not have to look to the past or future to know
that we have certain phenomenal properties now – one need only intro-
spect – we might erroneously conclude that they are temporally local.
To see that this inference is erroneous, consider another case: my three-
year-old daughter can immediately tell just by looking that something
is a heart, without knowing about its evolutionary history. But being
a heart is a historical, non-local property: if an intrinsic duplicate of
the heart formed by chance in a swamp, it would not also be a heart,
because it would lack the right evolutionary history – it would be a ‘fake
heart’. The general point: you can know something without knowing
all its a posteriori consequences. Likewise, maybe on Dretske and Tye’s
historical externalism my daughter (or an adult sceptical of evolution)
can immediately know about her phenomenal properties, even if she
doesn’t know about her evolutionary history.

8 Conclusion: a plea for an empirical approach

Many prominent philosophers (Chalmers, Hawthorne, Horgan,
Shoemaker) rely on armchair arguments against reductive externalist
theories of experience (e.g., tracking intentionalism, naive realism,
active externalism).
My aim has been to identify the ‘real problems’ with central armchair
arguments, because I think the criticisms of Dretske, Lycan, Tye and
others fall short. There are additional antiexternalist armchair argu-
ments I have not examined: for instance, the argument from the locality
of mental causation (Fodor), the argument from introspection (Levine)
and the argument from slow switching (Chalmers). But there are plau-
sible replies.15 Indeed, although I have not shown this here, I believe
that some (non-reductive) externalist theories (notably naive realism
and sense datum theory) cannot be clearly ruled out on the basis of any
a priori arguments (Pautz 2010a). Here I have suggested to the contrary
that armchair reflection on phenomenology supports externalism.
Nevertheless, my own sympathies lie with phenomenal internalism
and the ‘phenomenal intentionality program’ mentioned in the
introduction. In particular, elsewhere (2006) I have defended an inter-
nalist, neo-Galilean ‘projectivist’ version of intentionalism. Chalmers
(2006) also defends such a theory, calling it the Edenic Theory. My disa-
greement with armchair internalists like Chalmers is just this: I think
that the only good arguments for internalism and against an externalist
rival like naive realism are empirical.16 Elsewhere I have developed three
empirical arguments: the internal-dependence argument, the structure
argument, and the explanatory argument.17 (They differ from the faulty
simple empirical argument of Prinz, Kriegel, and Horgan and Tienson that
I briefly criticized in Section 3.) To decide the important externalism-
internalism issue, we must get out of our armchairs and look seriously at
work in neuroscience and psychophysics.18

Notes
1. For reductive externalism, see Dretske (1995), Lycan (2001), Tye (2000), Noë
(2004) and Fish (2009, 153). I will not explain ‘reductive’ here. See Sider
(2011, 116–132) for clarification and defence of a general reductionism about
the manifest image. See §2 of this chapter for a case for reduction over alter-
natives (e.g., basic grounding relations).
2. See Kriegel (2011), Horgan and Tienson (2002), Loar (2003) and Mendelovici
(2010). Pautz (2013) defends in detail the following ‘consciousness-first’
picture: Consciousness grounds rationality because it is implicated in basic
epistemic norms. (For a related view, see Smithies, Ch. 6 of this volume.)
In turn, the facts about rationality help to constitutively determine belief
and desire (Davidson, Lewis). So consciousness also ultimately grounds belief
and desire. Chalmers (2012, 467) briefly defends a related two-stage view on
which acquaintance grounds normative inferential connections and these in
turn pin down content.
3. Another well-known argument for phenomenal externalism starts with what
I have elsewhere (2007, 251) called the properties version of the ‘transparency
observation’ (see, e.g., Tye [forthcoming b]). But I think this ‘transparency
observation’ (unlike the ‘spatial datum’) is far from pre-theoretically obvious,
due to problems (not considered by Tye) concerning hallucination, many-
property situations and a priori constraints on attentive awareness (Pautz
2007, 517, 522 and n. 12).
4. See, e.g., Kriegel (2011, 167) and Prinz (2012, 286).
5. The bad-off BIV has no visual receptor system or motor output system (just
the central nervous system). Granted, if the BIV were suitably connected to
a human body, its current neural state would track round things and cause
round-appropriate behavioural movements. Could the BIV’s standing in this
counterfactual relation to roundness constitute its standing in the phenom-
enal representation relation to being round rather than to any other shape
(e.g., being square)? No, for by differently hooking up the BIV to the world
and a motor output-system, we could get its brain state to be caused by (say)
square things and to cause square-appropriate behaviour.
6. However Johnston (2004), Pautz (2007) and Tye (forthcoming) provide an
argument (not addressed by Kriegel or Mendelovici) for this claim, based
on the fact that having R would necessarily enable one to know what such
properties or qualities are like (which requires that they exist and that one is
perceptually related to them).
7. In a recent book (2012), Prinz presents his ‘AIR’ theory, a materialist theory
of consciousness. The AIR theory entails that experiences are essentially inter-
mediate representations, defined as representations of ‘view-point relative micro-
features’ (see 124–126, 286; but see 327 for a contradictory claim). He also
defends internalism: a BIV might have R (19, 286). He might say (20) that
in having R the BIV phenomenally represents a response-dependent ‘shape
appearance’ – a view I haven’t covered here. Could he avoid my argument
that internalism leads to primitivism about representation relations? No;
indeed, even though his AIR theory entails that experience is inseparable
from sensory representation, he provides no theory of sensory representation
in the book. This is like Hamlet without the prince. Formerly, Prinz accepted
Dretske’s externalist theory of representation (see Pautz 2010c). But as we
have seen, Dretske’s externalist theory cannot be applied to the BIV, since its
internal states don’t have the biological function of indicating any proper-
ties (including Prinz’s ‘microfeatures’ and ‘appearances’). Elsewhere (2010c) I
also argue that Dretske’s theory is incompatible with Prinz’s general view that
experience represents ‘response-dependent properties’.
8. Horgan and co-workers actually invoke internalist theses somewhat
different from the one I have formulated in the text. But I think they can
be set aside because they are problematic. (i) Horgan, Tienson and Graham
(2004, 302) say that, intuitively, ‘a [arbitrary] physical duplicate of oneself
would also be a phenomenal duplicate of oneself’; similarly, Kriegel (2009,
79) claims that, intuitively, a physical duplicate of me ‘would have to undergo
the same conscious experience I undergo’ (my italics). Unlike the more basic
thesis of phenomenal internalism I formulated in the text (which is neutral
on whether experience depends on intrinsic physical or non-physical nature),
the thesis these philosophers are expressing is that mere intrinsic physical
duplication would have to result in phenomenal duplication. But intuition
clearly doesn’t support this. In fact, the reverse is true: intuitively, an intrinsic
physical duplicate of you (e.g., a BIV) might be spectrum inverted with respect
to you, or a zombie without any experiences at all (e.g., if dualism is true and
the psychophysical laws are highly contingent). (ii) Horgan and co-authors
suggest that what they call narrowness is obvious: ‘phenomenology does not
depend constitutively on factors outside the brain’ (2002, 526–527; 2004,
299, 301). The problem with this brain-based narrowness thesis is that it is also
too strong to be justified from the armchair. It rules out substance dualism and
sense datum theory, for on these views conscious experience depends consti-
tutively on the existence and character of particulars wholly distinct from the
brain (neither non-physical souls nor non-physical sense data reside in the
brain). It also rules out the ancient view that the physical basis of conscious-
ness is the heart. Even if these views are false, mere intuition isn’t enough to
rule them out. (iii) Horgan and Tienson (2002, n. 23) also suggest that what
they call intrinsicness is ‘self-evident to reflective introspection’: ‘phenome-
nology is not constitutively dependent on anything outside phenomenology
itself’. What does this mean? If ‘anything outside phenomenology’ means
anything ‘whose nature is describable in non-phenomenological language’, in the
words of Horgan and Tienson (2002, n. 23), then this thesis simply amounts
to dualism, so it doesn’t capture any obvious intrinsicality thesis. If, on the
other hand, ‘anything outside phenomenology’ means anything that phenom-
enology does not constitutively depend on, then the thesis becomes a trivial
analytic truth and so is even compatible with reductive externalist theories.
9. Chalmers (2006, 56), Hawthorne (2004), Kriegel (2007, 321) and Levine
(2001, 113) explicitly claim that we have strong pre-theoretical reason to
accept phenomenal internalism (but see Chalmers 2006, 78, for the opposite
claim). Block (1990, 16) and Burge (2003, 444) accept phenomenal inter-
nalism without argument.
10. For these versions of the ‘inverted earth’ case, see Lycan (2001, 30–31) and
Levine (2001, 113).
11. Tye (forthcoming a, §3) makes a similar point.
12. See Chalmers (2004, 168; 2006, 56) and the quotes below.
13. Here is another way to see that possibility intuitions differ from the inter-
nalist intuition. Consider the dualistic sense datum theory. Or consider
a (somewhat strange) dualist version of naive realism, on which external
qualities and our acquaintance with them supervene only nomically on the
physical character of objects and the causal process from objects to the brain.
Since they are externalist, such theories are inconsistent with the internalist
intuition. But since they are also dualistic, they are quite consistent with intu-
itions concerning the possibility of spectrum inversion and brains in vats:
they agree that acquaintance with qualities – and hence phenomenology –
can vary independently of wide physical conditions.
14. See Tye (forthcoming a, §3) and (2000, 110).
15. For these arguments, see Fodor (1991), Levine (2001, 117) and Chalmers
(2004, 354–355). For replies to Fodor’s causal argument, see Dretske (1995,
151ff.) and Tye (forthcoming a, §3). For a reply to Chalmers’s argument
from slow switching and indeterminacy, see Lycan (2001). As for Levine’s
introspective argument, I think it too fails. Suppose you have the confident
introspective belief that two of your colour experiences radically differ (for
short, the ‘difference belief’). Levine argues that if an externalist theory like
tracking intentionalism is true (or even if you merely believe it is true), then
your apparently indefeasible difference belief is in fact defeasible by (perhaps
misleading) empirical evidence that the colour experiences track the same
external reflectance feature: this evidence should make you reject your own
confident introspective belief! My reply: this is an issue for everyone (Byrne
2003, 645). For instance, if neural identity theory is true (or even if you
merely believe it), then the ‘difference’ belief should likewise be defeasible by
(perhaps misleading) evidence that your underlying brain states are the same.
Levine (117) also supposes that tracking intentionalism absurdly entails that
you could confidently have the difference belief, even while it is actually
false, because the colour experiences actually track the same feature. But this
too is an issue for everyone: why cannot there be a radical mismatch between
one’s most basic, simple introspective beliefs and the true character of one’s
experience (constituted by tracking, brain states, or whatever)? Pautz (2010c,
359) sketches an answer (one available to externalists).
16. This bears on modal epistemology. Chalmers’s (2009) two-dimensional
approach and ‘modal rationalism’ require that all necessities (formulated
in non–Twin Earthable terms) are a priori. But I think a counterexample
is the necessary falsehood of certain relational-externalist theories, specifi-
cally sense datum theory and naive realism. True, Chalmers (2004, 168; 2006,
56) thinks some materialist externalist theories can be ruled out a priori on
the basis of antimaterialist conceivability arguments about spectrum inver-
sion and the like. But Chalmers cannot use these a priori antimaterialist
arguments against non-materialist (dualist or ‘panpsychist’) externalist theo-
ries, such as a non-materialist version of naive realism (see n. 13); indeed
Chalmers considers such theories a priori plausible (2006, 79). Against
‘modal rationalism’, the necessary falsehood of such theories is only know-
able a posteriori (n. 17).
17. The internal-dependence argument and the structure argument are discussed
in Pautz (2006, 2010c). The explanatory argument is briefly put forward in
Pautz (2010c, n. 23). In more recent work, I use these empirical arguments
against naive realism.
18. My thanks to Angela Mendelovici, Boyd Millar, Declan Smithies and Mark
Sprevak.

References
Block, N. (1990) ‘Inverted Earth’. Philosophical Perspectives, 4, 53–79.
Burge, T. (2003) ‘Phenomenality and Reference: Reply to Loar’. In M. Hahn and B.
Ramberg (eds), Reflections and Replies. Cambridge, MA: MIT Press.
Byrne, A. (2003) ‘Color and Similarity’. Philosophy and Phenomenological Research,
66, 641–665.
Chalmers, D. (2004) ‘The Representational Character of Experience’. In B. Leiter
(ed.), The Future of Philosophy. Oxford: Oxford University Press.
Chalmers, D. (2006) ‘Perception and the Fall from Eden’. In T. Szabo Gendler and
J. Hawthorne (eds), Perceptual Experience. Oxford: Oxford University Press.
Chalmers, D. (2009) ‘The Two-Dimensional Argument against Materialism’. In
B. McLaughlin and S. Walter (eds), The Oxford Handbook of Philosophy of Mind.
Oxford: Oxford University Press.
Chalmers, D. (2012) Constructing the World. Oxford: Oxford University Press.
Devitt, M. and Sterelny, K. (1987) Language and Reality. Cambridge, MA: MIT
Press.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Field, H. (2001) Truth in the Absence of Fact. Oxford: Oxford University Press.
Fish, W. (2009) Perception, Hallucination and Illusion. Oxford: Oxford University
Press.
Fodor, J. (1991) ‘A Modal Argument for Narrow Content’. Journal of Philosophy,
88, 5–26.
Hawthorne, J. (2004) ‘Why Humeans Are Out of their Minds’. Noûs, 38,
351–358.
Horgan, T. and Tienson, J. (2002) ‘The Intentionality of Phenomenology and the
Phenomenology of Intentionality’. In D. Chalmers (ed.), Philosophy of Mind:
Classical and Contemporary Readings. Oxford: Oxford University Press.
180 Adam Pautz

Horgan, T., Tienson, J. and Graham, G. (2004) ‘Phenomenal Intentionality and
the Brain in a Vat’. In R. Schantz (ed.), The Externalist Challenge: New Studies on
Cognition and Intentionality. Berlin: de Gruyter.
Jackson, F. (1977) Perception. Cambridge: Cambridge University Press.
Johnston, M. (2004) ‘The Obscure Object of Hallucination’. Philosophical Studies,
120, 113–183.
Kriegel, U. (2007) ‘Intentional Inexistence and Phenomenal Intentionality’.
Philosophical Perspectives, 21, 307–340.
Kriegel, U. (2009) Subjective Consciousness. Oxford: Oxford University Press.
Kriegel, U. (2011) The Sources of Intentionality. Oxford: Oxford University Press.
Levine, J. (1997) ‘Are Qualia Just Representations?’ Mind and Language, 12,
101–113.
Levine, J. (2001) Purple Haze. Oxford: Oxford University Press.
Loar, B. (2003) ‘Phenomenal Intentionality as the Basis of Mental Content’.
In M. Hahn and B. Ramberg (eds), Reflections and Replies: Essays on the Philosophy
of Tyler Burge. Cambridge, MA: MIT Press.
Lycan, W. (2001) ‘The Case for Phenomenal Externalism’. Philosophical Perspectives,
15, 17–35.
Mendelovici, A. (2010) Mental Representation and Closely Conflated Topics. PhD
diss., Princeton University.
Niyogi, S. (2005) ‘Aspects of the Logical Structure of Conceptual Analysis’. In
Proceedings of the 27th Annual Meeting of the Cognitive Science Society.
Noë, A. (2004) Action in Perception. Cambridge, MA: MIT Press.
Pautz, A. (2006) ‘Sensory Awareness Is Not a Wide Physical Relation’. Noûs, 40,
205–240.
Pautz, A. (2007) ‘Intentionalism and Perceptual Presence’. Philosophical Perspectives,
21, 495–541.
Pautz, A. (2010a) ‘Why Explain Visual Experience in Terms of Content?’ In B.
Nanay (ed.), Perceiving the World. New York: Oxford University Press, 254–309.
Pautz, A. (2010b) ‘A Simple View of Consciousness’. In G. Bealer and R. Koons
(eds), The Waning of Materialism. Oxford: Oxford University Press.
Pautz, A. (2010c) ‘Do Theories of Consciousness Rest on a Mistake?’ Philosophical
Issues, 20, 333–367.
Pautz, A. (2013) ‘Does Phenomenology Ground Mental Content?’ In U. Kriegel
(ed.), Phenomenal Intentionality: New Essays. Oxford: Oxford University Press.
Price, H. (1954) Perception. 2nd edn. London: Methuen.
Prinz, J. (2012) The Conscious Brain. Oxford: Oxford University Press.
Rosen, G. (2010) ‘Metaphysical Dependence: Grounding and Reduction’. In
Hale and Hoffman (eds), Modality: Metaphysics, Logic and Epistemology. Oxford:
Oxford University Press.
Russell, B. (1905) ‘On Denoting’. Reprinted in A. P. Martinich (ed.), The Philosophy
of Language, 3rd edn. Oxford: Oxford University Press, 1996, 199–207.
Russell, B. (1927) The Analysis of Matter. London: Kegan Paul.
Schmidtz, D. (2006) Elements of Justice. Cambridge: Cambridge University Press.
Shoemaker, S. (1994) ‘The Phenomenal Character of Experience’. Philosophy and
Phenomenological Research, 54, 291–314.
Sider, T. (2011) Writing the Book of the World. Oxford: Oxford University Press.
Tye, M. (forthcoming a) ‘Phenomenal Externalism, Lolita and the Planet
Xenon’.
The Real Trouble with Armchair Arguments 181

Tye, M. (forthcoming b) ‘Transparency, Qualia Realism, and Representationalism’.
Philosophical Studies.
Tye, M. (2000) Consciousness, Color and Content. Cambridge, MA: MIT Press.
Tye, M. (2012) ‘Thinking Fish and Zombie Caterpillars’. Interview with Richard
Marshall, www.3ammagazine.com/3am/thinking-fish-zombie-caterpillars/.
Winer, G. A., Cottrell, J. E., Gregg, V., Fournier, J. S. and Bica, L. A. (2002)
‘Fundamentally Misunderstanding Visual Perception: Adults’ Beliefs in Visual
Emissions’. American Psychologist, 57, 417–424.
Part II
Mind and Cognitive Science
9
Problems and Possibilities for
Empirically Informed Philosophy
of Mind
Elizabeth Irvine

The use of empirical work in philosophy of mind is an increasing trend
and forms the starting point for this chapter. The potential value of
such interdisciplinary research is not in question here, and it is assumed
to be high. Rather, the chapter focuses on questions about the specific
ways in which interdisciplinary research across philosophy and the
mind/brain sciences is carried out. These include how empirical work
can be used to support or revise existing philosophical positions, and
the role of the empirically based philosopher in cognitive science.
Following this, I suggest an alternative way of approaching questions in
philosophy of mind and cognitive science in an interdisciplinary way,
based on contemporary work in philosophy of science. This approach is
explored through two examples, focusing on the interpretation of first-
person data and questions about the boundaries of cognition. While
not the only, or necessarily the best, approach to interdisciplinary work,
I suggest that a focus on methodological questions from the point of
view of philosophy of science is a potentially invaluable way of pursuing
philosophical questions about the mind.
This chapter is based on the sociological observation that many
philosophers interested in the mind now think that interdisciplinary
work across philosophy and the cognitive sciences is a Good Thing.
While far from uncontroversial in philosophy of mind (e.g., Burge 2010,
2011; McDowell 2010), increasing numbers of philosophers are now
interested in psychology and neuroscience (and vice versa), and I take
it that this trend will continue. However, I think there are a number of
problems facing any attempt to do ‘good’ empirically informed philos-
ophy of mind.


I introduce two of these problems below: (i) empirical examples simply
failing to fit philosophical distinctions (even though they may be used
in support of them); and (ii) philosophical accounts taking the form of
inadequate scientific theories, in the sense that they are vague and only
attempt to fit, rather than predict, empirical data. These problems are
certainly not new ones and have been commented on before with regard
to specific cases (some examples are discussed below). Furthermore, the
problems outlined here are certainly not intended to support the idea of
philosophy of mind as an intellectual enterprise that can (or should) be
pursued independently of empirical considerations or the idea that all
philosophy of cognitive science is deeply flawed. Instead, these problems
are pointed out as they highlight important methodological questions
about the nature of interdisciplinary work. Given the assumption that
such interdisciplinary work is likely to continue in philosophy, these
problems deserve more discussion.
While these problems are not unresolvable, the second part of this
chapter promotes an alternative and potentially powerful approach to
pursuing interdisciplinary philosophical work. The suggested approach
is based within philosophy of science. This approach consists in a general
philosophical analysis of scientific methods, concepts and distinctions
used to investigate the brain. To some extent this is already part of the
project of philosophy of cognitive science, but I suggest that more general
frameworks from philosophy of science (i.e., not those just limited to
cognitive science) are so far seriously underused, yet extremely valuable.
Two examples of questions that may best be tackled using a philosophy
of science approach are discussed: the analysis of first-person or intro-
spective methods as a viable form of scientific measurement, and the
sort of constitutive question that philosophers often seem to assume
have a simple answer, such as whether cognition is extended or not.
A philosophy of science approach is not the only viable philosophical
approach to understanding the mind, nor necessarily the best one, but
it does offer a way of informing and calming (if not resolving) some
existing philosophical debates, and a different way of approaching inter-
disciplinary work across philosophy and the cognitive sciences.

1 Interdisciplinary philosophy: mind and cognitive science

One problem that pervades philosophical theorizing in mind and cogni-
tive science is that philosophical theories and distinctions can simply
fail to map onto (interesting) scientific distinctions. As illustrated below,
even the apparently ‘good’ cases can often be seriously flawed. This
problem shows that doing empirically informed philosophy is not as easy
as it looks. On closer inspection of many cases where empirical work is
used to support a philosophical theory, discussions of empirical research
are either irrelevant to the philosophical theory or distinction, or they
show it to be fundamentally misguided or uninteresting.
The second problem is that deriving a philosophical account of a mental
phenomenon, such as perception or consciousness, from empirical data
often (though not always) makes the philosophical account little more
than a vague version of currently accepted scientific theories. Within
science, vague theories are seen as inadequate theories. Typically, these
philosophical accounts also attempt to provide a unified theory of wide
explanatory scope, yet science tolerates very few such ‘grand unified’
theories. If philosophers aim to contribute to the mind/brain sciences
by developing unifying but (largely) non-predictive theories, there is a
real question about how scientifically useful such work is.
It is suggested below that these problems are serious and fairly
common ones within at least some approaches to interdisciplinary work
across philosophy and the mind/brain sciences. Instead of throwing in
the towel and either retreating to the armchair or the local cognitive
science department, I later propose an alternative, and hopefully more
productive, philosophical way to engage with empirical work.

1.1 Empirically informed philosophy of mind


Here I briefly describe an example of the way that philosophical distinc-
tions and concepts can fail to fit scientific distinctions and concepts,
even though empirical work is called upon to support the philosophical
distinction. This example is taken from an area of research – and from
the work of particular philosophers – that is typically seen as providing
an example of a good way to engage with empirical work. The prob-
lems outlined below show just how careful one needs to be when calling
on empirical work to support a philosophical theory. This example
concerns the equation, recently made by several philosophers, between
the contents of phenomenal consciousness and the contents of iconic
(sensory) memory.
The general debate in which this example features concerns whether
(phenomenal) consciousness has rich or sparse content. It appears as
though we have richly detailed experiences of whole scenes, but experi-
mental work in the last ten years or so has cast doubt on this. Work
on inattentional blindness and change blindness (e.g., Mack and Rock
1998; Simons and Rensink 2005) showed that subjects fail to report
significant changes in scenes (dancing gorillas, changes in conversa-
tional partners). Given our strong intuitions that we would notice a
dancing gorilla or a change in the identity of a conversational partner,
the finding that we (typically) do not notice these changes raises doubts
about how reliable these intuitions are. If we do not notice what we
(intuitively) think we would notice, then perhaps we do not in fact
consciously perceive as much as we think we do.
However, several philosophers have proposed alternative ways to
interpret these experiments. It has been suggested that even if subjects
do not overtly notice (or report) these changes or salient features, they
may nevertheless be conscious of them. For example, Block (2007)
suggests that rich phenomenal content is simply not all concurrently
reportable, Dretske (2007) distinguishes between knowing or thinking
(hence reporting) about visual stimuli and consciously perceiving them,
and Wolfe (1999) suggests that scenes could be consciously perceived
in a richly detailed way but then quickly forgotten. All these authors
agree that a lack of overt report by subjects does not imply that they
were not conscious of a scene in rich detail. The link between different
types of access to information, or the relation between content and
depth of processing, appears to make claims like these open to empirical
evidence.
To provide empirical evidence for the claim that rich conscious
content is experienced but is quickly ‘forgotten’, and that it is not all
concurrently accessible, many authors have turned to Sperling’s (1960)
experiments on partial report superiority. In these experiments, subjects
are shown a display of 12 letters for a very short duration and then cued
to report the contents of a particular row of letters. Subjects are able to
report this row reasonably well, and the cued row can be anywhere on
the display. However, subjects can only accurately report 3 or 4 letters
(or the contents of one row) on each trial. That is, after viewing the
display, subjects have the potential to report almost all of the 12
letters but can access (report) only 3 or 4 per trial.
This short-term visual memory (iconic or sensory memory) of large
amounts of information that can only be partially reported has been
claimed by a range of philosophers to provide the rich contents of
phenomenal consciousness. That is, the rich contents of phenomenal
consciousness have been equated with the rich contents of sensory
memory (e.g., Tye 2006; Block 2007; Fodor 2007, 2008; Jacob and de
Vignemont 2010), thus providing support for the claim that conscious-
ness has rich, rather than sparse, content.

However, a closer look at the details of current theories of sensory
memory shows that such an equation cannot hold. Instead of providing
a rich store of visually detailed (non-conceptual) content that is
consciously experienced, the type of short-term visual memory investi-
gated by Sperling is now recognized to be a non-unified memory store
of differentially processed and decaying information (Loftus and Irwin
1998; Luck and Hollingworth 2008). As explained below, this memory
store does not have the properties that are attributed to phenomenal
consciousness, but instead has very different properties that are not
obviously realized in visual experiences. Furthermore, there are alterna-
tive ways of explaining reports (and possibly intuitions) about the expe-
rience of rich conscious content. The claim that the contents of sensory
memory are the contents of phenomenal consciousness is simply not
consistent with contemporary scientific theories of sensory memory.
The main problem with philosophical accounts making these claims
is that they assume sensory memory to be a single, unified phenom-
enon relating to short-term storage of (non-conceptual) visual infor-
mation that is present in consciousness. In contrast, contemporary
theories use sensory memory as an umbrella term to refer to two distinct
phenomena, only one of which is directly related to visual experience.
The phenomenon directly relevant to visual experience is not the one
found in Sperling’s paradigm. Furthermore, the type of sensory memory
investigated by Sperling, and identified with phenomenal conscious-
ness in the philosophical literature, displays rather different properties
to those typically attributed to phenomenal consciousness.
One phenomenon found under the banner of sensory memory is
visible persistence. This phenomenon is related to our experience of
after-images, or the integration of spatially overlapping but tempo-
rally separated patterns (patterns seen ‘as one’ despite being presented
at different times). Visible persistence is due to neural activity in early
visual areas that continues when stimuli with very short display times,
or high-contrast stimuli (e.g., lightning in a dark sky) are no longer
present. This is the phenomenon that is directly related to visual experi-
ences, and it controls how long they persist over time (after-images can
last for several seconds).
The other phenomenon found under the heading of sensory memory,
and illustrated through Sperling’s work, is informational persistence.
This refers to the short-term storage of visual information. Informational
persistence consists of both a visible analogue store for shape and loca-
tion information (stored for 150–300 ms), and a post-categorical store for
abstract information such as identity (stored for 500 ms). The differences
in the type of information found in these two stores, and the different
decay rates of this information, lead to specific kinds of errors in the
Sperling paradigm. When subjects are cued after the visible analogue
store has decayed (location information gone) but while the post-
categorical store is still available (identity information still available),
subjects make ‘location errors’. In these cases, subjects correctly identify
some letters, but since location information is no longer available, the
letters come from non-cued rows. Significantly, informational persist-
ence is a type of visual memory; it tells us about the short-term storage of
different types of visual information but not about the content or dura-
tion of visual experiences. As Luck and Hollingworth (2008) state: ‘the
partial-report technique does not measure directly the visible aspect of
visual sensory memory, but rather that information persists after stimulus
onset’ (16; original italics).
These facts pose several problems for accounts that equate the contents
of rich phenomenal consciousness with the contents of sensory memory.
First, several authors have treated visible and informational persistence
as different aspects of the same (conscious) phenomenon (e.g., Block 2007,
488–491; Tye 2006, 511–513), but this conflates two very different
phenomena. Visible and informational persistence are
investigated with different experimental paradigms, concern different
parts of the visual system, have different temporal properties, and relate
either to early visual processing and visual experience or to short-term
visual memory. The Sperling paradigm, and the general phenomenon
of informational persistence are not measures of what is experienced,
only what is ‘remembered’ and reported when the display is no longer
present. Using informational persistence (memory) to make claims about
the contents of visual experience is not accepted within current psycho-
logical theories, so neither should it be in philosophical accounts.
Second, even if we could make inferences from informational persist-
ence to the contents of phenomenal consciousness, the properties of
the visible analogue and post-categorical stores are not consistent
with the properties typically attributed to phenomenal conscious-
ness. Information in these stores is deeply processed, up to the level of
object identity. Subjects are therefore not reading off the letter identities
from a conscious perception of visually detailed (non-conceptual)
letter-like shapes. They are simply reporting the letter identities
that are already processed and stored; this type of sensory memory is
certainly not ‘iconic’. Further, subjects do not report a changing expe-
rience as different types of information degrade, as one might expect
if the contents of sensory memory are the contents of phenomenal
consciousness. Experiences of letter displays do not suddenly lose spatial
information, becoming a sort of non-spatial letter-identity soup, as one
would imagine to be the case when subjects are making ‘location errors’
as described above.
Finally, the reports that subjects give of experiencing the whole display
in a richly detailed way can be explained in another, very different
way. Instead of these reports being based on rich (internal) conscious
content, expectation (de Gardelle 2009; Kouider et al. 2010) and scene
gist (Oliva 2005) are sufficient to explain reports (and the intuition) of
phenomenal richness. They are sufficient even when the reported visual
details are not in fact present; false reports of specific details based on
the processing of scene gist are widely found and reliably generated (see,
e.g., Castelhano and Henderson 2008).
This means that the philosophical equation between the contents of
sensory memory and phenomenal consciousness is inconsistent with
contemporary psychological theories. Furthermore, there is also an
alternative explanation for reports (and likely intuitions) of phenom-
enal richness that is not based on the processing or storage of richly
detailed (non-conceptual) visual information (for more details and
potential implications, see Irvine 2011).
This example illustrates how philosophical accounts that appear to
engage deeply with empirical work can get it wrong. In scientific work,
the details count and can make the difference between a claim being
entirely consistent with empirical work or being entirely incompatible
with it. In this example (and in others), philosophical work is based
on outdated scientific theories or on misinterpretations of current theo-
ries. Referring to historical philosophical theses, or alternative readings
of philosophical theses, is a viable way of motivating and justifying
contemporary positions. However, historical scientific theories cannot
be used in this way, and much care needs to be taken in providing alter-
native interpretations of empirical evidence. If empirical work is used in
philosophy, it needs to be used in a scientifically respectable way.
Of course, this does not show that philosophical discussions of
conscious content are fundamentally mistaken, misleading or unin-
teresting, or that they should be abandoned entirely and replaced by
psychological and neuroscientific theory. The point is that if philos-
ophy of mind is to be properly informed by empirical work, then we
will have to accept that this cannot merely consist in searching for
empirical support for philosophical theses wherever we can find it. Real
interdisciplinary work across philosophy and science requires a detailed
understanding of each discipline in its own right and an attempt at some
degree of integration. This often results in the revision, sometimes deep
revision, of the theories, concepts and distinctions we use across the
disciplines in question.1 In this case, we have to accept that some philo-
sophical positions will turn out to be mistaken or uninteresting when
confronted with contemporary scientific work.
Again, if we are committed to interdisciplinary work (and this seems
to be a growing trend), then research methods in empirically informed
philosophy may need to change in the ways outlined above. However,
getting the balance right between scientific and philosophical contri-
butions is difficult; a second problem that arises in empirically based
philosophy is discussed below.

1.2 Deriving philosophical accounts from empirical work


This section concerns empirically informed philosophy of mind, and
some parts of philosophy of cognitive science, where philosophical
theories are derived from current scientific theories. That is, current
empirical work is well understood and well summarized, but more or
less presented as philosophical work in its own right, or used to derive
philosophical conclusions. There is no sharp distinction between these
cases and those addressed above: being more familiar with empirical
work does not rule out drawing faulty links between scientific and philo-
sophical theories. However, basing philosophical accounts on empirical
work faces additional problems. The main problem is that of defining
the role of the philosopher within cognitive science.
One option is to simply do (more or less) theoretical cognitive science,
with one foot in cognitive science and one foot in philosophy, some-
times being involved in experimental work but more often offering
interpretations and theoretical accounts of current scientific findings.
I see no problem with philosophers functioning as cognitive scientists
engaged in theoretical work (we are supposed to be adaptable folk), but
there are two related problems with this kind of approach.
One tactic in this kind of interdisciplinary work is to largely dispense
with philosophical terms or to change them in such a way as to fit
experimental data. However, the resulting philosophical theory, based
on scientific work, can be just a restatement of currently accepted theo-
ries in cognitive science, sometimes using different labels for different
processes (or again, sometimes not). One example of this is Prinz’s (2012)
theory of consciousness, which is largely just a recapitulation of atten-
tion-based theories of consciousness that have been popular in cognitive
science for a long time (e.g., Baars 1997; Dehaene and Naccache 2001).
Similar things were said about O’Regan and Noë’s (2001) sensorimotor
account of consciousness; while sensorimotor accounts are interesting,
they are not new (see, e.g., Gibson 1979) and don’t necessarily tell us
much about what constitutes consciousness (e.g., Hardcastle 2001).
In providing an empirical account of what a particular phenomenon
is (e.g., consciousness), these accounts do not add significantly to the
existing scientific literature (though they may of course reinvigorate
research in a particular area).
A more general but related question is what status empirically based
philosophical theories have, particularly if we think that philosophical
theories are (or should be) different from theories found in cognitive
science. Again, there is no problem in accepting that philosophers can
do theoretical cognitive science, but if they are to be seen as doing some-
thing that philosophers in particular do well or if philosophical theo-
rizing in cognitive science is to feed back into traditional philosophical
debate (and it seems that this is the goal of many empirically based
philosophers), then it helps to identify what this contribution is. One
common approach is discussed below (there may be others).
Philosophical theories that are based on empirical work often aim to
provide a new way of thinking about a phenomenon (consciousness,
perception) by considering a set of examples or a description of a mecha-
nism and situating this within some more general framework. This often
includes making claims about what constitutes that phenomenon, iden-
tifying its function, and relating it to other mental phenomena (e.g.,
action, attention).
Of course, such accounts can also be proposed by non-philosophers.
Computational theories of perception and cognition provide similar
contributions; symbolic cognitive architectures, like ACT-R, make
different claims about what cognition is compared with connectionist
models, which are different again to probabilistic models (see, e.g.,
Anderson and Lebiere 2003; McClelland et al. 2003; McClelland et al.
2010; Griffiths et al. 2010). These different modelling frameworks give
rise to very different claims about the nature of attention, perception,
cognition, action and more ‘philosophical’ notions such as representa-
tion and rationality (e.g., Gigerenzer et al. 1999).
Unfortunately, what seems to distinguish ‘philosophical’ theories
of empirical phenomena is their enduring status as inadequate scien-
tific theories: being vague, qualitative, not specifying the boundary
conditions of the theory, not generating predictions and so on. These
kinds of theories are of course found in science, particularly when novel
phenomena are found or when a theory has just been proposed. Another
common feature of philosophical theories of empirical phenomena
is that they often take the form of ‘grand unifying’ theories that
is that they are often of the form of ‘grand unifying’ theories that
attempt to account for a wide range of phenomena (e.g., the theories
of consciousness noted above). When compared with other inadequate
scientific theories, philosophical theories can therefore play a small but
useful role in science: suggesting new frameworks to account for (and
sometimes unify) a range of phenomena. However, a proliferation of
these kinds of theories is not necessarily helpful.
This is largely because there are very few accepted theories in science
that unify and explain a wide range of phenomena but fail to exhibit the
typical properties of ‘good’ theories noted above. The theory of natural
selection is a standard example; its explanatory scope is massive, but
without the addition of context-specific knowledge and assumptions,
it cannot make (quantitative) predictions, and the boundaries of the
theory are still being worked out for specific cases (e.g., what proportion
of evolution is really due to natural selection). As these kinds of non-pre-
dictive, ‘grand unifying’ theories do not possess the standard properties
of scientific theories, very few of them are tolerated for long or seen as
useful explanatory frameworks in the first place. For example, Friston’s
‘free-energy’ principle (e.g., Friston and Stephan 2007; Friston et al.
2006) is potentially a powerful ‘grand unifying’ theory for cognition and
animal behaviour in general. However, it appears to be taken seriously in
relevant research communities only when empirical support is given
for details of its implementation in specific cases (e.g., see Clark 2013 for
a thorough overview of recent work with much the same viewpoint).
On a smaller scale, there are often complaints about the proliferation of
models providing a ‘proof of concept’ for a particular explanatory frame-
work. This occurs when a model or framework is shown to account for
core, simple phenomena in a post hoc way. These models often lack the
properties of ‘good’ scientific models, as they are typically underspecified
or not well worked out and so do not provide clear, testable predictions
about any other cases (e.g., in decision making; see Glöckner and Betsch
2011). Clearly, an explanatory framework has to start somewhere, but
it will not be particularly useful if it goes undeveloped and untested, as
many proofs of concept appear to do.
Philosophers aiming to give ‘grand unifying’ theories of mental/
cognitive phenomena need to be mindful of these facts. While there
does seem to be a shortage of theoretical work in cognitive science (a
gap philosophers could try to fill), the scientific value of adding more
loosely specified, wide-scope theories into the literature is not obvious.
‘Good’ theoretical work needs to be highly empirically sensitive, and
theories need to be good scientific theories – at the very least being able
to explain and predict novel phenomena. If philosophers are joining the
game of interpreting experiments and posing grand unifying explana-
tory frameworks, then in order for these frameworks to be useful (or
perhaps even considered) in cognitive science, philosophers may end up
having to do cognitive science. This is likely to include formalizing their
models, generating testable predictions, identifying critical properties
(core predictions that could falsify the theory) and running experiments
(see also Dennett 2009 on this kind of enterprise).

1.3 Summary
I have not argued that all empirically informed or empirically based
philosophical accounts face the problem of making inappropriate
theoretical distinctions or of providing nothing more than inadequate
scientific theories; it is only that within my experience these seem
to be common problems facing empirically informed philosophy of
mind and philosophy of cognitive science. In part, I suspect that this is
because much of (interdisciplinary) philosophy of mind and cognitive
science is focused on answering constitutive questions: what is percep-
tion or attention or consciousness, what are their functions, and how
do they relate to other mental/cognitive/psychological states. Given
the differences between philosophy of mind and contemporary cogni-
tive science, it is hardly surprising that, as philosophical distinctions
do not always map onto scientific ones, many philosophers are left to
force a fit (often without realizing it), treat (constitutive) philosophical
questions as ones to be answered a priori, or fall into the bit of cogni-
tive science that they are best trained to do (interpreting experimental
results into newish conceptual frameworks but not engaging in full-on
scientific research).
The problems outlined above are not irresolvable. I suggest below that
there are other ways – as yet underutilized and potentially invaluable –
of pursuing interdisciplinary work. These methods are based on the idea
of treating mental/cognitive/psychological properties as any other high-
level property of a complex biological system and trying to figure out
the best way to investigate them. Rather than ask ‘What is (philosophi-
cally interesting phenomenon) X?’ or ‘Does X have (philosophically
interesting) property Y?’, we can instead ask ‘What methods can we use
to investigate X (and whether X has property Y)?’ and perhaps learn
some surprising things in the process, both about how science works
and about the phenomena we’re interested in. While an approach based
on philosophy of science is sometimes found in philosophy of cognitive
science, I suggest that it deserves to be taken more seriously.
196 Elizabeth Irvine

2 An alternative: philosophy of the science of the mind

The kind of philosophy of science discussed below focuses on scien-
tific methods and what these methods tell us about the phenomena we
are interested in, along with apparently a priori metaphysical questions,
such as what causation is (Woodward 2008), what reduction means
(Machamer and Sullivan 2001), and what the difference is between
correlation and identity (Bechtel and McCauley 1999; McCauley and
Bechtel 2001). I do not argue for this approach in general here (see
Bechtel 2009 for more general examples of the utility of this approach)
but illustrate, through two examples, the ways that insights from philos-
ophy of science stand to affect both cognitive science and philosophy
of mind. These concern the status of first-person data, and the way that
we pose and answer constitutive questions (here, whether cognition is
extended or not).

2.1 First-person data in science and philosophy


One discussion that stands to benefit from methodological insights from
philosophy of science, particularly from the point of view of scientific
measurement, is the use of first-person data (including introspective
reports) in psychology and cognitive science. First-person methods are
often treated as a unique way to access private data, and sometimes held
(whether explicitly or not) to provide incorrigible evidence about the
nature of a subject’s internal (conscious) states. While rather innocuous
in some settings, such as patients reporting pain levels to medical prac-
titioners, the veracity of first-person data has become a talking point in
consciousness studies.
For example, Schwitzgebel (2008), while pointing out the surprising
unreliability of at least some types of first-person reports (e.g., concerning
our own emotional and bodily states and the near absence of consensus
over whether there is any cognitive phenomenology), suggests that in
some cases we can be trained to be better introspectors (2003) and that
some methods of ascertaining first-person data (e.g., the buzzer method;
2007) might be better than others. Rather less carefully, the new field
of neurophenomenology takes the contents of first-person reports to be
essential to studying consciousness and holds that, given suitable training
and response scales, first-person data are reliable and essential scientific
data (Ramsøy and Overgaard 2004; Overgaard et al. 2006).
Despite the widespread use of first-person reports within science and
philosophy, and the varying degrees of reliability ascribed to them,
to date there has been relatively little discussion of methodological
questions surrounding the use of first-person data from the perspective
of philosophy of science (though see Feest 2012; Piccinini 2005, 2009a,
2009b, 2010; Irvine 2012). The following section outlines a number of
questions about first-person data that deserve wider discussion and may
impact related areas in philosophy of mind. These questions call on the
details of scientific measurement, knowledge about the mechanisms
that generate first-person reports and possibly some rather difficult phil-
osophical work on what first-person data really tell us about.
First, there is the essential question of whether the use of ‘private’ data
can constitute a scientific method. Arguably, science must rely on (public)
measures that can be verified by other means and whose reliability we
have some way of gauging. If first-person methods really are the only
way of accessing the information they claim to inform us about, it is an
open question what (if any) judgments we can make about their reli-
ability. One way of solving this problem is to reject the assumption that
first-person data are data about ‘private’ states, and see first-person data
as public data and a form of self-measurement (Piccinini [see esp. 2009a,
2009b]; Dennett’s heterophenomenology [2003, 2007]).
Yet even if first-person data are like other scientific data, there is the
further question of how we interpret the data. It is widely acknowledged
that, at least sometimes, reports are biased or inaccurate; but it is also
widely assumed in the philosophical literature that there are cases where
reports can be essentially bias-free readouts of an internal mental state
(e.g., Bayne and Spener 2010). However, as has long been known from
the application of signal detection theory to human subjects (Green
and Swets 1966), ‘there is no such thing as an unmediated “subjective
report” – ever’ (Snodgrass and Lepisto 2007, 526). ‘Free’ first-person
reports (no forced guessing) are the product of filtering a system’s objec-
tive ability to respond to stimuli through a variable, context-sensitive
threshold (criterion). This raises significant questions about how or
whether to deal with bias in first-person measures.
For example, the phenomenon of perceptual defence (e.g., Bruner and
Postman 1947, 1949) was used to support Freudian ideas about the
unconscious via an experimental set-up that implicitly varied subjects'
response bias. It was found that subjects were better at consciously
perceiving (freely reporting) non-threatening words (‘shot’) than threat-
ening words (‘shit’) when both sets of words were presented under
identical conditions. This was taken as evidence of an internal ‘censor’
protecting the conscious mind from unpleasant stimuli. Instead, subse-
quent experiments and analyses using signal detection theory suggested
that subjects perceived both sets of words equally well but that college
undergraduates in the 1950s were wary of reporting swear words to
experimenters, giving them a strong response bias against reporting the
threatening words.
Although this is an extreme case, it shows how subjective reports can
be affected by factors inherent in a task (e.g., inclusion of swear words),
and factors relating to subjects (e.g., their motivation). The application
of signal detection theory to human subjects showed that subjective
reports are, always and ineliminably, biased readouts of the 'objec-
tive' ability of a system to respond to stimuli. In this case, eliminating
a response bias is impossible if we want to sustain 'free' first-person
reports. Minimizing a bias just means shifting the measure towards an
objective measure of sensitivity, and training subjects to use particular
response scales just gives them new response biases.
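The signal detection quantities at issue here can be made concrete with a small worked sketch. The hit and false-alarm rates below are hypothetical illustrations, not data from the original studies; the sketch computes the standard equal-variance sensitivity index d′ and criterion c, and shows how two conditions can exhibit roughly equal objective sensitivity while differing sharply in response bias, as the perceptual defence reanalyses suggested.

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection indices.

    d' (sensitivity) and c (criterion, i.e., response bias):
        d' = z(H) - z(F)
        c  = -(z(H) + z(F)) / 2
    A positive c marks a conservative criterion: the subject demands
    stronger evidence before reporting that a stimulus was present.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical rates in the spirit of the perceptual defence case:
# neutral words reported readily, threatening words reported warily.
neutral = sdt_indices(hit_rate=0.84, fa_rate=0.16)  # d' ~ 1.99, c ~ 0.0
taboo = sdt_indices(hit_rate=0.60, fa_rate=0.05)    # d' ~ 1.90, c ~ 0.70
```

On these (invented) numbers, sensitivity is nearly identical across conditions while the criterion for the taboo words is far more conservative: the apparent 'censor' dissolves into a shift in reporting threshold, which is precisely why 'free' reports cannot be read as unmediated measures of what subjects perceive.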
If first-person reports are biased readouts of ‘objective’ states, we seem
to have a few options. First, we can use first-person data as a direct means
of characterizing the contents of mental states. This is the default but
overly easy option: it is entirely possible that what a subject reports differs
from what he thinks/feels/sees. Second, we can identify mental states with
the objective facts about the system and abandon ‘subjective’ data alto-
gether. However, taking a reading of an ‘objective’ state of the system
seems to have little to do with the original use of first-person data – to
investigate internal, subjective states of the person, not just the percep-
tual sensitivity of the visual system (see, e.g., Lau 2006, for further
discussion). Third, we can accept that we (currently) have little idea how to
deal with the inherent bias in first-person measures but try to make
some headway in particular cases, perhaps using comparisons to other
types of data such as task training, motivation, expectation, knowledge
about cognitive structure and so on.
Yet Feest (2012) has also argued that in order to interpret introspective
reports, we need to have some theory of how introspection works (see
also Schwitzgebel 2012 on a pluralist conception of the mechanisms of
introspection). This is similar to the way that we need to know how a
measuring tool works in order to interpret its readings (Hacking 1983).
Although first-person data are useful enough in everyday life, if they are
to be used to decide substantive scientific and philosophical questions
(e.g., what are the contents/levels/types of consciousness, the bounda-
ries of perceptual abilities), we need a greater understanding of them as
scientific data. In this case, just as we need to understand how a micro-
scope works in order to interpret the data it provides, we need to under-
stand how first-person data are generated in order to interpret them.
This relates to questions about what first-person data are actually data
of. One key assumption that often underlies the use of first-person data
and talk about how to interpret them is that there really is a fact of the
matter about what mental state a subject is in at a particular point in
time. Given this assumption and the problems identified above, subjects
are now sometimes given training on how to figure out what they
really are experiencing and how best to report their mental states (e.g.,
Overgaard et al. 2006). However, there are other ways of thinking about
what first-person data are about. Dennett (2003, 2005, 2007) claims that
first-person reports tell us of individuals’ beliefs about their experiences
or mental states. In contrast, Piccinini (2010) suggests that they can
instead be seen as direct reports about their experiences or mental states.
The problem, as noted above, is that we are very unsure how to
map measures of cognitive processes, such as reports, to mental states.
Perhaps mental states just are whatever we think they are or perhaps
there are facts of the matter about what mental states we have at any
one time that do not necessarily match our reports about them. If the
latter is true, we don’t know how much of a grasp of our mental life we
really have (Schwitzgebel 2008), and even if we did have a good knowledge of
it, this doesn’t necessarily translate into first-person data being faithful
reflections of our mental states (e.g., due to response bias). What the
relation is between first-person measures and mental states, as well as
the implications this has for what we think mental states are, are ques-
tions that are central to the way we interpret first-person data and thus
deserve further serious discussion.
The aim of this section has been to show that the methodological
questions associated with the use of first-person measures – including
how to deal with bias, how these measures are generated and what they
are measures of – are more serious than they are often given credit for. While some
of these points could of course arise in philosophy of cognitive science,
they often do not. The approach outlined above favours an investi-
gation of fundamental questions about scientific measurement and
the interpretation of data that underlie any scientific enterprise. This
focus on general methodological questions rather than on issues that
are specific to particular research areas is a definite advantage of the
approach. Instead of having to respond to long-standing intuitions and
assumptions (e.g., first-person reports are largely correct, direct read-
outs of private mental states), a focus on scientific methodology offers
a new and potentially insightful approach to this important range of
questions. A philosophy-of-science approach is not the only way into
these questions, but it does highlight many essential methodological
questions that are often bypassed or are simply not part of the current
dialectic.
2.2 Dichotomies, mechanisms and pluralism


This section discusses a very different area where philosophy of science
offers a new perspective: debates in philosophy of
mind and cognitive science about constitutive questions. The example
considered here is the debate over whether cognitive processes occur
only in the head or are (or can be) extended into non-neural bodily and
environmental processes. The central problem here is the assumption,
made by many philosophers, that such questions must have a defini-
tive yes or no answer or, more recently, that empirical investigation will
provide a definitive yes or no answer. I suggest that, taking inspiration
from related discussions in philosophy of biology and neuroscience, we
need not assume that constitutive questions have clear-cut answers in
either scientific practice or metaphysics.
Clark and Chalmers (1998) suggested that cognition need not be
limited to the processes that go on inside the skull. Instead, parts of
cognitive processes could extend into external objects, such as the phys-
ical manipulation of objects to solve spatial problems or supplementary
memory in the form of external devices. In order for cognition to remain
a useful concept, though, some conditions need to constrain what can
count as cognitive extension (e.g., so that the contents of the Internet
do not count as external memories). Various criteria have been offered of
what should count as an external item or process being part of a cogni-
tive process, often based on a notion specific to cognition. In order to
demarcate the conditions for cognitive extension, Clark and Chalmers
(1998) appeal to the external resource as trusted and available, summed
up in the parity principle: ‘If, as we confront some task, a part of the
world functions as a process which, were it done in the head, we would
have no hesitation in recognizing as part of the cognitive process, then
that part of the world is (so we claim) part of the cognitive process’ (8,
original italics). However, there has not been sufficient discussion of how
similar the function must be; some conditions rely on questionable or
vague definitions of cognition, and others simply don’t apply to all cases
(e.g., Haugeland 1998; Rupert 2004; Wheeler 2010; Adams and Aizawa
2001, 2008).
As recently suggested by Kaplan (2012), if the debate over cognitive
extension comes down to identifying constitutive parts of a (cognitive)
process, a problem often faced in science, then seeing how this problem
is routinely solved by scientists could help. Again, philosophy of science
seems to have useful things to offer debates in philosophy of mind and
cognitive science, but I suggest that it does not quite provide the answer
(or indeed the question) that Kaplan thinks it does.
In recent mechanistic philosophy of science, scientific explanation
consists in identifying the multilevel entities and activities that, organ-
ized in particular ways, together produce the phenomenon of interest
(Machamer et al. 2000). Craver (2007a, 2007b) has outlined the ways in
which mechanisms are experimentally demarcated (i.e., how they are
constituted), using the notion of mutual manipulability. The basic idea
is that a part is a constitutive component of a mechanism (or cogni-
tive process) if by wiggling the part, the phenomenon wiggles, and by
wiggling the phenomenon, the part wiggles. These mutual wiggles must
be reasonably subtle and specific in order to rule out inclusion of back-
ground conditions. Kaplan suggests that this generic way of identifying
constitutive parts of a mechanism can be used to empirically establish
whether or not a cognitive process is extended.
However, as Craver himself has argued (2009), mechanisms and
their demarcation cannot be used to identify natural kinds or context-
independent constitutive parts of a mechanism. What counts as being
constitutively relevant for a mechanism in one research context may not
count as such in another. The mutual wiggles must be reasonably subtle and
specific, but how much so is up to the researcher and his or her research
goal. If a therapist for Parkinson’s patients needs to find external ways of
intervening on a patient to help with their memory loss, then a notepad
may well count as a constitutive part of the cognitive process of memory,
but if a neuroscientist is trying to create a drug to prevent neural plaque
build-up, then the use of a notepad may not be relevant at all.
To be sure, there are cases often cited in the extended cognition liter-
ature where it makes sense to include external objects or relations to
external features as a constitutive part of the mechanism or cognitive
process in question, because these mechanisms afford greater predictive
and explanatory power (for several compelling examples, see Kaplan
2012; Chemero and Silberstein 2008). But mechanisms and their range
of explanatory power do not in themselves determine the definitive
boundaries of a cognitive process; they only specify a reasonable way
to carve up a system, given a description of a phenomenon, and a set of
research goals. In this case, considerations about which fineness of grain
to use, strength of coupling and so on come right back into the picture
of how to carve up the mechanism:

The boundaries of mechanisms ... cannot be delimited by spatial
propinquity, compartmentalization, and causal interaction alone. This
is because the spatial and causal boundaries of mechanisms depend
on the epistemologically prior delineation of relevance boundaries.
But relevance to what? The answer is: relevance to the phenomena
that we seek to predict, explain, and control ... the mechanistic struc-
ture of the world depends in part upon our explanatory interests and
our descriptive choices. (Craver 2009, 590–591)

So far, this may seem to affect only how we talk about cognition, not
what it really is. In this way of thinking, empirical questions have no
import for metaphysical questions. Yet similar work in philosophy of
biology, particularly related to complex multi-level systems, suggests
that just as there are multiple ways of carving up mechanisms, there
are multiple ways of carving the same process up into ‘real’ ontological
chunks. Ontological pluralism, accepting that there are multiple and
equally viable ways of carving up the same bit of reality into ontological
types or kinds, is not necessarily a radical position; rather, it is one that
naturally arises from the consideration of scientific methods (see, e.g.,
Boyd 1999; Wilson 2005; Craver 2009; Dupré 1993).
While this flies in the face of much debate about this issue, it seems
that ideas from philosophy of science, properly applied, have much
potential to calm similar debates within philosophy of mind and cogni-
tive science. This is so because – as has become clear in biology and is
now reflected in philosophy of biology – complex biological systems
are not the sort of thing that supports exception-free generalizations
or clear-cut dichotomies. Neither is it profitable to model biological
systems in only one way or carve up a causal landscape in only one way.
Reflection on biology itself has generated philosophical positions of
experimental, cross-level, conceptual and theoretical pluralism because
these are the best ways we have of understanding biological systems
(e.g., Kellert et al. 2006).
These are also reflected in metaphysical positions about biological
systems; current science is structured in a pluralist way not just because
we aren’t very good at it but because sharp, generalizable, context-in-
dependent boundaries are not a feature of biological systems. They are
not something we can ever expect to find. We, as evolved cognizing
beings, are clearly complex biological systems, so insights from philos-
ophy of biology have obvious application in philosophy of mind and
cognitive science. Future projects would include seeing how far similar
tactics work in other debates (e.g., related topics in embodied cogni-
tion, whether cognition involves representations, dual systems theories
vs sensorimotor theories of perception and so on).
3 Conclusions

I have suggested that at least some contemporary approaches to inter-
disciplinary work across philosophy and the mind/brain sciences face
serious problems. These problems concern recognizing and dealing with
the misfit between philosophical and scientific distinctions (possibly
making deep revisions in philosophical accounts) and how to establish
a useful role for philosophers in cognitive science, one that does not
result in overpopulating the field with vague theoretical frameworks. Yet
clearly not all interdisciplinary work falls into these traps: these prob-
lems do not rule out these approaches as being productive, and these
approaches are not the only ways of characterizing the ways in which
philosophers engage with empirical work (see, e.g., Brook 2009; Dennett
2009; Thagard 2009).
However, I have suggested that there are alternative approaches
that are currently underused and that may provide insights not easily
reached using other methods. Approaches from current philosophy of
science often offer a more direct way to engage with scientific methods,
what they tell us about specific phenomena, and how scientific meth-
odology can inform philosophical questions about mental phenomena.
Besides the two case studies introduced above, there are many other
general debates in philosophy of mind and cognitive science that could
be invigorated, clarified or laid to rest by looking at counterpart discus-
sions in philosophy of science; these include discussions of causation,
reduction, explanation, emergence, cross-level identity claims and the
individuation of concepts and scientific/natural kinds.
While ideas from philosophy of science that offer to defuse or radi-
cally reshape debates in philosophy of mind and cognitive science may
not be palatable for those currently embroiled in them, this is not, I
think, a reason to ignore the ideas. While not committal about whether
‘philosophy of science is philosophy enough’ (Quine 1953, 446), I
hope to have shown that it certainly deserves greater attention than
it currently receives in empirically informed philosophy of mind and
cognitive science.

Note
1. This is true across the board; see, e.g., Kessel et al. (2008) on interdiscipli-
nary research across health and the social sciences and McCauley and Bechtel
(2001) and Churchland (1993) on cross-level research in psychology and into
philosophy.
References
Adams, F., and Aizawa, K. (2001) ‘The Bounds of Cognition’. Philosophical
Psychology, 14, 43–64.
Adams, F., and Aizawa, K. (2008). The Bounds of Cognition. Oxford: Blackwell.
Anderson, J. R., and Lebiere, C. (2003) ‘The Newell Test for a Theory of Cognition’.
Behavioral and Brain Sciences, 26(5), 587–601.
Baars, B. J. (1997) ‘In the Theatre of Consciousness: Global Workspace Theory, a
Rigorous Scientific Theory of Consciousness’. Journal of Consciousness Studies,
4(4), 292–309.
Bayne, T., and Spener, M. (2010) ‘Introspective Humility’. Philosophical Issues,
20(1), 1–22.
Bechtel, W. (2009) ‘Constructing a Philosophy of Science of Cognitive Science’.
Topics in Cognitive Science, 1(3), 548–569.
Bechtel, W. P., and McCauley, R. N. (1999) 'Heuristic Identity Theory (or Back
to the Future): The Mind-Body Problem against the Background of Research
Strategies in Cognitive Neuroscience'. In M. Hahn and S. C. Stoness (eds),
Proceedings of the 21st Annual Meeting of the Cognitive Science Society. New York:
Erlbaum.
Block, N. (2007) ‘Consciousness, Accessibility, and the Mesh between Psychology
and Neuroscience’. Behavioral and Brain Sciences, 30(5–6), 481–499; discussion
499–548.
Boyd, R. (1999) ‘Homeostasis, Species, and Higher Taxa’. In R. A. Wilson (ed.),
Species: New Interdisciplinary Essays, 141–185. Cambridge, MA: MIT Press.
Brook, A. (2009) ‘Introduction: Philosophy in Philosophy of Cognitive Science’.
Topics in Cognitive Science, 1(2), 216–230.
Bruner, J. S., and Postman, L. (1947) 'Emotional Selectivity in Perception and
Reaction'. Journal of Personality, 16(1), 69–77.
Bruner, J. S., and Postman, L. (1949) 'Perception, Cognition, and Behavior'.
Journal of Personality, 18, 14–31.
Burge, T. (2010) Origins of Objectivity. Oxford: Oxford University Press.
Burge, T. (2011) ‘Disjunctivism Again’. Philosophical Explorations, 14(1), 43–80.
Castelhano, M. S., and Henderson, J. M. (2008) 'The Influence of Color on
the Activation of Scene Gist'. Journal of Experimental Psychology: Human
Perception and Performance, 34, 660–675.
Chemero, A., and Silberstein, M. (2008) ‘Defending Extended Cognition’. In B. C.
Love, K. McRae and V. M. Sloutsky (eds), Proceedings of the 30th Annual Meeting
of the Cognitive Science Society, 129–134.
Churchland, P. S. (1993) ‘The Co-evolutionary Research Ideology’. In A. Goldman
(ed.), Readings in Philosophy and Cognitive Science, 745–767. Cambridge, MA:
MIT Press.
Clark, A. (2013) ‘Whatever Next? Predictive Brains, Situated Agents, and the
Future of Cognitive Science’. Behavioral and Brain Sciences, 36, 181–253.
Clark, A., and Chalmers, D. J. (1998) 'The Extended Mind'. Analysis, 58(1), 7–19.
Craver, C. F. (2007a) ‘Constitutive Explanatory Relevance’. Journal of Philosophical
Research, 32, 3–20.
Craver, C. F. (2007b) Explaining the Brain: Mechanisms and the Mosaic Unity of
Neuroscience. Oxford: Oxford University Press.
Craver, C. F. (2009) 'Mechanisms and Natural Kinds'. Philosophical Psychology,
22(5), 575–594.
De Gardelle, V., Sackur, J., and Kouider, S. (2009) ‘Perceptual Illusions in Brief
Visual Presentations’. Consciousness and Cognition, 18(3), 569–577.
Dehaene, S., and Naccache, L. (2001) ‘Towards a Cognitive Neuroscience of
Consciousness: Basic Evidence and a Workspace Framework’. Cognition, 79(1–2),
1–37.
Dennett, D. C. (2003) ‘Who’s on First? Heterophenomenology Explained’. Journal
of Consciousness Studies, 10(1), 10–30.
Dennett, D. C. (2005) Sweet Dreams: Philosophical Obstacles to a Science of
Consciousness. Cambridge, MA: MIT Press.
Dennett, D. C. (2007) ‘Heterophenomenology Reconsidered’. Phenomenology and
the Cognitive Sciences, 6(1–2), 247–270.
Dennett, D. C. (2009) ‘The Part of Cognitive Science that is Philosophy’. Topics in
Cognitive Science, 1(2), 231–236.
Dretske, F. I. (2007) ‘What Change Blindness Teaches about Consciousness’.
Philosophical Perspectives, 21(1), 215–220.
Dupré, J. (1993) The Disorder of Things: Metaphysical Foundations of the Disunity of
Science. Cambridge, MA: Harvard University Press.
Feest, U. (2012) ‘Introspection as a Method and Introspection as a Feature of
Consciousness’. Inquiry, 55(1), 1–15.
Fodor, J. (2007) ‘The Revenge of the Given’. In B. P. McLaughlin and J. D. Cohen
(eds), Contemporary Debates in Philosophy of Mind, 105–116. Oxford: Wiley-
Blackwell.
Friston, K., Kilner, J., and Harrison, L. (2006) ‘A Free Energy Principle for the
Brain’. Journal of Physiology, 100(1–3), 70–87.
Friston, K., and Stephan, K. (2007) ‘Free-energy and the Brain’. Synthese, 159(3),
417–458.
Gibson, J. J. (1979) The Ecological Approach to Visual Perception. Boston:
Houghton Mifflin.
Gigerenzer, G., Todd, P. M., and the ABC Group (1999) Simple Heuristics that Make
us Smart. Oxford: Oxford University Press.
Glöckner, A., and Betsch, T. (2011) ‘The Empirical Content of Theories in
Judgment and Decision Making: Shortcomings and Remedies’. Judgment and
Decision Making, 6(8), 711–721.
Green, D. M., and Swets, J. A. (1966) Signal Detection Theory and Psychophysics.
New York: Wiley.
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., and Tenenbaum, J. B. (2010)
‘Probabilistic Models of Cognition: Exploring Representations and Inductive
Biases’. Trends in Cognitive Sciences, 14(8), 357–364.
Hacking, I. (1983) Representing and Intervening: Introductory Topics in the Philosophy
of Science. Cambridge: Cambridge University Press.
Hardcastle, V. G. (2001) ‘Visual Perception is Not Visual Awareness’. Behavioral
and Brain Sciences, 24(5), 985.
Haugeland, J. (1998) ‘Mind Embodied and Embedded’. In J. Haugeland (ed.),
Having Thought. Cambridge, MA: MIT Press.
Irvine, E. (2011) ‘Rich Experience and Sensory Memory’. Philosophical Psychology,
24, 159–176.
Irvine, E. (2012) ‘Old Problems with New Measures in the Science of Consciousness’.
British Journal for the Philosophy of Science, 63, 627–648.
Jacob, P., and de Vignemont, F. (2010) ‘Spatial Coordinates and Phenomenology
in the Two-visual Systems Model’. In N. Gangopadhyay, M. Madary and F.
Spicer (eds), Perception, Action and Consciousness, 125–144. Oxford: Oxford
University Press.
Kaplan, D. M. (2012) ‘How to Demarcate the Boundaries of Cognition’. Biology
and Philosophy, 27, 545–570.
Kellert, S. H., Longino, H. E., and Waters, C. K. (eds) (2006) Scientific Pluralism.
Minneapolis: University of Minnesota Press.
Kessel, F., Rosenfield, P., and Anderson, N. (eds) (2008) Interdisciplinary Research:
Case Studies from Health and Social Science. Oxford: Oxford University Press.
Kouider, S., De Gardelle, V., Sackur, J., and Dupoux, E. (2010) ‘How Rich is
Consciousness? The Partial Awareness Hypothesis’. Trends in Cognitive Sciences,
14(7), 301–307.
Lau, H. C. (2006) 'Are We Studying Consciousness Yet?'. Journal of Consciousness
Studies, 13(4), 94–112.
Loftus, G., and Irwin, D. (1998) 'On the Relations among Different Measures of
Visible and Informational Persistence'. Cognitive Psychology, 35, 135–199.
Luck, S. J., and Hollingworth, A. (2008) Visual Memory. Oxford: Oxford University
Press.
Machamer, P., and Sullivan, J. (2001) 'Levelling Reduction'. PhilSci Archive,
http://philsci-archive.pitt.edu/386/.
Machamer, P., Darden, L., and Craver, C. F. (2000) ‘Thinking about Mechanisms’.
Philosophy of Science, 67, 1–25.
Mack, A., and Rock, I. (1998) Inattentional Blindness. Cambridge, MA: MIT
Press.
McCauley, R. N., and Bechtel, W. P. (2001) ‘Explanatory Pluralism and Heuristic
Identity Theory’. Theory and Psychology, 11(6), 736–760.
McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T.
T., Seidenberg, M. S., and Smith, L. B. (2010) ‘Letting Structure Emerge:
Connectionist and Dynamical Systems Approaches to Cognition’. Trends in
Cognitive Sciences, 14(8), 348–356.
McClelland, J. L., Plaut, D. C., Gotts, S. J., and Maia, T. V. (2003) 'Developing
a Domain-general Framework for Cognition: What is the Best Approach?'.
Behavioral and Brain Sciences, 26(5), 611–614.
McDowell, J. (2010) ‘Tyler Burge on Disjunctivism’. Philosophical Explorations,
13(3), 243–255.
Oliva, A. (2005) ‘Gist of the Scene’. In L. Itti, G. Rees and J. K. Tsotsos (eds),
Neurobiology of Attention, 251–256. San Diego: Elsevier.
O’Regan, J. K., and Noë, A. (2001) ‘A Sensorimotor Account of Vision and Visual
Consciousness’. Behavioral and Brain Sciences, 24(5), 939–973; discussion
973–1031.
Overgaard, M., Rote, J., Mouridsen, K., and Ramsøy, T. Z. (2006) ‘Is Conscious
Perception Gradual or Dichotomous? A Comparison of Report Methodologies
during a Visual Task, Consciousness and Cognition, 15(4), 700–708.
Piccinini, G. (2005) ‘Data from Introspective Reports’. Journal of Consciousness
Studies, 10(9), 141–156.
Problems and Possibilities 207

Piccinini, G. (2009a) ‘Scientific Methods Ought to be Public, and Descriptive


Experience Sampling is One of Them’. Journal of Consciousness Studies, 18(1),
1–12.
Piccinini, G. (2009b) ‘First-person Data, Publicity, and Self-measurement’.
Philosophers Imprint, 9(9), 1–16.
Piccinini, G. (2010) ‘How to Improve on Heterophenomenology: The Self-
measurement Methodology of First-person Data’. Journal of Consciousness
Studies, 17(3–4), 84–106.
Prinz, J. (2012) The Conscious Brain. New York: Oxford University Press.
Quine, W. V. (1953) ‘Mr. Strawson on Logical Theory’. Mind, 62(248), 433–451.
Ramsøy, T. Z., and Overgaard, M. (2004) ‘Introspection and Subliminal Perception’.
Phenomenology and the Cognitive Sciences, 3(1), 1–23.
Rupert, R. D. (2004) ‘Challenges to the Hypothesis of Extended Cognition’.
Journal of Philosophy, 101(8), 389–428.
Schwitzgebel, E. (2003) ‘Introspective Training Apprehensively Defended:
Reflections on Titchener’s Lab Manual’. American Psychologist, 11(7–8), 1–49.
Schwitzgebel, E. (2007) ‘Do You have Constant Tactile Experience of your Feet
in your Shoes?: Or is Experience Limited to What’s in Attention? Journal of
Consciousness Studies, 14(3), 5–35.
Schwitzgebel, E. (2008) ‘The Unreliability of Naive Introspection’. Philosophical
Review, 117(2), 245–273.
Schwitzgebel, E. (2012) Introspection, What? In D. Smithies and D. Stoljar (eds),
Introspection and Consciousness, 29–48. Oxford: Oxford University Press.
Simons, D. J., and Rensink, R. A. (2005) ‘Change Blindness: Past, Present, and
Future’. Trends in Cognitive Sciences, 9(1), 16–20.
Snodgrass, M., and Lepisto, S. A. (2007) ‘Access for What? Reflective Consciousness’.
Behavioral and Brain Sciences, 30(5–6), 525–526.
Sperling, G. (1960) ‘The Information Available in Brief Visual Presentations’.
Psychological Monographs General and Applied, 74(11), 1–29.
Thagard, P. (2009) ‘Why Cognitive Science Needs Philosophy and Vice Versa’.
Topics in Cognitive Science, 1(2), 237–254.
Tye, M. (2006) ‘Nonconceptual Content, Richness, and Fineness of Grain’. In T.
Gendler and J. Hawthorne (eds), Perceptual Experience, 504–550. Oxford: Oxford
University Press.
Wheeler, M. (2010) ‘In Defense of Extended Functionalism’. In R. Menary (ed.),
The Extended Mind, 245–270. Cambridge, MA: MIT Press.
Wilson, R. A. (2005) Genes and the Agents of Life. Cambridge: Cambridge University
Press.
Wolfe, J. M. (1999) ‘Inattentional Amnesia’. In V. Coltheart (ed.), Fleeting Memories,
71–94. Cambridge, MA: MIT Press.
Woodward, J. (2008) ‘Mental Causation and Neural Mechanisms’. In J. Hohwy
and J. Kallestrup (eds), Being Reduced: New Essays on Reduction Explanation and
Causation, 1–57. Oxford: Oxford University Press.
10
Psychological Explanation,
Ontological Commitment and the
Semantic View of Theories
Colin Klein

Naturalistic philosophers of mind must assume some philosophy of
science. For naturalism demands that we look to psychology – but to
be guided by psychological theories, one must have some story about
what theories are and how they work. In this way, philosophy of mind
was subtly guided by philosophy of science. For the past forty years,
mainstream philosophy of mind has implicitly endorsed the so-called
‘received’ or ‘axiomatic’ view of theories. On such a view, theories are
sets of sentences formulated in first-order predicate logic, explanations
are deductions from the theories and the ontology of a theory can be
read off from the predicates used in explanations.
The persistence of the received view in philosophy of mind is surprising,
given that few philosophers of science these days would endorse it. An
alternative, the so-called semantic view of theories, has become far more
popular. With it comes a new view about explanation and about onto-
logical commitment more generally. One might therefore worry, as I
do, that many problems in philosophy of mind are actually pseudo-
problems introduced by an outdated notion of theories.
Philosophy of mind has seen some important moves beyond the axio-
matic view and the corresponding view of explanation in recent years
(Craver 2007 is a notable example). I think, however, that philosophy of
mind – especially the metaphysics of mind – has not fully appreciated
how different the landscape looks when one moves away from the old
view of theories. The new wave in philosophy of mind will involve reim-
porting some of these lessons from philosophy of science and rethinking
some of the old puzzles that arose in the context of the axiomatic theory.
What follows is a first step in that process, focusing on the key issue of
explanation and ontological commitment.


1 Two views about explanation

1.1 Explanatory literalism


Consider the following pairs of explanations:

1. (a) The square peg failed to pass through the hole because its cross
section was longer than the diameter of the hole.
(b) The peg failed to pass through the hole because [extremely long
description of atomic movements].

2. (a) Klein got a ticket because he was driving over 60 mph.
(b) Klein got a ticket because he was driving exactly 73 mph.

3. (a) Socrates died because he drank hemlock.
(b) Socrates died because he guzzled hemlock.

4. (a) Esther ran because she was scared of the bee.
(b) Esther ran because [complicated neural description].

Many have the strong intuition that the first sentence in each pair is
a better explanation than the second. This is true, note, even though the
truth of the second sentence guarantees the truth of the first. I want to
take that intuition for granted and explore two different stories about
why this might be the case.
There is a well-loved account, tracing at least back to Hilary Putnam,
for the superiority of some explanations. Explanation 1a, Putnam
claimed, is clearly better because:

In this explanation certain relevant structural features of the situation are brought out. The geometrical features are brought out. It
is relevant that a square one inch high is bigger than a circle one
inch around. And the relationship between the size and shape of the
peg and the size and the shape of the holes is relevant. It is relevant
that both the board and the peg are rigid under transportation. And
nothing else is relevant. The same explanation will go in any world
(whatever the microstructure) in which these higher-level structural
features are present. In that sense this explanation is autonomous.
(Putnam 1975, 296)

On Putnam’s story, (1a) refers to a higher-level property of the peg,
the shape, that is most commensurate with the explanandum. Following
Yablo, Bontly has called this the Goldilocks Principle: the peg’s shape
is just enough (and no more) to cause its failure to pass; so too with
all truly explanatory properties. (2a) is a better explanation because the
property of my speed – being above the limit – was sufficient for a ticket;
my exact speed was not required. (3a) is a better explanation than (3b)
because it was drinking hemlock that was fatal, guzzled or not. (4a) is a
better explanation than (4b) because the extra detail is irrelevant: Esther
would have run no matter how her fear was instantiated.
Generality does not always make for better explanation. Consider

4. (c) Esther ran because she was scared of the small flying thing.

This is both true and more general than (4a); nevertheless, it is an infe-
rior explanation if Esther is scared only of bees but indifferent to flies.
Rather, it is proportionality between higher-level cause and effect that
picks out the most explanatory of the causally relevant properties.
Call someone who adopts this view a literalist about explanatory good-
ness. Literalism says that good explanations are superior to rivals because
they pick out a property that their rivals don’t and that this property
bears the right sort of relationship to the explanandum. Our best explanations are thus ontologically committing. If a term ϕ appears in the best
explanation of some phenomenon, then we are, all things being equal,
justified in believing that ϕ refers to some unique property. Hence the
term ‘literalism’: one can read off the ontological commitments from a
good explanation largely by taking it literally and supposing that each
term ϕ really is meant to refer to a corresponding property or entity.1
The literalist view is widely accepted in philosophy of mind. It has
been a particular comfort to non-reductive physicalists. The fact that
(4b) is inferior to (4a) suggests that even were psychology to be reduced
to neuroscience, the resulting neural explanations would be inferior
to the psychological ones because they would no longer refer to the
most commensurate high-level properties. Further, the explanatory
superiority of proportionate properties might lead us to suppose that
we have a solution to the hoary causal exclusion argument. The causal
exclusion argument says, in simplified terms, that mental and physical
properties must (if distinct) compete for causal influence and that a
plausible physicalism should force us to assign causal priority to the
physical one. Not so, literalism responds: both properties are causally
relevant, but only the higher-level one counts as the cause. It does so
because it is more proportionate to or commensurate with or otherwise
better fitted to the effect (Yablo 1992). Not only is the exclusion argu-
ment avoided, but the mental is given a certain causal priority over the
physical. Non-reductive physicalism is saved. For this reason, various
forms of literalism are increasingly popular in philosophy of mind and
philosophy of neuroscience.
Finally, literalism is simply assumed as uncontroversial by many
philosophers of mind. The alternatives to literalism seem to be some sort
of antinaturalism or scientific antirealism, neither of which is particularly attractive. That alone seems to be reason to accept it.

1.2 Explanatory agnosticism


Literalism is not the only account of explanatory goodness available to
the naturalist, however. For each pair above, it is possible to account for
the superiority of one of the explanations by appealing to facts about
the language in which the explanations are couched while remaining
provisionally neutral about the ontology one is thereby committed to.
Call this agnosticism about explanatory goodness. The agnostic denies
that we can move easily from language to ontology. Crudely put, the fact
that a certain predicate appears in a superior explanation is no reason to
believe that there is a property corresponding to that predicate.
I want to defend agnosticism about higher-level properties. Note that
the position I favour is properly agnostic rather than sceptical. I don’t
want to take a stand on whether there are higher-level properties (or
determinables or whatever). Maybe there are. Maybe there aren’t. Rather,
my claim is that in ordinary and scientific explanation, apparent refer-
ence to higher-level properties carries no ontological commitment to
the existence of such properties. There may well be higher-level causes;
I just don’t think that our best explanations are a good guide to what
they are.
Agnosticism also has a certain prima facie plausibility. First, many
have noted that shifts in the presumed interests of a listener can make a
difference in the explanations that it is appropriate to give (van Fraassen
1980). Consider the explanation

5. Socrates died because he angered the Athenians.

In certain contexts (historical/political ones, say), explanation (5) is
superior to either (3a) or (3b); in other contexts (physiological/medical
ones), the reverse is true. Yet presumably the facts about what properties
are involved and their commensurability remain unchanged.

A defence of agnosticism is strengthened by reflection on conversational pragmatics and their role in shaping our intuitions about explanations. We find the more general explanations of the pair more acceptable,
says the agnostic, because of pragmatic constraints on the descriptive
form of explanations (and not because they refer to more commensu-
rate properties). These pragmatic constraints – in particular, the Gricean
maxims that underlie cooperative conversation – may favour a more
general description of the same circumstance, but that description is not
superior because it picks out a more general property.
A few quick examples of how this might look. (2a) is superior to (2b)
because the Gricean maxim of Relevance tells me to give only such
information as is relevant to my hearer’s interests (Grice 1989). My wife
wants to know why I got another ticket; the fact that I broke the speed
limit is sufficient to satisfy her interests, and the specific speed is (we
assume) irrelevant to her interests in the conversation. Similarly, the
Gricean maxims of Quantity and Quality should forbid me from giving
(4b) as an explanation when the equally effective and much shorter
description (4a) is available. Indeed, to give (4b) would (on the assump-
tion that I’m being cooperative) produce several false implicatures: that
the extra detail is relevant, in the sense that counterfactuals involving
small changes to Esther's neural state would result in her being calm, or that
I have good evidence in a particular case for the complicated neural
process at which I have hinted. Neither of these is likely to be true. So to
give (4b) would be misleading, in the sense that I would implicate some-
thing false to my listener. Nevertheless, it is true that the complicated
neural process was the cause of her flight, not some additional distinct
higher-level property.
Changing conversational demands can produce shifts in explanatory
goodness without shifts in interest. The patrolman is testifying. The
judge, like my wife, wants to know why I got a ticket. The patrolman
would have ticketed me for any speed above 60; if my speed instanti-
ated a higher-level property of having a speed above 60 mph that was most
proportionate to my ticketing before, it continues to do so now. Yet it
would now be more appropriate for the patrolman to utter (2b) than (2a).
Why? The patrolman’s testimony must justify his ticket giving. Uttering
(2b) implies that he has precise information about my speed – which is
to say that he determined my speed by some suitably precise measure-
ment. To utter (2a) would give the false implication that he doesn’t have
such information (since by the maxim of Relevance he should be as
specific as necessary for the demands of the conversation). This implica-
tion is cancellable (‘He was going over 60 mph – in fact, I clocked him
at exactly 73 mph’), but in ordinary circumstances the patrolman can
achieve his ends through the parsimonious (2b).
So here is a general strategy for the agnostic: concede that the first
explanation in each pair above is superior but explain that superiority by
appeal to language and conversational context, not the world. Thomas
Bontly has argued (convincingly in my opinion) that the implicatures
of many causal claims are non-detachable and cancellable, the standard
marks of conversational implicatures.2 Antecedents of the strategy might
be found in Kim’s insistence that there are only higher-level predicates,
not higher-level properties (1998), and in Lewis’s remarks on the prag-
matics of causal explanation (1986).

1.3 The plan


The above cases are not, to be sure, knockdown. The literalist has a ready
response to them: the explanations cite facts that are causally relevant,
and shifts in context alter which causally relevant factors are appropriate
to cite. But we should be suspicious of this: the evidence for literalism
above was supposed to be our judgements about the appropriateness
or inappropriateness of single explanations. That confidence should be
undermined if we find serious context-sensitive effects on our judge-
ments of appropriateness.
I think that agnosticism can be given a further defence. So the next
section gives an extended argument in favour of agnosticism over liter-
alism in the particular case of higher-level causal properties. The overall
form of the argument is as follows. There is a set, S, of intuitions that
favour the proportionality argument for higher-level causes. S is prima-
rily constituted by our judgements about the examples at the beginning
of Section 1.1 and those like them. I claim that the pragmatics of expla-
nation are such that we would have S regardless of whether there are
higher-level causes or not. So the fact that we have S can’t be part of an
argument for higher-level causes.
Further, there are some more specific reasons to think that literalism
itself is problematic. In particular, it is clear that there are certain predi-
cates that are simply shorthand placeholders for functions defined in
terms of other quantities. There are, I claim, good reasons to treat such
predicates as non-referring; even more strongly, there is no positive
benefit to treating them as referring to properties. Yet literalism demands
that we do so, which is a mark against literalism.
After the defence, I turn to diagnosis. Literalism, I argue, is plausible
mainly because philosophers of mind are mostly wedded to a bad old
sort of philosophy of science, one left over from the late positivists.
I argue that the plausibility of literalism vanishes if we move to an
updated philosophy of science. With that move, we in turn have new
resources to deal with – and dissolve – philosophical puzzles that presup-
pose literalism.

2 Agnosticism and derived quantities

2.1 Derived quantities


Working psychologists, when faced with a good explanation, can still
wonder whether it is ontologically committing. When we look at the
sciences relevant to philosophy of mind – psychology, cognitive science
and neuroscience, at least – we find that there is often considerable
debate about whether a term used in this or that explanation actually
refers to a causal property. In a classic textbook on psychometrics, for
example, Nunnally warns:

It is not necessarily the case that all the terms used to describe people
are matched by measurable attributes – e.g., ego strength, extrasensory
perception, and dogmatism. Another possibility is that a measure may
concern a mixture of attributes rather than only one attribute. This
frequently occurs in questionnaire measures of ‘adjustment,’ which
tend to contain items relating to a number of separable attributes.
Although such conglomerate measures sometimes are partly justifiable
on practical grounds, the use of such conglomerate measures offers a
poor foundation for psychological science. (Nunnally 1967, 3)

Consider the second possibility mentioned, that of 'conglomerate measures'. Some predicate P might have good predictive power, be measurable in straightforward ways and appear in good explanations. Yet P may
not correspond to a real property because it is simply a label that aggre-
gates over several different psychological attributes. In short, P may be
a derived quantity – a label for a function of other, more basic properties.
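A conglomerate measure of the sort Nunnally describes is easy to mock up. The sketch below is purely illustrative (the attribute names and weights are invented): a single 'adjustment' score is defined as a weighted aggregate of two separable attributes, so that quite different psychological profiles earn identical scores.

```python
# Hypothetical illustration of a conglomerate measure: 'adjustment' is
# defined only as a weighted aggregate of two separable attributes, so
# the score corresponds to no single underlying property.

def adjustment_score(anxiety, sociability):
    # invented weights; the point is only that the score is a derived quantity
    return 2 * (10 - anxiety) + 3 * sociability

# Two different profiles, one identical score: the measure predicts and
# explains without naming any one attribute.
assert adjustment_score(2, 2) == adjustment_score(5, 4) == 22
```

Nothing in the score's predictive success settles whether 'adjustment' names a property; that is exactly the worry about derived quantities.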
The problem of derived quantities has been overlooked by philoso-
phers of mind because most of the explanations we tend to consider are
toy examples that connect two simple events under ideal circumstances.
The primary criterion for acceptability in simple explanations is simply
that the explanans be described in the simplest, most informative way.
These simple redescriptions look a lot like the attribution of higher-level
properties and that in turn goes a long way to explaining the plausi-
bility of literalism. That plausibility vanishes when we move to more
realistic scientific explanations. So I’d like to look in depth at a case from
neuroscience to explain just why derived quantities are problematic for
literalism.

2.2 The problem for literalism


The Hodgkin-Huxley model of the action potential has received
renewed attention from philosophers of neuroscience. Hodgkin and
Huxley showed that the changes in membrane potential of the neuron
are determined by GNa and Gk, functions that determine the sodium
and potassium conductance, respectively, as a function of membrane
potential. Briefly, the membrane potential is a function of the differ-
ential concentrations of Na+ and K+ ions on either side of the neural
membrane. The membrane is studded with channels that open at an
overall rate dependent on the membrane potential; the opening and
closing of these channels in turn changes the membrane potential by
changing the relative concentration of those ions. Hodgkin and Huxley’s
experimental determination of GNa and Gk allows accurate derivation of
the shape and amplitude of the action potential. It is a great triumph in
that regard.
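In its standard textbook formulation (standard background rather than a quotation), the model writes the membrane equation as a sum of ionic currents whose conductances are the voltage- and time-dependent functions just mentioned:

\[
C_m \frac{dV}{dt} = I_{\mathrm{ext}} - G_{\mathrm{Na}}\,(V - E_{\mathrm{Na}}) - G_{\mathrm{K}}\,(V - E_{\mathrm{K}}) - g_L\,(V - E_L)
\]

with \(G_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^3 h\) and \(G_{\mathrm{K}} = \bar{g}_{\mathrm{K}}\, n^4\), where \(m\), \(h\) and \(n\) are gating variables whose kinetics depend on the membrane potential \(V\). These two experimentally fitted conductance functions are what the discussion that follows takes as given.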
One thing that Hodgkin and Huxley’s work explained was the fact
that action potentials are threshold phenomena: membrane potential is
stable below a certain threshold but rapidly depolarizes above it. We can
explain this by noting that

6. Below the threshold membrane potential, GNa/Gk ≤ 1, and so small
depolarizations result in offsetting Na and K currents. Above the
threshold, GNa/Gk > 1, which results in a net Na current with positive
feedback.

(6) is a testament to the explanatory fertility of the Hodgkin-Huxley
model. Not only does it explain the threshold phenomena in action
potentials, but it implies a number of useful, testable, true counterfac-
tuals (e.g., that action potentials would not be generated if GNa/Gk could
be artificially pegged to ≤ 1, as it is by certain toxins.) Further, by parallel
with explanations (1a) and (1b), it is arguably a better explanation than
one that goes into the details of the opening of sodium channels and for
the same reason: it gives us precisely the information needed to explain
the threshold and no more. The details of the mechanism wouldn’t
add anything to (6)’s goodness. This is not because the details aren’t
causally important – they are – but rather because (6) has already told
us all we need to know about those mechanisms. Like the other good
explanations above, (6) implies precisely the right sorts of counterfactuals, in the right way and so on.3
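The threshold behaviour that (6) describes can be caricatured in a few lines. What follows is not the Hodgkin-Huxley model but a deliberately linearized sketch of the positive-feedback point: a small perturbation from rest decays when the conductance ratio is at or below 1 and grows when it exceeds 1.

```python
# Linearized caricature of the threshold claim in (6): the sign of
# (GNa/Gk - 1) determines whether a small depolarization is damped out
# or amplified by positive feedback.

def perturbation_after(ratio, v0=1.0, gain=0.5, steps=50, dt=0.1):
    """Euler-integrate dv/dt = gain * (ratio - 1) * v for a perturbation v."""
    v = v0
    for _ in range(steps):
        v += dt * gain * (ratio - 1) * v
    return v

assert perturbation_after(ratio=0.8) < 1.0   # ratio <= 1: offsetting currents win
assert perturbation_after(ratio=1.5) > 1.0   # ratio > 1: runaway net Na current
```

The real model is nonlinear, but the all-or-nothing character that (6) reports survives even this crude rendering.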
Suppose we do think that (6) explains why neurons fire in an all-or-
nothing way. The literalist faces a dilemma. He could say that the expres-
sion GNa/Gk does not itself designate a property, that it only stands for
a mathematical operation defined over the determinate value of two
distinct properties. But we could as a matter of convention introduce a
singular term (say, ϕ) to stand in for GNa/Gk. ϕ would be a derived quan-
tity. Since GNa/Gk did not designate a property, ϕ should not either. But
that is to concede the main claim of the agnostic view: that one cannot
unproblematically move from the presence of a singular term to a causal
property, even in our best explanations.
On the other hand, the literalist could say that GNa/Gk designates a
distinct higher-order causal property. This is implausible for at least three
reasons. First, it is an unnatural reading of (6): the most natural reading
of it is as expressing a relationship between GNa and Gk. This reading
connects the explanation to other explanations in terms of GNa and Gk.
(For example, we can explain the refractory period of the neuron by
talking about the time-sensitive decay of GNa.) The connection between
this explanation and (6) is lost or at least obscured if we think that there
are two distinct properties involved in the threshold and the refractory
period.
Second, the mathematical form of the explanans is important: the
mathematical properties of ratios can be used, along with the math-
ematical properties of facts about GNa and Gk, to explain further facts
about the shape of the action potential. Treating GNa/Gk as a single prop-
erty again obscures this explanatorily useful relationship.
Third, treating GNa/Gk as designating a property leads to an unreason-
able proliferation of causal properties between which the literalist can
offer no ground for decision. For if GNa/Gk designates a property, then so
should 2(GNa)/2(Gk), 3(GNa)/3(Gk) and so on. Each of these properties is
causally commensurate with the threshold effect, since in each case the
action potential fires iff the property had a value ≥ 1. There is nothing,
from the point of view of causal facts, to distinguish them. This prolif-
eration is a reductio against literalism.
Of course, scientists might prefer the unadorned GNa/Gk to its multi-
ples. This is no argument, however; indeed, it’s rather uncomfortable
for the literalist. For surely what exists doesn’t depend on what people
prefer to talk about. So the expression 2(GNa)/2(Gk) either refers or not. If
it does refer, we have an explosion; if it doesn’t, I fail to see an argument
for why it doesn’t refer that doesn’t also impugn GNa/Gk itself.
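The proliferation point rests on an elementary identity: rescaling numerator and denominator together never changes the ratio, so the candidate 'properties' are numerically indistinguishable everywhere. A quick check, with illustrative values only:

```python
# 2(GNa)/2(Gk), 3(GNa)/3(Gk), etc. agree with GNa/Gk at every pair of
# conductance values, so no causal fact distinguishes them.

def ratio(g_na, g_k, scale=1):
    return (scale * g_na) / (scale * g_k)

for g_na, g_k in [(1.0, 2.0), (3.0, 4.0), (5.0, 2.0)]:
    assert ratio(g_na, g_k) == ratio(g_na, g_k, scale=2) == ratio(g_na, g_k, scale=3)
```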

Insofar as (6) is preferable to explanations in terms of, say, 2(GNa)/2(Gk),
it is for pragmatic rather than causal reasons. The more complex formula-
tion would be inappropriate because it implies that the extra complexity
is somehow relevant. By the maxims of Quality and Relevance, we
should prefer to give a simpler, shorter, less complex explanation if it
would suffice. That's what makes (6) better than other mathematically
equivalent counterparts.
So either way the literalist treats GNa/Gk, he must say that the quality
of some explanations lies in how they describe a set of causal properties,
not just that they describe causal properties. But that is to concede the
agnostic’s main point.

3 A diagnosis

What’s the lesson from all of this? One could, I suppose, use it to defend
a crude sort of old-fashioned reductionism. That is, one could argue that
all mental predicates are simply derived quantities and that the only real
causal properties are the physical ones and the properties that are iden-
tical to them. (Indeed, much of the above was inspired by Kim’s remarks
about second-order descriptions in science [in ch. 4 of Kim 1998] and
could be thought of as one way of unpacking them.) I think, though,
that we can draw another, deeper conclusion. The real question is why
literalism seems so plausible even if it’s problematic, especially to natu-
ralistically minded philosophers of mind. Here, I think, I can offer a
diagnosis.

3.1 Literalism and the axiomatic view


Literalism’s plausibility has a historical origin. Many classic papers
in metaphysics of mind emerged against the background of the late
positivist conception of theories as developed in the writings of Carl
Hempel and Ernest Nagel.4 This is sometimes called the ‘received view’
or ‘standard view’ of theories (though that is now an anachronism). I’ll
call it the axiomatic view of theories, because on it theories are conceptu-
alized as the best axiomatizations of a domain of phenomena.
On the axiomatic view, a theory has two parts. The first part consists
of a set of theoretical postulates: a finite set of sentences, constructed
from a basic vocabulary containing a fixed set of names and predicates
and augmented with the resources of the first-order predicate calculus.
Speaking loosely, the predicates in the standard vocabulary are the
properties and relations that the theory attributes to the world. The
universally quantified statements among the theoretical postulates are
the laws of a theory. The laws, together with statements of particular
fact, allow us to derive particular consequences that predict and explain
phenomena.
The second aspect of theories is a set of coordinating definitions,
which supply a semantics for the theory by connecting at least some of
the terms in the basic vocabulary to the world. By the time of Hempel, it
was widely agreed that this connection would not involve an exhaustive
characterization of theoretical terms via observational terms. Instead,
in Hempel’s formulation – later imported into philosophy of mind by
Lewis (1970) – coordinating definitions link theoretical terms to other
terms we already have a handle on, often because they occur in natural
language. The coordinating definitions, together with the interrelations
between terms described by the theoretical postulates, provide a partial
interpretation of the theoretical terms. This partial interpretation allows
us to connect the predictions of the theory to the world and so to give
our theories empirical content.
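A toy rendering of the two-part structure (a schematic example of my own, not one drawn from Hempel): one law-like postulate, one particular fact, and the derivation that does the predictive and explanatory work.

\[
\begin{aligned}
&\text{Theoretical postulate (law):} && \forall x\,\big(F(x) \rightarrow G(x)\big) \\
&\text{Statement of particular fact:} && F(a) \\
&\text{Derived consequence:} && G(a)
\end{aligned}
\]

A coordinating definition then links \(F\) to some antecedently understood term, which is what gives the derivation empirical content.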
If you endorse this view of theories, then literalism and its conclu-
sions are nearly inevitable. Theories are individuated by the languages
they use. A different language just gives you a different and, therefore,
competing ontology. Assuming that this different language is not simply
reducible to the original, there really are two sets of properties in the
world that compete for the title of the most explanatory.

3.2 The semantic view of theories


The axiomatic view is no longer popular among philosophers of science.
It fell out of favour for a number of reasons.5 Two in particular are worth
noting. First, as Suppes notes, first-order formulations of theories are inad-
equate for many scientific purposes. Any theory that requires, say, the
real numbers will be difficult to capture in first-order language. Further,
axiomatizing both the theory and the accompanying math would be, in
Suppes’s words, ‘awkward and unduly laborious’ (Suppes 1967, 58). By
this, I take it that Suppes means that even if we can axiomatize the rele-
vant math, it would be inappropriate to include mathematical apparatus
in the theory itself; certainly it is more natural to describe set theory as
something we use to talk about various theories, not something that
happens to be part of many distinct theories.
Second, the axiomatic view requires theories to be axiomatizable.
Theories that can be axiomatized turn out to be rare, and theories that
are actually treated as a set of axioms rarer still. This was bad enough in
disciplines like biology and psychology, where it was hard to find things
that counted as laws. But it seemed to be true even of physics; as van
Fraassen notes, many useful treatments of quantum mechanics are non-axiomatic in form (1970). Even if we are confident that theories could be
identified with sets of axioms, then, it seems like a stretch to claim that
the axiomatic view has captured how scientists treat theories.
From these criticisms, an alternative naturally follows. The semantic
view of theories claims that theories are to be identified with sets of
models rather than sets of sentences. These models are real structures –
abstract entities like sets or state-spaces in Suppes and van Fraassen,
concrete objects in more recent treatments.6 These structures are meant
to be isomorphic to the world in some respect. Theoretical models are
often described using language, but the important linkages hold between
models and the world, not between any canonical description and the
world. So on the semantic view, a theory consists of two parts: a set of
models and a postulation of isomorphism between certain aspects of
models and parts of the world.
The semantic view seems to fit better with scientific practice; many
disciplines present models of some target phenomenon and then reason
about them. This is most obvious in a field like cognitive psychology.
Models of facial recognition, say, are never presented as sets of laws.
Instead, one is presented with a model mechanism and an assertion that
this is what the brain does – that is, an assertion that the brain is isomor-
phic to the model in some respect. Similarly, as Lloyd has shown, many
of the central claims of evolutionary theory can be interpreted as models
of systems under selection (1994). Newtonian mechanics can be inter-
preted as the postulation of certain models, the permissible Newtonian
spaces (van Fraassen 1970). And so on.
The semantic view is problematic for literalism. On the semantic view,
there is no presumption that the language in which theories are desig-
nated is at all important. The same set of models can be described using
a variety of different terms, none of which need pick out the driving
causal properties in a model (van Fraassen 1989, ch. 9). As a simple
example, Hodgkin and Huxley could be thought of as specifying a state-
space for neural processes. Later work on the molecular configuration of
sodium and potassium channels described the same state-space using the
language of molecular biology. Same models, same theory, completely
different language. Again, literalism is unwarranted. Similarly, the rela-
tionship between model and world need not be exact: model-world
mappings can be inexact, fuzzy or otherwise complex (Godfrey-Smith
2006). So the mere fact that there is an element in a model does not
warrant concluding that there is an isomorphic property or object in
the world: that depends, at the very least, on the intended model-world
mapping.7
In addition to fitting the apparent practice of science, the semantic
view also provides a neat solution to the role of mathematics in
science. Mathematics is something we use to reason about the models.
Mathematics is not a part of any theory but is available to all. Thus, as
van Fraassen puts it, physics first sets up a framework of models and
then, having done so, ‘The theoretical reasoning of the physicist is
viewed as ordinary mathematical reasoning concerning this framework’
(van Fraassen 1970, 338).
With that in mind, consider mathematically complex claims, like the
Hodgkin-Huxley equation, or mathematically complex expressions, like
the one describing GNa:

GNa = gNa(max)m³h

where m and h themselves stand for complex exponential functions
governing activation and inactivation of the sodium channel (Kuffler,
Nicholls, and Martin 1984, 150ff.). It would be a mug’s game to try to
recast any of these in a first-order language. If you can’t, then the received
view forces you to treat things like ratios, products and multivariable
embedded functions as causal properties. As we saw in Section 2.2, this
isn’t a very plausible reading of explanations like (6). Further, recasting
(6) this way would be a futile exercise: you can keep your ontology trim
only by including all of the individual properties in (6) along with the
axioms of mathematics.
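To make vivid just how much structure such an expression compresses, here is a sketch of the sodium-conductance part of the Hodgkin-Huxley model in a standard textbook notation (the rate functions α and β below are voltage-dependent and were fitted empirically; the exact symbols vary across presentations):

```latex
G_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^{3} h, \qquad
\frac{dm}{dt} = \alpha_m(V)\,(1-m) - \beta_m(V)\, m, \qquad
\frac{dh}{dt} = \alpha_h(V)\,(1-h) - \beta_h(V)\, h
```

At a fixed voltage each gating variable relaxes exponentially toward its steady state, for example $m(t) = m_\infty - (m_\infty - m_0)\, e^{-t/\tau_m}$; these relaxations are the exponential functions for which m and h stand. Even this compact system involves products, powers and solutions of differential equations, not a conjunction of atomic predications.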
Once we move to a view on which scientific theories are not artifi-
cially hampered in their expressive power, something like agnosticism
is forced upon us. On the received view, there is one best way to state
the content of an explanation, because there are so few ways to express
anything. On a semantic view, by contrast, one has the possibility of
talking about models in a variety of different ways. When that happens,
one will need to take pragmatic factors into account when we evaluate
the goodness of explanations. Figuring out the ontological commit-
ments of an explanation is a complicated, hermeneutic process, not a
straightforward leap from terms to the world.

4 Going further

Literalism ultimately relies on an unrealistically simple view about
how scientific theories work. Attention to the pragmatic aspects of
explanation shows several reasons why this simple view must be aban-
doned. Good explanations often involve abstract redescriptions of
specific, lower-order properties. These redescriptions are required for
pragmatic reasons, not for ontological ones. This in turn fits well with
the semantic view of theories, which carefully separates the language in
which models are specified from the models themselves and the model-
world relationships asserted by the theory.
I want to conclude by considering ways in which the abandonment
of literalism might matter for philosophy of mind. I have argued else-
where that once we break the link between theories and the language
in which they are formulated, traditional arguments for multiple realiz-
ability fail.8 This is so because traditional arguments for multiple realiz-
ability suppose that the only explanations available to physics are those
that describe atoms and their motions in tedious detail. The idea that
physics describes only mereological simples is almost unavoidable on
the axiomatic view, for reasons outlined above. It is also patently absurd:
physicists spend most of their time trying to give high-level abstract
explanations of physical phenomena. Once we realize this, as well as the
role of model redescription in science, multiple realizability becomes
difficult to motivate.
Indeed, I think there’s a more general point that can be made here
about the individuation of scientific disciplines. There has been an
assumption that scientific disciplines are individuated by their domains:
that is, what’s characteristic about physics or biology is primarily the set
of things that fall under their laws. This view is again almost unavoid-
able on the axiomatic view: the domain of a science just is the domain of
its quantifiers. This in turn leads to the hierarchical, striated view of reality
made famous by Oppenheim and Putnam (1958). On such a view, each
scientific discipline corresponds to a distinct level of reality. Again, a
metaphysical point grows out of a substantive view in philosophy of
science. If my argument is right, however, we should be wary of this
view of the world. Sciences have more descriptive flexibility than the
philosopher of mind tends to ascribe to them, and there is no reason
why scientific disciplines must carve the world into non-overlapping
spheres of influence.
The semantic view of theories permits an alternative view of discipli-
nary individuation: what I’ll call (with some trepidation) a paradigm-based
view. Every discipline or subdiscipline starts with a set of characteristic
phenomena that it tries to explain: living things for biology, minds for
psychology, nerves for neuroscience, lenses for optics and so on. The
investigation of characteristic phenomena often hinges on creating
local levels – again, it’s scientifically useful to abstract, to decompose, to
divide things up by size and to look at the behaviour of aggregates and
compounds.
This makes the standard level-based view of the world problematic
for two reasons. First, there’s no guarantee that some sciences, when
decomposing things into their parts, won’t run into another science
that cares about aggregates (or vice versa). Often, these distinct subdis-
ciplines bump into each other: seeking to explain the behaviour of life,
biologists decompose living things into cells and cells into organelles
and organelles into their parts. At that point, they bump into chemistry,
which has been investigating the same phenomena as special cases of
some more general abstract principles. That’s not, note, due to some
overarching commitment to a ‘unity of science’ program: this is normal
science within one discipline extended until it – as a matter of contin-
gent, empirical fact – hits normal science that started with a different set
of characteristic phenomena. Sometimes when this happens there is a
complete merger – as, for example, when the science of lenses came to
be swallowed up to become a special branch of physics. Other times the
merger is tentative or incomplete, as it currently is with biochemistry
or cognitive neuroscience. These mergers should, in my opinion, be
counted as forms of intertheoretic reduction. But note that the picture
of reduction that emerges will not be an imperialistic one: it is not a matter of
the science of one level of being co-opting a distinct one. Instead, insofar
as disciplines evolve and merge, it is an outgrowth of perfectly ordinary
intratheoretic endeavours on each side.
Second, many sciences care about making models at a relatively high
level of abstraction. The same model of oscillatory motion turns out to
be useful both for the investigation of springs and for the vibrations of
electrons. Again, this is one of the things that physics is good at: taking
the behaviour of a specific set of things and showing that at some level
of abstraction it is the behaviour of a diverse set of things. This sort
of abstraction is, as I conceive of it, intratheoretic: it is part of ordi-
nary scientific practice within a discipline. But models formulated at
that level of abstraction also often turn out to have unexpected uses in
other domains: modelling electrical circuits, say, or the oscillatory firing
of neurons. In these cases, it’s natural (and note, actual) that models
from one science get imported, largely unchanged, into another. But
this again makes problems for hierarchical concepts of nature.
In conclusion, the shift from an axiomatic to a semantic view of theo-
ries should result in a shift in how naturalistically inclined philosophers
approach scientific language. The very same theory can be couched in
different language, and even canonical formulations of a theory can
hide considerable complexity in the real-world properties to which a
model corresponds.
Fodor was once able to write confidently that ‘Roughly, psychological
states are what the terms in psychological theories denote if the theo-
ries are true’ (Fodor 1997, 162, n. 1). Moving to the semantic view of
theories should sap that confidence. With new humility, however, also
come new opportunities for close reading of scientific theories and a
more engaged approach to determining the ontology to which psycho-
logical explanations actually commit us.9

Notes
1. Note that one could hold a much stricter version of literalism, on which a
predicate is ontologically committing only if it is ineliminable or just in case it
appears in the best overall axiomatization of the phenomena. I do not focus on
these formulations for two reasons. First, in practice nobody actually adheres
to this standard, because figuring out whether a predicate is ineliminable or
part of the best axiomatization tout court is too difficult a task. If we held
ourselves to such a high standard, the game would be up from the beginning:
no one should have confidence that their predicates refer, and so literalism
would be a straw man. Second, formulations of the requirement in terms of
axiomatization or the eliminability of predicates are so obviously derived from
the axiomatic view of theories that the considerations presented in §4 will
apply directly. Thanks to Mark Sprevak for pressing me on this point.
2. See esp. Bontly (2005, 343). I am indebted to Bontly’s article for prompting
many of the reflections in this section.
3. Here, some care is needed. It has become recently fashionable to claim that
the Hodgkin-Huxley equation does not explain anything but merely describes
the shape of the action potential (Craver 2007, ch. 3). It is true that insofar as
the above is explanatory, it is not because it constitutes a deduction from the
more general laws postulated by Hodgkin and Huxley. Rather, (6) is explana-
tory because it details some facts about the mechanism that underlies the
action potential and then uses facts about that mechanism to explain the
threshold. It does not detail the mechanisms by which the voltage-gated ion
channels work; to the extent that the detailing of those mechanisms was part
of neuroscientists’ shared explanatory interests, Hodgkin and Huxley fell
short of explaining everything there was to explain about the action poten-
tial. But that does not mean that the equations they experimentally derived
were not themselves explanatory of some phenomena. Thanks to Carl Craver
for helpful discussion on this point.
4. See Nagel (1961) for a classic statement and Suppe (1989) for a contemporary
reconstruction and discussion.
5. See ch. 2 of Suppe (1989) for an extended discussion of problems with the
axiomatic account. The essays in Salmon (1998a), esp. Salmon (1998b), also
contain a number of useful critiques of the deductive-nomological view of
explanation associated with the axiomatic view.
6. For the latter, see Giere (1988) and Godfrey-Smith (2006). I prefer concretist
accounts, both because I find them more natural for sciences like psychology
and also for the problems recently raised by Hans Halvorson (2012) against
more mathematically oriented approaches.
7. Thanks to Dan Weiskopf for drawing my attention to this point.
8. I develop this point further in Klein (2013). See also Klein (2009) for an earlier
exploration of this idea in the context of Nagel’s theory of reduction.
9. Thanks to Carl Craver, David Hilbert, Esther Klein, Tom Polger and several
APA audiences for comments on previous drafts. Special thanks to participants
in the New Waves online conference organized by Mark Sprevak and Jesper
Kallestrup for many helpful comments.

References
Bontly, T. (2005) ‘Proportionality, causation, and exclusion’. Philosophia 32(1),
331–348.
Craver, C. (2007) Explaining the Brain. New York: Oxford University Press.
Fodor, J. (1997) ‘Special sciences: Still autonomous after all these years’.
Philosophical Perspectives: Mind, Causation, and World 11, 149–163.
Giere, R. N. (1988) Explaining Science: A Cognitive Approach. Chicago: University
of Chicago Press.
Godfrey-Smith, P. (2006) ‘The strategy of model-based science’. Biology and
Philosophy 21, 725–740.
Grice, H. P. (1989) Studies in the Way of Words. Cambridge, MA: Harvard University
Press.
Halvorson, H. (2012) ‘What scientific theories could not be’. Philosophy of Science
79, 183–206.
Kim, J. (1998) Mind in a Physical World. Cambridge, MA: MIT Press.
Klein, C. (2009) ‘Reduction without reductionism: A defence of Nagel on
connectability’. Philosophical Quarterly 59(234), 39–53.
——. (2013) ‘Multiple realizability and the semantic view of theories’. Philosophical
Studies 163(3), 683–695.
Kuffler, S. W., J. G. Nicholls and A. R. Martin. (1984) From Neuron to Brain: A
Cellular Approach to the Function of the Nervous System. 2nd edn. Sunderland,
MA: Sinauer.
Lewis, D. (1970) ‘How to define theoretical terms’. Journal of Philosophy 67(13),
427–446.
——. (1986) ‘Causal explanation’. In Philosophical Papers, vol. 2. New York: Oxford
University Press.
Lloyd, E. (1994) The Structure and Confirmation of Evolutionary Theory. Princeton,
NJ: Princeton University Press.
Nagel, E. (1961) The Structure of Science: Problems in the Logic of Scientific Explanation.
New York: Harcourt, Brace and World.
Nunnally, J. (1967) Psychometric Theory. New York: McGraw-Hill.
Oppenheim, P., and H. Putnam. (1958) ‘Unity of science as a working hypoth-
esis’. Minnesota Studies in the Philosophy of Science 2, 3–36.
Putnam, H. (1975) ‘Philosophy and our mental life’. In Mind, Language and Reality.
London: Cambridge University Press.
Salmon, W. (1998a) Causality and Explanation. New York: Oxford University
Press.
——. (1998b) ‘Deductivism visited and revisited’. In Salmon, Causality and
Explanation, 142–177. New York: Oxford University Press.
Suppe, F. (1989) The Semantic Conception of Theories and Scientific Realism.
Champaign: University of Illinois Press.
Suppes, P. (1967) ‘What is a scientific theory?’. In S. Morgenbesser (ed.), Philosophy
of Science Today. New York: Basic Books.
van Fraassen, B. (1970) ‘On the extension of Beth’s semantics of physical theo-
ries’. Philosophy of Science 37(3), 325–338.
——. (1980) The Scientific Image. New York: Oxford University Press.
——. (1989) Laws and Symmetry. New York: Oxford University Press.
Yablo, S. (1992) ‘Mental causation’. Philosophical Review 101(2), 245–280.
11
Naturalizing Action Theory
Bence Nanay

About thirty years ago, a number of philosophers of action were urging a
naturalist turn in action theory. This turn did not happen. My aim is to
argue that if we accept a not too controversial claim about the centrality
of mental states that are not normally available to introspection (I call
them pragmatic representations) in bringing about actions, we have
strong reasons to naturalize action theory.
The most important proponent of the naturalization of action theory
was Myles Brand. In Brand (1984), he argued that philosophy of action
should enter its ‘third stage’ (the first one was in the 1950s and 1960s,
the second in the 1970s), the main mark of which would be its conti-
nuity with the empirical sciences. Brand’s methodology for philosophy
of action is a package deal. He endorses the following three guidelines
for the methodology that action theorists should follow:

1. Philosophy of action should be continuous with the empirical
sciences.
2. Philosophy of action should not privilege intentional actions.
3. Philosophy of action should be independent from ethics/moral
philosophy.

The last thirty years of philosophy of action could be described as
doing the exact opposite of what Brand suggested. Contemporary
philosophy of action is almost entirely about intentional actions (not
actions in general), and it is far from being independent from ethics/
moral philosophy: in fact it has (with some rare exceptions) virtually
become part of ethics/moral philosophy. Most importantly, contempo-
rary philosophy of action is not, generally speaking, a naturalist enter-
prise: it consistently ignores empirical findings about actions and their

226
Naturalizing Action Theory 227

mental antecedents; it has no patience with the cognitive neuroscience


of action, for example.1
Interestingly, however, a similar naturalist turn (at least a half turn)
did occur in contemporary philosophy of perception. More and more
contemporary philosophers of perception seem to have very similar
methodological commitments to the ones enumerated above (see also
Nanay 2010a):

1. Contemporary philosophy of perception takes empirical vision
science very seriously.
2. Contemporary philosophy of perception tends not to privilege
conscious perception.
3. Contemporary philosophy of perception tends to be independent of
epistemology.

In recent years, paying close attention to empirical findings about
perception seems to be the norm rather than the exception. What this
means is not that philosophy of perception has become theoretical
vision science. Rather, philosophical arguments about perception are
constrained by and, sometimes, supported by empirical evidence. Even
in the case of some of the most genuinely philosophical debates, such
as the representationalism versus relationalism debate or the debate about
perceptual phenomenology, many of the arguments use empirical find-
ings as premises (see, e.g., Pautz 2010; Nanay [forthcoming]; Bayne 2009;
Nanay 2012b, respectively). The fact that many of these empirical find-
ings are about non-conscious perceptual processes shifts the emphasis
from conscious perceptual experience.
Epistemology has always had special ties to philosophy of perception,
traditionally because of the role perception is supposed to play in justi-
fication. But in contemporary philosophy of perception, perception is
no longer interesting only inasmuch as it can tell us something about
knowledge. Quite the contrary; epistemological considerations are often
used to answer intrinsically interesting questions about perception.2
The general picture that these methodological commitments outline
is one where philosophy of perception is an autonomous field of philos-
ophy, a field that has important ties to other fields but does not depend
on them and that is sensitive to the empirical findings of vision science.
This is very similar to the picture that Brand envisaged for philosophy
of action but that never in fact materialized.
My aim is to argue that since the mental states that make actions
actions are not normally accessible to introspection, naturalized action
theory is the only plausible option. Philosophy of action should turn
toward philosophy of perception for some methodological support (see
also Nanay 2013a).

1 Naturalism about action theory

I need to be explicit about what I take to be naturalism about action
theory. I have been talking about sensitivity to empirical results, but
this is only part of what naturalism entails. The most important natu-
ralist slogan since Quine has been the continuity between science and
philosophy. As Quine says,

I admit to naturalism and even glory in it. This means banishing the
dream of a first philosophy and pursuing philosophy rather as a part
of one’s system of the world, continuous with the rest of science.
(Quine 1984, 430–431)

Naturalism in the context of philosophy of action can be and has been
formulated in a similar manner. Brand, for example, talks about ‘the
integration of the philosophical with the scientific’ (Brand 1984, x).
Just what this ‘continuity’ or ‘integration’ is supposed to mean, however,
remains unclear. More specifically, what happens if what science tells us
is in conflict with what folk psychology tells us? Brand clearly hands
the decisive vote to folk psychology. As he says, ‘Scientific psychology
is not free to develop any arbitrary conceptual scheme; it is constrained
by the conceptual base of folk psychology’ (Brand 1984, 239). But that
has little to do with naturalism, as Slezak (1987, 1989) points out (see
esp. the detailed point-by-point analysis of how Brand’s naturalism fails
on its own terms in Slezak 1989, 140–141, 161–163). If the only role
science is supposed to play in action theory is to fill in the details of the
pre-existent, unchangeable conceptual framework of folk psychology,
then science is not playing a very interesting role at all – the conceptual
framework of action theory would still be provided by folk psychology.
Brand’s theory, in spite of its false advertisement, is not naturalistic in
any sense of the term that would do justice to the Quinean slogan.
What would then constitute a naturalized action theory? We can
use Brand’s original formulation as a starting point: naturalized action
theory urges the integration of the philosophical with the scientific –
but a very specific kind of integration, one where the philosophical does
not automatically trump the scientific. If it turns out that some of our
key folk psychological concepts in philosophy of action (e.g., those of
‘action’ or ‘intention’) fail to pick out any natural kinds, we have to
replace them with concepts that do pick out natural kinds.3 Science can
tell us what this new concept should be.
I talked about the importance of empirical findings in naturalized
action theory: empirical findings constrain the philosophical theories of
action we can plausibly hold. But the interaction between philosophy
and the empirical sciences is bidirectional. The philosophical hypoth-
eses and theories, as a result of being empirically informed, should be
specific enough to be falsified or verified by further empirical studies.
Psychologists and neuroscientists often accuse philosophers in general,
philosophers of mind in particular, of providing theories that are too
general and abstract, that are of no use for the empirical sciences.
Philosophers of a non-naturalistic creed are of course free to do so, but
if we want to preserve the naturalistic insight that philosophy should
be continuous with the empirical sciences, such disconnect would not
be permissible. Thus, naturalistic philosophy needs to give exact, test-
able hypotheses that psychologists as well as cognitive neuroscientists
of action can engage with. Naturalized action theory, besides using
empirical studies, could also be used for future empirical research. This
is the only sense in which the ‘integration of the philosophical with the
scientific’ that Brand talked about does not become a mere slogan. This
is the methodology that has been used by more and more philosophers
of perception (I won’t pretend that it has been used by all), and given
the extremely rich body of empirical research, especially in the cognitive
neuroscience of action,4 more and more philosophers of action should
use the same methodology.
This may sound like a manifesto about how nice naturalized action
theory would be. But the aim of this section is to argue that it is difficult
to see how naturalized action theory can be avoided. The sketch of the
argument is the following: pragmatic representations, the mental states
that make actions actions, are not normally accessible to introspection.
So we have no other option but to turn to the empirical sciences if we
want to characterize and analyse them.

2 Pragmatic representations

One of the most important questions of philosophy of action, maybe
even the most ‘fundamental question’ (see Bach 1978; Brand 1979), is
the following: What makes actions actions? How do actions differ from
mere bodily movements? What is the difference between performing the
action of raising my hand and the bodily movement of my hand going
up (maybe as a result of a neuroscientist manipulating my motor cortex)?
In short, what makes actions more than just bodily movements? Since
the bodily movement in these two cases is the same, whatever it is that
makes the difference, it seems to be a plausible assumption that what
makes actions actions is a mental state that triggers, guides or maybe
accompanies the bodily movements. If bodily movements are triggered
(or guided or accompanied) by mental states of a certain kind, they
qualify as actions. If they are not, they are mere bodily movements.5
The big question is of course what mental states are the ones that trigger
(or guide or accompany) actions. There is no consensus about what
these mental antecedents of actions are supposed to be. Whatever they
are, they seem to be representational states that attribute properties the
representation of which is necessary for the performance of the action.
They guide, and sometimes even monitor, our bodily movements. Myles
Brand called mental states of this kind ‘immediate intentions’ (Brand
1984), Kent Bach calls them ‘executive representations’ (Bach 1978),
John Searle ‘intentions-in-action’ (Searle 1983), Ruth Millikan ‘goal state
representation’ (Millikan 2004, ch. 16) and Marc Jeannerod ‘represen-
tation of goals for actions’ or ‘visuomotor representations’ (Jeannerod
1994, §5; Jeannerod 1997; Jacob and Jeannerod 2003, 202–204). I called
them ‘action-oriented perceptual states’ (Nanay 2012a) or ‘action-guiding
perceptual representations’ (Nanay 2011).6 Here I just use the placeholder
term ‘the immediate mental antecedent of actions’.
I use the term ‘the immediate mental antecedent of actions’ as a place-
holder for the mental state that makes actions actions, that is present when
our bodily movement counts as action but is absent in the case of reflexes
and other mere bodily movements. Thus, we can talk about the ‘immediate
mental antecedents of actions’ in the case of all actions. Intentional actions
have immediate mental antecedents, but so do non-intentional actions.
Autonomous intentional actions have immediate mental antecedents as
much as non-autonomous actions (see Velleman 2000; Hornsby 2004).
As immediate mental antecedents of action are what make actions
actions, understanding the nature of these mental states is, for philosophers
of action, a task logically prior to all other questions in action theory.
In order to even set out to answer questions like ‘What makes actions
intentional?’ or ‘What makes actions autonomous?’ one needs to have an
answer to the question ‘What makes actions actions?’ The way to answer
this question is to describe the immediate mental antecedents of action.
Many philosophers of action distinguish between two different
components of the immediate mental antecedent of actions. Kent Bach
differentiates ‘receptive representations’ and ‘effective representations’
that together make up ‘executive representations’, which is his label for
the immediate mental antecedent of action (Bach 1978, see esp. 366).
Myles Brand talks about the cognitive and the conative components of
‘immediate intentions’, as he calls the immediate mental antecedent of
action (Brand 1984, 45). Leaving the specifics of these accounts behind,
the general insight is that the immediate mental antecedent of action
has two distinct components: one that represents the world in a certain
way and one that moves us to act. These two components can come
apart, but the immediate mental antecedent of actions consists of both
(at least in most cases).7
I want to focus on the representational component of the immediate
mental antecedent of actions. I call these mental states ‘pragmatic repre-
sentations’ (see Nanay 2013a,b). Thus, it is true by definition that in
order to perform an action, we must have a pragmatic representation.
But having a pragmatic representation does not necessarily manifest in
an action, as it is the conative component of the mental antecedent of
actions that moves us to act, and if we have the representational but not
the conative component of the immediate mental antecedent of action,
then the action is not performed.
Pragmatic representations are genuine mental representations: they
represent objects as having a number of properties that are relevant for
performing the action. As a result, pragmatic representations can be
correct or incorrect. If they are correct, they are more likely to guide
our actions well; if they are incorrect, they are more likely to guide our
actions badly.
What properties do pragmatic representations represent objects as
having? Suppose that you want to pick up a cup. In order to perform
this action, you need to represent the cup as having a certain spatial
location, otherwise you would have no idea which direction to reach
out towards. You also need to represent it as having a certain size, other-
wise you could not approach it with the appropriate grip size. And you
also need to represent it as having a certain weight, otherwise you would
not know what force you need to exert when lifting it.8 My claim is that
these properties are represented unconsciously.

3 Pragmatic representations are not normally accessible to introspection

Consider the following short but impressive demonstration of percep-
tual learning.9 We are asked to put on a pair of distorting goggles that
shifts everything we see to the left. Then we are supposed to throw a
basketball into a basket in front of us. The first couple of attempts fail
miserably: the ball lands not in the basket but to the left of it. After a
number of attempts, however, the ball lands accurately in the basket.
But after having practised our throws a couple of times with the goggles
on, we are asked to take off the goggles and perform the task without
them. Now we experience the same phenomenon again: when we first
attempt to throw the ball towards the basket without the goggles, we
miss it; after several attempts, we manage to throw it as we did before
putting on the goggles.
I would like to focus on this change in our perception and action
after taking off the goggles. At the beginning of the learning process, my
pragmatic representation is clearly different from my pragmatic repre-
sentation at the end, when I can successfully throw the ball into the
basket. My pragmatic representation changes during this process; it is
this change that allows me to perform the action successfully at the
end of the process. The mental state that guides my action at the end of
the process does so much more efficiently than the one that guides my
action at the beginning.
Here is how we can make sense of this phenomenon: our pragmatic
representation attributes a certain location property to the basket, which
enables and guides us to execute the action of throwing the ball in the
basket. Our conscious perceptual experience attributes another location
property to the basket. During the process of perceptual learning, the
former representation changes, but the latter does not.
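The dynamics of this learning process can be pictured with a toy error-correction model (purely illustrative; the update rule, learning rate and shift magnitude are my own assumptions, not drawn from the experimental literature). The pragmatic representation is modelled as an aiming offset that is nudged after each throw so as to reduce the observed error, while the goggles add a constant visual shift; removing the goggles then produces the mirror-image aftereffect described above.

```python
# Toy model of prism adaptation. The motor system maintains an internal
# aiming offset that is corrected after each throw, while the goggles add
# a constant visual shift. All parameters are illustrative, not empirical.

def adapt(throws, prism_shift, offset=0.0, rate=0.5):
    """Update the aiming offset over a series of throws; return the errors
    (where the ball lands relative to the basket) and the final offset."""
    errors = []
    for _ in range(throws):
        error = prism_shift + offset   # landing position relative to basket
        offset -= rate * error         # nudge the offset to cancel the error
        errors.append(error)
    return errors, offset

# Goggles on (shift of -10 units: everything appears to the left).
# Early throws miss far to the left; later throws land near the basket.
errors_on, learned = adapt(throws=10, prism_shift=-10.0)

# Goggles off: the learned offset now causes the opposite-direction misses,
# which are in turn corrected over further throws.
errors_off, _ = adapt(throws=10, prism_shift=0.0, offset=learned)
```

On this sketch, what changes during learning is the offset parameter, not the visual input itself, which parallels the claim that the pragmatic representation changes while the conscious percept does not.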
Similar results are documented in the case of a number of optical
illusions that mislead our perceptual experience but not our pragmatic
representation. One such example is the three-dimensional Ebbinghaus
illusion. The two-dimensional Ebbinghaus illusion is a simple optical
illusion. A circle that is surrounded by smaller circles looks larger than a
circle of the same size that is surrounded by larger circles. The three-di-
mensional Ebbinghaus illusion reproduces this illusion in space: a poker
chip surrounded by smaller poker chips appears to be larger than a poker
chip of the same diameter surrounded by larger ones. The surprising
finding is that although our perceptual experience is incorrect – we
experience the first chip to be larger than the second one – if we are
asked to pick up one of the chips, our grip size is hardly influenced by
the illusion (Aglioti et al. 1995, see also Milner and Goodale 1995, ch.
6; Goodale and Milner 2004). Similar results can be reproduced in the
case of other optical illusions, like the Müller-Lyer illusion (Goodale and
Humphrey 1998; Gentilucci et al. 1996; Daprati and Gentilucci 1997;
Bruno 2001), the Kanizsa compression illusion (Bruno and Bernardis
2002), the dot-in-frame illusion (Bridgeman et al. 1997), the Ponzo illu-
sion (Jackson and Shaw 2000; Gonzalez et al. 2008) and the ‘hollow face
illusion’ (Króliczak et al. 2006).10
Naturalizing Action Theory 233

In the case of the three-dimensional Ebbinghaus illusion, our pragmatic representation attributes a size property to the chip, and our
conscious perceptual experience attributes another size property to it.
Our conscious perceptual experience misrepresents, but the pragmatic
representation represents the size of the chip (more or less) correctly.
Thus, we have two different mental states in this scenario: a conscious,
incorrect one and a pragmatic representation, which is more or less
correct. They are both representations; they both attribute properties to
the same object. But they attribute different properties. The conscious
experience attributes the size property we experience the chip as having.
The pragmatic representation attributes the size property that guides our
(successful) action.11
Importantly, given that we have a conscious and incorrect represen-
tation at the same time as we have a (more or less) correct pragmatic
representation of the same properties of the same object, this pragmatic
representation must be unconscious. Our conscious perceptual experi-
ence attributes a certain size property to the chip, but our pragmatic
representation attributes another size property. It can do so only uncon-
sciously. Hence, pragmatic representations are (normally) unconscious.
We need to be careful about what is meant by unconscious here.
Do these states lack phenomenal consciousness or access conscious-
ness (Block 1995)? Is it visual awareness or visual attention that is
missing (Lamme 2003)? Luckily, we do not have to engage with the
Byzantine details of these distinctions. What matters for the purposes
of my argument is that pragmatic representations are not accessible to
introspection. When we are grasping the chips in the three-dimensional
Ebbinghaus scenario, we have no introspective access to the representa-
tion that guides our action and that represents the size of the chip (more
or less) correctly. We have introspective access only to the conscious perceptual experience that represents the size of the chip incorrectly.
Pragmatic representations are not normally accessible to introspection.
Consider a final objection. I said that pragmatic representations are not normally accessible to introspection. But am I justified in using the word normally here? Couldn't one argue that the scenarios I have analysed are the 'abnormal' ones? I don't think so. Here is a so-far-unmentioned body
of empirical evidence that demonstrates this. If the location (or some
other relevant property) of the target of our reaching or grasping actions
suddenly changes, the trajectory and/or velocity of our movement
changes very quickly (in less than 100 ms) afterwards. The change in our
movement is unconscious: subjects do not notice this change, and as it
occurs within 100 ms of the change in the target’s location, this time is
not enough for the information to reach consciousness (Paulignan et al. 1991; Pelisson et al. 1986; Goodale, Pelisson and Prablanc 1986, see also
Brogaard 2011). In short, the subjects’ pragmatic representation changes
as the target’s location changes, but this change is not available to intro-
spection. And this is true of all actions that require microadjustments to
our ongoing action, which means it is true of most of our perceptually
guided actions (see also Schnall et al. 2010 for some further structurally
similar cases).

4 Conclusion: to naturalize or not to naturalize

If the argument I have presented is correct, pragmatic representations are not normally accessible to introspection. We can now use this argument to establish the need to naturalize action theory.
If we accept that pragmatic representations are not normally acces-
sible to introspection, we have a straightforward argument for the
need to naturalize action theory. If the representational component of
the immediate mental antecedent of action is not normally available
to introspection, introspection obviously cannot deliver any reliable
evidence about it.
Introspection, of course, may not be the only alternative to scientific
evidence. There may be other genuinely philosophical ways in which
we can acquire information about a mental state: folk psychology, ordi-
nary-language analysis, conceptual analysis, etc. But note that none
of these philosophical methods are in a position to say much about
pragmatic representations. Pragmatic representations are not part of our
folk psychology, as we have seen. When we think about other people’s
mental states, we think about their beliefs, desires and wishes, not so
much about how their perceptual system represents the shape prop-
erties of the objects in front of them. Similarly, talk about pragmatic
representations is not part of our ordinary language; ordinary language
analysis will not get us far. How about conceptual analysis? Arguably,
the generation of action theorists that gave us the distinction between
the cognitive and conative components of the immediate mental ante-
cedents of action (Brand 1984; Bach 1978) did use conceptual analysis
or, more precisely, some version of a transcendental argument. We need
to postulate this distinction in order to explain a number of odd features
of our behaviour. I see nothing wrong with this approach, but it has its
limits. We can and should postulate certain mental states – specifically,
pragmatic representations – in order to be able to explain some features
of our goal-directed actions, but postulating is only the first step. The
real work is in figuring out what these representations are, what proper-
ties they represent objects as having, how they interact or fail to interact
with the rest of our mind, etc. These are things that conceptual analysis
is unlikely to be able to do.
Hence, it seems that the only way to find out more about pragmatic
representations is by means of empirical research. We have no other
option but to turn to the empirical sciences if we want to characterize
and analyse them. As pragmatic representations are the representational
components of what makes actions actions, this means that we have no
other option but to turn to the empirical sciences if we want to under-
stand what actions are.12 Relying on empirical evidence is not a nice,
optional feature of action theory: it is the only way action theory can
proceed.13

Notes
1. A notable exception is the recent philosophical literature on the ‘illusion
of free will’: the sense of agency and conscious will (see, e.g., Libet 1985;
Wegner 2002; Haggard and Clark 2003; Pacherie 2007). It is important to
acknowledge that experimental philosophers do use empirical data on our
intuitions about actions and our way of talking about them. But even experi-
mental philosophers of action tend to ignore empirical findings about action
itself (as opposed to our intuitions about it).
2. One important example comes from Fred Dretske’s work. The original link
between perception and knowledge is at least partly due to the works of
Fred Dretske over the decades (starting with Dretske 1969). Dretske’s recent
writings, however, turn the established connection between perception
and knowledge on its head. He is interested in what we perceive, and some
of the considerations he uses in order to answer this question are about
what we know (see Dretske 2007, 2010). Dretske’s work exemplifies a more
general point about the shift of emphasis in contemporary philosophy of
perception.
3. I am using here the widely accepted way of referring to natural kinds as the
real joints of nature because it is a convenient rhetorical device, but I have
my reservations about the very concept, for a variety of reasons (see Nanay
2010b, 2013c).
4. The literature is too large to survey, but an important and philosophically
sensitive example is Jeannerod (1997).
5. Theories of ‘agent causation’ deny this and claim that what distinguishes
actions and bodily movements is that the former are caused by the agent
herself (and not a specific mental state of her). I leave these accounts aside
because of the various criticisms of the very idea of agent causation (see
Pereboom 2004 for a summary).
6. This list is supposed to be representative, not complete. Another important
concept that may also be listed here is John Perry’s concept of ‘belief-how’
(Israel et al. 1993; Perry 2001).
7. Sometimes they don’t consist of both; the action of Anarchic Hand Syndrome
patients could, e.g., be analysed as actions where the representational
component of the immediate mental antecedent of actions is present but
the ‘moving to act’ component is absent: what moves these patients to act is
something external. The same may be true of some actions of healthy adult
humans as well. Elsewhere, I call actions where one of the two components
of the immediate mental antecedents of action is missing ‘semi-actions’
(Nanay 2013a).
8. Are these all the properties pragmatic representations represent? Can they
also represent the goal state of the action? Some philosophers take the repre-
sentational component of the immediate mental antecedent to be the repre-
sentation of the goal of the action (see, e.g., Millikan 2004; Butterfill and
Sinigaglia [Forthcoming]). I myself think that while these goal states can be
represented, they do not need to be represented in order for the action to be
performed. But the argument I present in the next section can be adjusted
to apply to this ‘goal-state-representation’ way of thinking about pragmatic
representations as well.
9. This interactive demonstration can be found in a number of science exhibi-
tions. I first saw it at the San Francisco Exploratorium. See also Held (1965)
for the same phenomenon in an experimental context.
10. I focus on the three-dimensional Ebbinghaus illusion because of the simplicity
of the results, but it needs to be noted that the experimental conditions of
this experiment have been criticized recently. The main line of criticism is
that the experimental design of the grasping experiment is very different
from that of the perceptual judgment experiment. When the subjects grasp
the middle chip, there is only one middle chip, surrounded by either smaller
or larger chips. When they are judging the size of the middle chip, however,
they are comparing two chips – one surrounded by smaller chips, the other by
larger ones (Pavani et al. 1999; Franz 2001, 2003; Franz et al. 2000, 2003, see
also Gillam 1998; Vishton 2004; Vishton and Fabre 2003 – but see Haffenden
and Goodale 1998; Haffenden et al. 2001 for a response). See Briscoe (2008)
for a good philosophically sensitive overview on this question. I focus on
the three-dimensional Ebbinghaus experiment in spite of these worries, but
those who are moved by the considerations of Franz et al. can substitute
some other visual illusion – viz., the Müller-Lyer illusion, the Ponzo illusion,
the hollow face illusion or the Kanizsa compression illusion – where there is
evidence that the illusion influences our perceptual judgments but not our
perceptually guided actions.
11. There is a lot of empirical data in favour of the existence of two more or less
separate visual subsystems that may explain the presence of these two different
representations here (Milner and Goodale 1995; Goodale and Milner 2004;
Jacob and Jeannerod 2003; Jeannerod 1997). The dorsal visual subsystem is
(normally) unconscious and is responsible for the perceptual guidance of our
actions. The ventral visual subsystem, in contrast, is (normally) conscious
and is responsible for categorization and identification. I do not want to rely
on this distinction in my argument (partly because of the emerging evidence
of the interactions between the two subsystems, partly because of the debate
about whether and to what extent the dorsal stream needs to be uncon-
scious; see Dehaene et al. 1998; Clark 2001; Brogaard 2011, forthcoming;
Briscoe 2008, 2009; Milner and Goodale 2008; Jeannerod and Jacob 2004;
Goodale 2011; Clark 2009; Kravitz et al. 2011). But one consequence of my
argument is that if we are to naturalize action theory, the empirical data on
dorsal perception will be of special importance.
12. I have been focusing on the representational component of the mental ante-
cedent of actions – pragmatic representations – and have argued that they are
not normally accessible to introspection. But it may be worth noting that,
arguably, the other, ‘moving us to act’ or ‘conative’ component of the mental
antecedent of actions is not normally accessible to introspection either. Here
is a nice literary example by Robert Musil:
I have never caught myself in the act of willing. It was always the case that
I saw only the thought – for example when I’m lying on one side in bed: now
you ought to turn yourself over. This thought goes marching on in a state
of complete equality with a whole set of other ones: for example, your foot
is starting to feel stiff, the pillow is getting hot, etc. It is still a proper act of
reflection; but it is still far from breaking out into a deed. On the contrary,
I confirm with a certain consternation that, despite these thoughts, I still
haven’t turned over. As I admonish myself that I ought to do so and see
that this does not happen, something akin to depression takes possession of
me, albeit a depression that is at once scornful and resigned. And then, all
of a sudden, and always in an unguarded moment, I turn over. As I do so,
the first thing that I am conscious of is the movement as it is actually being
performed, and frequently a memory that this started out from some part
of the body or other, from the feet, for example, that moved a little, or were
unconsciously shifted, from where they had been lying, and that they then
drew all the rest after them. (Robert Musil, Diaries [New York: Basic Books,
1999], 101; see also Goldie 2004, 97–98)
13. Various bits of the arguments in this chapter are also presented in Nanay
(2013a), esp. in chs 2 and 4. This work was supported by the EU FP7 CIG
grant PCIG09-GA-2011-293818 and the FWO Odysseus grant G.0020.12N. I
presented a very early version of this paper at the 2011 APA Pacific Division
Meeting in San Francisco. I am grateful for comments from Kent Bach, Keith
Lehrer, Ian Phillips, Olle Blomberg, Emanuele Podio, Declan Smithies, and the
editors of this volume.

References
Aglioti, S., DeSouza, J. F. X., and Goodale, M. A. (1995) ‘Size-contrast Illusions
Deceive the Eye But Not the Hand’. Current Biology, 5, 679–685.
Bach, Kent (1978) ‘A Representational Theory of Action’. Philosophical Studies,
34, 361–379.
Bayne, Tim (2009) ‘Perception and the Reach of Phenomenal Content’.
Philosophical Quarterly, 59, 385–404.
Block, Ned. (1995) ‘A Confusion about Consciousness’. Behavioral and Brain
Sciences, 18, 227–247.
Brand, Myles (1979) ‘The Fundamental Question of Action Theory’. Nous, 13,
131–151.
Brand, Myles (1984) Intending and Acting. Cambridge, MA: MIT Press.
Bridgeman, B., Peery, S., and Anand, S. (1997) ‘Interaction of Cognitive and
Sensorimotor Maps of Visual Space’. Perception & Psychophysics, 59, 456–459.
Briscoe, R. (2008) 'Another Look at the Two Visual Systems Hypothesis'. Journal of Consciousness Studies, 15, 35–62.
Briscoe, R. (2009) ‘Egocentric Spatial Representation in Action and Perception’.
Philosophy and Phenomenological Research, 79, 423–460.
Brogaard, B. (2011) ‘Are there Unconscious Perceptual Processes?’ Consciousness
and Cognition, 20, 449–463.
Brogaard, B. (Forthcoming) ‘Unconscious Vision for Action Versus Conscious
Vision for Action?’ Journal of Philosophy.
Bruno, Nicola (2001) ‘When Does Action Resist Visual Illusions?’ Trends in
Cognitive Sciences, 5, 385–388.
Bruno, Nicola, and Bernardis, Paolo (2002) ‘Dissociating Perception and Action in
Kanizsa’s Compression Illusion’. Psychonomic Bulletin & Review, 9, 723–730.
Butterfill, S., and Sinigaglia, C. (Forthcoming) ‘Intention and Motor Representation
in Purposive Action’. Philosophy and Phenomenological Research.
Clark, A. (2001) ‘Visual Experience and Motor Action: Are the Bonds Too Tight?’
Philosophical Review, 110, 495–519.
Clark, A. (2009) ‘Perception, Action, and Experience: Unraveling the Golden
Braid’. Neuropsychologia, 47, 1460–1468.
Daprati, E., and Gentilucci, M. (1997) ‘Grasping an Illusion’. Neuropsychologia,
35, 1577–1582.
Dehaene, S., Naccache, L., Le Clec’H, G., Koechlin, E., Mueller, M., Dehaene-
Lambertz, G., van de Moortele, P. F., and Le Bihan, D. (1998) ‘Imaging
Unconscious Semantic Priming’. Nature, 395, 597–600.
Dretske, Fred (1969) Seeing and Knowing. London: Routledge.
Dretske, Fred (2007) ‘What Change Blindness Teaches about Consciousness’.
Philosophical Perspectives, 21, 215–230.
Dretske, Fred (2010) ‘On What We See’. In B. Nanay (ed.) Perceiving the World.
New York: Oxford University Press.
Franz, V. (2001) ‘Action Does Not Resist Visual Illusions’. Trends in Cognitive
Sciences, 5, 457–459.
Franz, V. (2003) ‘Manual Size Estimation: A Neuropsychological Measure of
Perception?’ Experimental Brain Research, 151, 471–477.
Franz, V. H., Bülthoff, H. H., and Fahle, M. (2003) ‘Grasp Effects of the Ebbinghaus
Illusion: Obstacle Avoidance Is Not the Explanation’. Experimental Brain
Research, 149, 470–477.
Franz, V., and Gegenfurtner, K. (2008) ‘Grasping Visual Illusions: Consistent Data
and No Dissociation’. Cognitive Neuropsychology, 25, 920–950.
Franz, V., Gegenfurtner, K., Bülthoff, H., and Fahle, M. (2000) ‘Grasping Visual
Illusions: No Evidence for a Dissociation Between Perception and Action’.
Psychological Science, 11, 20–25.
Gentilucci, M., Chieffi, S., Daprati, E., Saetti, M. C., and Toni, I. (1996) 'Visual
Illusion and Action’. Neuropsychologia, 34, 369–376.
Gentilucci, M., Daprati, E., Toni, I., Chieffi, S., and Saetti, M. C. (1995)
‘Unconscious Updating of Grasp Motor Program’. Experimental Brain Research,
105, 291–303.
Gillam, Barbara (1998) ‘Illusions at Century’s End’. In Hochberg, Julian (ed.)
Perception and Cognition at Century’s End, 95–136. San Diego: Academic Press.
Goldie, Peter (2004) On Personality. London: Routledge.


Gonzalez, C., Ganel, T., Whitwell, R., Morrissey, B., and Goodale, M. (2008)
‘Practice makes Perfect, But only with the Right Hand: Sensitivity to Perceptual
Illusions with Awkward Grasps Decreases with Practice in the Right But Not the
Left Hand’. Neuropsychologia, 46, 624–631.
Goodale, M. A. (2011) ‘Transforming Vision into Action’. Vision Research, 51,
1567–1587.
Goodale, M. A., and Humphrey, G. K. (1998) ‘The Objects of Action and
Perception’. Cognition, 67, 181–207.
Goodale, M. A., and A. D. Milner (2004) Sights Unseen. Oxford: Oxford University
Press.
Goodale, M. A., Pelisson, D., and Prablanc, C. (1986) ‘Large Adjustments in
Visually Guided Reaching Do not Depend on Vision of the Hand or Perception
of Target Displacement’. Nature, 320, 748–750.
Haffenden, A., and Goodale, M. A. (1998) ‘The Effect of Pictorial Illusion on
Prehension and Perception’. Journal of Cognitive Neuroscience, 10, 122–136.
Haffenden, A. M., Schiff, K. C., and Goodale, M. A. (2001) ‘The Dissociation
between Perception and Action in the Ebbinghaus Illusion: Nonillusory Effects
of Pictorial Cues on Grasp’. Current Biology, 11, 177–181.
Haggard P., and Clark, S. (2003) ‘Intentional Action: Conscious Experience and
Neural Prediction’. Consciousness and Cognition, 12, 695–707.
Held, R. (1965) ‘Plasticity in the Sensory-motor System’. Scientific American, 213,
84–94.
Hornsby, Jennifer (2004) 'Agency and Alienation'. In Mario De Caro and David Macarthur (eds) Naturalism in Question. Cambridge, MA: Harvard University Press.
Israel, David, Perry, John, and Tutiya, Syun (1993) ‘Executions, Motivations and
Accomplishments’. Philosophical Review, 102, 515–540.
Jackson, S., and Shaw, A. (2000) ‘The Ponzo Illusion Affects Grip-force But Not
Grip-aperture Scaling during Prehension Movements’. Journal of Experimental
Psychology HPP, 26, 418–423.
Jacob, Pierre, and Jeannerod, Marc (2003) Ways of Seeing. Oxford: Oxford
University Press.
Jeannerod, M. (1997) The Cognitive Neuroscience of Action. Oxford: Blackwell.
Kravitz, Dwight J., Saleem, Kadharbatcha S., Baker, Chris I., and Mishkin, M. (2011) 'A New Neural Framework for Visuospatial Processing'. Nature Reviews Neuroscience, 12, 217–230.
Króliczak, Grzegorz, Heard, Priscilla, Goodale, Melvyn A., and Gregory, Richard
L. (2006) ‘Dissociation of Perception and Action Unmasked by the Hollow-face
Illusion’. Brain Research, 1080, 9–16.
Lamme, Victor A. F. (2003) ‘Why Visual Attention and Awareness are Different’.
Trends in Cognitive Sciences, 7, 12–18.
Libet, B. (1985) 'Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action'. Behavioral and Brain Sciences, 8, 529–566.
Millikan, Ruth (2004) Varieties of Meaning. Cambridge, MA: MIT Press.
Milner, A. D., and Goodale, M. A. (1995) The Visual Brain in Action. Oxford:
Oxford University Press.
Milner, A. D., and Goodale, M. A. (2008) ‘Two Visual Systems Re-viewed’.
Neuropsychologia, 46, 774–785.
Nanay, Bence (2010a) 'Philosophy of Perception: The New Wave'. In B. Nanay (ed.) Perceiving the World, 3–12. New York: Oxford University Press.
Nanay, Bence (2010b) ‘A Modal Theory of Function’. Journal of Philosophy, 107,
412–431.
Nanay, Bence (2011) ‘Do We see Apples as Edible?’ Pacific Philosophical Quarterly,
92, 305–322.
Nanay, Bence (2012a) ‘Action-oriented Perception’. European Journal of Philosophy,
20, 430–446.
Nanay, Bence (2012b) ‘Perceptual Phenomenology’. Philosophical Perspectives, 26,
235–246.
Nanay, Bence (2013a) Between Perception and Action. Oxford: Oxford University
Press.
Nanay, Bence (2013b) ‘Success Semantics: The Sequel’. Philosophical Studies, 165,
151–165.
Nanay, Bence (2013c) ‘Singularist Semirealism’. British Journal for the Philosophy of
Science, 64, 371–394.
Nanay, Bence (Forthcoming) ‘Empirical Problems with Antirepresentationalism’.
In B. Brogaard (ed.) Does Perception have Content? New York: Oxford University
Press.
Pacherie, E. (2000) ‘The Content of Intentions’. Mind and Language, 15, 400–432.
Paulignan, Y., MacKenzie, C. L., Marteniuk, R. G., and Jeannerod, M. (1991)
‘Selective Perturbation of Visual Input During Prehension Movements I: The
Effect of Changing Object Position’. Experimental Brain Research, 83, 502–512.
Pautz, Adam (2010) ‘An Argument for the Intentional View of Visual Experience’.
In Bence Nanay (ed.) Perceiving the World. Oxford: Oxford University Press.
Pavani, F., Boscagli, I., Benvenuti, F., Rabuffetti, M., and Farnè, A. (1999) ‘Are
Perception and Action Affected Differently by the Titchener Circles Illusion?’
Experimental Brain Research, 127, 95–101.
Pelisson, D., Prablanc, C., Goodale, M. A., and Jeannerod, M. (1986) 'Visual Control of Reaching Movements without Vision of the Limb II: Evidence of Fast Unconscious Processes Correcting the Trajectory of the Hand to the Final Position of a Double-step Stimulus'. Experimental Brain Research, 62, 303–311.
Pereboom, Derk (2004) ‘Is our Conception of Agent Causation Coherent?’
Philosophical Topics, 32, 275–286.
Perry, John (2001) Knowledge, Possibility and Consciousness. Cambridge, MA: MIT
Press.
Quine, W. V. O. (1969) ‘Epistemology Naturalized’. In Quine (ed.) Ontological
Relativity and Other Essays, New York: Columbia University Press.
Quine, W. V. O. (1984) ‘Reply to Putnam’. In L. E. Hahn and P. A. Schillp (eds) The
Philosophy of W. V. Quine. LaSalle, IL: Open Court.
Schnall, S., J. R. Zadra, and D. R. Proffitt (2010) ‘Direct Evidence for the Economy
of Action: Glucose and the Perception of Geographic Slant’. Perception, 39,
464–482.
Searle, John (1983) Intentionality. Cambridge: Cambridge University Press.
Slezak, Peter (1987) ‘Intending and Acting’. Journal of Philosophy, 84, 49–54. Book
review.
Slezak, Peter (1989) ‘How NOT to Naturalize the Theory of Action’. In Peter
Slezak and W. R. Arbury (eds) Computers, Brains and Minds, 137–166. Dordrecht:
Kluwer.
Velleman, David (2000) The Possibility of Practical Reason. Oxford: Oxford University Press.
Vishton, P. (2004) ‘Human Vision Focuses on Information Relevant to a Task,
to the Detriment of Information that is not Relevant’. Behavioral and Brain
Sciences, 27, 53–54.
Vishton, P., and Fabre, E. (2003) 'Effects of the Ebbinghaus Illusion on Different
Behaviors’. Spatial Vision, 16, 377–392.
Wegner, D. (2002) The Illusion of Conscious Will. Cambridge, MA: MIT Press.
12 The Architecture of Higher Thought
Daniel A. Weiskopf

The idea that mental activity can be arranged according to a hierarchy of complexity has ancient origins, with roots in the Aristotelian psychological division of the soul into nutritive, perceptual and intellectual
faculties. On this view, elementary functions dedicated to sustaining life
undergird those that engage cognitively with the sensible world, which
in turn support the active faculties of abstract reason. Higher faculties
are meant to be those that have, in some sense, greater abstraction,
generality or complexity, which links them ethologically with distinc-
tively human cognitive capacities and developmentally with the mature
state of the human cognitive system as opposed to that of the infant or
child.1
This distinction recurs at the inception of modern scientific psychology
in the work of Wilhelm Wundt, who sharply distinguished between
sensory and perceptual capacities and other mental capacities such as
learning and memory. For Wundt, the distinction between the two was
not just between two types of cognitive process but also implied a meth-
odological dualism: sensory capacities were grounded in species-typical
physiological mechanisms, while higher thought, insofar as it is condi-
tioned by language and culture, must be studied using the methods of
sociology and anthropology, as part of a distinctive Völkerpsychologie
(Hatfield 1997). This separation continued well into the 20th century.
Neisser (1967), for instance, does not share Wundt’s methodological
dualism, but his landmark textbook on cognitive psychology is divided
into two major halves, one dealing with visual processing and the other
with auditory processing, with higher cognitive processes (memory,
language and thought) confined to the brief epilogue.
In this chapter I critically assess existing accounts of what it means for
there to be a hierarchy of thought, and I offer an alternative taxonomy that views higher cognitive capacities, paradigmatically exemplified by conceptualized thought, as instantiating three distinct but convergent
functional profiles. This analysis can overcome problems that plague
other accounts, which tend to be hopelessly underspecified or oversim-
plified; moreover, it shows an important sense in which the apparently
disunified higher faculties can be seen to belong to genuine psycho-
logical kinds.

1 Attempts to analyse the hierarchy

The simplest account of the higher/lower division follows Wundt, Neisser and much of the rest of the psychological tradition in drawing
a sharp line at the senses. On this view, lower cognition is made up of
perception and action systems, with higher cognitive processes being
the remainder, including such things as language comprehension and
production, memory encoding and access, decision making, categori-
zation, planning, analogy and creative thought and various forms of
reasoning: deductive, inductive and abductive. Sometimes added to the
list are forms of metacognitive awareness, of both one’s own thoughts
and those of others, as well as metacognitive control over one’s thought
processes. Such other cognitive states and systems as the emotions and
other affective, motivational and drive systems are harder to place on
this account, although emotions have widely noted affinities with
perceptual states.
An immediate problem with this list is that it fails to tell us what
higher processes have in common, aside from merely not being sensori-
motor: the higher processes are represented as a motley and disunified
lot. Prima facie, though, we should attempt to seek out and display theo-
retically unifying features in cognitive systems wherever they occur. Call
this the Unity Problem.
A further problem is that as it stands, the list fails to make enough
distinctions. There may be various processes that are not sensorimotor
but are not intuitively higher processes. For example, elementary grasp
of number and quantity seems to involve a rapidly activated analogue
magnitude system that estimates the number of distinct items in an
array and can distinguish two separate arrays as long as they stand in
a favourable ratio (roughly, obeying Weber’s law). But this system is
amodal or at least supramodal and hence is not part of any particular
perceptual system. At the same time, the numerical estimations and
computations it can carry out are highly limited (Beck 2013), falling
far short of full-fledged adult numerical competence. The mind appears
well stocked with intermediate systems of this type (Carey 2009), which
are not captured by a simple binary distinction. Call this the Richness
Problem: we need our taxonomy to make enough distinctions for all the
systems we find.
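The ratio-sensitivity attributed to the analogue magnitude system above can be sketched in a few lines (a toy illustration; the threshold value is an arbitrary assumption, not an empirical Weber fraction for any population):

```python
# Toy Weber's-law discrimination: two numerosities are discriminable when
# their larger-to-smaller ratio exceeds a threshold. The 1.15 threshold is
# illustrative; real Weber fractions vary with age and species.

def discriminable(n1, n2, weber_ratio=1.15):
    """Return True if numerosities n1 and n2 stand in a favourable ratio."""
    big, small = max(n1, n2), min(n1, n2)
    return big / small >= weber_ratio

easy = discriminable(8, 16)    # 2:1 ratio, well above threshold -> True
hard = discriminable(15, 16)   # near 1:1, below threshold -> False
```

Note that nothing in this sketch is tied to a sensory modality, which is the point in the text: the system is amodal (or supramodal) yet computationally far weaker than adult numerical competence.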
More worrisome is that sensorimotor systems themselves may func-
tion by making use of what appear to be some of these very same higher
processes (Rock 1983). Consider categorization. There is no clear and
unambiguous notion of what a categorization process is, except that it
either assigns an individual to a category (‘a is F’) or makes a distinction
between two or more individuals or categories (‘these Fs are G, those
Fs are H’). But many cognitive systems do this. Vision is composed of
a hierarchy of categorization devices for responding to edges, textures,
motion and whole objects of various sorts, and the language faculty
categorizes incoming sounds as having various abstract phrasal bounda-
ries, determines the phonetic, syntactic and semantic classes that words
belong to and so on. Indeed, almost every cognitive process can be seen
as involving categorization in this sense (van Gelder 1993).
Similarly for reasoning. It is widely assumed that perceptual systems
carry out complex inferences formally equivalent to reasoning from the
evidence given at their inputs to the conclusions that they produce as
output, as in models that depict visual analysis as a process involving
Bayesian updating (Knill and Richards 1996). Language comprehen-
sion systems must recover the structure of a sentence from fragmentary
bits of evidence, a task that has essentially the form of a miniature
abductive inference problem. So the mere existence of a certain type of
process cannot draw the needed distinction, since these processes may
occur in both higher and lower forms.2 Call this the Collapse Problem:
the difference between higher and lower faculties must be a stable and
principled one.
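To make the appeal to Bayesian updating concrete, here is a minimal posterior computation; the hypotheses and all probability values are invented for illustration and are not drawn from Knill and Richards:

```python
# Toy Bayesian update of the kind perceptual models posit. The
# hypotheses and every probability value here are invented.
def posterior(prior: dict, likelihood: dict) -> dict:
    """Compute P(h | e) by weighting each prior P(h) by the
    likelihood P(e | h) and renormalizing over hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Competing interpretations of a shaded patch, with a prior that
# favours the convex (lit-from-above) reading.
prior = {"convex": 0.7, "concave": 0.3}
likelihood = {"convex": 0.9, "concave": 0.4}  # P(shading evidence | h)
print(posterior(prior, likelihood))
```

With these toy numbers the convex hypothesis rises to a posterior of 0.84, illustrating inference from perceptual evidence to a conclusion about the distal scene.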
A related way of drawing the higher/lower distinction says that lower
cognition takes place within modular systems, while higher cognition
is non-modular. This too requires more precision, since there is a range
of different notions of modularity one might adopt (Coltheart 1999).
In Fodor’s original (1983) discussion of modularity, the line between
modular and non-modular systems coincides roughly with the line
between higher and lower systems: modular systems are, de facto, the
peripheral input and output systems (plus language), while central
cognition – the home of the folk-psychologically individuated propo-
sitional attitudes – is the paradigm of higher cognition. But unlike on
the simple account, there can be a whole sequence of modular processes
that take place before arriving at central cognition.
The Architecture of Higher Thought 245

A virtue of using modularity to distinguish higher and lower systems is
that non-modular systems can be given an independent positive charac-
terization. The signature phenomena associated with central cognition
are the properties of being Quinean and isotropic. Quinean processes
are those that are sensitive to properties of a whole belief system, while
isotropic processes are those that in principle may access any belief in
the course of their operation. These two properties are, in turn, best
explained by central cognition being a domain-general and informa-
tionally unencapsulated system. The ascent towards higher cognitive
processing involves the progressive loosening of the constraints of
modularity: greater generality in processing and wider access to stored
information. This corresponds to the classical understanding of canon-
ical higher thought processes such as reasoning, analogy making and
decision processes. Since this approach offers some account of how to
recognize whether a process belongs to higher or lower cognition, it is
unlike the simple one on which higher processes are more or less just
those that are not perception.
However, the necessity of these conditions is challenged by massively
modular models of cognition. In these architectures, even central cogni-
tion is fragmented into a host of domain-specific processors. Moreover,
none of these systems has the unrestricted scope and access charac-
teristic of Fodorian central cognition. In the standard formulation of
evolutionary psychology, cognitive domains corresponding to adaptive
problems are assigned to dedicated reasoning systems. Mate selection,
predator detection, conspecific recognition, cheater detection and a host
of other survival-linked cognitive skills all employ distinctive reasoning
principles, tap separate bodies of information and operate semi-auton-
omously from each other. The total interaction of all of these systems,
rather than any single domain-general reasoning device, produces the
rich complexity of human thought (Carruthers 2006).
At issue here is not whether the mind is in fact massively modular;
the point is that as far as constructing a taxonomy of higher and lower
systems goes, modularity by itself does not seem to capture what we are
looking for, since it seems perfectly coherent for there to be modularly
organized higher cognitive systems. Call this the Neutrality Problem. In
attempting to reconstruct the distinction, we should try to do so in a
way that accommodates massively modular architectures (and others)
in principle.
Finally, consider two non-psychological accounts of the higher/lower
distinction. One is drawn from neuroscience. The brain offers what
seems to be an obvious model of higher and lower systems – namely,
one grounded in afference hierarchies. Moving inwards from the periph-
eral nervous system, each synaptic step represents a rung on the ladder
towards ever higher brain regions. This model is implicit in theories
that assume that brain network connectivity has a straightforward hier-
archical structure, with inputs flowing to a central dominant control
point before turning into efferent signals and flowing outward again.
Activity is propagated linearly upwards through sensory areas, through
‘association cortices’ and ultimately to wherever this seat of control is
located – perhaps the distinctively human prefrontal cortex.3
This model offers a literal interpretation of our central metaphor:
higher cognitive functions are localized at higher levels of the affer-
ence hierarchy. The model also gains support from the fact that sensory
systems typically involve internal processing hierarchies in which they
move from simpler to more abstract and complex representations
of their objects: from edges and various other primitive features to
two-dimensional shapes and from shapes of objects to assignments
of category membership. Higher cognition involves extracting and
manipulating such abstractions. Since they are the peak of the affer-
ence hierarchy, higher brain regions in this scheme would also receive
and integrate inputs from all lower regions, which puts them in a
position to implement informationally unconstrained reasoning and
decision-making processes. On representational and control-theoretic
grounds this might seem to capture some important properties of
higher thought.
Unfortunately, the picture of neural organization that this model
relies on is false when examined in detail. Within only a few synaptic
hops inward from the periphery, it rapidly becomes impossible to
trace any single neat hierarchy of regions. While afferent pathways are
initially organized into linear tracts, widespread lateral branching struc-
ture emerges early on. Even in primary sensory areas there is massive
crossover; early sensory processing rapidly becomes multimodal, and
intertwined pathways are the norm (Ghazanfar and Schroeder 2006;
Schroeder and Foxe 2004). Many of these pathways are also recurrent,
forming loops rather than unidirectional paths. Such top-down influ-
ences are essential for resolving ambiguities in the perceptual array
and speeding recognition (Kveraga, Ghuman and Bar 2007). Finally,
there is no single neural pinnacle where these tracts converge. Higher
cognitive processes recruit widespread networks activating multiple
regions at many putatively distinct levels of processing (Anderson
2010). Reading higher processes off the neural architecture, then,
seems unpromising.
A final non-psychological account appeals to evolutionary history.
Higher cognitive processes on this view would be those that are of
more recent evolutionary origin, with lower processes being earlier and
(therefore) simpler and higher processes being those that build on these
and enable more complex and powerful cognitive achievements. This
notion goes back to T. H. Huxley and his ‘doctrine of continuity’, along
with the work of C. Lloyd Morgan on the varieties of animal intelligence
and ultimately to Darwin himself, who in The Descent of Man posited
‘numberless gradations’ in the mental powers of non-human animals.4
However, this proposal faces several objections. For one thing, even
structures that make their first appearance early in a species’ history are
subject to continual modification. This is true even for the visual system,
which shares many commonalities across existing primate lineages;
human area V1 exhibits distinctive specializations in laminar organi-
zation and neuronal phenotypes and spacing, and area V3A exhibits
greater motion sensitivity than does its macaque homologue (Preuss
2011). If it is the current species-typical form of the capacity that counts,
these too may have a relatively short evolutionary history. As a species
undergoes new phenotypic modifications, these systems may adjust as
well; as new capacities are added, they modify the old ones in various
ways. The addition of language areas to the brain, for example, does not
leave the underlying architecture unchanged: it brings widespread changes in
connectivity within and among areas that have existing primate homo-
logues (Barrett 2012). Even more surprisingly, bipedal walking seems to
recruit areas in and around the primary visual cortex independently of
whether visual input is being processed (Allen 2009, 89). So there may
be no stable temporal order of fixed systems to appeal to; what these
systems do changes as new components are added around them.
Moreover, some systems that do not fit the intuitive profile of higher
cognition may be comparatively recent, and some higher cognitive
structures may be ancient. According to some computational models,
the emergence of language may depend on the development of an expanded
short-term memory buffer for temporal sequences, but this expanded
buffer per se does not seem to be an especially ‘higher’ capacity (Elman
1993). For an example of the latter, Carruthers (2004) has argued that
even bees and other insects may literally have beliefs and desires, making
them positively ancient in evolutionary terms. Whether this is true or
not (see Camp 2009 for criticism), it doesn’t seem that there is any essen-
tial link between being on the list of intuitively higher cognitive states
and falling in any particular place on the evolutionary calendar.5
2 The functional components of higher cognition

Many of the existing accounts of how cognitive faculties can be ordered
and distinguished, then, are non-starters when examined in detail. But
considering their defects may point the way towards a more adequate
account. For present purposes, I take conceptualized thought as the
paradigmatic form of higher cognition. If we can get clear on the func-
tional role that concepts play in cognition, we will have a grasp of what
at least a large chunk of higher thought consists in. We can then proceed
to see whether the notion may be generalized in any way. In defence of
this strategy, I take it that concept possession is special: not any cogni-
tive system automatically counts as having concepts. Mature, normal
humans have concepts, as do very young humans that are developing
normally; and some animals may, although this is an open question.
The usefulness of having a general criterion for concept possession is
precisely that it allows us to make these sorts of developmental and
ethological distinctions.
I suggest that there are three properties that characterize the concep-
tual system. These properties are logically and causally independent, but
they collectively describe what concept possessors share. They are

1. representational abstraction
2. causal autonomy
3. free recombination

These are not ‘analytic’ claims or stipulative definitions, nor do they
necessarily express what we mean in everyday life by using the term
‘concept’. These are intended as part of a theoretical definition aimed at
carving out a functional and explanatory role that distinguishes concepts
as such from other kinds posited in our psychological theories.6

2.1 Representational abstraction


Take representational abstraction first. Concepts are mental represen-
tations, but there are many types of mental representations besides
concepts. There are perceptual representations in various sense modali-
ties and submodalities, and there are motor representations that control
how we are to move our bodies. There are representations of quantity
and number, at least some of which appear to be amodal or cross-modal.
There are various spatial, temporal and geometric representations; for
example, representations of where we are located in space, where we
have travelled recently, where landmarks in the environment are and
so on. And language users have a host of representations of the phono-
logical, morphological, syntactic and semantic properties of words and
sentences.
Concepts, like other representations, are individuated in part by what
they represent. A concept is always a concept of a certain entity (my
cat, the man in the silver jacket, the Eiffel tower, the number 3), prop-
erty (being furry, being a third son, being a quadruped, being irascible),
kind (cats, unicorns, gluons, Hilbert spaces), state (being true, being
dead, feeling elated) or event (exploding, kicking, kissing). Some of the
things that we can have concepts of are things that can also be repre-
sented by us in other ways. We can perceive red, furry things and we
can think about redness and furriness. These properties are represented
both in perception and conceptually. The property of being a triple may
be perceptually detectable and manipulable by our faculty of numer-
ical cognition, but we can also entertain the concept of the number
3. Similarly, we learn to detect and represent nouns long before we
learn the concept of a noun. The mind’s systems are representationally
redundant in this way.
But many things cannot be represented by our dedicated systems for
perception, action, numerical cognition, language and the like. For
instance, Hilbert spaces, being mathematical objects, have no percep-
tual manifestations and surely exceed our number faculty’s powers.
Whether something is an allusion to Dante’s Inferno is not something
that the language faculty is capable of representing, and yet we can,
with training, come to recognize such allusions. The same goes for social
roles, such as the ancient Greek notion of being an erastes or eromenos or
the contemporary relation of being a ‘friend with benefits’; for aesthetic
qualities, like being an example of minimalist composition or being a
sonnet; for material kinds, such as a polyester-rayon blend fabric; for
theoretical kinds in various sciences, such as extrasolar planets, ribos-
omes and prions; and so on. We can represent and think about all of
these types of things, but the capacity to do so does not seem proprietary
to any cognitive system, particularly not our sensorimotor systems.
To say that concepts are abstract representations, then, is to say the
following. First, we are capable of having concepts of things that we
cannot represent using any of our other representational systems. Our
concepts can transcend the representational resources of these systems.
As many of the above examples indicate, this can happen where the
categories we are representing are ones that simply have no perceivable
or manipulable manifestations at all and hence cannot be directly repre-
sented by sensorimotor systems.
Second, not only can we exceed the representational power of these
systems; we can cross-classify the world relative to the distinctions made
by these systems. Where perception groups objects together (e.g., by
their shape, size, colour, motion), we can distinguish them in concep-
tualized thought. Two kinds of berries look similar but are actually
distinct; jadeite and nephrite are similar in their perceivable traits but
differ in molecular composition. And where perception groups objects
separately, we can bring them under a common concept; hence we
recognize that the caterpillar and the butterfly, appearances aside, are
one and the same.
Third, we are also at least sometimes capable of having concepts of
the very same categories that are represented by these other cognitive
systems. However, our concepts need not represent them in the same
way as these systems do. Hence there can be concepts of perceivable
objects that do not represent them in a perceptual way. This is clear from
the fact that we can think about entities without knowing their appear-
ance, and we can perceive their appearances without knowing what they
are. Concepts may re-encode categories that are represented elsewhere in
the mind, and some of these re-encodings involve representing the same
category but discarding some of the information carried by earlier forms
of representation (e.g., fine-grained information about appearances).
To summarize these points, we can say that a system S1 has represen-
tational resources that are more abstract than S2 when it can represent
categories that S2 cannot represent given its resources, when it can cross-
classify the categories represented by S2 and when it can represent cate-
gories that S2 can but in an informationally reduced fashion. Abstraction
is thus not only a complex property, having several, possibly separable
components; it is also defined relationally between pairs of systems.
Concepts are abstract relative to our perceptual systems and also with
respect to many intermediate non-conceptual processors. So the first
function of conceptualized thought is to enable the creature that has it
to transcend the ways of representing the world that its other cognitive
systems give it.
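The three components of the relational definition can be displayed schematically. In this crude sketch each system is modeled as a set of representable categories plus a grouping of a shared domain; all names and groupings are illustrative, and the three marks are reported separately since they are possibly separable components:

```python
# Each system is modeled, very crudely, as a set of representable
# categories plus a grouping it imposes on a shared domain.
def abstraction_marks(s1_cats, s2_cats, s1_groups, s2_groups):
    return {
        "transcends": bool(s1_cats - s2_cats),       # represents what S2 cannot
        "cross_classifies": s1_groups != s2_groups,  # carves the domain differently
        "re_encodes": bool(s1_cats & s2_cats),       # shared categories that S1 may
                                                     # represent in reduced form
    }

# Perception groups jadeite with nephrite by appearance; conceptual
# thought separates them and also represents imperceptibles.
marks = abstraction_marks(
    {"jadeite", "nephrite", "Hilbert space"}, {"jadeite", "nephrite"},
    [{"jadeite"}, {"nephrite"}], [{"jadeite", "nephrite"}])
print(marks)
```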

2.2 Causal autonomy


Second, concepts are causally autonomous. This means at least two
things. First, they are capable of being tokened without the presence of
their referents. Crudely, we can think about banana cream pies without
being hit in the face with one. This capacity comes in several strengths.
In the weak form we can think about the referent where it exists but
is simply not perceptually available at the moment. There are no pies
here, but I can wish for one and wonder how to make it. In the stronger
form, we can entertain concepts of things even despite the fact that
their referents do not and perhaps cannot exist. Unicorns, perfect circles
and gods do not exist, hence our concepts of these things are necessarily
causally isolated from them. Even so, they have been the subjects of an
enormous amount of cogitation. Concepts enable our representational
powers to persist across absences.
Second, even when the referent of a concept is causally present and
impinging on the senses, the way in which we represent and reason
about it is in principle independent of our ongoing interactions with it
and the world. We may decide to think about the cat that we are looking
at or may decide not to. Even if we do think about it, the way we are
thinking about it may depart from the way we perceive it to be (since
appearances can be deceiving) or believe it to be (if we are reasoning
counterfactually).
What these two characteristics show is that the conceptual system
is to some degree causally autonomous from how the world affects us
perceptually. This idea has been expressed by saying that concepts are
under endogenous or organismic control (Prinz 2002). This autonomy
may be automatic, or it may be intentionally engaged and directed. For
an example of automatic or unintentional disengagement, consider the
phenomenon known as ‘mind wandering’, in which cognition proceeds
according to its own internally driven schedule despite what the crea-
ture is perceiving. When there is no particular goal being pursued and
attention is not captured by any specific object, trains of thought stop
and start based only on our own idiosyncratic associations, interests,
dreams, memories and needs. These also intrude into otherwise organ-
ized and focused cognitive processes, depending on one’s ability to
inhibit them. This is an uncontrolled form of causal autonomy.
More sophisticated forms of causal autonomy involve the creature’s
being able to direct its own thoughts; for example, to think about quad-
ratic equations rather than pie because that is what the creature wants
to do. Deciding to think about one thing rather than another is a form
of intentional mental action. Hence it is one typical feature of higher
cognitive processes that they can be under intentional control or, more
broadly, can be subject to metacognitive governance. Metacognition
involves being able to represent and direct the course of our psycho-
logical states – a form of causal autonomy.
The capacity to deal with absences through causally autonomous
cognition involves ‘detached’ representations (Gärdenfors 1996; Sterelny
2003), which can be manipulated independently of the creature’s
present environment. Information about particulars needs to be stored
and retrieved for future use: I need to be able to locate my car keys in the
morning, birds need to be able to locate caches of stored food in winter,
and so on. Information about general categories (ways to get a cab in
Chicago, kinds of berries that are edible in these woods) also needs to
be accessible. Going beyond merely reacting to what the present envi-
ronment contains requires planning for situations that don’t now exist
and may never exist. Planning, in turn, requires the ability to entertain
possible scenarios and reason about how to either make them the case or
not. And this requires bringing to bear knowledge of the particular and
general facts about the world beyond what is immediately present.
This sort of autonomy also shows up in hypothetical reasoning of
various kinds. Hypotheses are conjectures concerning what might be
the case. While some are straightforward generalizations of experience,
others (e.g., abductive inferences) are ampliative leaps that are not deter-
mined by the perceptual evidence accumulated thus far. Long-range
planning (where to find food and shelter when winter comes or what to
do in order to get a Ph.D.) demands an especially high degree of causal
autonomy. Wherever the goal to be achieved is sufficiently far ahead in
time or requires sufficiently many intervening stages, the only way to
reliably guide oneself to it is by disengaging from present concerns.
Causally autonomous cognition is closely connected with counter-
factual thinking, though they do not precisely come to the same thing.
Being able to entertain thoughts that are not causally determined by
what is present is part of being able to entertain thoughts of what is not
the case. Proper counterfactual cognition requires more than this, of
course: the creature needs to be aware (in some sense) that the scenario
being entertained is one that isn’t the case; otherwise there would be
nothing stopping it from acting as if it were. Hence the close relationship
between the sort of supposing that goes on in counterfactual thinking
and imagining, pretending and pretence more generally. These capacities,
despite their differences, derive from the same functional template;
namely, the causal autonomy of cognition, wherein representations
are manipulated in a way that is independent of ongoing sensorimotor
processing and which is also marked as explicitly non-factive.
The second function of conceptualized thought, then, is to be avail-
able for deployment and manipulation in ways that are driven by the
internal condition of the organism and not by the environment or its
impact on the organism, where this may be guided by practical ends
(as in planning) or by theoretical ends (as in reasoning and generating
explanations).
2.3 Free recombination


Third, concepts are capable of free recombination. By this I mean that
they obey Evans’s Generality Constraint (GC) or something much like
it. In Evans’s terms, GC amounts to the following: if a creature can think
‘a is F’ and can also entertain thoughts about G, then it can think that
‘a is G’; and if it can also entertain thoughts about b, then it can think
that ‘b is F’. If you have available an individual concept and a predicate
concept and are capable of combining them, then you are also capable
of exercising that combinatorial ability with respect to all of the other
individual and predicate concepts that you have. A child who can think
thoughts about the individuals Daddy, Mommy and Kitty and who can
think ‘Kitty is mean’ and ‘Mommy is nice’ can also think ‘Daddy is mean’
and ‘Kitty is nice’, inter alia. Conceptualized thought is general insofar
as it allows this sort of free recombination regardless of the content of
the concepts involved. Put a slightly different way, once the possible
forms of combination are fixed, there are no restrictions on the sort of
information that can be accessed and combined by concept possessors.
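The closure that GC imposes can be displayed as a toy computation: given the combinatorial ability, the thinkable atomic thoughts are just the Cartesian product of the available individual and predicate concepts (the example concepts are the ones from the text):

```python
from itertools import product

# If a thinker can combine any individual concept with any predicate
# concept, the thinkable atomic thoughts are the full Cartesian product.
individuals = {"Daddy", "Mommy", "Kitty"}
predicates = {"is mean", "is nice"}

thoughts = {f"{a} {f}" for a, f in product(individuals, predicates)}

# A child who can think 'Kitty is mean' and 'Mommy is nice' can,
# by the constraint, also think 'Daddy is mean' and 'Kitty is nice'.
print("Daddy is mean" in thoughts and "Kitty is nice" in thoughts)  # True
print(len(thoughts))  # 6 thoughts from 3 + 2 concepts
```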
As stated, GC applies only to ‘atomic’ thoughts, but it is easily gener-
alized. For instance, where P, Q and R stand for atomic thoughts, if a
creature can think ‘P and Q’, then it can also think ‘P and R’. And where
it can think ‘Fs are Gs’ and can also think about Hs, then it can think ‘Fs
are Hs’. These category-combining thoughts make it clear that a signifi-
cant function of the Generality Constraint construed as being about free
recombination is that it allows us to think these sorts of potentially cross-
domain thoughts.7 If we can think that ‘Sophie is a cat’ (a biological clas-
sification) and that ‘cats are prohibited in the building’ (an expression of
a social norm), then we can think ‘Sophie is prohibited in the building’
(an application of a norm to an individual).
Being able to integrate information across disparate domains in this
way is central to the ability to draw productive inferences and reason
flexibly about the world, which consists of an overlapping mosaic of
properties drawn from indefinitely many interrelated realms. Of course,
GC needs to be put together with sufficiently powerful inferential appa-
ratus in order to generate these conclusions. From ‘Fs are Gs’ and ‘Gs are
Hs’, the conclusion that ‘Fs are Hs’ can be drawn only if one is equipped
to draw the relevant inference. One needs access to a rule implementing
transitive categorical inference. By the same token, causal inferences
and other forms of non-deductive reasoning also require premises that
bridge representational domains. Having an overall representational
system satisfying GC is necessary for this.
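A minimal sketch of such a rule for transitive categorical inference, with illustrative categories:

```python
# Close a set of stored generalizations, read as (F, G) for
# 'Fs are Gs', under the transitive inference rule.
def transitive_closure(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (f, g) in list(derived):
            for (g2, h) in list(derived):
                if g == g2 and (f, h) not in derived:
                    derived.add((f, h))
                    changed = True
    return derived

facts = {("cats", "mammals"), ("mammals", "animals")}
print(("cats", "animals") in transitive_closure(facts))  # True
```

Note that the stored generalizations alone do not yield the conclusion; the closure operation is the separate inferential apparatus the text says must be added.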
Saying that concepts satisfy the Generality Constraint is another way
of saying that conceptual representations are to a certain extent context-
independent. That is, they are capable of being deployed in connection
with any number of different concepts for an open-ended number of
purposes for the satisfaction of indefinitely many tasks and goals, all
under the control of a wide array of processes. For example, presumably
most creatures are capable of reasoning about mates and possible mating
strategies. And at least some employ a theory of mind in achieving their
mating ends – they operate with a range of mentalistic construals of the
thoughts, desires and emotions of their conspecifics. And finally, some
are capable of thinking about quantity in general, as well as particular
numbers such as 2, 3 and so on. But only by putting these capacities
together could a creature arrive at the baroque combinations of parallel
romantic entanglements involved in the average daytime soap opera,
in which one needs to manage the (often multiple) romantic affairs of
several protagonists engaged in complex webs of deception and manip-
ulation. Putting together the separate domains of mating, minds and
math requires Generality.
The notion of Generality as deployed by Evans ties it closely with the
notion of fully propositional thought. GC can be seen as a closure condi-
tion on the domain of propositions that a system can entertain. But we
can envisage forms of the constraint that do not require propositional
representation. All the basic notion of Generality needs is that repre-
sentations have some form of componential structure. For a non-prop-
ositional example, consider a Tinkertoy model of chemical structure,
in which spheres represent various types of atoms and sticks represent
chemical bonds. The size and shape of a stick determines the type of
sphere it can be combined with, and the whole set of possible arrange-
ments can be generated by a set of rules not unlike a grammar. Yet while
these chemical models represent possible objects rather than proposi-
tions, they conform to Generality in our extended sense: where a and b
are spheres having the same configuration of holes (similar size, shape,
etc.), they are intersubstitutable in a model in the same way that indi-
vidual concepts are intersubstitutable in thoughts. Any process that can
manipulate and transform a model containing a should be able to do
the same for one containing b.
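The intersubstitutability point can be modeled directly: treat each sphere as characterized by its configuration of holes, with sameness of configuration licensing substitution. The encoding is an invented simplification:

```python
from dataclasses import dataclass

# A sphere is characterized by the configuration of holes it bears;
# sameness of configuration licenses substitution in any model, just
# as individual concepts are intersubstitutable in thoughts.
@dataclass(frozen=True)
class Sphere:
    name: str
    holes: frozenset  # the stick types this sphere accepts

def intersubstitutable(x: Sphere, y: Sphere) -> bool:
    return x.holes == y.holes

a = Sphere("a", frozenset({"short", "long"}))
b = Sphere("b", frozenset({"short", "long"}))
c = Sphere("c", frozenset({"short"}))
print(intersubstitutable(a, b))  # True: same configuration of holes
print(intersubstitutable(a, c))  # False
```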
A final point: Generality is not to be confused with recursion. Many
have pinpointed the emergence of recursion as the decisive transforma-
tion that enabled human thought to transcend its evolutionary ante-
cedents.8 The power of recursion lies in its giving access to an infinite
number of new representations from a finite base. All suprasentential
inferences and many subsentential ones require structures that display at
least some recursively embedded structure (e.g., what the propositional
connectives deliver). So systems that display free recombination become
truly useful only once simple recursion is available. It goes without
saying that recursive representational systems obey Generality at least
in their recursive parts, since, for example, any propositional representa-
tions can be intersubstituted for p and q in the material conditional if p
then q. This does not strictly require imposing Generality on the atomic
propositions themselves, although dissociating these two properties
would be strange. For present purposes, we can regard recursive systems
as a special case of Generality. Put differently, free recombination is a
more cognitively fundamental property than productivity.
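The relation between recursion and Generality can be illustrated with a miniature recursive representational system; the encoding is illustrative only:

```python
from dataclasses import dataclass
from typing import Union

# Atomic propositions plus a single connective already yield
# unboundedly many well-formed representations, and any proposition
# can fill either slot of the conditional (Generality in the
# recursive part of the system).
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class If:
    antecedent: "Prop"
    consequent: "Prop"

Prop = Union[Atom, If]

def depth(x: Prop) -> int:
    """Depth of recursive embedding in a representation."""
    if isinstance(x, Atom):
        return 0
    return 1 + max(depth(x.antecedent), depth(x.consequent))

p, q = Atom("p"), Atom("q")
nested = If(If(p, q), p)  # 'if (if p then q) then p'
print(depth(nested))  # 2
```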
So the third function of conceptualized thought is to enable the free
and context-independent combination of information across domains
and more broadly to facilitate the construction of complex representa-
tions that can be the subjects of a range of inferential processes.

3 Higher cognition as a psychological kind

These three properties constitute the core functions of the human
conceptual system. As these properties are graded, there will be cases of
marginal concept possession involving creatures that have these qual-
ities only to a certain degree; but more importantly, these properties
pick out functional roles that are logically independent and empirically
dissociable as well.
Conceptualized thought, perhaps uniquely, occupies the apex defined
by joint possession of all three properties, with higher faculties as
conceived of here occupying a graded series on which some faculties
may be higher in one respect without being so in another. This fractional
approach to describing higher faculties also avoids the problems posed
by earlier single-criterion accounts.9 It overcomes the Unity Problem by
showing that higher faculties share core similarities in their causal role.
It overcomes the Richness Problem by providing a set of possible distinc-
tions among higher faculties, each of which is itself graded in various
ways, thus generating a potentially elaborate taxonomy. It avoids the
Collapse Problem by adding substantially new functionality to the cogni-
tive system at each level of the hierarchy. Finally, it sustains Neutrality
by maintaining silence about the underlying systems-level decomposi-
tion into modular versus non-modular components, as well as on the
precise neural substrates and evolutionary origins of higher faculties.
256 Daniel A. Weiskopf

On this picture we can see both the hierarchy of higher cognitive
faculties generally and conceptualized thought in particular as consti-
tuting psychological kinds. A psychological kind, in the present sense, is
a type of structure that plays an important unifying or explanatory role
in a sufficiently wide range of cognitive and behavioural phenomena
(Weiskopf 2011). The taxonomy itself meets this standard insofar as
it lays out three well-defined and scientifically interesting dimensions
of variance and provides a framework for guiding investigations into
cognitive development and evolution, as well as in comparative cogni-
tive ethology.
First, these capacities play a role in developmental explanations.
Developmental psychologists are keenly interested in characterizing
the nature and timing of the so-called ‘perceptual-to-conceptual shift’
(Rakison 2005). This refers to the period when infants go from being able
to perceptually track complex categories to having genuine concepts of
them. Even very young infants can make perceptual discriminations
among relatively high-level categories, such as animals, vehicles and
furniture. But they are initially able to do so only in passive tasks such
as perceptual habituation. It is widely agreed that this evidence by itself
is too weak to establish that they have the relevant concepts. But there
is disagreement over what more is needed.
On this account, two things are required: the ability to carry out at
least somewhat sustained offline processing using these categories and
the ability to combine these categories with others. Without some such
account of the functional role of higher thought, we cannot frame
experiments or test hypotheses aimed at discovering when such thought
comes online. With such an account, however, we can do so – and,
importantly, in a way that delivers knowledge about infants’ capacities
but does not require detailed knowledge about the precise form of the
underlying systems and representations they employ. Relatively high-
level functional categories often have this feature: we have some idea of
what is involved in their presence even if we do not know just how, in
detail, they are implemented.
Second, these capacities may have distinctive evolutionary origins.
In one of the most careful and sustained discussions of this question,
Sterelny (2003) proposes several different types of environmental
structures that may impose selection pressures towards the develop-
ment of abstract representations that can be decoupled from ongoing
input. Importantly, these pressures have somewhat different sources.
The former depend on whether the categories necessary for a creature’s
survival are ‘transparent’ to its sensory systems (20–26), while the latter
The Architecture of Higher Thought 257

depend on the kind of cognitive tracking that emerges mainly among
social animals with an interest in reading and manipulating each other’s
thoughts (76). It remains to be investigated what sorts of pressures, if any,
facilitate the development of freely combinatorial representational vehi-
cles. Generally, however, the fact that a capacity is potentially subject
to such selection pressures indicates not only that it can be explained
but also that its presence can in turn explain the success facts about an
organism in its environment.
Third, note that these capacities are polymorphic. By this I mean that
they are general types of cognitive capacities – functional templates –
which may be instantiated in cognitive systems in many different ways.
There are many different low-level organizations of cognitive compo-
nents that can satisfy the requirements for various forms of higher cogni-
tion. These components may differ in their representational resources,
processing characteristics, overall location in the architecture and so
on, so that creatures that have similarly sophisticated capacities may
still differ from one another in their precise internal organization (their
cognitive ‘wiring diagram’).
However, we may nevertheless want to state generalizations across
creatures having different forms of these polymorphic capacities.10
To take an ethological example, there may be little reason to suppose
that higher cognition in non-human animals precisely resembles that
in humans. Different sensory capacities, needs and environments may
interact to lead them to make radically different conceptual divisions in
their worlds, which in turn leads to higher capacities that deal in distinc-
tive types of rules and representations. Indeed, for all we know, there
may be substantial diversity in these capacities even among humans
considered developmentally and cross-culturally.
Given this diversity, we may still want to co-classify these beings that
possess differently structured forms of these polymorphic capacities.
While they may not reason about or represent the world in precisely the
same way that we do, they are undeniably doing something of the same
broad type. In order to isolate this capacity, trace its origin and compare
its structure and content to our own, we need a way of describing it that
is a level of abstraction removed from the precise details of how it proc-
esses information. I would emphasize that there is nothing unusual in
this: precisely the same questions would arise if we were investigating
alien neurobiology and attempting to determine what sorts of neurons,
neurotransmitters and synaptic junctions they possessed. Without an
appropriate functional taxonomy in hand, such comparative investiga-
tions have no plausible starting point.
If higher cognition is a kind, then the division between higher and
lower systems should capture categories that are structurally real and
explanatorily fruitful. Taxonomic schemes generally pay their way in
science by unifying disparate phenomena, by helping to frame new
targets for explanation and by suggesting novel empirical hypoth-
eses. This brief sketch should suggest that the hierarchy of thought as
conceived of here can do all three of these things.

4 Conclusions

One might, not unreasonably, have supposed that the very idea of
dividing cognitive capacities into higher and lower reflects a kind of
philosophical atavism. Neither the casual way in which the distinc-
tion is made by psychologists nor the poverty of standard attempts to
unpack it inspires hope. However, it turns out that not only can we
make a rather finely structured set of distinctions that capture many
of the relevant phenomena, but these distinctions also seem to correspond
to genuinely explanatory psychological categories. Seeing organisms
through this functional lens, then, allows us to gain empirical purchase
on significant developmental, evolutionary and ethological questions
concerning their cognitive structure.11

Notes
1. For discussion of the Aristotelean divisions and related ancient notions, see
Danziger (1997, 21–35).
2. This point is made by Stich (1978), who attempts to distinguish doxastic
states and processes from subdoxastic ones on the grounds that the former
are inferentially integrated and conscious. Inferential integration, although
not consciousness, appears as one of the criteria for higher cognition on the
account developed here.
3. So Benton (1991) comments that historically the prefrontal region was
thought to provide ‘the neural substrate of complex mental processes such as
abstract reasoning, foresight, planning capacity, self-awareness, empathy, and
the elaboration and modulation of emotional reactions’ (3). For the history of
ideas about ‘association cortex’ more generally and about the localization of
intellectual function, see Finger (1994), chs 21 and 22.
4. Danziger (1997, 66–84) provides an excellent survey of the emergence of the
modern term ‘intelligence’ and the debates concerning its proper application
to animal cognition and behaviour.
5. For a reconstruction of the phylogeny of human cognition that depicts it as
emerging from a graded series of broadly holistic modifications to the brain
occurring at many levels simultaneously, see Sherwood, Subiaul and Zawidzki
(2008).
6. On the picture of functional kinds being employed here, see Weiskopf
(2011).
7. I have emphasized the role of GC in allowing cross-domain thought and
inference in Weiskopf (2010), where I also argued that massively modular
minds cannot conform to GC and hence cannot be concept possessors in
this sense. Here one might wonder whether requiring free recombination
involves a violation of my neutrality principle; i.e., whether this is a way of
ruling out massively modular accounts of higher cognition by fiat. I think
not, since it needs to be argued that massive modularity violates GC; it is
not simply built into the view that it does. Free recombination picks out
a functional property that is independent of the organization of cognitive
architecture and its realization. Whether a particular architecture can achieve
Generality is an open question, and so no questions are being begged vis-
à-vis neutrality.
8. See Corballis (2011) for a defence of recursion as central to explaining human
cognitive uniqueness. There have also been questions about just how recur-
sive human thought is – particularly as expressed in language (Evans and
Levinson 2009; Everett 2005).
9. Several existing accounts of higher cognition are similar to the present one,
which attempts to build on their insights. Allen and Hauser (1991) argued
that higher cognition in animals involves being able to represent abstract
categories, i.e., those that have highly various perceptual manifestations.
This is close to the representational abstractness criterion proposed here.
Christensen and Hooker (2000) give a highly detailed account of how various
forms of causal autonomy might be structured and related to lower-level
capacities for maintaining the integrity of living forms. Camp (2009) argues
that both Generality and stimulus independence are required for conceptual
thought but does not separately mention representational abstractness. And
Sterelny (2003) proposes that robust tracking using decoupled representations
is essential for belief-like cognition; this corresponds to the combination of
causal autonomy and representational abstractness. Another extraordinarily
detailed proposal concerning how these two might be fused comes in Amati
and Shallice’s (2007) theory of abstract projectuality. On the present view, all
of these make a distinctive and necessary contribution to higher cognition
and conceptual thought proper, though none are the whole story taken on
their own.
10. This dovetails with the earlier emphasis on neutrality: in merely stating the
functional requirements on higher cognition, we do not want to presuppose
the existence of one or another type of underlying neural or systems-level
organization. That does not mean, however, that certain kinds of internal
organization won’t be ruled out by these requirements. To take an example,
subsumption-style architectures, such as those used in certain types of mobile
robots, may be incapable of satisfying the requirements on higher cognition,
even though they possess robust behavioural capacities. Since this archi-
tectural restriction is discovered rather than stipulated, it does not violate
neutrality.
11. Thanks to those who commented on an earlier version of this chapter during
the online conference: Mark Sprevak, Bryce Huebner, Justin Fisher, Robert
O’Shaughnessy and Georg Theiner. Their comments and discussion helped
me to think through these issues and clarify arguments at several points.
Thanks also to the editors, Mark Sprevak and Jesper Kallestrup, for their kind
invitation to participate in this volume.

References
Allen, C., and Hauser, M. (1991). Concept attribution in nonhuman animals:
Theoretical and methodological problems in ascribing complex mental proc-
esses. Philosophy of Science 58: 221–240.
Allen, J. S. (2009). The Lives of the Brain. Cambridge, MA: Harvard University
Press.
Amati, D., and Shallice, T. (2007). On the emergence of modern humans. Cognition
103: 358–385.
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of
the brain. Behavioral and Brain Sciences 33: 245–313.
Barrett, H. C. (2012). A hierarchical model of the evolution of human brain special-
izations. Proceedings of the National Academy of Sciences 109: 10733–10740.
Beck, J. (2013). The generality constraint and the structure of thought. Mind 121:
563–600.
Benton, A. L. (1991). The prefrontal region: Its early history. In H. Levin, H.
Eisenberg and A. Benton (eds), Frontal Lobe Function and Dysfunction (3–12).
Oxford: Oxford University Press.
Camp, E. (2009). Putting thoughts to work: Concepts, systematicity, and stimu-
lus-independence. Philosophy and Phenomenological Research 78: 275–311.
Carey, S. (2009). The Origin of Concepts. Oxford: Oxford University Press.
Carruthers, P. (2004). On being simple-minded. American Philosophical Quarterly
41: 205–222.
——. (2006). The Architecture of the Mind. Oxford: Oxford University Press.
Christensen, W. D., and Hooker, C. A. (2000). An interactivist-constructivist
approach to intelligence: Self-directed anticipative learning. Philosophical
Psychology 13: 7–45.
Coltheart, M. (1999). Modularity and cognition. Trends in Cognitive Science 3:
115–120.
Corballis, M. C. (2011). The Recursive Mind: The Origins of Human Language,
Thought, and Civilization. Princeton, NJ: Princeton University Press.
Danziger, K. (1997). Naming the Mind: How Psychology Found its Language. London:
Sage.
Elman, J. L. (1993). Learning and development in neural networks: The impor-
tance of starting small. Cognition 48: 71–99.
Evans, N., and Levinson, S. (2009). The myth of language universals: Language
diversity and its importance for cognitive science. Behavioral and Brain Sciences
32: 429–492.
Everett, D. (2005). Cultural constraints on grammar and cognition in Pirahã.
Current Anthropology 46: 621–646.
Finger, S. (1994). Origins of Neuroscience: A History of Explorations into Brain
Function. Oxford: Oxford University Press.
Gärdenfors, P. (1996). Cued and detached representations in animal cognition.
Behavioral Processes 35: 263–273.
Ghazanfar, A. A., and Schroeder, C. E. (2006). Is neocortex essentially multisen-
sory? Trends in Cognitive Science 10: 278–285.
Hatfield, G. (1997). Wundt and psychology as science: Disciplinary transforma-
tions. Perspectives on Science 5: 349–382.
Knill, D. C., and Richards, W. (eds) (1996). Perception as Bayesian Inference.
Cambridge: Cambridge University Press.
Kveraga, K., Ghuman, A. S., and Bar, M. (2007). Top-down predictions in the
cognitive brain. Brain and Cognition 65: 145–168.
Neisser, U. (1967). Cognitive Psychology. New York: Meredith.
Preuss, T. M. (2011). The human brain: Rewired and running hot. Annals of the
New York Academy of Science 1225: 182–191.
Prinz, J. (2002). Furnishing the Mind. Cambridge, MA: MIT Press.
Rakison, D. H. (2005). The perceptual to conceptual shift in infancy and early
childhood: A surface or deep distinction? In L. Gershkoff-Stowe and D. H.
Rakison (eds), Building Object Categories in Developmental Time, 131–158.
Mahwah, NJ: Erlbaum.
Rock, I. (1983). The Logic of Perception. Cambridge, MA: MIT Press.
Schroeder, C. E., and Foxe, J. J. (2004). Multisensory convergence in early cortical
processing. In G. A. Calvert, C. Spence and B. E. Stein (eds), The Handbook of
Multisensory Processes, 295–310. Cambridge, MA: MIT Press.
Sherwood, C. C., Subiaul, F., and Zawidzki, T. W. (2008). A natural history of the
human mind: Tracing evolutionary changes in brain and cognition. Journal of
Anatomy 212: 426–454.
Sterelny, K. (2003). Thought in a Hostile World. Malden, MA: Blackwell.
Stich, S. (1978). Beliefs and subdoxastic states. Philosophy of Science 45: 499–518.
van Gelder, T. (1993). Is cognition categorization? Psychology of Learning and
Motivation 29: 469–494.
Weiskopf, D. A. (2010). Concepts and the modularity of thought. Dialectica 64:
107–130.
——. (2011). The functional unity of special science kinds. British Journal for the
Philosophy of Science 62: 233–258.
13
Significance Testing in Neuroimagery
Edouard Machery

Progress in psychology and neuroscience has often influenced the
philosophy of mind. Fodor’s (1968) defence of intellectualist expla-
nations of behavioural and cognitive capacities was inspired by the
explanatory role of flow charts and computer simulations in the cogni-
tive psychology of the 1960s, and his (1974) defence of non-reductionist
physicalism rested in part on the promises of computer models in cogni-
tive psychology. As he put it, ‘the classical formulation of the unity of
science is at the mercy of progress in the field of computer simulation.
[ ... ] The unity of science was intended to be an empirical hypothesis,
defeasible by possible scientific findings. But no one had it in mind that
it should be defeated by Newell, Shaw and Simon’ (Fodor 1974, 106).
The development of connectionist models of cognition in the 1980s led
Ramsey, Stich and Garon (1990) to argue for the elimination of proposi-
tional attitudes. Contemporary philosophers of mind frequently appeal
to findings obtained by brain-imagery techniques (primarily, fMRI). To
give only two examples, Byrne (2011) appeals to fMRI results to substan-
tiate claims about the difference between inner and outer speech, which
plays an important role in his theory of self-knowledge, while Block’s
recent discussion of phenomenal consciousness (2007) appeals exten-
sively to results in neuroimagery.
Unfortunately, philosophers of mind are sometimes insufficiently
circumspect in embracing the most recent advances in psychology and
neuroscience and in drawing philosophical lessons from them. Thirty
years after ‘Special sciences (or: The disunity of science as a working
hypothesis)’, Fodor’s appeal to the computer models developed by
cognitive psychologists rings somewhat hollow in light of the rela-
tive fall out of favour of such models in contemporary psychology and
cognitive neuroscience. And there is no reason to limit this concern to
yesterday’s philosophy of mind. Contemporary philosophers of mind
too rarely bring a sceptical attitude toward the recent scientific theories,
methods and results in psychology and in neuroscience, including brain
imagery in cognitive neuroscience. Their naivety is particularly regret-
table considering the alluring nature of the coloured pictures obtained
by processing brain imagery data (McCabe and Castel 2008; Weisberg
et al. 2008; Keehner and Fischer 2011).1
The goal of this chapter is to bring a sceptical attitude to bear on the
methods of neuroimagery. More precisely, I examine the most common
way of testing a cognitive-neuroscientific hypothesis about the func-
tion of a brain area or network: derive a statistical hypothesis from it,
and test it by means of null hypothesis significance testing. To do so, I
use Colin Klein’s (2010) critical discussion (‘Images are not the evidence
in neuroimaging’) as a foil. Drawing inspiration from Meehl’s (1967)
famous criticism of significance tests, Klein has argued that because of
the reliance of neuroimagery on null hypothesis significance testing,
fMRI data cannot provide evidence for or against functional hypotheses
about brain areas and networks.2 As he puts it (2010, 265), ‘[N]euroim-
ages present the results of null hypothesis significance tests performed
on fMRI data. Significance tests alone cannot provide evidence about the
functional structure of causally dense systems in the brain.’ If correct,
this criticism would be devastating for the field of neuroimagery, and it
would also undermine the frequent appeal to neuroimagery results by
philosophers of mind. But fear not! As I argue in this chapter, Klein’s
criticism fails because he misunderstands the way null hypothesis signif-
icance testing works in neuroimagery.3
I proceed as follows. In Section 1, I describe the most typical method of
hypothesis testing in neuroimagery before examining the substance of
Klein’s argument in Section 2.4 In Section 3, I clarify the critical feature
of null hypothesis significance testing that is misunderstood by Klein.
In Section 4, I apply the lessons of Section 3 to the use of significance
tests in neuroimagery.

1 Significance testing in neuroimagery

1.1 Hypothesis testing in neuroimagery


Neuroimagery is brought to bear on a diverse range of hypotheses,
including functional hypotheses about brain areas or networks – for
example, the hypothesis that the function of the fusiform face area is to
identify faces (Kanwisher, McDermott and Chun 1997) or the hypothesis
that mindreading is done by a network of areas that involves the right
temporo-parietal junction and the medial prefrontal cortex (e.g., Saxe
and Wexler 2005) – and partitioning hypotheses; namely, hypotheses
that two distinct tasks recruit two distinct psychological processes (for
discussion of this kind of cognitive-neuroscientific hypothesis, see
Machery 2012). Following Klein (2010), my concern in this chapter is
with functional hypotheses. For simplicity, I also focus on fMRI in what
follows, but the discussion in this chapter can be extended to other
neuroimagery methods, such as PET.
From a hypothesis about the function of a brain area or network,
cognitive neuroscientists derive a prediction about brain activation; for
instance, they may predict that a brain area or network (e.g., the right
temporo-parietal junction) will be more involved in a task recruiting
the relevant function (e.g., mindreading) than in a control task. More
precisely, they derive a statistical hypothesis about the value or time
course of the BOLD signal in this area in different conditions. Naturally,
the exact nature of this statistical hypothesis depends on the kind of
design used in the study and on the kind of analysis conducted by
neuroscientists.
In the 1990s, studies in neuroimagery often used a block-design
method. Participants complete an experimental task a specific number
of times or for a specific duration (first block), then complete a control
task a specific number of times or for a specific duration (second block),
then complete the first block again, and so on. The average value of the
BOLD signal in each block is compared, typically by means of a t-test,
for each voxel (in whole-brain studies) or across the voxels defining
a Region of Interest. The null hypothesis states that the average BOLD
signal is the same in the experimental and control tasks. If the p-value
associated with the computed t statistic is below the significance level in
a voxel or area, the null hypothesis is rejected for this voxel or area, and
(modulo a few complications)5 it is concluded that in it the average BOLD
signal differs between the two conditions. This difference is taken to be
evidence that the voxel or area is functionally involved in completing
the experimental task.
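The per-voxel logic of this block-design analysis can be sketched with a toy simulation in Python. The sketch is illustrative only: the data are synthetic, samples are treated as independent, and a real analysis would also have to handle the temporal autocorrelation of the BOLD signal and correct for the multiple comparisons performed across voxels.

```python
# Toy block-design analysis: for each voxel, test the null hypothesis that
# the average BOLD signal is the same in the experimental and control
# blocks, using an independent-samples t-test per voxel.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels = 1000
n_exp, n_ctl = 20, 20  # BOLD samples from experimental and control blocks

# Synthetic data: the first 50 voxels carry a task-related signal increase;
# the remaining 950 are noise only (the null hypothesis is true for them).
exp = rng.normal(0.0, 1.0, size=(n_voxels, n_exp))
ctl = rng.normal(0.0, 1.0, size=(n_voxels, n_ctl))
exp[:50] += 1.5

t, p = stats.ttest_ind(exp, ctl, axis=1)

alpha = 0.05
rejected = p < alpha  # null rejected: voxel counted as task-involved
print("hit rate among active voxels:", rejected[:50].mean())
print("false-positive rate among null voxels:", rejected[50:].mean())
```

The second printed rate hovers around the significance level, which is why whole-brain analyses must correct for the thousands of tests they perform.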
Cognitive neuroscientists have moved beyond this kind of statis-
tical analysis for more than a decade, often using instead the general
linear model.6 In a nutshell, cognitive neuroscientists develop a linear
model made of weighted regressors for the independent variables and
of various nuisance regressors (e.g., regressors for the movements of the
head), some of which are convolved with a hemodynamic function. The
model is fitted to the time course of the BOLD signal, and the values
of the weights of the regressors (called ‘beta weights’) are computed.
Statistical tests compare particular beta weights to 0, or more commonly,
they compare the values of two or more beta weights, depending on the
contrast of interest. The null hypothesis states for each voxel that the
beta weight of interest is either equal to 0 or, more commonly, that
the values of two or more beta weights do not differ. The null hypoth-
esis is rejected for a given voxel if the p-value of the relevant statistic is
below the significance level. If it is, it is concluded (modulo, again, a
few complications) that the BOLD signal in this voxel differs between two
or more conditions, typically an experimental and a control condition.
This is taken as evidence that the voxel is functionally involved in the
experimental condition.
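Under strong simplifying assumptions, the general linear model analysis just described can be sketched for a single voxel as follows. The single task regressor, the crude gamma-shaped stand-in for the canonical hemodynamic response function and the white-noise model are all invented for illustration, not taken from any actual pipeline.

```python
# Toy general linear model for one voxel: convolve a task boxcar with an
# HRF, fit the model by least squares, and test the task beta weight
# against 0 (the null hypothesis for this contrast).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 200                          # time points
task = (np.arange(n) // 20) % 2  # boxcar: alternating 20-scan blocks

t_hrf = np.arange(15.0)          # crude gamma-shaped HRF, peak near t = 5
hrf = t_hrf**5 * np.exp(-t_hrf)
hrf /= hrf.sum()

regressor = np.convolve(task, hrf)[:n]
X = np.column_stack([regressor, np.ones(n)])  # task regressor + intercept

# Synthetic BOLD time course: true task effect of 2.0 plus white noise.
y = 2.0 * regressor + rng.normal(0.0, 1.0, size=n)

beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = n - X.shape[1]
se = np.sqrt((res[0] / dof) * np.linalg.inv(X.T @ X)[0, 0])
t_stat = beta[0] / se
p = 2 * stats.t.sf(abs(t_stat), dof)  # two-sided p-value for beta = 0

print("estimated task beta:", beta[0], "p-value:", p)
```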

1.2 Null hypothesis significance testing


The previous section surveyed some influential methods used to gather
evidence for functional hypotheses. In all of them, the statistical hypoth-
eses that cognitive neuroscientists derive from functional hypotheses
are tested by means of null hypothesis significance testing. Significance
tests are a particular manner of testing statistical hypotheses (hypoth-
eses about the parameters of populations of data) on the basis of samples
(drawn from the relevant populations). In a significance test, the statis-
tical hypothesis (often called ‘the alternative hypothesis’) is typically
a range hypothesis: it states, for instance, that the value of a param-
eter (e.g., the population value of a beta weight in the general linear
model discussed above) is different from 0, without specifying its exact
value, or it states that the values of two parameters (e.g., the popula-
tion values of two beta weights or the average values of the BOLD signals
in two conditions) are different, without specifying by how much they
differ. Because the alternative hypotheses are range hypotheses, it is not
possible to determine how likely it is to obtain a statistic (a function of
the data in the sample) of a particular size or a larger one conditional
on the truth of the alternative hypothesis, and it is thus not possible to
determine whether the obtained data do or do not support the alterna-
tive hypothesis. For instance, it is not possible to determine whether the
value of the difference between two sample beta weights in the general
linear model supports the alternative hypothesis that the population
beta weights differ since the size of this latter difference is not speci-
fied; after all, even if these population beta weights are equal, the beta
weights computed on the basis of the data in the sample drawn from the
populations are likely to differ from 0 by some extent. To deal with this
issue, significance tests introduce a null hypothesis that contradicts the
alternative hypothesis and that specifies a particular value; for instance,
the null hypothesis could state that a beta weight is equal to 0 or that
two beta weights are equal. It is possible to compute how likely it is to
obtain a statistic of a particular size or a more extreme one if the null
hypothesis is true. If this probability is very low, the null hypothesis is
rejected and its contradictory, the alternative hypothesis, is accepted.
So significance tests bring data to bear on statistical hypotheses indi-
rectly: data are brought to bear on null hypotheses, and the alternative
hypotheses are accepted when and only when these null hypotheses are
rejected.
Because the statistical hypotheses derived from functional hypoth-
eses in cognitive neuroscience are range hypotheses – for instance, a
statistical hypothesis in the block design may specify that the average
value of the BOLD signal differs in the two blocks that are compared – the
computed statistics (average BOLD signal, correlation coefficient between
an independent variable and the BOLD signal, sample beta weights in the
general linear model, etc.) cannot be brought to bear directly on them.
However, they are brought to bear on null hypotheses (e.g., the hypoth-
esis that for all voxels or for a particular Region of Interest, the average
BOLD signal should be the same in two conditions or that the correlation
between an independent variable and the BOLD signal should be equal
to 0, etc.). One can compute a sampling distribution for the statistic of
interest conditional on the truth of the null hypothesis; that is, one can
compute how likely a statistic of a given size (e.g., t when one compares
the value of two average BOLD signals or when one compares the value of
two beta weights in the general linear model) or a larger one is if the null
hypothesis is true. If this probability is very small, one rejects the null
hypothesis; for instance, if, for a particular voxel, it is unlikely that one
would have obtained a difference between two sample beta weights or a
larger one if the null hypothesis (viz., the hypothesis stating that these
two population beta weights are identical) were true, one rejects the null
hypothesis for this voxel. Because the alternative hypothesis contradicts
the null hypothesis, one accepts the former when and only when one
rejects the latter. Acceptance of the alternative hypothesis provides
evidence for the functional hypothesis from which it is derived.
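This indirect logic — the data bear on the null hypothesis, and the alternative is accepted only when the null is rejected — can be made concrete with a small simulation. The sketch below approximates the sampling distribution of a difference between two sample beta weights under the null by permutation rather than by an analytic t distribution; the sample sizes and effect size are invented for illustration.

```python
# The alternative hypothesis ("the two population beta weights differ") is
# a range hypothesis, so no sampling distribution can be computed for it.
# The point null ("they are equal") does fix one: we ask how often a
# difference at least as extreme as the observed one would arise if the
# null were true, and reject the null when that probability is very small.
import numpy as np

rng = np.random.default_rng(2)

# Sample beta weights from two conditions (e.g., across 30 subjects).
b1 = rng.normal(0.8, 0.4, size=30)
b2 = rng.normal(0.0, 0.4, size=30)
observed = b1.mean() - b2.mean()

# Null sampling distribution by permutation: if the null is true, the
# condition labels are exchangeable.
pooled = np.concatenate([b1, b2])
null_diffs = np.empty(10_000)
for i in range(10_000):
    perm = rng.permutation(pooled)
    null_diffs[i] = perm[:30].mean() - perm[30:].mean()

p = np.mean(np.abs(null_diffs) >= abs(observed))  # two-sided p-value
reject_null = p < 0.05  # and only then is the alternative accepted
print("observed difference:", observed, "p-value:", p)
```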

2 Klein’s argument

In this section, I describe the substance of Klein’s argument, abstracting
away from some of the details of his formulation.

2.1 Step 1 of Klein’s argument


As I interpret it, Klein’s argument is a two-step argument. The first step
can be presented as follows:

Step 1
1. Any change induced in a variable of a causally dense system causes a
change in all the other variables.
2. The brain is a causally dense system.
3. An experimental task induces a change in the BOLD signal in the brain
areas and voxels functionally involved in completing this task.
4. Hence, whether or not area or set of voxels A is functionally involved
in completing task T, completing T induces a change in the BOLD
signal in A.
5. Hence, changes in the BOLD signal cannot support functional
hypotheses.

In order to cast light on this argument, I first need to explain what a
causally dense system is. A system is a set of variables that stand in causal
relations. If a system is represented in graph-theoretic terms, it is a set of
nodes connected by oriented edges. A causal relation between two varia-
bles is direct when and only when it is not mediated by another variable;
it is indirect when and only when it is not direct. When two variables
are indirectly related, they can be more or less apart, depending on the
number of other variables that mediate their causal relation. An indirect
causal relation is short if and only if it is mediated by few other variables.
A system is causally dense (in contrast to causally sparse) if and only if
most variables stand either in direct causal relations or in short indirect
causal relations to most variables. Thus, in a causally dense system, any
node can be reached from any other node in a small number of steps.
Modular systems are not causally dense even when all nodes can be
reached from all other nodes because many steps separate the nodes that
belong to different modules. They are causally sparse.
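The contrast can be illustrated with a small graph computation (the two eight-node graphs below are invented for illustration): in a causally dense graph every node reaches every other in one step, whereas a modular graph of the same size needs noticeably more steps on average because paths between modules must pass through a bridge.

```python
# Average shortest-path length for a dense graph (every node directly
# linked to every other) versus a modular one (two cliques joined by a
# single bridge), computed by breadth-first search.
from collections import deque

def avg_shortest_path(adj):
    """Mean shortest-path length over all ordered reachable node pairs."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

def clique(nodes):
    return {u: [v for v in nodes if v != u] for u in nodes}

dense = clique(range(8))                              # causally dense
sparse = {**clique(range(4)), **clique(range(4, 8))}  # two modules
sparse[0].append(4)                                   # single bridge
sparse[4].append(0)                                   # between modules

print(avg_shortest_path(dense))   # 1.0
print(avg_shortest_path(sparse))  # larger: cross-module paths are long
```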
Klein takes Premise 1 of Step 1 to be a direct consequence of the
nature of causally dense systems: on his view, any change in a variable
constitutive of a causally dense system results in changes in the vari-
ables it is directly and indirectly connected to. This is an oversimplifica-
tion, however.7 Causal density is a necessary but not sufficient condition
for the spread of causal activation throughout a system. The values of
the parameters of the causal relations (or arrows) between variables (or
nodes) also need to have the right values for a change in a node to result
in changes in the other nodes of a causally dense system. For the sake of
the argument, I overlook this complication in what follows.
Let’s now turn to Premise 2. When the brain is viewed as a system,
the variables can be specified at several levels of aggregation: neurons,
columns, voxels (a typical voxel contains more than 5 million neurons,
according to Logothetis 2008) and brain areas of various sizes. Since this
chapter is concerned with neuroimagery, voxels and brain areas are the
appropriate levels of brain organization. So the variables that constitute
the brain viewed as a system are the BOLD signals in the voxels or brain
areas of interest.
It is not clear how causally dense the whole brain is. At the level of
neurons, neurons often form modules: they are causally connected with a
small number of neurons close to them and project to only a few other
parts of the brain. At a higher level of organization, smaller brain
areas within lobes, gyri or sulci seem to be massively
interconnected, as is illustrated by Van Essen’s famous functional map
of the primate visual cortex (Van Essen, Anderson and Felleman 1992).
Sporns’s network analysis supports the same conclusion (Sporns 2010).
On the other hand, Sporns’s analysis also suggests that different areas
(lobes, gyri, sulci, etc.) of the brain tend to have a modular architecture.
Be that as it may, for the sake of the argument we accept Premise 2 for the
time being, but we revisit this issue in the last section of this chapter.
Premise 3 takes it for granted that specific brain areas – in contrast to
the whole brain – are functionally involved in completing a task. Thus,
mindreading does not involve the whole brain but a specific network of
brain areas. Furthermore, it takes for granted that the involvement of a
brain area in completing an experimental task results in a change in the
BOLD signal.
Conclusion 4 follows from the three premises. Experimental tasks
cause a change in the BOLD signal in the brain areas involved in
completing them (Premise 3). Because the brain is a causally dense
system (and the parameters of the causal relations have the right
values), the change in the BOLD signal in these brain areas and in
the voxels that compose them causes a change in the BOLD signal in
the other brain areas and their voxels (Premises 1 and 3). As a result,
experimental tasks typically cause a change in the BOLD signal in all
brain areas and voxels, whether or not these areas play a causal role in
completing these tasks. If Conclusion 4 is correct, then a change in the
BOLD signal during an experimental task provides no evidence that this
area is causally involved in completing this task and thus provides no
evidence about its function (Conclusion 5).
Significance Testing in Neuroimagery 269

To illustrate this argument, let’s engage in a bit of sci-fi cognitive
neuroscience. Let’s suppose that, contrary to what the large literature
on mirror neurons assumes (e.g., Cattaneo and Rizzolatti 2009), these
neurons in the premotor cortex are not functionally involved in the
recognition of others’ actions. However, because everything is caus-
ally connected to everything in the brain and thus because the mirror
neurons are directly or indirectly connected to the neurons that are
really involved in the recognition of others’ actions, the firing of the
latter causes the firing of the former, incidentally resulting in a change
in the BOLD signal in the premotor cortex. By hypothesis, it would be a
mistake to take this BOLD signal change to be evidence for the hypothesis
that the function of mirror neurons is to contribute to the recognition
of others’ actions.

2.2 Step 2 of Klein’s argument


Some readers will object that, while Conclusion 5 correctly follows from
Premises 1 to 3, this fact raises no problem for the use of fMRI to support
functional hypotheses about brain areas since it is not mere changes in
the BOLD signal but statistically significant changes in the BOLD signal that
are meant to provide evidence, and Conclusion 5 says nothing about
those.
Klein is aware of this line of response, and the second step of his argu-
ment is meant to deal with it.

Step 2
1. An empirical result is statistically significant if and only if the rele-
vant p-value is below the significance level.
2. The significance level is set at a particular value to limit the long-term
rate of false positives when the null hypothesis is true.
3. Any experimental task causes a change in the BOLD signal of any brain
area or voxel.
4. Hence, in significance tests in fMRI-based studies, the significance
level should be set at 0.
5. Hence, all changes in the BOLD signal should be treated as being
significant.

The second step of Klein’s argument calls for some clarifications.


Premise 1 is simply the definition of statistical significance. In experi-
mental psychology, the value of the significance level is usually set
at .05. Its value in cognitive neuroscience is often considerably lower
since many null hypotheses are often tested simultaneously. Premise 2
describes the justification for setting the significance level at any partic-
ular value. Its value determines the probability of committing a false
positive (rejecting a true null hypothesis) if the null hypothesis is true:
if it is set at .05, in the long run the null hypothesis will be rejected
erroneously in 5 per cent of the cases where it is true. Thus, the signifi-
cance level determines the upper bound of the rate of false positives. If
all null hypotheses that are tested are false, this rate is 0; if all of them
are true, this rate is 5 per cent. The value of the significance level is
determined on the basis of pragmatic considerations (Machery [n.d.]):
it is the largest risk of committing a false positive if the null hypothesis
is true that one finds acceptable, which depends on the possible conse-
quences of committing a false positive. Premise 3 has been established
by the first step of Klein’s argument.
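The long-run reading of the significance level described above can be checked with a quick simulation. The settings below are invented for the illustration (a two-sided z test with known unit variance on samples of 25): when the null hypothesis is true, the test rejects it in roughly 5 per cent of replications.

```python
import random
import statistics

random.seed(1)
z_cutoff = 1.96  # two-sided critical value for a z test at the .05 level
n, trials = 25, 20000
rejections = 0

for _ in range(trials):
    # The null hypothesis is true: the population mean really is 0.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) * n ** 0.5  # z statistic with known sigma = 1
    if abs(z) > z_cutoff:
        rejections += 1

# In the long run, the true null is rejected in about 5 per cent of cases.
print(rejections / trials)
```

The observed rejection rate hovers around .05, the significance level: exactly the upper bound on the rate of false positives that the text describes.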
Conclusion 4 follows from Premises 2 and 3: if any task causes a
change in the BOLD signal in any brain area, then the null hypothesis is
never true, and one cannot commit a false positive. As a consequence,
the significance level should be set at 0. But if this is the case, there is no
room for distinguishing statistically significant changes in the BOLD signal
from mere changes in the BOLD signal, and the response to the first step of
the argument is mistaken.

2.3 A misleading response


One may think that there is an obvious response to Klein’s two-step
argument. It is not the absolute value of the BOLD signal that is submitted
to statistical test but rather the difference between the value of the BOLD
signal in the task of interest and in a control task (for critical discussion of
this method, called the ‘subtraction method’, see Hardcastle and Stewart
2002; Roskies 2010). One may think that, while the task of interest and
the control task are bound to elicit different amounts of activation in
the brain areas or voxels functionally involved in completing the task
of interest (supposing that the control task has been well chosen), they
are likely to elicit the same amount of activation in the brain areas or
voxels that are not functionally involved in completing the task but that
are (directly or indirectly) causally connected to the functional areas
or voxels. If that is true, the null hypothesis about the BOLD signal is
typically true in these brain areas or voxels. Hence, Klein’s argument is
invalid since in its Step 1, Conclusion 4 does not entail Conclusion 5.
The reader should keep in mind that statistical tests test do not the
absolute value of the BOLD signal but rather a difference between its value
in two conditions. This fact plays an important role in the final section
of this chapter. For present purposes, however, what matters is that the
current response to Klein’s argument fails. The null hypothesis would be
true of those areas not functionally involved in the task of interest only
if the change in the BOLD signal incidentally caused by the control task
was equal to the change in the BOLD signal incidentally caused by the
experimental task. However, such equality is incredibly unlikely. Rather,
the changes in the BOLD signal incidentally caused by these two tasks are
bound to differ, if only by a minute quantity. Thus, Klein’s argument is
correct after all: Conclusion 5 really follows from Conclusion 4.

2.4 Moral
This argument seems devastating. Most studies in neuroimagery rely
on null hypothesis significance testing to test cognitive-neuroscientific
hypotheses about the functions of brain areas or networks. But this
argument seems to show that when the statistical hypotheses derived
from functional hypotheses are tested by means of significance tests,
data obtained by means of fMRI (and many other neuroimagery tech-
niques) provide no evidence for these functional hypotheses. Should
most imagery-based research in cognitive neuroscience be discarded?

3 Significance testing when the null is false

As noted above, this argument extends to neuroimagery Meehl’s
(1967) influential discussion of null hypothesis significance testing in
psychology. On the basis of theoretical and empirical considerations,
Meehl argued that null hypotheses are always false in experimental
psychology, and he concluded that null hypothesis significance testing
is problematic. As he memorably put it, ‘our eager-beaver researcher,
[ ... ] relying blissfully on the “exactitude” of modern statistical hypoth-
esis-testing, has produced a long publication list and been promoted to
a full professorship. In terms of his contribution to the enduring body
of psychological knowledge, he has done hardly anything. His true posi-
tion is that of a potent-but-sterile intellectual rake, who leaves in his
merry path a long train of ravished maidens but no viable scientific
offspring’ (Meehl 1967, 114).
However, Meehl’s rhetoric is misleading. Even in areas where the null
hypothesis is bound to be false, it is not pointless to use null hypothesis
significance testing to test statistical hypotheses and to provide evidence
for the theories from which those are derived. In these cases, instead
of testing a point null hypothesis (e.g., that the value of a parameter is
identical in two conditions or that the correlation between two param-
eters is equal to 0), scientists are really testing a range null hypothesis;
for instance, the range null hypothesis that the value of a parameter is
nearly the same in two conditions or the range null hypothesis that the
correlation between two parameters is nearly equal to 0. The range null
hypothesis is rejected or, equivalently, the alternative hypothesis that,
for example, the difference between the value of a parameter in two
conditions is larger than a trivial value or that the correlation between
two parameters is larger than a trivial value is accepted if and only if the
probability of obtaining a statistic of a given size or a larger one condi-
tional on the truth of the point null hypothesis is below the significance
level and thus is very low.
The upper bound of the long-term error rate of false positives when
the range null hypothesis is true (where a false positive consists in
rejecting a true range null hypothesis) is equal to the power of the test
when the latter is computed assuming a trivial effect size. For instance,
if the power of the test, so computed, is equal to .10, in at most 10 per
cent of the cases where the range null hypothesis is true, the p-value,
computed with respect to the point null hypothesis, is below the signifi-
cance level, the range null hypothesis will be rejected, and a false posi-
tive will occur.
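The relation between the range null hypothesis and the power of the point-null test can be illustrated numerically. In the sketch below, all settings are invented for the example: a two-sided z test on samples of 25 with known unit variance, and a 'trivial' true effect of 0.1 standard deviations, so that the range null hypothesis is true even though the point null is false. The long-run rate at which the point-null test rejects (and hence at which the range null would be falsely rejected) equals the power of the test at the trivial effect size, here a little under 10 per cent rather than 5 per cent.

```python
import random

random.seed(2)
n, sigma = 25, 1.0
z_cutoff = 1.96        # .05-level two-sided criterion for the point null
delta = 0.1 * sigma    # a 'trivial' true effect: the range null is true

trials, rejections = 20000, 0
for _ in range(trials):
    # Draw the sample mean directly under the trivial effect;
    # the point null (mean = 0) is false, the range null is true.
    sample_mean = random.gauss(delta, sigma / n ** 0.5)
    z = sample_mean / (sigma / n ** 0.5)
    if abs(z) > z_cutoff:
        rejections += 1

# The rejection rate equals the power of the point-null test at the
# trivial effect size, which bounds false positives for the range null.
print(rejections / trials)
```

With these invented numbers the rate comes out near .08: the upper bound on false positives for the range null is the power of the test computed at the trivial effect size, just as the text states.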
So the point of null hypothesis significance testing when the null
hypothesis is bound to be false is obviously not to reject the null
hypothesis and accept the contradictory alternative hypothesis; no test
is needed for this. Rather, the point of significance testing is to reject
a range null hypothesis, for example, that the difference between the
value of a parameter in two conditions is trivial and to accept a contra-
dictory alternative hypothesis – for instance, that this difference is larger
than a trivial value.
We can even go one step further: when the point null hypothesis is
bound to be false, null hypothesis significance testing can be used not
only to accept the alternative hypothesis that, for example, the differ-
ence between the value of a parameter in two conditions is larger than
a trivial value but also to accept the alternative hypothesis that, for
example, this difference is larger than a specific value, provided that the
power of the test, when it is computed assuming an effect size of this
value, is small (say, about .05).
Methodologists have long understood that significance tests are used
to reject a range null hypothesis when the point null hypothesis is bound
to be false, as happens in observational studies and, if Meehl is right,
in experimental studies. Thus, Good (1983, 62) writes that ‘we wish to
test whether the hypothesis is in some sense approximately true’, while
Binder (1963, 110) asserts that ‘[a]lthough we may specify a point null
hypothesis for the purpose of our statistical test, we do recognize a more
or less broad indifference zone about the null hypothesis consisting of
values which are essentially equivalent to the null hypothesis for our
present theory or practice.’ Similarly, Greenwald (1975, 6) explains that
‘[i]n most cases, however, the investigator should not be concerned
about the hypothesis that the true value of a statistic equals exactly
zero, but rather about the hypothesis that the effect or relationship to
be tested is so small as not to be usefully distinguished from zero.’
One may question whether it really makes sense to reject a range
null hypothesis on the grounds that the probability that the computed
statistic is equal to or larger than its actual value if the point null hypothesis
is true is very low. On this view, one rejects a first hypothesis because
it is unlikely that the data or more extreme ones would have occurred
if a second hypothesis were true! However, this doubt can easily be
alleviated. In classical statistics, tests are justified on the basis of their
long-run error rates: if one follows some specific rules (e.g., reject the
null hypothesis if the p-value is below the significance level), one is guaranteed
to commit a mistake of a particular kind (e.g., a false positive)
in the long run at most at a specific rate. This justification also applies
to the rejection of a range null hypothesis on the basis of a conditional
probability defined by means of a point null hypothesis. For, as we have
seen above, if one accepts the alternative hypothesis that, for example,
a parameter is larger than a specific value (or that a correlation between
two parameters is larger than a specific value) when the power of the test
to detect an effect of this size is low – say, .05 – one will reject the range
null hypothesis when it is true in at most 5 per cent of the cases.
Klein overlooks the justification for testing statistical hypotheses by
means of significance tests discussed in this section, and he exclusively
focuses on the rejection of the point null hypothesis. This prevents him
from understanding the use of significance tests when the null hypoth-
esis is bound to be false, including in causally dense systems.

4 Neuroimagery and null hypothesis significance testing

4.1 Causally dense systems and null hypothesis significance testing

Imagine that a system produces some effect, although not all the vari-
ables that constitute it are causally responsible for it. The system can be
brought to produce this effect by some experimental manipulation, and
scientists can measure whether the values of the variables change when
the system is brought to produce this effect. Their task is to identify the
variables that are causally responsible for producing it. The discussion of
null hypothesis significance testing in Section 3 casts light on the use of
significance tests to complete this task.
If the system is causally dense, the point null hypothesis is bound to
be false for the variables that are not responsible for the effect of interest
since their values change as a result of the causally responsible variable
producing the effect of interest. However, a range null hypothesis to the
effect that this change is small – or at least much smaller than the one
expected for the causally responsible variable – may well be true of them.
When this is the case, null hypothesis significance testing can be used to
determine, not whether the point null hypothesis is false – it is false – but
whether the range null hypothesis is false. And scientists will care about
whether the range null hypothesis is false of some variable because this will
occur only if the variable is causally responsible for the effect (or perhaps it
is only likely to occur if the variable is causally responsible for the effect).
When is the change in value of the variables that are not respon-
sible for the effect (V1 ... Vn) small or at least much smaller than the one
expected for the causally responsible variable VC? While this situation
may not occur in all causally dense systems, it occurs in at least the
following circumstance. Suppose that V1 is causally influenced by VC
and V2 and that V2 is not causally influenced (either directly or indi-
rectly) by VC. Suppose also that the value of V1 is an increasing function
of the values of VC and V2. Because only the value of VC changes as a
result of the experimental manipulation, the change in V1 will plausibly
be smaller than what it would have been if it were causally involved in
producing the effect of interest. The further apart VC and V1, the more
likely it is that the change in value of V1 depends on variables that are
not influenced by VC. So whether the change in value of the variables
that are not responsible for the effect of interest is small or at least much
smaller than the one expected for the causally responsible variable
depends on at least two related factors: the causal density of the system
and the distance between the causally responsible variables and the vari-
ables that are not causally responsible.
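The circumstance just described can be sketched as a toy linear system (with invented coefficients, not a model of the BOLD signal): the experimental manipulation shifts the causally responsible variable VC by a full unit, while the downstream variable V1, whose value also depends on the unaffected V2, shifts only by the fraction of its input that comes from VC.

```python
import random

random.seed(3)

def trial(task_on):
    """One observation of the system, with or without the manipulation."""
    v2 = random.gauss(0, 1)                              # unaffected by the task
    vc = (1.0 if task_on else 0.0) + random.gauss(0, 1)  # causally responsible
    v1 = 0.3 * vc + 0.7 * v2 + random.gauss(0, 1)        # downstream of VC and V2
    return vc, v1

def mean_change(node_index, trials=50000):
    """Average task-on minus task-off value of the chosen variable."""
    on = sum(trial(True)[node_index] for _ in range(trials)) / trials
    off = sum(trial(False)[node_index] for _ in range(trials)) / trials
    return on - off

print(mean_change(0))  # change in VC: roughly 1.0
print(mean_change(1))  # incidental change in V1: roughly 0.3, much smaller
```

Because V1's value is an increasing function of both VC and V2, and only VC responds to the manipulation, V1's incidental change is diluted; the more of V1's input that comes from variables outside VC's influence, the smaller the incidental change, as the text argues.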

4.2 Back to neuroimagery


Null hypothesis significance testing is suitable to test hypotheses about
the functional involvement of brain areas in completing experimental
tasks for the reason just discussed: while point null hypotheses are false of
all voxels or brain areas, range null hypotheses, to the effect that a change
in activation is small, are likely to be false of those voxels or brain areas
functionally involved in the task but not of the voxels or brain areas not
functionally involved in the task, and null hypothesis significance testing
can be used to identify those voxels or brain areas for which the range
null hypothesis is false. If one rejects the range null hypothesis when the
probability of obtaining a particular statistic or a more extreme one condi-
tional on the point null hypothesis is below the significance level, then
one will commit at most a small number of false positives (where a false
positive consists in rejecting a true range null hypothesis).
Several considerations support the claim that a range null hypoth-
esis is likely to be true of those voxels and brain areas not functionally
involved in the task of interest. First, as we saw in Section 2, the causal
density of the brain may be only moderate since, at least at the level of
gyri and sulci, the organization of the brain is in part modular (Sporns
2010). If the brain is modular to a significant extent, there are few direct
connections between brain areas and between voxels that belong to
different modules, and indirect connections between areas belonging
to distinct modules are long. As a result, the change in the BOLD signal
incidentally elicited is likely to be small or at least smaller than the one
induced in the voxel or brain area functionally involved in the task.
Second, as noted in Section 2 too, statistical tests bear not on the abso-
lute value of the change in the BOLD signal but on the difference between
the change induced by the task of interest and by the control task. While
unlikely to be null, this difference is likely to be small for the areas that
are not functionally involved or, at any rate, much smaller than the one
expected for the voxels or areas functionally involved in the task.

4.3 Upshot
Why do cognitive neuroscientists use significance tests to provide
evidence for and against functional hypotheses, while the relevant null
hypotheses are bound to be false? The reason is that cognitive neuro-
scientists are not interested in rejecting point null hypotheses, as Klein
erroneously believes, but rather in rejecting range null hypotheses
stating that the effect of interest (e.g., a difference in the BOLD signal
between two conditions) is small. If they reject the range null hypoth-
esis when the probability of obtaining a statistic of a given size or a
larger one conditional on the truth of the point null hypothesis is below
the significance level, they will commit at most only a small number
of false positives (equal to the power of the test computed assuming
this small effect size). That is, in the long run they will rarely reject true
range null hypotheses. Because range null hypotheses are likely to be
false if and only if the relevant functional hypotheses are true, rejection
of range hypotheses on the basis of significance tests provides genuine
evidence for functional hypotheses.

5 Conclusion

Cognitive neuroscientists and philosophers of mind appealing to brain
imagery results should not worry about the use of null hypothesis signifi-
cance testing in neuroimagery since the falsity of point null hypotheses
in a causally dense system such as the brain does not invalidate the use
of significance tests to test functional hypotheses about brain areas and
networks.8 While the rationale behind the use of significance tests to test
hypotheses about causally dense systems turns out to be more intricate
than one may have originally thought, it is sound and well understood.

Notes
1. The seductive allure of neuroimagery may have been overstated (Farah and
Hook 2013). In particular, Gruber and Dickerson (2012) failed to replicate
McCabe and Castel (2008).
2. For discussion of Meehl’s argument and of null hypothesis significance testing
in general, see Machery (n.d.).
3. For further discussion of inferential methods in cognitive neuroscience, see
Machery (2012, forthcoming).
4. For the sake of presentation, I simplify Klein’s argument and address its
substance rather than its specific wording. Nothing of importance has been
lost in my reformulation.
5. In particular, to control for false positives, it is often required that the p-value
be below the significance level in a specific number of adjacent voxels before
rejecting the null hypothesis for these voxels.
6. Event-related designs are also typically used instead of block designs.
7. I owe this point to Richard Scheines (annual meeting of the Philosophy of
Science Association, San Diego, November 2012).
8. Of course, that’s not to say that there is no problem at all with neuroimagery;
see, e.g., Machery (2012, forthcoming).

References
Binder, A. (1963) ‘Further Considerations on Testing the Null Hypothesis and the
Strategy and Tactics of Investigating Theoretical Models’. Psychological Review,
70, 107–115.
Block, N. (2007) ‘Overflow, Access, and Attention’. Behavioral and Brain Sciences,
30, 530–548.
Byrne, A. (2011) ‘Knowing that I Am Thinking’. In A. Hatzimoysis (ed.) Self-
Knowledge. Oxford: Oxford University Press.
Cattaneo, L., and G. Rizzolatti. (2009) ‘The Mirror Neuron System’. Archives of
Neurology, 66, 557.
Farah, M. J., and C. J. Hook. (2013) ‘The Seductive Allure of “Seductive Allure”’.
Perspectives on Psychological Science, 8, 88–90.
Fodor, J. A. (1968) ‘The Appeal to Tacit Knowledge in Psychological Explanation’.
Journal of Philosophy, 65, 627–640.
Fodor, J. A. (1974) ‘Special Sciences (or: The Disunity of Science as a Working
Hypothesis)’. Synthese, 28, 97–115.
Good, I. J. (1983) Good Thinking: The Foundations of Probability and its Applications.
Minneapolis: University of Minnesota Press.
Greenwald, A. G. (1975) ‘Consequences of Prejudice Against the Null Hypothesis’.
Psychological Bulletin, 82, 1–20.
Gruber, D., and J. A. Dickerson. (2012) ‘Persuasive Images in Popular Science:
Testing Judgments of Scientific Reasoning and Credibility’. Public Understanding
of Science, 21, 938–948.
Hardcastle, V. G., and C. M. Stewart. (2002) ‘What Do Brain Data Really Show?’
Philosophy of Science, 69, 72–82.
Kanwisher, N., J. McDermott, and M. M. Chun. (1997) ‘The Fusiform Face Area: A
Module in Human Extrastriate Cortex Specialized for Face Perception’. Journal
of Neuroscience, 17, 4302–4311.
Keehner, M., and M. H. Fischer. (2011) ‘Naive Realism in Public Perceptions of
Neuroimages’. Nature Reviews Neuroscience, 12, 118–165.
Klein, C. (2010) ‘Images Are Not the Evidence in Neuroimaging’. British Journal for
the Philosophy of Science, 61, 265–278.
Logothetis, N. K. (2008) ‘What We Can Do and What We Cannot Do with fMRI’.
Nature, 453, 869–878.
Machery, E. (2012) ‘Dissociations in Neuropsychology and Cognitive
Neuroscience’. Philosophy of Science, 79, 490–518.
Machery, E. (forthcoming) ‘In Defense of Reverse Inference’. British Journal for the
Philosophy of Science.
Machery, E. (n.d.) ‘Evidence and Cognition’. Manuscript.
McCabe, D. P., and A. D. Castel (2008) ‘Seeing Is Believing: The Effect of Brain
Images on Judgments of Scientific Reasoning’. Cognition, 107, 343–352.
Meehl, P. E. (1967) ‘Theory Testing in Psychology and Physics: A Methodological
Paradox’. Philosophy of Science, 34, 103–115.
Ramsey, W., S. Stich, and J. Garon (1990) ‘Connectionism, Eliminativism and the
Future of Folk Psychology’. Philosophical Perspectives, 4, 499–533.
Roskies, A. (2010) ‘Saving Subtraction: A Reply to Van Orden and Paap’. British
Journal for the Philosophy of Science, 61, 635–665.
Saxe, R., and A. Wexler (2005) ‘Making Sense of Another Mind: The Role of the
Right Temporo-parietal Junction’. Neuropsychologia, 43, 1391–1399.
Sporns, O. (2010) Networks of the Brain. Cambridge, MA: MIT Press.
Van Essen, D. C., C. H. Anderson, and D. J. Felleman (1992) ‘Information
Processing in the Primate Visual System: An Integrated Systems Perspective’.
Science, 255, 419–423.
Weisberg, D. S., F. C. Keil, J. Goodstein, E. Rawson, and J. R. Gray (2008) ‘The
Seductive Allure of Neuroscience Explanations’. Journal of Cognitive Neuroscience,
20, 470–477.
14
Lack of Imagination: Individual
Differences in Mental Imagery and
the Significance of Consciousness
Ian Phillips

Ensconced in the armchair, a philosopher of mind is liable to mistake an
investigation of his or her own mind for an investigation of all minds.
This mistake is arguably encouraged by our monolithic talk of ‘The
Mind’, as in the ‘The Mind-Body Problem’ or ‘The Mind-Brain Identity
Theory’. In contrast, psychologists have long studied individual differ-
ences in our mental capacities, particularly in the areas of personality
and intelligence, but increasingly with respect to basic perceptual and
cognitive functions (Kanai and Rees 2011). Such differences merit philo-
sophical attention, too. The philosophers of the ‘new wave’ represented
in this collection should be philosophers of minds.
In this chapter, I focus on one particular individual difference and its
potential philosophical significance. The difference concerns variation
in visual mental imagery (and correlatively visual episodic memory).
This variation is one of the most remarkable individual differences in
psychology. At one end of the spectrum are super-imagers: subjects who
profess easily to be able to bring scenes before their mind’s eye with the
apparent richness and vivacity of normal vision; at the other end are
non-imagers: subjects who claim not to enjoy any genuine visual imagery
whatsoever. Such variation is striking for at least two reasons, no doubt
related. First, although one in forty (Faw 2009) or perhaps even one in
ten (Abelson 1979) of us are non-imagers, most of us are wholly unaware
of the extent or even existence of such disparities. Indeed, even when
their attention is drawn to them, super-imagers often struggle to believe
the reports of non-imagers, and vice-versa. Thus, in his pioneering inves-
tigation into the subject, Galton writes of his astonishment on finding
that ‘the great majority of the men of science to whom I first applied
protested that mental imagery was unknown to them, and looked on me
as fanciful and fantastic in supposing that the words “mental imagery”
really expressed what I believed everybody supposed them to mean’
(1880, 302).1 Second, despite a great deal of experimental work, few if
any strong or systematic correlations have been uncovered between
reported imagery and levels of performance, even in tasks which intui-
tively implicate imagery – for example, Shepard and Metzler’s mental
rotation tasks, Kosslyn’s visual scanning tasks or standard recognition
and memory tasks.2 Again, as Galton writes, ‘men who declare themselves
entirely deficient in the power of seeing mental pictures can neverthe-
less ... become painters of the rank of Royal Academicians’ (1880, 304).
Consider how baffling it would be if we were to encounter a commu-
nity many of whose members were blind yet amongst whom there was
almost no recognition of significant variation with respect to sighted-
ness. On the face of it, this is exactly how Galton discovered our own
community to be with respect to imagery. With the case of blindness in
mind, it is understandable that many psychologists and philosophers
have been sceptical whether such reported differences in imagery are
in fact genuine. Indeed, according to one common line of thought, the
cost of taking subjective reports at face value is an implausible epiphe-
nomenalism concerning mental imagery. After all, if those who lack
imagery do no worse in objective tests typically regarded as implicating
imagery, it is obscure how imagery earns its functional keep.
In what follows I explore this challenge and thereby consider the
real significance of taking individual differences in imagery seriously.
I proceed as follows. In the next section, I develop the puzzle raised
by the apparent lack of correlation between objective task performance
in imagery tasks and reported imagery in terms of a dilemma which
apparently faces us, a dilemma between what I call inscrutability and
epiphenomenalism. In Section 2, I argue that the first step in any adequate
response to this dilemma is to distinguish between two conceptions
of imagery: imagery in the representational sense, meaning underlying
subpersonal representations of a putatively imagistic nature – the focus
of the so-called ‘mental imagery debate’ – and imagery in the conscious
or experiential sense, meaning conscious personal-level episodes of imag-
ining. Since subjective reports speak only to imagery in the second, expe-
riential sense, this distinction partly resolves our dilemma. Faced with
two subjects who perform equally on imagery tasks yet differ dramati-
cally in their reported imagery, we can credit both with similar repre-
sentational imagery (avoiding epiphenomenalism) but allow for marked
differences in their experiential imagery corresponding to their differing
reports (avoiding inscrutability).

This cannot be the whole story, however. For there remains the appar-
ently puzzling failure of conscious, experiential imagery to correlate with
objective task performance. In light of this failure, it may seem that
the import of individual differences in imagery is that conscious imagery
does lack a useful function. Moreover, generalizing, we might be led to adopt the increasingly widespread view that consciousness per se lacks a useful function and is best conceived of as an evolutionary spandrel (e.g., Blakemore 2005). Sections 3 and 4 offer rejoinders to these
two lines of thought. In Section 3, I argue that even if conscious imagery
does lack a useful function, nothing follows about the significance of
consciousness in general. I demonstrate this by arguing that on one
leading account of the significance of perceptual consciousness (namely,
as a condition of demonstrative thought), mental imagery could not
share the same significance. In Section 4, I return to the question of the
significance of conscious imagery. I argue that whilst objective data from
mental imagery tasks do plausibly establish the presence or absence of
imagery in the representational sense, it is not obvious that such data
do settle the presence or absence of imagery in the experiential sense.
Instead, the differences between conscious imagers and non-imagers
may emerge only when we consider exclusively personal-level differ-
ences in the way in which imagers perform imagery tasks: differences in
the personal-level genesis, justification and self-understanding of their
performances.

1 Individual differences in mental imagery: The dilemma of inscrutability or epiphenomenalism

Questionnaire-based studies have repeatedly confirmed Galton’s finding
of substantial variation in reported imagery.3 Despite this, many theo-
rists have doubted that such reported differences accurately reflect
underlying reality. Most forcefully and recently, Schwitzgebel (2011,
ch. 3) has argued that Galton’s supposed discovery of large variation
in imagery is a myth based on ‘excessive optimism about subjective
report’ (36). Schwitzgebel accepts that individual reports of imagery vary
dramatically. What he disputes is the alleged link between such reports
and what he calls ‘underlying imagery experience’ (36).
Schwitzgebel expresses antecedent scepticism concerning the exist-
ence of sizable individual differences in experience within normal
human populations (e.g., 133–134).4 But we need not share that view
to feel the force of Schwitzgebel’s argument in the present context. The
argument is simply this. If subjects really did differ in their ‘underlying
imagery experience’, we should predict ‘vast corresponding differences
in performance on cognitive tasks involving imagery – differences
comparable to that between a prodigy and a normal person or between
a normal person and one with severe disabilities’ (44; cf. the discus-
sion of blindness above). Such differences in performance are not found.
Thus, we should conclude that underlying imagery experience does not
vary much across individuals. As for their reports: many subjects are just
wildly wrong about their own experience.5
Schwitzgebel devotes most of his attention to defending his argu-
ment’s crucial premise that individual differences in visual imagery do
not closely or systematically correlate with objective task measures, in
terms either of basic task competence or of specific response patterns.
This premise is the subject of a long-standing and live controversy,
and Schwitzgebel (ch. 3, §5) provides an excellent review of the rele-
vant literature, including a critical assessment of the most recent major
review suggesting a weak positive correlation between task performance
and reported imagery (McKelvie 1995).6
As Schwitzgebel brings out, well over a century since Galton’s work
launched a substantial programme of research into the functional
correlates of individual differences in reported imagery, there is little
conclusive evidence of such a correlation. Instead, what emerges is
‘a disorganized smattering [of positive findings, with respect to what
Schwitzgebel later calls ‘a suspiciously desultory sprinkle of tasks’, 51],
with frequent failures of replication’ (48). Thomas offers a similar assess-
ment, writing that ‘most surprisingly and disappointingly ... virtually
no sign of any correlation has been found between people’s vividness
ratings and their performance (speed or accuracy) in various visuo-spa-
tial thinking and problem solving tasks, even though, subjectively, such
tasks seem to depend on imagery’ (2009, 450).7 Many prominent imagery
researchers make similar assessments of the literature.8 As a result it is
unsurprising that many have outright rejected any correlation between
reported imagery and objective task performance or simply turned their
back on subjective reports.9 As Reisberg, Pearson and Kosslyn remark,
‘negative results, showing no relationship between imagery self-report
and performance with imagery tasks, are relatively common. As a result,
roughly a century after Galton’s original publication, many cognitive
scientists remain deeply sceptical about the value of these self-reports’
(2003, 150).10
My aim here is not to enter this fray. Instead, I want to consider what
follows if there does indeed exist no strong and systematic relationship
between reported imagery and objective task performance. Thus, in
what follows, I propose to make this large and controversial empirical
assumption in order to consider its broader implications.11
Granting this assumption, let us return to Schwitzgebel’s argument
against the existence of large individual differences in imagery. Its crucial
move has undoubted appeal.12 It is the claim that subjective differences
will be closely mirrored by differences as measured by objective methods,
methods, as Schwitzgebel puts it, ‘in which success or failure on a task
depends on the nature of the subject’s imagery’ (2011, 43). In short, it
is the demand that our reports be backed up by action (or lack of it) – a
demand naturally associated with neobehaviourist (broadly, function-
alist or interpretationist) views in the philosophy of mind.
On the other hand, the view that many of us consistently produce
dramatically inaccurate reports about our own conscious experience is
not appealing. Schwitzgebel himself is notoriously happy to embrace
such a view about all aspects of our conscious lives (see 2011, passim).
However, despite disavowing any commitment to a particular concep-
tion of introspection (120), Schwitzgebel’s sanguinity here arguably
betrays a commitment to a questionable model of introspection. In
particular, Schwitzgebel talks of ‘the mechanisms’ of introspection and
of introspection as a ‘method [better, ‘a pluralistic confluence of proc-
esses’] by which we normally reach judgments about our experience’
(120). Such claims may seem innocuous, but they allow Schwitzgebel to
think of the mechanism(s) of introspection as breaking down, as being
variously ‘misleading’, ‘faulty’ and ‘untrustworthy’ (129) and perhaps
ultimately as being simply broken, leaving us cut off from our stream of
consciousness.
This last view seems to be that of Marks, who suggests ‘that non-im-
agers suffer from some sort of subclinical neurological disconnection
syndrome that somehow makes them unable to report on, or form
verbal memories about, images that they nevertheless, in some sense do
experience’.13 What is so hard to understand about such a view is what
it would mean for a non-imager to (in any sense) experience imagery
and yet for that imagery to be entirely inaccessible to her, for her to be
entirely ‘disconnected’ from it. It is not of course hard to understand our
making mistaken judgments about our conscious life from occasion to
occasion. What is arguably incoherent is that the layout of a subject’s
consciousness might be a certain way and yet that layout seem a quite
different way – or no way at all – from the subject’s own point of view
(for expressions of this kind of viewpoint, see Dennett 1991; Shoemaker
1994; Chalmers 1996, 196–197; Martin 2006, §7). In Shoemaker’s terms,
what is incoherent is that we be self-blind: constitutionally incapable of
responding reliably to our own experience. After all, if an experience is
outside the ken of its own subject, it is hard to understand in what sense
we should think of it as conscious. In what sense does it contribute to
the subjective life of the individual (cf. Nagel 1974)?
As I say, there is nothing incoherent about our making mistakes about
our conscious lives. One might interpret Schwitzgebel as simply insisting
that such mistakes are extremely frequent. But in the present context
we want some explanation of why apparently attentive and rational
subjects consistently make the same mistakes about their inner lives. A
perceptual model of introspection (cf. Shoemaker 1994) licenses such a
possibility, since with a mechanism at hand we can think of it as fragile.
But such a model also licenses the apparently incoherent idea that we
could be completely cut off from our conscious life. If we reject a percep-
tual model, however, we must grant that non-imagers are in a position
to know about their imagery but for some reason fail to exploit that
position. Here it seems reasonable to ask for explanation. But no such
explanation is forthcoming. Indeed, Schwitzgebel wants to insist that
we ‘make gross, enduring mistakes about even the most basic features
of our currently ongoing conscious experience, even in favourable circum-
stances of careful reflection’ (119; my emphasis) and that such mistakes
are not merely the result of being ‘distracted, or passionate, or inatten-
tive, or self-deceived, or pathologically deluded’ and do not occur only
‘when we are reflecting about minor matters, or about the past, or only
for a moment, or when fine discrimination is required’ (118).14
More could certainly be said here but enough has been said to moti-
vate the search for an alternative to inscrutability. Yet as we have seen,
accepting subjects’ reports at face value and avoiding inscrutability
comes at the apparently severe cost of embracing epiphenomenalism
(here meaning ‘no function’) about imagery. For if large variations
in imagistic experience obtain without being reflected in performance
in imagery tasks, it is hard to resist the conclusion that this is because
imagery is of little or no use in such tasks. At its simplest and most
extreme, the cost is of accepting that visual imagery has no cognitive
consequences beyond our reports on its existence. Striking as it is,
such a view has found adherents. Winch (1908) makes the argument
with refreshing directness. First, by appeal to his own case: he argues
that he has no mental images but is far from intellectually inferior,
so mental images can’t perform the functions they are supposed to;
they may even be a hindrance. Second, by appeal to experimental
evidence from Thorndike (1907) and his own studies, both of which
find no correlation between reports of vividness and performance
on a memory task.15 Winch imagines a contemporary Schwitzgebel
retorting to him, ‘If you were capable of proper introspection, you
would find these images you say you are without’ (1908, 342). He is
entirely unmoved.16
Schwitzgebel makes the natural objection to epiphenomenalism –
namely, that it ‘seems to posit a major faculty with a fairly obvious
range of purposes but in fact with little purpose at all, and little effect
on behaviour apart from the power to generate reports’ (2011, 51). In
short, we seem faced with an unappealing choice between inscrutability
and epiphenomenalism, a choice of either denying that subjects have
access to their own imagery or denying that imagery plays any useful
function. As Heuer et al. (1986) nicely summarize the dilemma: ‘there
are many reports of no relation between performance and imagery self-
report. This suggests that either the differences in imagery report are
misleading about the underlying experience, or that the differences in
experience do not have functional implications’ (1986, 162).
In the next section, I argue that the first step in navigating between
the horns of this dilemma is to distinguish imagery in the representa-
tional sense, referring to underlying subpersonal representations of
a putatively imagistic nature, from imagery in the experiential sense,
referring to conscious personal-level episodes. Since subjective reports
speak only to imagery in the second, experiential sense, where objective
performances match there is no reason not to explain those perform-
ances in terms of matching underlying representations.
Our puzzle then exclusively targets the mismatch between conscious
imagery and task performance. This puzzle is taken up in Sections 3
and 4. In Section 3, I argue that even if we conclude that conscious
imagery lacks significance, this need not imply that consciousness in
general lacks substantial significance. I demonstrate this by arguing that
on one leading account of the significance of perceptual consciousness
(viz., as a condition of demonstrative thought), mental imagery could
not share that same significance. In Section 4, I argue that whilst objec-
tive data from mental imagery tasks do plausibly establish the presence
or absence of imagery in the representational sense, it is not obvious
that such data do in fact settle the presence or absence of imagery in
the experiential sense. Instead, I argue that the significance of conscious
imagery may emerge only when we consider exclusively personal-level
differences in the way in which imagers perform imagery tasks: differ-
ences in the personal-level genesis, justification and self-understanding
of their performances.

2 Two conceptions of imagery

Many theorists insist that mental imagery is by definition a form of
conscious experience.17 If imagery is by definition conscious, then we
seem to be directly driven onto the dilemma of inscrutability or epiphe-
nomenalism. Professed non-imagers must either be wrong about their
inner lives or their imagery must serve no function in relevant tasks: not
being itself a step in the process of solving the task but rather, as Neisser
puts it echoing Watson, ‘a kind of cognitive “luxury”, like an illustra-
tion in a novel’ (1967, 157). Plausibly, commitment to this connection
between consciousness and imagery is a major impetus towards such
views.
Others, however, have suggested that there is a perfectly respectable
notion of imagery shorn of its implication of conscious awareness.18
From an objective perspective, one obvious reason to ascribe imagery
is that subjects exhibit certain patterns of response in various tasks. As
already noted, in the field of imagery research two of the most famous
tasks are mental rotation tasks and visual scanning tasks. What we
observe in such tasks is a certain pattern of response, a pattern we then
explain by hypothesising representational structures, which by dint
of their functional and structural features we regard as imagistic. For
example, data from visual scanning tasks concerning the linear corre-
lation of reaction time and represented distance are held to evidence
imagistic representations, since ‘one of the defining properties of [an
imagistic] representation is that metric distances are embodied in the
same way as in a percept of a picture’ (Kosslyn et al. 1978, 53).
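To make vivid what this linear regularity amounts to, here is a minimal sketch, in Python, of the kind of fit applied to scanning-task data. The numbers below are invented for illustration only (they are not data from Kosslyn et al. 1978); the point is simply that scanning time increases in near-perfect proportion to represented distance.

```python
# Hypothetical scanning-task data (invented for illustration, not the
# actual Kosslyn et al. 1978 results): represented distance on the
# memorized map, paired with mean reaction time in milliseconds.
distances = [2.0, 5.0, 8.0, 11.0, 14.0, 17.0]
rts = [1120, 1205, 1310, 1395, 1480, 1590]

n = len(distances)
mean_d = sum(distances) / n
mean_t = sum(rts) / n

# Ordinary least-squares fit: rt ~ a + b * distance.
cov = sum((d - mean_d) * (t - mean_t) for d, t in zip(distances, rts))
var_d = sum((d - mean_d) ** 2 for d in distances)
b = cov / var_d          # ms of scanning time per unit of distance
a = mean_t - b * mean_d  # baseline response time

# Pearson correlation: values near 1 indicate the near-perfect linear
# increase of reaction time with distance that the imagistic account
# of the underlying representations is taken to predict.
var_t = sum((t - mean_t) ** 2 for t in rts)
r = cov / (var_d * var_t) ** 0.5

print(f"rt = {a:.0f} + {b:.1f} * distance (r = {r:.3f})")
```

On data of this shape, the slope b estimates the scanning rate and r quantifies how closely the metric structure of the represented scene is mirrored in response times.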
Whether such representations are in fact imagistic in format is the
issue at the heart of the mental imagery debate of the seventies and
eighties (see Kosslyn 1994, ch. 1, for an overview). However, as many
have observed, this debate is about underlying representations and not
about the nature of conscious mental imagery per se (Tye 1991 empha-
sizes this point).19 Imagery in this representational sense is clearly not
conscious by definition. It may be that such representations subserve
conscious imagery wherever it is found (i.e., such representations may
be an empirically necessary condition of experiential imagery), but we
need not think that such representations are sufficient for experiential
imagery. As a result, the fact that someone does not report conscious
imagery does not militate against crediting that subject with imagery in
the representational sense. For we are crediting them with subpersonal
informational structures intended primarily to explain behavioural
patterns. The nature of these representations (i.e., what can be inferred
about them from the relevant behavioural and, of course, increasingly
neuroimaging data) is an empirical question. There is no question of
subjects being positioned to know about such representations just in
virtue of having them.
What subjects report, on the other hand, are conscious episodes of
imagining. Arguably, such episodes cannot continue to exist entirely
outside their subject’s ken. Consequently, rational and attentive subjects’
honestly denying having imagery does militate against ascribing
conscious imagery to them. Thus equipped with the distinction between
representational and experiential or conscious imagery, a resolution of
our earlier dilemma suggests itself. For we might hope to avoid inscruta-
bility by granting that non-imagers lack ‘conscious imagery’ as they say,
whilst still crediting their task performances to imagery in the represen-
tational sense, thereby avoiding epiphenomenalism (cf. Kosslyn et al.
1978, 53).
This picture is attractive. It also goes some way towards explaining
our initial puzzlement as to why large individual differences in imagery
seemingly go unnoticed. The partial answer is that large differences in
experiential imagery are consistent with matched task performances
underlain by similar representational imagery. In this way, ordinary
non-imagers are not comparable to the blind, since the blind (at least
typically) lack the visual representations which underlie ordinary visual
experience. However, merely distinguishing between experiential and
representational imagery does not get to the root of matters. For even
accepting the resultant picture of individual differences, we still face
the challenge that conscious imagery is epiphenomenal, in the sense of
serving no clear function beyond prompting reports on its existence.20
Schwitzgebel presses the concern as follows:

Unless conscious experience is epiphenomenal, people whose imagery
is mostly conscious ought to perform somewhat differently on cogni-
tive tasks than people whose imagery is largely unconscious, and thus
it remains strange that such differences have not been found. Maybe
consciousness is epiphenomenal, or at least largely so, but such a
view faces the challenge of explaining why whatever biological or
functional facts permit some cognitive processes but not others to
be conscious seem to have so few other ramifications. (Schwitzgebel
2011, 51)21

In the remaining two sections I confront this concern directly. I do so
by challenging two common presumptions, both implicit in the passage
just quoted. First, in Section 3, I argue that even if conscious imagery
lacks functional significance, this does not imply that consciousness in
general lacks substantial significance. I demonstrate this by arguing that
on one leading account of the significance of perceptual consciousness
(viz., as a condition of demonstrative thought), mental imagery could
not share the same significance. Finally, in Section 4, I propose that the
significance of conscious imagery may not be appropriately measured by
objective performance in imagery tasks. I thereby undermine the case
for the insignificance of consciousness even in the imagistic case.

3 The significance of consciousness

The general question of the significance of consciousness is often raised
with reference to cases of blindsight (Weiskrantz et al. 1974; Weiskrantz
1997). According to the traditional story, subjects with blindsight, a
condition caused by damage to or removal of some portion of primary
visual cortex, lack conscious awareness in regions of their visual field
corresponding to the damaged regions of their retinotopically mapped
cortex. Despite this lack of awareness, such subjects can be prompted
to make highly reliable judgments about certain features in their blind
field (e.g., the presence and location of a stimulus, the direction of
motion). According to Block, we can imagine a hypothetical blindseer
being trained ‘to prompt himself at will, guessing what is in the blind
field without being told to guess’.

The super-blindsighter spontaneously says ‘Now I know that there
is a horizontal line in my blind field even though I don’t actually
see it’. Visual information from his blind field simply pops into his
thoughts ... without having any perceptual experience of it. The
super-blindsighter himself contrasts what it is like to know visually
about an ‘X’ in his blind field and an ‘X’ in his sighted field. There
is something it is like to experience the latter, but not the former,
he says. It is the difference between just knowing and knowing via
a visual experience ... the content that there is an ‘X’ in his visual
field is [access]-conscious but not [phenomenally]-conscious. (Block
1995, 233)

Block supposes here that the hypothetical super-blindseer has a capacity
for knowledge about aspects of his environment, knowledge subserved by
a (damaged) visual system but not sourced in conscious perceptual expe-
rience. The alleged possibility of super-blindsight prompts the question:
if such knowledge can be available in the absence of perceptual experience, what is the distinctive role, if any, of perceptual experience?
There are at least two ways we might question the coherence of Block’s
hypothetical case. We might argue that ‘if a patient could be trained to
treat blindsight stimuli as self-cuing or prompting, this would amount
to restoring the patient’s consciousness of events in their scotoma, the
only remaining difference between such experience and normal vision
being the relative poverty of the content’ (Dennett 1995, describing his
1991, 332–343).22 Alternatively, we might argue that it is illegitimate
to credit the super-blindseer with environmental knowledge because of
her failure to possess internalist reasons – reasons either identical to or
provided by conscious perceptual experience (cf. Smithies, ch. 6 of this
volume).
A more concessive approach is to accept that the blindseer both lacks
conscious awareness of the blind field and can come to have knowledge
of the relevant portion of her environment but insist that there remains
a difference between the kind of knowledge blindseers are capable of
acquiring concerning their blind field and the kind of knowledge we
ordinarily acquire through perceptual experience. One suggestion here
(developed most fully in Campbell 2002) is to think of the blindseer
as incapable of acquiring demonstrative knowledge of the items in her
environment. As Roessler puts it, ‘We can imagine a blindseer who has
learned to exploit blindsight to verify existentially quantified proposi-
tions, such as “the object in my blind field is yellow”. But no amount
of training [except presumably insofar as the training restores sight] will
enable her to think of the object as “that lemon”’ (2009, 1035).23 My
interest here is not in defending this proposal. Rather, it is in showing
that, if a proposal along these lines is right, then perceptual and imag-
istic consciousness are importantly disanalogous. This disanalogy allows
us to credit perceptual consciousness with a significance that imagistic
consciousness lacks and so to resist any inference from the putative
insignificance of conscious imagery to a more general claim about the
insignificance of consciousness in general.
To begin, note that non-imagers are naturally compared to subjects
with super-blindsight. The comparison is misleading in various ways,
and I do not mean in making it to suggest that non-clinical non-imagers
can straightforwardly be compared to rare clinical cases of so-called
‘blind imagination’.24 But one similarity between the non-imager and
the super-blindseer is salient: namely, that both can successfully prompt
themselves to answer various questions of a kind which ‘normal’ subjects
answer by reference to their conscious experience. Super-blindseers can
prompt themselves successfully to answer various questions about their
environment in the absence of perceptual experience. Non-imagers can
prompt themselves successfully to solve various imagery tasks (e.g.,
determine the congruence or incongruence of two Shepard figures) in
the absence of imaginative experience. The responses of non-imagers
can be highly reliable across a range of features; as we are assuming here,
they are as reliable as those of imagers.25 So we are led to ask: what role
does conscious imagery have to play?
In the case of blindsight two initial answers were forthcoming. First,
there was Dennett’s suggestion that, in virtue of subjects being able to
achieve knowledge via self-cueing, their sight is restored. This approach
to individual differences in imagery would in effect be to deny the exist-
ence of such differences (and so embrace inscrutability), for non-imagers
do unthinkingly self-cue when asked to respond in imagery tasks yet
deny that they have imagery. Second, there was the idea that blind-
seers lack knowledge because of the absence of internalist reasons. This
approach to individual differences would deny that non-imagers can
be said to know facts about, for example, the congruence of Shepard
figures, since lacking conscious reasons for making that judgment.
Whatever one’s intuitions about blindsight, in the case of imagery the
implausibility of this verdict, given suitably reliable performance in the
task, tells against this approach.
In the case of blindsight, a more concessive third answer emerged;
namely the hypothesis that what blindseers lack is the capacity for
acquiring demonstrative knowledge of (and so demonstrative reasons for
acting on) particulars in their environment. In the case of the imagina-
tion, however, the analogous suggestion cannot provide a substantive
explanatory role for conscious imagery. The reason is that conscious
imagery does not introduce new particulars to the mind. Perceptual
imagination is a faculty of re-presentation and re-combination. Insofar
as particulars can be imagined, it is because they have previously been
encountered in perceptual experience. Thus, in any case of conscious
imagery the possibility of demonstrative thought about an individual
will already have been secured in perceptual experience if it is a possi-
bility at all. Non-imagers, who have enjoyed the same kinds of perceptual
experience, will thus be in no worse a position to think demonstratively
about perceptually encountered particulars even though they cannot
imagine them. They are already enabled to think about such particulars,
and so the renewed presence before the mind of the particulars in imagi-
nation has no explanatory work to do.

It may seem that we can imagine particulars that we have not previ-
ously encountered in perceptual experience. However, it is more plau-
sible to think that we are able to recombine features we have previously
experienced in a purely general way. Thus, in the absence of prior
perceptual acquaintance, there will be no possibility of our imagining
qualitatively identical but numerically distinct individuals across imagi-
native episodes, independent of a stipulative act of propositional imagi-
nation. Similarly, there will be no possibility of imagining one but not
the other of two identical twins, neither of whom one has perceptually
encountered, independent of an act of stipulative imagination. In short,
conscious imagery never serves to enable demonstrative reference not
already enabled by perceptual experience. A plausible account of the
explanatory role of conscious experience in relation to perception is thus
not available as an account of the significance of conscious imagery. The
absence of a significant explanatory role for conscious imagery can now
be seen to be consistent with a significant explanatory role for conscious
perceptual experience. In consequence, even if we refuse to recognize a
useful function for conscious imagery, it will not follow that conscious-
ness in general lacks significance. In the next and final section I return
to the antecedent of this conditional; namely, the issue of whether we
really should think of conscious imagery as lacking a useful function.

4 Where to look for the significance of conscious imagery

If imagers and non-imagers do not differ in their success at imagery tasks
and, furthermore, if they succeed because they share the same underlying
representations which may be imagistic in format, how do imagers and
non-imagers differ? Of course, imagers report their imagery. But they do
not merely report it. They think of themselves as summoning, manipu-
lating and inspecting imagery and of their doing so as prompting and
grounding their task successes.26 When an imager is asked how she knew
that the first figure was incongruent with the second, she will answer by
saying that she knew because she formed an image of the first figure,
rotated it in her mind’s eye and found it didn’t line up with the second.
She thus takes her imagery to explain and justify her answer.
In contrast, the non-imager will tell no such story. When a non-imager
is asked how he came to his answer, he will presumably answer along the
lines that he ‘just did’. The non-imager’s story, disclaiming reason and
(internalist) justification for his response, is a commonplace in many
contexts. Asked what you ate for dinner last night, you might be fully
capable of forming a detailed episodic memory of the delicious meal
but nonetheless simply answer straight off without bothering. Asked
whether a certain person was at a party last week, you might answer that
she was, despite being entirely incapable of forming a visual (or more
generally perceptual) memory of her. Similarly, we often tell the time,
answer basic arithmetic puzzles or solve simple anagrams without being
able to say any more about how we reached the answer other than that
it just came to us or popped into our head.
What this reveals is that whilst matched success at an imagery task
may implicate the very same underlying subpersonal information
processing story with respect to representational imagery, it may mask
a marked contrast between two quite different accounts of a subject’s
personal-level psychology. On the first account, a desire to answer a
certain question leads the subject to engage in a certain complex imagi-
native activity the product of which is in turn taken to provide and
justify an answer to the question. On the second account, a desire to
answer a question yields an answer directly, an answer which the giver
cannot justify (except indirectly). In terms of task success both stories
are in one sense alike: the same answers are arrived at. And insofar as the
pattern of answers and response times is the same, it is plausible that a
shared information processing story is implicated. However, there is a
crucial difference between the stories with regard to the existence and
availability of reasons and self-understanding.
Such differences in the availability of reasons will, according to some
theorists, entail substantial epistemological differences. For example,
whilst on a reliabilist or other suitably externalist picture, the non-im-
ager may well count as knowing the relevant answer in a given imagery
task, on the kind of internalist or ‘phenomenal mentalist’ view defended
by Smithies (in ch. 6 of this volume), it is questionable whether the
non-imager does know the answer since lacking a phenomenally based
justification. However, even if we are sceptical of insisting that the reli-
able non-imager does not know, it is clear that there is an important
cognitive difference which needs marking between imagers who under-
stand how they reach their answer from the inside and non-imagers
who do not.
If this is the right way to think about the key difference between
imagers and non-imagers, the difference is not merely at the level of
verbal report. The difference is in the presence and availability of reasons.
In this light, we can see that there is an ambiguity in Schwitzgebel’s
claim that ‘people whose imagery is mostly conscious ought to perform
somewhat differently on cognitive tasks than people whose imagery
292 Ian Phillips
is largely unconscious’ (Schwitzgebel 2011, 51). In one sense, people
whose imagery is mostly conscious do not perform very differently:
the experimentalist records the same pattern of answers in the relevant
tasks. In another sense, however, they do perform very differently. For,
considered at the personal level, the performances of the imager and
non-imager are grounded and justified in fundamentally different ways.
Insofar as we care only about task performance in the first, ‘objective’
sense, conscious imagery is a ‘cognitive “luxury”’ (Neisser 1967, 157).
But as can be seen by considering the differences in performance in the
second sense, conscious imagery is no more an epiphenomenon than
the icing on the proverbial cake. Both may lack value from a certain
task-focused or nutritional perspective, but both clearly have signifi-
cance from a broader cognitive or hedonic perspective, respectively.27
An important issue for investigation is the geography of this broader
cognitive perspective. This is a task suited in part for philosophical
investigation, as is already clear from the brief mention made above
of disputes concerning internalism and externalism and the connec-
tion between reasons and knowledge. Another dimension along which
to consider the wider significance of conscious imagery is in the affec-
tive lives of subjects. Imagers are likely to think of their capacity for
conscious imagery as the potential source of positive and negative
affect – for instance in fantasy and fearful apprehension. Relatedly, there
are obvious differences between the ways in which non-imagers and
imagers relate to the past, given the essential involvement of imagery in
conscious episodic memory. (These differences need not emerge in tasks
which test only whether the subject can answer certain factual questions
about the past.) Possibly reflecting both these themes, there is some
evidence of correlations between aversions and vivid imagery (Dadds
et al. 2004) and between post-traumatic stress disorder and vivid imagery
(Jelinek et al. 2010). This is not the place to explore these matters. What
needs underlining here is that there is a potentially rich psychological
landscape of differences to explore between those possessed of conscious
imagery and those lacking it once we look beyond the narrow confines
of mental imagery tasks.

5 Conclusion

I began with a puzzle concerning the apparent lack of correlation
between objective task performance in imagery tasks and reported
imagery. In response to this puzzle I made three key suggestions. First,
I proposed that we distinguish between representational and experiential
Lack of Imagination 293
(i.e., conscious) imagery, crediting all those who produce a certain
pattern of task responses with representational imagery but crediting
only professed imagers with experiential imagery. Second, I argued that,
even if we take the lack of correlation between experiential imagery
and task performance to show that experiential imagery lacks signifi-
cance, it does not follow that consciousness in general does. To show
this I appealed to one putative role for perceptual consciousness (viz.,
as a condition of demonstrative thought) which could not be a role for
imagistic consciousness. Finally, I questioned whether we should in fact
conclude that experiential imagery lacks significance on the grounds
that it does not correlate with task performance in imagery tasks. Against
this, I suggested that the significance of experiential imagery may be
found only when one considers subjects’ performances from their own
point of view, in terms of their reasons, justification and understanding
of them.28

Notes
1. Galton’s work on mental imagery is summarized in Galton (1883/1907,
57–128). See also James (1890), which credits Fechner with the initial
recognition of ‘great personal diversity’ in imagery (vol. II, ch. 18, 50–1). It should
be noted that Galton’s specific suggestion that ‘men of science’ are typically
poor imagers is neither properly supported by his data nor likely true (see
Brewer and Schommer-Aikins 2006).
2. For mental rotation tasks, see Shepard and Metzler (1971) and Shepard and
Cooper (1982). Thomas (2012) contains an excellent introductory supple-
ment. For visual scanning tasks, see Kosslyn (1973), Kosslyn et al. (1978),
Kosslyn (1980), Finke and Pinker (1982) and Borst et al. (2006). See the next
section for references to work on individual differences.
3. See studies based on Betts’s (1909) Questionnaire upon Mental Imagery
(QMI), revised by Sheehan (1967), and Marks’s (1973) Vividness of Visual
Imagery Questionnaire (VVIQ).
4. For criticism of Schwitzgebel on this score, see Humphrey (2011).
5. For Schwitzgebel, this is part of a larger theme concerning the unreliability of
naive introspection. However, it is worth noting that even if we agreed with
Schwitzgebel that underlying differences on a Galtonian scale were implau-
sible, we would not need to blame introspection per se. Another possibility
is that there is wide variation in our understanding of the concept of visual
imagery (cf. Flew 1956; Thomas 1989). Thus, two subjects accurately intro-
specting the same kind of experience might differ as to whether they think
such experience counts as visual imagery, just as, notoriously, two people
might differ in their understanding of what counts as arthritis. Schwitzgebel
shows awareness of this concern but, for reasons he doesn’t make explicit,
‘doubt[s] that the optimist about introspective accuracy can find much
consolation’ in it (2011, 53). These issues are closely connected to
long-standing methodological concerns about imagery questionnaires which go
back to Galton’s own subjects (see Burbridge 1994, 461, and for more general
discussion, e.g., Kaufmann 1981).
6. Schwitzgebel also lists subsequent studies of reported imagery and objective
task performance from 1995 to 2009. Of these, Schwitzgebel notes that 10
report a positive relationship, 9 a ‘mixed’ relationship and 21 no signifi-
cant relationship. Since the publication of Schwitzgebel’s book, at least two
further studies have been published: Nouchi (2011) reports a positive correla-
tion; Palmiero et al. (2011) obtain mixed results.
7. Thomas also comments that ‘even where reproducible correlations have
been found, it has often proven difficult to make much theoretical sense
out of them’ (2009, 450). His example is the fact that vivid imagers (as meas-
ured by Marks’s VVIQ) appear to be worse than non-vivid imagers at recalling
specific colour shades. Thomas is presumably referring to Heuer et al. (1986)
and Reisberg et al. (1986). See also Reisberg and Leak (1987).
8. Thus Dean and Morris: ‘little or no correlation has been found between
measures based upon subjective reports of the conscious experiences of
imagery and experimental tasks or spatial tests that are explained in terms
of their use or manipulation of mental images’ (2003, 246). And Borst and
Kosslyn: ‘subjective ... ratings only sporadically predict performance in visu-
ospatial tasks. ... For example, researchers have found little or no correlation
between rated vividness of imagery (using the Vividness of Visual Imagery
Questionnaire, VVIQ, Marks, 1973) and the performance on spatial abili-
ties tests’ (2010, 2031–2032). See these papers and Schwitzgebel (2011) for
references.
9. Schwitzgebel cites Ernest (1977), Richardson (1980) and Paivio (1986, 117).
10. Reisberg et al. (2003) direct us to Katz (1983, 42), Kerr and Neisser (1983,
213–4) and Kosslyn et al. (1985, 196). For discussion and further references,
see also Faw (2009, 2–3), who quotes Levine et al. (1985, 391): ‘The objec-
tive abilities can be considered descriptions of the subjective phenomenon’.
Chara and Verplanck argue against the validity of Marks’s VVIQ precisely
on the basis that, ‘If the construct validity is to be supported, we should
expect ... better performance by self-reported “good imagers” than “poor
imagers” on a test of pictorial recall’ (1986, 916).
11. This assumption substantially oversimplifies the empirical picture. One
important issue is that imagery is a complex multicomponent process. E.g.,
there may be separate spatial and object-based processes (Kozhevnikov et al.
2005), as well as many distinct processes involved in generating, manipu-
lating, inspecting and maintaining imagery (Kosslyn 1994, esp. chs 9–10).
Traditional imagery questionnaires thus plausibly fail to distinguish the
processes involved in different tasks, and better questionnaires may yet
reveal significant relationships between introspected imagery and perform-
ance on different tasks (see Dean and Morris 2003; Kosslyn and Borst 2010).
Moreover, some of the processes at work in standard imagery tasks are likely
not exclusively visual. This raises the possibility that certain tasks are solved
by ‘non-imagers’ using non-visual but nonetheless imagery-based strategies;
e.g., haptic or motor imagery strategies, perhaps exploiting a common spatial
code. For relevant background empirical discussion, see Reisberg et al. (1986),
Heuer et al. (1986), Slee (1980) and also the literature on imagery in the
congenitally blind; e.g., Marmor and Zaback (1976), Carpenter and Eisenberg
(1978), Kerr (1983), Zimler and Keenan (1983).
12. Schwitzgebel notes that similar arguments go back at least to Angell (1910).
13. This is how Thomas (unpublished) describes the view of Marks (1986, 237).
Cf. discussions of Anton’s syndrome and anosognosia more generally.
14. In fact, it is not just that Schwitzgebel gives us little clue as to why the relevant
population should be so strongly and consistently committed to enormous
mistakes about their inner lives. Schwitzgebel does not explicitly indicate
who he thinks is wrong; i.e., what a ‘normal’ stream of conscious imagery
consists of. It is most natural to think that his view is that we all have some
modest degree of imagery and so error is especially pronounced amongst
professed super-imagers (since they do no better than ‘normal’ imagers) and
professed non-imagers (since they do no worse).
15. For this reference and several others, I am much indebted to Thomas (n.d.),
as well as to the hugely helpful Thomas (2012).
16. Although clear advocates of the ‘no function’ view are thin on the ground,
the issue was the source of a heated controversy in scientific circles a century
ago. In addition to Thorndike and Winch, see Fernald (1912, 135–138). There
is also the infamous case of the behaviourist Watson who seems to have been
motivated to deny his own mental imagery, declaring imagery ‘a mental
luxury (even if it really exists) without any functional significance’ (1913,
175). For discussion, see Faw (2009, 7–10) and Thomas (2012).
17. E.g., Marks writes, ‘Imagery, by definition, is a mental experience and verbal
reports therefore provide a necessary, albeit fallible, source of evidence’ (1983,
245). See also Richardson (1969). Thomas (2012) also mentions McKellar
(1957) and Finke (1989).
18. An early example is Neisser (1970), who proposes a distinction between
‘imagery as an experience’ and ‘imagery as a process’. Thomas (2012), whom
I follow here, puts the distinction in terms of experiential and representational
notions of mental imagery. Thomas himself does not endorse the distinc-
tion and indeed elsewhere suggests that ‘It is of the very nature of imagery
to be conscious’ (2003, §3.3). As Bence Nanay pointed out to me, a notion
of imagery which does not imply conscious awareness is plausibly at play in
van Leeuwen 2011, as well as explicitly in Nanay (2010).
19. That said, one’s own conscious mental imagery may bias one’s position in
the debate (Reisberg et al. 2003).
20. Having made his distinction between imagery as experience and imagery as
process, Neisser seems happy to embrace this consequence.
21. Schwitzgebel also argues that resolving the puzzle of individual differences
along the lines developed in this section will force us to think of everyone’s
underlying imagery (conscious or not) as equivalent in detail to that reported
in ‘the grandest self-assessments’. But if that were so, Schwitzgebel suggests
that ‘it is surprising that we don’t all perform substantially better on mental
rotation tasks, visual memory tasks, and the like’ (2011, 51). This objection
is problematic. The grandest self-assessments typically compare imagery to
ordinary perception. In this light, we ought to ask: how well should we expect
subjects to do in rotation or memory tasks equipped with imagery as rich as
perception? Yet then we need to ask: how rich is that? This is a famously
controversial issue, an issue where the gap between our objective capacities
(e.g., in short-term recall, discrimination and identification tasks) and
self-descriptions of richness is highly disputed. Arguably, then, no distinctive
issue about imagination arises here.
22. Note that Dennett construes Block’s case here in a very particular way,
describing the stimuli as self-cueing, as opposed to the super-blindseer as self-
prompting. Block’s super-blindseer, at least as initially described, does not
know to self-prompt whenever something interesting appears in his scotoma but,
apparently, must constantly be prompting himself along all the dimensions
that he is capable of guessing about. These are very different situations.
23. As with other examples in this literature (e.g., Marcel’s thirsty subject who
fails to reach for a glass of water located in his/her scotoma), we have to
allow poetic licence: ordinary blindseers lack the form perception to identify
lemons (and glasses of water). But ‘that thing’ will do.
24. On which, see Zeman et al. (2010). What Zeman et al. call ‘blind imagi-
nation’ apparently involves the ‘successful use of an alternative strategy to
perform imagery tasks in the absence of the experience of imagery’ (2010,
145). One piece of evidence for this is that their subject does not exhibit the
standard reaction time effect in the mental rotation task. Acknowledging the
size of this empirical assumption, I assumed above that this is not true of
professed non-imagers.
25. This is one major disanalogy between non-imagers and ordinary blindseers,
though one that is hardly surprising given the fact that imagery substantially
involves visual processing areas which are presumably intact in sighted, non-
clinical non-imagers.
26. Cf. Shepard and Metzler (1971, 701–702), and Kosslyn et al. (1978, 47, 51),
though Kosslyn et al. in particular suggest that such introspective reports
should be treated with scepticism.
27. It may still be objected that since the answers given are caused for both
imager and non-imager by the same underlying representations, any role
for conscious imagery is screened off. However, this objection assumes that
we treat two answers as the same just when they are scored similarly by the
experimentalist. As discussed, whilst this might be an entirely reasonable
perspective to take as an experimentalist, it is not the only perspective, and it
is not the subject’s natural perspective. From the subject’s perspective, there
is a fundamental difference between an answer which is internally justified
and one which is simply reliably produced by a subpersonal mechanism. If
we focus on the probability of an internally justified answer, conscious imagery
is not screened off by the presence of representational imagery.
28. I’m very grateful for comments and questions from Justin Fischer, Anil
Gomes, Liz Irvine, Nick Jones, Rory Madden, Mike Martin, Bence Nanay,
Robert O’Shaughnessy, Declan Smithies, Mark Sprevak, Lee Walters and (as
always and especially) Hanna Pickard.

References
Abelson, R. P. (1979) ‘Imagining the Purpose of Imagery’. Behavioural and Brain
Sciences 2: 548–549.
Angell, J. R. (1910) ‘Methods for the Determination of Mental Imagery’.
Psychological Monographs 13: 61–108.
Betts, G. H. (1909) The Distribution and Functions of Mental Imagery. New York:
Teachers’ College, Columbia University.
Blakemore, C. (2005) ‘In Celebration of Cerebration’. Lancet 366 (9502):
2035–2057.
Block, N. (1995) ‘On a Confusion about a Function of Consciousness’. Behavioral
and Brain Sciences 18 (2): 227–247.
Borst, G., and Kosslyn, S. M. (2010) ‘Individual Differences in Spatial Mental
Imagery’. Quarterly Journal of Experimental Psychology 63: 2031–2050.
Borst, G., Kosslyn, S. M., and Denis, M. (2006) ‘Different Cognitive Processes in
Two Image-Scanning Paradigms’. Memory and Cognition 34: 475–490.
Brewer, W. F., and Schommer-Aikins, M. (2006) ‘Scientists Are Not Deficient in
Mental Imagery: Galton Revised’. Review of General Psychology 10: 130–146.
Burbridge, D. (1994) ‘Galton’s 100: An Exploration of Francis Galton’s Imagery
Studies’. British Journal for the History of Science 27: 443–463.
Campbell, J. (2002) Reference and Consciousness. Oxford: Oxford University Press.
Carpenter, P. A., and Eisenberg, P. (1978) ‘Mental Rotation and the Frame of
Reference in Blind and Sighted Individuals’. Perception and Psychophysics 23:
117–124.
Chalmers, D. J. (1996) The Conscious Mind. Oxford: Oxford University Press.
Chara, P. J., Jr., and Verplanck, W. S. (1986) ‘The Imagery Questionnaire: An
Investigation of its Validity’. Perceptual and Motor Skills 63: 915–920.
Dadds, M., Hawes, D., Schaefer, B., and Vaka, K. (2004) ‘Individual Differences in
Imagery and Reports of Aversions’. Memory 12(4): 462–466.
Dean, G. M., and Morris, P. E. (2003) ‘The Relationship Between Self-reports of
Imagery and Spatial Ability’. British Journal of Psychology 94(2): 245–273.
Dennett, D. C. (1991) Consciousness Explained. New York: Little, Brown.
Dennett, D. C. (1995) ‘Commentary on Block’s “On a Confusion about a Function
of Consciousness”’. Behavioral and Brain Sciences 18(2): 252–253.
Ernest, C. H. (1977) ‘Imagery Ability and Cognition: A Critical Review’. Journal of
Mental Imagery 2: 181–216.
Faw, B. (2009) ‘Conflicting Intuitions may be Based on Differing Abilities’. Journal
of Consciousness Studies 16 (4): 45–68.
Fernald, M. R. (1912) ‘The Diagnosis of Mental Imagery’. Psychological Monographs
14 (58): 1–169.
Finke, R. A. (1989) Principles of Mental Imagery. Cambridge, MA: MIT Press.
Finke, R. A., and Pinker, S. M. (1982) ‘Spontaneous Imagery Scanning in Mental
Extrapolation’. Journal of Experimental Psychology: Learning, Memory and Cognition
8: 142–147.
Flew, A. (1956) ‘Facts and “Imagination”’. Mind 65 (259): 392–399.
Galton, F. (1880) ‘Statistics of Mental Imagery’. Mind 5: 301–318.
Galton, F. (1883/1907) Inquiries into Human Faculty and Its Development. London:
Dent.
Heuer, F., Fischman, D., and Reisberg, D. (1986) ‘Why Does Vivid Imagery Hurt
Colour Memory?’ Canadian Journal of Psychology 40 (2): 161–175.
Humphrey, N. (2011) ‘Know Thyself: Easier Said Than Done’. New York Times, 29
July 2011. Review of Schwitzgebel 2011.
James, W. (1890) Principles of Psychology. New York: Dover.
Jelinek, L., Randjbar, S., Kellner, M., Untiedt, A., Volkert, J., Muhtz, C., and
Moritz, S. (2010) ‘Intrusive Memories and Modality-Specific Mental Imagery
in Posttraumatic Stress Disorder’. Zeitschrift für Psychologie/Journal of Psychology
218 (2): 64–70.
Kanai, R., and Rees, G. (2011) ‘The Structural Basis of Inter-individual Differences
in Human Behaviour and Cognition’. Nature Reviews Neuroscience 12: 231–242.
Katz, A. (1983) ‘What Does It Mean to be a High Imager?’ In J. Yuille (ed.) Imagery,
Memory and Cognition. Hillsdale, NJ: Erlbaum.
Kaufmann, G. (1981) ‘What Is Wrong with Imagery Questionnaires?’ Scandinavian
Journal of Psychology 22: 59–64.
Kerr, N. (1983) ‘The Role of Vision in ‘Visual Imagery’ Experiments: Evidence
from the Congenitally Blind’. Journal of Experimental Psychology: General 112:
265–277.
Kerr, N. H., and Neisser, U. (1983) ‘Mental Images of Concealed Objects: New
Evidence’. Journal of Experimental Psychology: Learning, Memory and Cognition 9:
212–221.
Kosslyn, S. M. (1973) ‘Scanning Visual Images: Some Structural Implications’.
Perception and Psychophysics 14 (1): 90–94.
Kosslyn, S. M. (1980) Image and Mind. Cambridge, MA: Harvard University Press.
Kosslyn, S. M. (1994) Image and Brain: The Resolution of the Imagery Debate.
Cambridge, MA: MIT Press.
Kosslyn, S. M., Ball, T. M., and Reiser, B. J. (1978) ‘Visual Images Preserve Metric
Spatial Information: Evidence from Studies of Image Scanning’. Journal of
Experimental Psychology: Human Perception and Performance 4: 47–60.
Kosslyn, S. M., Brunn, J., Cave, K., and Wallach, R. (1985) ‘Individual Differences
in Mental Imagery Ability: A Computational Analysis’. Cognition 18: 195–243.
Kozhevnikov, M., Kosslyn, S., and Shephard, J. (2005) ‘Spatial Versus Object
Visualizers: A New Characterization of Visual Cognitive Style’. Memory &
Cognition 33: 710–726.
Levine, D. N., Warach, J., and Farah, M. (1985) ‘Two Visual Systems in Mental
Imagery: Dissociation of ‘What’ and ‘Where’ in Imagery Disorder Due to
Bilateral Posterior Cerebral Lesions’. Neurology 35: 1010–1018.
Marks, D. F. (1973) ‘Visual Imagery Differences in the Recall of Pictures’. British
Journal of Psychology 64: 17–24.
Marks, D. F. (1983) ‘Mental Imagery and Consciousness: A Theoretical Review’.
In A. A. Sheikh (ed.) Imagery: Current Theory, Research and Applications, 96–130.
New York: Wiley.
Marks, D. F. (1986) ‘The Neuropsychology of Imagery’. In D. F. Marks (ed.) Theories
of Image Formation, 225–241. New York: Brandon House.
Marmor, G. S., and Zaback, L. A. (1976) ‘Mental Rotation by the Blind: Does
Mental Rotation Depend on Visual Imagery?’ Journal of Experimental Psychology:
Human Perception and Performance 2: 515–521.
Martin, M. G. F. (2006) ‘On Being Alienated’. In J. Hawthorne and T. Gendler
(eds) Perceptual Experience, 354–410. Oxford: Oxford University Press.
McKellar, P. (1957) Imagination and Thinking. London: Cohen and West.
McKelvie, S. J. (1995) ‘The VVIQ as a Psychometric Test of Individual Differences
in Visual Imagery Vividness: A Critical Quantitative Review and Plea for
Direction’. Journal of Mental Imagery 19: 1–106.
Nagel, T. (1974) ‘What Is it Like to Be a Bat?’ Philosophical Review 83: 435–456.
Nanay, B. (2010) ‘Perception and Imagination: Amodal Perception As Mental
Imagery’. Philosophical Studies 150: 239–254.
Neisser, U. (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.
Neisser, U. (1970) ‘Visual Imagery as Process and as Experience’. In J. S. Antrobus
(ed.) Cognition and Affect, 159–178. Boston: Little, Brown.
Nouchi, R. (2011) ‘Individual Differences of Visual Imagery Ability in the Benefit
of a Survival Judgment Task’. Japanese Psychological Research 53 (3): 319–326.
Paivio, A. (1986) Mental Representations: A Dual Coding Approach. New York:
Oxford University Press.
Palmiero, M., Cardi, V., and Belardinelli, M. O. (2011) ‘The Role of Vividness
of Visual Mental Imagery on Different Dimensions of Creativity’. Creativity
Research Journal 23 (4): 372–375.
Reisberg, D., Culver, L. C., Heuer, F., and Fischman, D. (1986) ‘Visual Memory:
When Imagery Vividness makes a Difference’. Journal of Mental Imagery 10:
51–74.
Reisberg, D., and Leak, S. (1987) ‘Visual Imagery and Memory for Appearance:
Does Clark Gable or George C. Scott have Bushier Eyebrows?’ Canadian Journal
of Psychology 41 (4): 521–526.
Reisberg, D., Pearson, D. G., and Kosslyn, S. M. (2003) ‘Intuitions and
Introspections about Imagery: The Role of Imagery Experience in Shaping an
Investigator’s Theoretical Views’. Applied Cognitive Psychology 17: 147–160.
Richardson, A. (1969) Mental Imagery. London: Routledge and Kegan Paul.
Richardson, J. T. E. (1980) Mental Imagery and Human Memory. London:
Macmillan.
Roessler, J. (2009) ‘Perceptual Experience and Perceptual Knowledge’. Mind 118
(472): 1013–1041.
Schwitzgebel, E. (2011) Perplexities of Consciousness. Cambridge, MA: MIT Press.
Sheehan, P. W. (1967) ‘A Shortened Form of Betts’ Questionnaire Upon Mental
Imagery’. Journal of Clinical Psychology 23: 386–389.
Shepard, R. N., and Cooper, L. (1982) Mental Images and Their Transformations.
Cambridge, MA: MIT Press.
Shepard, R. N., and Metzler, J. (1971) ‘Mental Rotation of Three-dimensional
Objects’. Science 171: 701–703.
Shoemaker, S. (1994) ‘Self-Knowledge and “Inner Sense”’. Philosophy and
Phenomenological Research 64: 249–314.
Slee, J. (1980) ‘Individual Differences in Visual Imagery Ability and the Retrieval
of Visual Appearances’. Journal of Mental Imagery 4: 93–113.
Smithies, D. (Forthcoming) ‘The Phenomenal Basis of Epistemic Justification’. In
M. Sprevak and J. Kallestrup (eds) New Waves in Philosophy of Mind. London:
Palgrave Macmillan.
Thomas, N. J. T. (1989) ‘Experience and Theory as Determinants of Attitudes
towards Mental Representation: The Case of Knight Dunlap and the Vanishing
Images of J. B. Watson’. American Journal of Psychology 102 (3): 395–412.
Thomas, N. J. T. (2003) ‘Mental Imagery, Philosophical Issues About’. In L. Nadel
(ed.) Encyclopedia of Cognitive Science, vol. 2, 1147–1153. London: Nature.
Thomas, N. J. T. (2009) ‘Visual Imagery and Consciousness’. In W. P. Banks (ed.)
Encyclopedia of Consciousness. Oxford: Academic Press / Elsevier, vol. 2, 445–457.
A version with citations and extensive notes is at www.imagery-imagination.
com/viac.htm.
Thomas, N. J. T. (2012) ‘Mental Imagery’. In The Stanford Encyclopedia of Philosophy
(Winter 2012 edn), E. N. Zalta (ed.) http://plato.stanford.edu/archives/win2012/
entries/mental-imagery/.
Thomas, N. J. T. (N.d.) ‘Are there People who do not Experience Imagery? (And
why does it matter?)’. www.imagery-imagination.com/non-im.htm (accessed
on 18 November 2013).
Thorndike, E. L. (1907) ‘On the Function of Visual Images’. Journal of Philosophy,
Psychology, and Scientific Methods 4 (12): 324–347.
Tye, M. (1991) The Imagery Debate. Cambridge, MA: MIT Press.
Van Leeuwen, N. (2011) ‘Imagination is Where the Action Is’. Journal of Philosophy
108 (2): 55–77.
Watson, J. B. (1913) ‘Psychology as the Behaviorist Views It’. Psychological Review
20: 158–177.
Weiskrantz, L. (1997) Consciousness Lost and Found. Oxford: Oxford University
Press.
Weiskrantz, L., Warrington, E. K., Sanders, M. D., and Marshall, J. (1974) ‘Visual
Capacity in the Hemianopic Field Following a Restricted Occipital Ablation’.
Brain 97: 709–728.
Winch, W. H. (1908) ‘The Function of Images’. Journal of Philosophy, Psychology
and Scientific Methods 5(13): 337–352.
Zeman, A. Z., Della Sala, S., Torrens, L. A., Gountouna, V. E., McGonigle, D. J.,
and Logie, R. H. (2010) ‘Loss of Imagery Phenomenology with Intact Visuo-
spatial Task Performance: A Case of “Blind Imagination”’. Neuropsychologia 48
(1): 145–155.
Zimler, J., and Keenan, J. (1983) ‘Imagery in the Congenitally Blind: How Visual
are Visual Images?’ Journal of Experimental Psychology: Learning, Memory &
Cognition 9: 269–282.
15
A Beginner’s Guide to Group Minds
Georg Theiner

Conventional wisdom in the philosophy of mind holds (1) that minds
are exclusively possessed by individuals and (2) that no constitutive part
of a mind can have a mind of its own. For example, the paradigmatic
minds of human beings are in the purview of individual organisms and
associated closely with the brain; no parts of the brain that are constitu-
tive of a human mind are considered capable of having a mind.1 Let us
refer to the conjunction of (1) and (2) as standard individualism about
minds (SIAM). Put succinctly, SIAM says that all minds are singular
minds. This conflicts with the group mind thesis (GMT), understood as
the claim that there are collective types of minds that comprise two or
more singular minds among their constitutive parts. The related concept
of group cognition refers to psychological states, processes or capacities
that are attributes of such collective minds.
During the late nineteenth and early twentieth centuries, the GMT
notoriously served as a rallying point for the nascent study of group
phenomena.2 By analyzing the behaviour of groups in mentalistic terms,
its advocates sought to emphasize that groups can function as agents in
their own right, with emergent features that cannot be reduced to the
actions of individuals. However, to its own detriment, the emergence of
group minds was often expressed with biological metaphors that were
borrowed from the vitalist tradition. The vitalists believed that life is
the product of a mysterious organic force (vis vitalis) that is fundamen-
tally different from the physico-chemical principles that govern inani-
mate things. Because of this close association, the demise of vitalism as
a result of the modern evolutionary synthesis in biology meant that the
concept of group minds or group agency was equally banished from the
realm of respectable scientific discourse.

In recent years, the once-discredited concept of group cognition has
shown definite signs of a comeback. A fair amount of recent work in
the cognitive and social sciences has been premised on the idea that
groups can constitute distributed cognitive systems (Hutchins 1995;
Stahl 2006), collective information-processing systems (Larsen and
Christensen 1993; Hinsz et al. 1997), adaptive decision-making and
problem-solving units (Wilson 1997, 2002; Goldstone and Gureckis
2009), collective memory systems (Wegner 1987; Walsh and Ungson
1991) and organizational minds (Sandelands and Stablein 1987; Weick
and Roberts 1993), or that they can have collective intelligence
(Lévy 1999; Surowiecki 2004), happiness (Haidt et al. 2008), creativity
(Hargadon and Bechky 2006) and emotions (Huebner 2011). At the same
time, some philosophers have argued that groups can be the bearers of
collective intentionality – beliefs and intentions (Gilbert 1989; Ludwig
2007), collective knowledge (Gilbert 2004), collective guilt and remorse
(Gilbert 2002) – and function as unified rational agents that meet the
mark of personhood (List and Pettit 2011).
What all of the above studies have in common – the list could be greatly
expanded – is that they attribute one or more psychological properties
to certain kinds of social groups. In this sense, we may consider them
contemporary versions of the GMT, even though they are all compatible
with a broadly physicalistic world view. However, despite this common
ground, there are important differences between their respective views of
why some psychological property should count as a group level phenom-
enon. If we want to understand these differences, it is critical to develop
a shared ‘lingua franca’ we can use to taxonomize different variants of
group cognition. It is the goal of this chapter to contribute to this larger
enterprise. It is organized as follows. First, I elaborate on the distinction
between singular and group minds and draw a distinction between hive
cognition, collective cognition and socially distributed cognition. Then
I briefly clarify the concept of mind we can plausibly take to be at play
in the present debate. In the rest of the chapter, I sketch an analysis of
the emergent character of socially distributed cognition that is free from
the metaphysical shackles of vitalism. I close with a few remarks on the
idea that there are multiple levels of cognition.

15.1 Group minds vs. singular minds

In his discussion of the GMT, Wilson (2001, S265) distinguishes two
ways in which minds – or mental properties more generally – have been
conceived as emergent properties of groups. According to the multilevel
A Beginner’s Guide to Group Minds 303

conception, the group as a whole has a collective mind that coexists,
albeit on a different level, with the singular minds of the members who
compose the group. This creates the potential for genuine conflicts
between individual and group cognition. Wilson associates the multi-
level conception of the GMT with views that were popular among
many of the foundational figures in social psychology (e.g., McDougall
1920) and sociology (e.g., Durkheim 1898/1953). By contrast, the group-
only conception refers to the mind of a collective which lacks individual
members with minds of their own. A historically prominent version of
the group-only conception can be traced to the work of Wheeler (1920),
the Harvard entomologist who coined the term ‘superorganism’ to
describe the collective behaviour of eusocial insects in such places as
beehives or ant colonies. My goal in this section is to raise a few compli-
cations for Wilson’s binary distinction, which will lead me to suggest
that the two conceptions are neither mutually exclusive nor jointly
exhaustive variants of group cognition.
I begin by unpacking a bit further the content of the multilevel
conception with the help of a contemporary example. Gilbert (1989)
has argued that groups can be the plural subjects of beliefs or intentions
if their members are jointly committed to assume the respective mental
state as a collective body. For Gilbert, the formation of a joint commit-
ment forges a special kind of non-summative unity of groups to which
all members are bound simultaneously and interdependently. Joint
commitments in this sense give rise to distinctive group obligations and
entitlements to which its members ought to adhere, including epistemic
norms to which plural subjects can be held accountable. Extrapolating
from Wilson’s discussion (esp. S265 and S269), Gilbert’s account of collec-
tive belief exemplifies the multilevel conception insofar as (1) belief is a
type of psychological state that can in principle be instantiated by both
individuals and groups; (2) by becoming a member of a group, an indi-
vidual enters into certain psychological interactions with the beliefs of
the group; (3) as a result of these interactions, the individual, qua group
member, accepts a commitment to endorse the beliefs of the group (even
though she may not personally hold any of the beliefs).
What distinguishes the group-only conception of the GMT from
Gilbert’s case of multilevel cognition? The main difference, according
to Wilson, is that we do not have a hierarchy of mental states, realized
at two distinct levels of reality, that can interact with one another. The
suggested lack of singular minds is perhaps most salient in the case of
insect societies. Despite the very limited cognitive abilities of single ants,
colonies are collectively able to achieve complex tasks such as foraging
304 Georg Theiner

for food, allocating resources and selecting appropriate nest sites, often
using close to optimal strategies (Sasaki and Pratt 2011). The collective
behaviour of the hive is also a multilevel phenomenon insofar as the
adaptive success of the hive crucially depends on complex feedback
mechanisms that socially mediate the behaviour of individual ants
(Moussaid et al. 2009). But importantly, at least in the present context,
we do not consider what single ants do (e.g., foraging, scouting, signal-
ling) as manifestations of a singular mind. This differs from the multi-
level conception of the GMT: for example, in Gilbert’s account, what
people think is a constitutive aspect of their role as members of a plural
subject. In contrast, our ground for attributing group-only cognition to
the hive lies in specific feats of collective information processing (e.g.,
sorting, reaching consensus, optimizing, obeying specific rationality
principles) of which single ants or bees are congenitally incapable.
The absence of singular minds in individual ants or bees raises the
question whether the cognitive abilities of hives should really count as
genuine cases of group cognition. Eusocial species are traditionally char-
acterized by a strict reproductive division of labour (e.g., with sterile
work castes), overlapping generations living together in the colony at
any given time and cooperative brood care (Wilson 1971). Because of
this highly specialized division of biological labour, it has been argued
that single bees or ants function more like body parts of a larger, func-
tionally integrated unit which has many characteristics of a normal
biological organism (Hölldobler and Wilson 2008). Rather than speak
of a group mind, it may thus be more appropriate to consider a beehive
as a special kind of singular mind, albeit one that is spatially distributed
over many (insect) bodies. Let us reserve the term hive cognition for this
limiting case of group cognition, to distinguish it from collective cogni-
tion according to the multilevel conception.
The distinction between hive cognition and collective cognition can
be brought out further with a fascinating study of ‘colony-level cogni-
tion’ in ants and honeybees (Marshall et al. 2009). Using mathematical
models of optimal decision making, Marshall and colleagues discovered
striking information-processing parallels between the migration deci-
sions made by house-hunting colonies and the neural decision making
which occurs in the primate visual cortex during motion discrimination
tasks. In both systems, different subpopulations act as integrators of noisy
information about available decision alternatives, both rely on quorum
sensing, and both can vary their decision thresholds in response to
speed-accuracy trade-offs. Because of the underlying functional analogy
between individual bees and single neurons, which clearly lack a mind
of their own, what Marshall calls ‘colony-level’ cognition would seem
to constitute a hive mind rather than a genuine group mind. Otherwise,
or so it might be argued, we would equally have to consider the ‘collec-
tive’ decision making of the visual cortex in the brain as an instance of
a group mind.
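The shared computational scheme can be made concrete with a toy model. The sketch below is my own illustration, not Marshall and colleagues’ actual model: two noisy accumulators race to a quorum threshold, and raising the threshold buys accuracy at the price of speed – the trade-off that both colonies and cortical circuits are said to exploit. All parameter values are invented for illustration.

```python
import random

random.seed(1)

def decide(threshold, drift=0.1, noise=1.0):
    """Race two noisy accumulators until one reaches the quorum threshold.
    Alternative 0 is the objectively better option (positive drift)."""
    evidence = [0.0, 0.0]
    t = 0
    while True:
        t += 1
        evidence[0] += drift + random.gauss(0, noise)  # better option
        evidence[1] += random.gauss(0, noise)          # worse option
        for i in (0, 1):
            if evidence[i] >= threshold:
                return i == 0, t                       # (correct?, decision time)

def evaluate(threshold, trials=500):
    results = [decide(threshold) for _ in range(trials)]
    accuracy = sum(ok for ok, _ in results) / trials
    mean_time = sum(t for _, t in results) / trials
    return accuracy, mean_time

acc_fast, time_fast = evaluate(threshold=2.0)   # low quorum: quick but sloppy
acc_slow, time_slow = evaluate(threshold=10.0)  # high quorum: slow but accurate

print(f"low threshold:  accuracy {acc_fast:.2f}, mean time {time_fast:.1f}")
print(f"high threshold: accuracy {acc_slow:.2f}, mean time {time_slow:.1f}")
```

Running the simulation shows the trade-off directly: the higher quorum yields markedly better accuracy at a markedly longer decision time, with no change to the underlying mechanism.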
But the distinction is not as clear-cut as it may seem. Our case against
counting beehives as genuine group minds rests on the tacit premise that
at least some aspects of beehive cognition are also true of certain neural
mechanisms which underpin the workings of singular minds. But that
premise may well be too strong. To see why, let us consider a frequently
cited example of collective intelligence found in human societies:
markets. Markets are used almost worldwide today not only to determine
the prices of most assets and commodities but also, increasingly, as
instruments for predicting the outcome of future events,
such as elections, product launches or sporting events. Popularized by
Surowiecki (2004), the ‘wisdom of crowds’ effect refers to the ability of
diverse collections of independent decision makers, acting purely on
the basis of local, specialized information, to solve certain types of intel-
lective problems better, faster and more reliably than any single indi-
vidual, including task-specific experts. What general conditions account
for the formation of crowds that are wiser than the sum of their parts?
Bettencourt (2009) has shown that at a certain level of abstraction, the
very same information-theoretic principles that underlie the successful
foraging behaviour of beehives are also at work in the market pricing of
human societies or the collaborative filtering of online recommenda-
tion systems. In each case, the possibility of collective intelligence stems
from the fact that the aggregation of information from many sources
can produce more information (synergy) or less information (redundancy)
than is contained in the sum of its parts. Generally speaking, informa-
tion aggregation will yield synergistic effects if (1) each contribution is
statistically independent from others and (2) they are not conditionally
independent given the state of the relevant target variable. Condition (1)
requires that the individual behaviour of the participants is not deter-
mined by imitation, herding or any other extraneous forms of coordina-
tion. Condition (2) simply requires a suitable information aggregator
(e.g., individual bees performing a ‘dance’ to indicate the location and
quality of a food source or the social signalling of a market).
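Conditions (1) and (2) can be illustrated numerically. The following sketch is a deliberately simple simulation of my own (all parameter values are invented): independent noisy estimates of a target variable, pooled by a simple mean, yield a far better collective estimate than a typical individual, whereas imitating a single ‘leader’ – a violation of condition (1) – destroys the synergy.

```python
import random
import statistics

random.seed(42)

TARGET = 10.0   # true value of the target variable (e.g., food-source quality)
N = 200         # number of contributors
NOISE = 2.0     # std. dev. of each contributor's private observation noise

def trial_errors():
    # Condition (1) satisfied: each estimate carries independent noise.
    independent = [TARGET + random.gauss(0, NOISE) for _ in range(N)]
    # Condition (1) violated: everyone copies one noisy 'leader' estimate.
    leader = TARGET + random.gauss(0, NOISE)
    herded = [leader + random.gauss(0, 0.1) for _ in range(N)]
    # Condition (2): a suitable information aggregator -- here simply the mean.
    return (abs(statistics.mean(independent) - TARGET),
            abs(statistics.mean(herded) - TARGET))

crowd_errs, herd_errs = zip(*(trial_errors() for _ in range(200)))
print(f"independent-crowd error: {statistics.mean(crowd_errs):.3f}")
print(f"herded-crowd error:      {statistics.mean(herd_errs):.3f}")
```

Averaged over many trials, the independent crowd’s error shrinks roughly with the square root of the group size, while the herded crowd is no more accurate than its single leader.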
Bettencourt’s analysis casts doubt on the above argument for
discounting beehive cognition as an instance of group cognition. Our
reason, it will be recalled, was based on the observation of certain
information-processing parallels between beehives and primate brains;
that is, the neural machinery of singular minds. But now, we have seen
that there are also important information-theoretic commonalities
between beehives and human markets. Clearly, there is a tension. If
the collective information-processing performed by beehives does not
amount to group cognition because of its commonalities with brain-
bound singular cognition, then by the same token the collective infor-
mation processing performed by markets should not count as group
cognition either, because of its commonalities with that of beehives.
Perhaps the way out of this conundrum is to concede that the intel-
ligence of markets is indeed more closely comparable to that of a hive
than that of a group, even though the markets we usually speak of are
obviously collections of people with singular minds. Still, for certain
taxonomic or explanatory purposes that are explicitly driven by, for
example, information-theoretic concerns, we may decide to bracket
that difference. Yet another question remains: what’s so special about
the information processing of neural networks in the brain that it
earns the privilege to be called ‘singular’ cognition? If physicalism is
true about the human mind and mental states or processes are at least
token-identical with neural states or processes, shouldn’t we conclude
that singular cognition is really just another name for hive cognition
performed by the brain?
Alternatively, we could try to discount beehives and markets as cases
of group cognition by refining the concept of a social group. Collections
of individuals come in many forms, but not all of them are sufficiently
integrated to constitute a group (Forsyth 2006). The task at hand, then,
would be to shore up the concept of a social group in a way that is
narrow enough to exclude beehives and markets (not to mention brains)
but broad enough to accommodate all forms of what we have dubbed
collective cognition. Perhaps this can be done; but even so, there remains
a potential pitfall I’d like to point out. Consider, for the sake of illustra-
tion, Gilbert’s (1989) aforementioned analysis of collective intention-
ality, which rests on a fairly idiosyncratic conception of what constitutes
a group. For Gilbert, the concept of a social group really is the concept
of a plural subject. Her account of group formation involves two steps
(ibid., ch. 4): (1) the individuals who are bound to become members of
a group must be conditionally committed to joining forces in doing X
‘as a body’; (2) they must mutually express in conditions of common
knowledge (albeit not necessarily verbally), their commitment to doing
so. Provided that both conditions are satisfied, a joint commitment to
X-ing as a body has been generated, and a plural subject has thereby
come into existence. Plainly, since beehives and markets do not form
plural subjects in this sense, they aren’t genuine Gilbertian groups and,
a fortiori, they cannot possess a group mind.
Whatever the intrinsic merits of Gilbert’s account, the peculiar notion
of a group on which it rests is clearly too narrow to serve as a common
denominator for a comprehensive taxonomy of group cognition. First,
we should acknowledge that the notion of a group, as it is used in
the social sciences, is a complex theoretical term that involves several
different dimensions, such as the modes of social interactions, goals,
social interdependence, organizational structure and social cohesion (cf.
Forsyth 2006, 10–14). Hence it is doubtful that there can be a single
definition of the term group that fits all of its uses. Second, it is a mistake
to think that the relevant psychological aspects of group cognition are
always directly linked to the social factors which determine whether
some collection of individuals does or does not constitute a group. This
is because the degree of ‘groupness’ that is achieved by a collective is not
always directly proportional to its level of cognitive performance.
An interesting study which brings out this point nicely is Weick and
Roberts’s (1993) analysis of so-called high-reliability organizations (such
as aircraft carriers) that require nearly error-free operations around the
clock so as to avoid catastrophic outcomes. Seeking to avoid the mistake
of traditional versions of the GMT, in which ‘the development of mind
is confounded with the development of the group’ (ibid., 374), Weick
and Roberts formulate an original concept of collective mind that can be
applied to but is conceptually disentangled from that of an organization
or of a social system more generally. Following the work of Asch (1952),
Weick and Roberts conceive of an organization as a social system that
emerges from but at the same time shapes and constrains the actions of
individual agents who themselves understand that the system consists of
their interdependent actions and who subordinate their actions accord-
ingly. Like Asch’s work, their conception reflects a recurring theme of
Gestalt psychology that the whole is not only greater than the sum of
its parts but that the nature of the whole can alter the behaviour of its
parts. Building on the concept of heed in the work of Ryle (1949), Weick
and Roberts’s concept of collective mind is then defined in terms of the
amount of heed that is contained in the social patterns by which the
individual actors’ contributions, representations and subordinations are
interrelated. The basic idea of their approach is to explain variations in
organizational performance, such as the likelihood of severe accidents,
in terms of relative variations of collective mind.
Among other things, their analysis reveals the possibility and signifi-
cance of double dissociations between the degree of ‘groupness’ and the
development of a collective mind in a given social system. For instance,
the combination of developed group plus undeveloped collective mind
is found in the dysfunctional phenomenon of groupthink (Janis 1982).
Groupthink refers to a situation in which the desire for harmony and
concurrence has become so dominant among the members of a cohe-
sive in-group that group decisions are made without a critical appraisal
of dissenting voices and without considering all available sources of
evidence. Conversely, the combination of undeveloped group plus
developed collective mind can manifest itself in temporary work groups
such as project teams, airline cockpits and improvising jazz groups as a
kind of ‘non-disclosive intimacy’ that ‘stresses coordination of action
over alignment of cognitions, mutual respect over agreement, trust over
empathy, diversity over homogeneity, loose over tight coupling, and
strategic communication over unrestricted candor’ (Weick and Roberts
1993, 375). As Weick and Roberts point out (ibid.), groups are typically more
heedful in earlier stages of their development. Once interrelating has
become routine, automatic and regularized, organizations concerned
primarily with reliability (as opposed to, say, efficiency) are more prone
to errors unless their patterns of interrelatedness are ‘reshuffled’ midway
through their development (Gersick 1988).
Let us summarize the state of our discussion thus far. First, extreme
cases of group-only or hive cognition, such as are found in eusocial
insects, need not conflict with SIAM if we consider insect hives a non-
standard kind of biological individual (Bouchard and Huneman 2013).
Second, even the multilevel conception, according to which singular
minds and group minds co-exist at ontologically distinct levels, is
potentially compatible with part (1) of SIAM if we consider group agents
supraindividuals in their own right (Schmitt 2003). Still, if there are
supra-individuals that are constituted by members with singular minds,
this would suffice to refute part (2) of SIAM. Third, the neat distinction
between the multilevel and group-only versions of the GMT is some-
what complicated by the fact that markets and brains share at least some
collective information-processing properties with biological hives.
This last fact points, I think, to a distinct species of group cognition
that is group-only in that it represents a collective cognitive achievement
that is more than just the sum of singular minds yet is not fully multi-
level insofar as it is constituted, in some organizationally complex way,
by the minds of individuals. Francis Jehl, one of Thomas Edison’s long-
time assistants, once remarked, ‘Edison is in reality a collective noun and
means the work of many men’ (cited after Hargadon and Bechky 2006,
484). An interpretation of Jehl’s remark that would lend support to the
GMT in this distinctive sense is that the steady stream of creative inven-
tions that are legally attributed to Edison should in fact be attributed to
the collaborative cogitations of the engineers who worked with Edison
at Menlo Park (Millard 1990). This collaborative type of group cogni-
tion is also known as socially distributed cognition (e.g., Wegner 1987;
Hutchins 1995; Wilson 2002; Stahl 2006). Socially distributed cognition
is neither a singular nor a collective cognitive phenomenon in the clas-
sical sense, both of which put mind and cognition within the purview
of (singular or collective) individuals; rather, it lies somewhere between.
In what sense, then, does it support the GMT? Before we can answer this
question, we must further clarify the relationship between group minds
and group cognition more generally.

15.2 Speaking of minds

Current proponents of the GMT deliberately avoid the group mind idiom.
This is at least partly because our everyday concept of a mind is closely
associated with the possession of consciousness and a privileged first-
person awareness of one’s mental life. But what is it like to be a group?
Can groups experience the collective equivalent of a headache? If we
apply the ‘headache criterion’ for the existence of minds (Harnad 2005),
it seems implausible that groups can have minds (let alone that we knew
about it). Experimental evidence suggests that people readily ascribe the
functional components of agency to a collective entity like Google but
balk at the idea that groups can have phenomenally conscious mental
states (Huebner, Bruno and Sarkissian 2010). Thus, speaking of ‘group
minds’ tout court blurs the distinction between science and cybernetic
fantasies about a technologically driven emergence of collective forms
of consciousness (Heylighen 2011).
The absence of phenomenal consciousness or any other property
X that is deemed to be a central feature of human minds invites the
popular objection that groups cannot have minds because they lack X.3
There are two standard responses to this objection. For one, it seems
suspiciously anthropocentric to insist that our criteria for what consti-
tutes a mind must exactly match those we take to indicate a human
mind. Attributions of minds that stop short of having the full gamut of
properties that human minds have are commonplace. Newborn human
infants, individuals with severe psychological impairments, non-human
animals, divine creatures and certain kinds of machines are widely
recognized as possessing minds of a certain sort, manifesting some but
not all of the psychological states or abilities characteristic of minds of
normally functioning human adults. The burden of proof, then, lies with
those who insist that attributions of mentality must be an all-or-nothing
affair and do not admit of degrees. Alternatively, it remains open to
proponents of group cognition to settle for the thesis that groups can
constitute collective cognitive systems instead. The explicit use of a
theoretical term that is borrowed from contemporary cognitive science
has the advantage of avoiding the busy associations of our vernacular
conception of minds while leaving us with a wide enough variety of
psychological predicates that have been fruitfully used to characterize
the operation of cognitive systems. In line with the studies cited at the
beginning, a ‘big-tent’ approach to cognition would encompass familiar
folk-psychological predicates, as in discussions of collective belief,
intention or agency, the ascription of psychological capacities such as
memory, decision making or general intelligence, or more theoretically
driven notions of cognition, such as the generation and coherent use
of representations, information processing, adaptive problem solving
or sense making (cf. Theiner and O’Connor 2010; Theiner, Allen and
Goldstone 2010).
Adopting the second response, one may still rightfully ask whether
there isn’t a non-arbitrary threshold that groups must cross in order to
constitute a genuine collective cognitive system. To take an extreme case,
might we consider a group that exhibits only a single type of psycholog-
ical property as a ‘minimal’ cognitive system, as has been suggested
by Wilson (2004, 290)? Against the validity of such a minimalist crite-
rion, Rupert (2005, n. 4) has objected that minimal-minded groups that
otherwise fail to meet the majority of independently established diag-
nostic features of minds would not warrant a realist construal of the
GMT. In a recent paper, Rupert (2011) returns to this issue in the context
of discussing an argument for group cognition previously proposed by
Theiner, Allen and Goldstone (2010). That argument was based, among
other things, on a study of collective path formation in which people
had to travel to a number of randomly selected destinations in a virtual
environment while minimizing travel costs (Goldstone and Roberts
2006). The study had revealed that the emerging trail system reflects
a compromise between people going to the destinations where they
wanted to go and going where others have previously travelled. As an
individually unintended side effect, the group as a whole was frequently
able to solve the problem of finding, at least approximately, the path
that connects the set of destinations using the minimal amount of total
path length. An intriguing feature of the computational model that
was used to analyze the experimental data is that similar processes may
be at work in many more abstract cases of collective path formation,
such as the cultural spread of innovations in communities that build on
and at the same time add new wrinkles to the solutions found by their
predecessors. Challenging our argument, Rupert rejects the premise that
the observed patterns of group behaviour should be considered intelli-
gent on the grounds that ‘an instance of behaviour is intelligent only if
produced by a flexible suite of capacities – one that allows the agent or
subject to respond to a variety of changing conditions, balance various
goals against each other, and so on’ (ibid., 632). Since the groups in the
study fall short on this count, there is no reason to posit group cogni-
tion to explain such a ‘haphazard, and easily disrupted’ (ibid.) form of
group behaviour.
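What ‘the minimal amount of total path length’ amounts to can be made concrete. The sketch below is my own illustration, not Goldstone and Roberts’s computational model: for a set of destinations, the minimum spanning tree, computed here with Prim’s algorithm, is the shortest trail system that keeps every destination connected (setting aside the further savings that Steiner points would allow), and it is markedly shorter than a naive network of direct trails between every pair.

```python
import math
import random

random.seed(7)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mst_length(points):
    """Prim's algorithm: grow the tree from point 0, always adding the
    cheapest edge to a not-yet-connected destination."""
    in_tree = {0}
    total = 0.0
    while len(in_tree) < len(points):
        cost, nxt = min((dist(points[i], points[j]), j)
                        for i in in_tree for j in range(len(points))
                        if j not in in_tree)
        total += cost
        in_tree.add(nxt)
    return total

# Six randomly placed destinations in a 10 x 10 environment.
destinations = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(6)]

# Naive trail system: a separate straight trail between every pair.
naive = sum(dist(p, q) for i, p in enumerate(destinations)
            for q in destinations[i + 1:])
mst = mst_length(destinations)

print(f"all-pairs trail length: {naive:.1f}")
print(f"MST trail length:       {mst:.1f}")
```

The group in the study was not computing a spanning tree explicitly, of course; the point is only that its emergent trail system approximated the optimum that this kind of algorithm defines.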
We do not need to quarrel with Rupert’s point that real-world intelli-
gence implies a certain amount of cognitive and behavioural flexibility.
But contra Rupert, I am confident that many groups of interest will
comfortably surpass whatever non-arbitrary threshold of intelligence
we eventually agree on. First, many paradigmatic group agents such as
firms, parties, courts or trade unions are clearly immune to Rupert’s criti-
cism. Even a modest set of requirements for group agency surely includes
the capacity of a group to form representations of its environment,
to entertain a variety of motivational states and to process them such
that, within feasible limits, it can act rationally (List and Pettit 2011, ch.
1). Second, we should not infer from the standard ‘divide-and-conquer’
methodology of experimental science that there is no underlying unity
to its subject matter. For instance, experimental psychologists tend to
break down the human mind into a bundle of capacities such as percep-
tion, attention and categorization. In any given experiment they rely on
tasks that target only a very specific psychological process. But under-
lying this methodology is the assumption that the mind of a normally
functioning subject is at least in principle capable of exercising most, if
not all, of these psychological traits. I think we should extend the same
courtesy to groups.
As a striking example of cognitive flexibility, Woolley et al. (2010)
found that there is a general collective intelligence factor (‘c-factor’)
which accounts for the performance of small groups on a variety
of cognitive tasks, functioning in much the same way for groups as
Spearman’s g-factor does for individuals. In their study, groups had to
work together on a diversified sample of group tasks which were drawn
from a well-established taxonomy based on the nature of the coordina-
tion processes they require (McGrath 1984). In the experiment, groups
had to solve visual puzzles, engage in brainstorming, make collective
moral judgments and negotiate over limited resources. It turns out that a
single c-factor extracted from the overall performance of each group was
the best predictor of how the same group solved an unrelated criterion
task (such as playing checkers or solving an architectural design task).
Importantly, the suggested c-factor is not strongly correlated with the
average or maximum individual intelligence of group members. Instead,
it is correlated with the average social sensitivity of group members, the
equality in distribution of conversational turn taking and the propor-
tion of females in the group (although the last factor appeared to be
mediated by social sensitivity). Other factors that are often considered
important determinants of group behaviour – group cohesion, personal
motivation and member satisfaction – did not play a significant role.
As the authors suggest, the study provides evidence that ‘the collective
intelligence of the group as a whole has predictive power above and
beyond what can be explained by knowing the abilities of the individual
group members’ (687). The upshot of these considerations, then, is that
we should not quell appeals to group cognition prematurely on the basis
of questionable intuitions about intelligence.
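What it means to extract a single c-factor can be illustrated with a toy simulation (my own construction; every number is invented, and no claim is made about Woolley et al.’s actual data): simulated groups share a latent factor that loads on several tasks, the first principal component of the task scores recovers it, and it predicts a held-out criterion task far better than the groups’ mean member intelligence, which is generated independently of the latent factor here.

```python
import random

random.seed(3)

GROUPS, TASKS = 120, 5

# Latent collective intelligence 'c' and (independent) mean member IQ.
c = [random.gauss(0, 1) for _ in range(GROUPS)]
iq = [random.gauss(0, 1) for _ in range(GROUPS)]

# Five task scores per group, all loading on c, plus a held-out criterion task.
scores = [[0.8 * c[g] + 0.6 * random.gauss(0, 1) for _ in range(TASKS)]
          for g in range(GROUPS)]
criterion = [0.8 * c[g] + 0.6 * random.gauss(0, 1) for g in range(GROUPS)]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# First principal component of the task scores, via power iteration.
col_means = [mean(col) for col in zip(*scores)]
centered = [[s - m for s, m in zip(row, col_means)] for row in scores]
cov = [[mean([row[i] * row[j] for row in centered]) for j in range(TASKS)]
       for i in range(TASKS)]
v = [1.0] * TASKS
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(TASKS)) for i in range(TASKS)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
if sum(v) < 0:                      # fix the arbitrary sign of the component
    v = [-x for x in v]

c_scores = [sum(row[i] * v[i] for i in range(TASKS)) for row in centered]

print(f"r(extracted c-factor, criterion task) = {pearson(c_scores, criterion):+.2f}")
print(f"r(mean member IQ,     criterion task) = {pearson(iq, criterion):+.2f}")
```

The extracted component correlates strongly with the criterion task while mean member IQ does not, mirroring in miniature the dissociation that Woolley and colleagues report.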

15.3 Socially distributed cognition and the ‘entwinement’ thesis

Let us now return to the question of whether socially distributed cogni-
tion supports the GMT. Perhaps the best way to broach our topic is by
contrast with yet another sense of socially ‘shared’ cognition which, as
far as I can tell, does not amount to a genuine type of group cognition.
Consider, by way of an example, the joy and elation that is felt by sports
fans when they watch their favourite team upset a stronger opponent
or the anger and sadness when their team gets beaten. Such emotions
are experienced very strongly, despite the fact that the outcome of the
game has typically no direct bearing on the personal lives of the fans.
They are derived almost entirely from seeing oneself as a part of one’s
favourite team, which imbues the self of the fan with a distinctive sense
of collective identity (Tajfel 1978). In a recent experimental study of this
phenomenon, Seger and Mackie (2007) propose four criteria to distin-
guish what they call ‘truly group level emotions’ from regular individual
emotions. Group-level emotions must (1) be distinct from individual-
level emotions that are merely sparked by more fleeting social interac-
tions; (2) depend on a person’s level of identification with the group; (3)
be socially shared within a group; and (4) contribute to motivating and
regulating intragroup and intergroup attitudes and behaviour. As Seger
and Mackie convincingly demonstrate, people experience different
group-level emotions depending on their contextually relevant intra-
group identification, and this difference reliably predicts group-relevant
action tendencies.
How exactly should we understand Seger and Mackie’s appeal to group-
level emotion? There are at least three possible interpretations on offer.4
The standard individualist move is simply to say that the participation
in various group activities can lead individual members of that group
to experience emotions of a distinctive type that they would otherwise
not have. In this case, one’s group membership is conceptualized as a
special source of input or a context which precipitates a unique range of
emotional experiences. But beyond the role of an external trigger, those
social structures play no constitutive role for the psychological processes
inside the head which underlie those emotions. A less individualistic,
‘externalist’ interpretation would be to grant that the social scaffolding
provided by one’s identification and ongoing reliance on certain group
structures is itself a constitutive aspect of the experience of group-level
emotions. If those structures were not around or would cease to make the
distinct social and psychological contributions established by Seger and
Mackie, people would be unable to experience this type of emotion. The
goal of defending such an interpretation is to challenge the strict demar-
cation between ‘inner’ psychological states and ‘outer’ social processes
which continues to dominate mainstream psychological theorizing.
Dubbed the ‘social manifestation thesis’ (SMT) by Wilson (2001, 2004),
it accords the social relationships between people or between people
and groups a more enduring, active role in sculpting and driving along
the manifestation of certain psychological processes. Consequently,
those group structures should not count as any less constitutive vehi-
cles of cognition than our neural machinery inside the head. Socially
manifested cognitive processes in this sense would still be properties
of individuals rather than of groups, but their total physical realization
stretches beyond the boundary of those individuals and includes their
social (and possibly material) environment. An SMT-friendly interpreta-
tion of Seger and Mackie’s group-level emotions seems eminently plau-
sible whenever the persistence conditions of the group as an enduring
collective entity directly hinge on this kind of psychological engage-
ment. In such a case, the experience of group-level emotions can be
truly said to constitute a functionally indispensable component of a self-
sustaining identity regulation process which continuously loops back
and forth between the social persistence of the group and the psycho-
logical selves of its members.
However, it seems to me that even an SMT-interpretation of Seger and
Mackie’s group-level emotions would still fall short of providing support
for the GMT. It remains an extra step to claim that it is the group as a
whole, rather than its members, that should be viewed as the propri-
etary subject of emotional experience.5 When we count the number
of (socially manifested) group-level emotions that are experienced by
three soccer fans relishing a big win of their favourite team, we count to
three. Contrast this situation with a multilevel version of the GMT: for
example, according to Gilbert, when a three-member group collectively
believes that P, over and above the personal beliefs of its members, we
ought to count four mental states, not three. This is essentially the point
made by Wilson (2001, S265–S266), when he argues that the truth of the
SMT does not logically imply the truth of the GMT. So what does it take
to close the suggested inferential gap? Why think that socially distributed
cognition is a variety of group cognition?
An interesting observation by Wilson (2001, S272) points us towards
an answer to this question. Wilson criticizes the standard attitude
towards multilevel views of selection, in which the relationship between
individual and group-level selection is frequently viewed as a tug of war
between two discrete evolutionary forces pulling in opposite directions.
Instead, he suggests that if individual-level and group-level adaptations
become sufficiently ‘metaphysically entwined’ in a manner akin to the SMT, natural
selection may not be a fine-grained enough mechanism to distinguish
between them. In particular, group cognition and socially manifested
individual cognition would both be parts of a co-evolutionary process
that forms a ‘mutually reinforcing causal loop’ (ibid.) rather than two
opposing forces. Couched in the language of evolutionary biology is, I
think, a key insight that will help us better understand the concept of
socially distributed cognition more generally.
To drive this point home, consider the remarkable ability of Brazilian
fire ants to assume a raftlike formation which can stay afloat on water for
days, allowing the colony to migrate safely to drier land in the event of
rainstorms. Individual ants have a naturally water-repellent exoskeleton,
which, together with their small size, explains why single ants can float
for a while. But how does this work for an entire colony? In a recent
study, Mlot et al. (2011) observed that when an entire clump of ants is
dumped into water, the ants on the bottom of a raft immediately cling
to one another with their claws, mandibles and adhesive pads at the
end of their feet, forming a stable base. The ants on top slowly expand
the edges of the raft until it forms a solid, pancake-shaped surface akin
to a raft. With their hairy bodies all entangled, the ants at the bottom
A Beginner’s Guide to Group Minds 315

of the raft now trap an even bigger layer of air, thereby enhancing the
natural water repellency of their bodies; in addition, the trapped air
allows the bottom ants to breathe and adds buoyancy to the raft. Mlot
and colleagues show that what looks like a fine example of cooperative
behaviour is in fact an emergent result of a ‘random walk’ process on the
level of individual ants who either turn around when they arrive at the
edge of the raft or are forced down by other ants pushing from behind.
What lessons can we draw from this study of collective animal behav-
iour? Surely the term ‘ant raft’ ought to qualify as a collective noun in
at least something like the sense in which ‘Edison’ does. Let us, there-
fore, temporarily bracket the contentious notion of cognition and
consider what it means for the behaviour of individuals and collectives
to become ‘metaphysically entwined’ in a socially distributed fashion.
A philosophically revealing gloss of ants in raft formation is to say that
they are causally coupled so as to form an integrated system with functional
gains. The three main aspects of this characterization can be unpacked as
follows. (1) Two (or more) elements are causally coupled just in case there
are reliable, two-way causal connections between them. For instance,
the ants need to be capable of reversibly attaching to each other and
also of climbing on top of one another so that ants at the edge can
be coerced into ‘cooperative’ behaviour. (2) Two (or more) coupled
elements form an integrated system in situations in which they operate
as a single causal whole – with causes affecting the resultant system as
a whole and the activities of that system as a whole producing certain
effects. A specific example of this would be the enhanced water repel-
lency of ants in raft formation, which changes their fluid dynamics. (3)
An integratively coupled system shows functional gain just when it either
(a) enhances the existing functions of its coupled parts or (b) manifests
novel functions as a whole relative to those possessed by any of its parts.
An example of (a) would be the raft-building talents of individual ants.
An example of (b) would be the colony’s capacity to stay afloat as a
unit, so that the colony can survive. Note that the functional ascrip-
tions manifest in (a) and (b) are ‘metaphysically entwined’: ant colonies
would not survive without the raft-building behaviours of their members,
and individual ants which do not become parts of an ant raft would not
be building rafts. This makes (a) an instance of a socially manifested
trait in the sense of Wilson, whereas (b) is an instance of a genuine
group-level trait. Rather than being conceived as two opposing forces, they
stand in a mutually reinforcing relationship as real and causally relevant
features of a multilevel system. Let’s now apply this characterization to
the analysis of socially distributed human cognition.

4 The emergence of socially distributed cognition

The goal of this section is to refine the sense in which socially distrib-
uted cognition is an emergent group-level phenomenon, albeit in a
way that straddles Wilson’s distinction between multilevel and group-
only conceptions of the GMT.6 I illustrate my analysis with reference
to Larson and Christensen’s (1993) cognitivist analysis of groups as
problem-solving units. As they explicitly state, ‘[w]e refer to group-level
cognitive activity as social cognition, a term that we apply collectively
to those social processes involved in the acquisition, storage, transmis-
sion, manipulation, and use of information for the purpose of creating a
group-level intellective product. In this context, the word ‘social’ is used
to denote how cognition is accomplished, not its content’ (5). Later, they
add that ‘at the group level of analysis, cognition is a social phenom-
enon’ (6). Using a generic information-processing model of problem
solving that has been used to explain the actions of individuals, they
detail a large number of social-interactional processes that ‘help account
for a parallel category of group-level action – the generation of group
decisions and group problem solutions’ (7). For instance, groups first
must identify and conceptualize the problems they have to solve; during
the acquisition stage, groups must decide how to distribute their atten-
tion to certain kinds of information, which is helped by discussing the
informational needs of the group, allocating members’ cognitive and
material resources or lending assistance and backing one another up
as information is being gathered (14). The group-cognitive functions
that are fulfilled by discussions serve to bring problem-relevant infor-
mation to light, influence individual cognitive processes and serve as
mechanisms by which members’ perceptions, judgments and opinions
are combined to generate a single group solution (22).
Intuitively, what makes us refer to the above analysis as an emergent
case of problem solving is the fact that the observed group outcome
does not simply result from the unstructured aggregation of individual
cognition but depends on an organized division of cognitive labour
among its members. More precisely, we can say that the sense in which
socially distributed cognition is emergent can be conceived of as a failure
of ‘aggregativity’ in the sense of Wimsatt (1986).7 Call P(S) an aggrega-
tive cognitive property of a group S, with respect to a decomposition of
S into its members, just in case P(S) is invariant under the following four
conditions: (1) intersubstitutability of members, (2) qualitative similarity
with a change in the number of members, (3) stability under decompo-
sition and reaggregation of members, and (4) no cooperative or inhibitory
interactions among members. Using aggregativity as a benchmark,
emergence can then be defined indirectly in terms of how many of these
four conditions a given P(S) fails to meet. Emergence sensu Wimsatt is
not an all-or-nothing phenomenon: properties of a whole can fail to be
invariant under some of the suggested operations but not others, under
different boundary conditions and within specified levels of perform-
ance tolerance. Aggregative properties are a special, albeit rare case of
fully decomposable system properties (Simon 1962), because their behav-
iour is effectively independent of organizational structure. Collective
cognitive systems which satisfy conditions 1–3 tend to exhibit, at least
in the long run, a modular organization that can often be approximated
as a linear combination of individual members’ cognitive contributions.
Such systems are nearly decomposable, to the extent that the causal inter-
actions within each individual cognitive system are more important in
determining the cognitive behaviour of the group as a whole than are the
causal interactions between individual cognitive systems. For a collective
cognitive system to be minimally decomposable and thus fully emergent
in the sense of Wimsatt, the above situation has to be reversed. In that
case, the individual cognitive systems become so densely coupled, func-
tionally entwined and codependent on one another that their reciprocal,
boundary-crossing social and communicative interactions act as the
primary determinants of group-level cognitive activities. Consequently,
whether a given cognitive property of a group is an emergent phenom-
enon or not is a thoroughly empirical question – not one that can be
decided from the philosophical armchair.
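Wimsatt’s four invariance conditions lend themselves to a simple computational illustration. The following sketch is my own toy example, not drawn from Wimsatt (1986) or Larson and Christensen (1993); the property names and the position-weighting scheme are hypothetical stand-ins for members’ cognitive contributions. A purely additive group property passes the invariance tests, whereas a property that depends on an organized division of labour fails them:

```python
# Toy illustration of Wimsatt-style aggregativity tests. The functions
# and weights are hypothetical; they stand in for 'cognitive
# contributions' only in the loosest sense.

def aggregative_output(members):
    # Purely additive group property: each member contributes
    # independently, so organization is irrelevant.
    return sum(members)

def organized_output(members):
    # A property that depends on an organized division of labour:
    # each contribution is weighted by the member's position
    # (say, the order in which members speak in a discussion).
    return sum(w * m for w, m in enumerate(members, start=1))

def invariant_under_reordering(prop, members):
    # Condition (1): intersubstitutability - swapping members'
    # roles should leave the group property unchanged.
    return prop(members) == prop(list(reversed(members)))

def invariant_under_reaggregation(prop, members, split=2):
    # Condition (3): decomposing the group into subgroups, evaluating
    # the property on each, and recombining should give the same value.
    parts = [members[:split], members[split:]]
    return prop(members) == prop([prop(p) for p in parts])

group = [3, 1, 4, 1, 5]
print(invariant_under_reordering(aggregative_output, group))    # True
print(invariant_under_reordering(organized_output, group))      # False
print(invariant_under_reaggregation(aggregative_output, group)) # True
print(invariant_under_reaggregation(organized_output, group))   # False
```

In this toy setting the additive property behaves aggregatively, while the position-weighted property fails conditions (1) and (3); that failure of invariance is the formal sense in which an organized division of cognitive labour makes a group outcome emergent.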
Emergent cognitive properties of groups which violate conditions
1–4 roughly correspond to the ‘holist’ slogan that group cognition
is ‘more than the sum of its members’. In small-group
research, such an outcome is known as an assembly bonus effect because
‘the group is able to collectively achieve something which could not
have been achieved by any member working alone or by a combina-
tion of individual efforts’ (Collins and Guetzkow 1964, 58; cf. Laughlin
2011). The potential for assembly bonus effects has often been hailed
as the holy grail of group cognition. However, we must be careful not
to overstate the relevance of such a claim. Consider, for example, the
well-documented fact that groups often perform worse than the sum of
their members because they fail to capitalize on the cognitive resources of
their members, including unshared memories (Harris, Peterson and Kemp
2008), creative ideas (Paulus and Brown 2007), unique information
(Stasser and Titus 2003) or the diversity of judgments (Surowiecki 2004).
These results might be taken as the premise of an empirical argument
against group cognition, based on the fact that assembly bonus effects
are relatively rare. But this objection misses the point of our analysis.
The reason why socially distributed cognition can be said to be emer-
gent does not rest on the assumption that group cognition must always
yield optimal results or that group minds must always know more than
the sum of individual minds. Instead, it is meant to indicate the level
of influence that organizational structures and interactive group proc-
esses have on the collaborative production of a group-level cognitive
outcome. In principle, a collaborating group could significantly under-
perform the sum of its parts (e.g., based on a comparison with aggregate
data from nominal groups), and yet its outcome would be classified as
emergent in the sense we have outlined.8

5 Final thoughts

We began our discussion by stating the standard individualist view
(SIAM) that all cognition is that of singular minds. I find this view
implausible. This is not because I advocate a return to vitalism. Quite the
contrary: hasn’t physicalism taught us that the capacity for mentality is,
at bottom, a question of how to configure matter in ways that support
intelligent behaviour? Consequently, if singular cognition can emerge
from the complex interactions of neurons in the brain, shouldn’t physi-
calists warm up more to the possibility that group cognition can simi-
larly emerge from the complex interactions of people in groups? From
the standpoint of complex systems theory, emergent properties are fairly
common in nature. As Theiner, Allen and Goldstone (2010, 383–384)
have pointed out, we should be more puzzled by the inverse possibility
of ‘demergence’ – that is, the discovery of phenomena that pop into
existence at a certain level of organizational structure but then disappear
from higher levels of organization except insofar as they are found in
the parts. From this perspective, the indivisibility argument for mind-
body dualism can also be seen as an argument that the ability to think
is a demergent property. Perhaps the continued popularity of SIAM is
among the lingering effects of dualism. But if I may venture a guess, I
think that emergent cognition will turn out to be far more common in
our universe than demergent cognition.
Another implication of our discussion is that the notion of singular
cognition may well be fuzzier than the standard view assumes. The
biological individual is not a metaphysically sacrosanct boundary of
cognition. Discussing the fallacies of reductionism, Wimsatt (2006, n.
19) once speculated that ‘in the ultimate view, persons might be shaved
or thinned a bit, referring some properties to lower levels of organiza-
tion, but more importantly, referring some properties to higher levels’.
This downward trickle is already at work in cognitive neuroscience,
where attributions of cognitive properties to neural systems that are
proper parts of the brain, all the way down to molecular cognition
(Bickle 2006), have become quite common. At the same time, propo-
nents of the ‘extended mind’ thesis have focused on the incorporation
of tools and artefacts into supraindividual minds that include organisms
among their proper parts (Clark 2008). In this chapter, I have explored
the collective dimension of cognitive extension, in particular the vari-
eties of socially manifested cognition, socially distributed cognition,
collective cognition and hive cognition. They all reveal how cognition
extends upwards beyond the boundary of the biological individual.9

Notes
1. For a classical discussion of this assumption in the context of split-brain
patients, see Nagel (1971).
2. For historical overviews, see, e.g., Allport (1968), Runciman (1997), Bar-Tal
(2000, ch. 2) and Wilson (2004, ch. 11).
3. Variations of this objection are discussed, among other places, by Sandelands
and Stablein (1987, 149), Gilbert (2004, 10), Rupert (2005, §§1–2), Harnad
(2005, passim), and Giere (2004, 768–772).
4. My tripartite distinction closely mirrors the discussion of socially distributed
remembering by Barnier et al. (2008, esp. 37–38).
5. For an argument in favour of group-level emotions in this sense, see Huebner
(2011).
6. For a theoretical comparison of several different ways in which group cogni-
tion is said to be emergent, see Theiner and O’Connor (2010).
7. For an application of Wimsatt’s classification to distributed cognition, see
also Poirier and Chicoisne (2006); for an application to cultural evolution, see
Smaldino (forthcoming).
8. For a detailed discussion of this point in the context of group memory, see
Theiner (2013).
9. I would like to thank Bryce Huebner and Orestis Palermos for helpful
comments on an earlier version of this chapter.

References
Asch, S. (1952) Social Psychology. Englewood Cliffs, NJ: Prentice Hall.
Barnier, A. J., Sutton, J., Harris, C. B., and Wilson, R. A. (2008) ‘A Conceptual
and Empirical Framework for the Social Distribution of Cognition: The Case of
Memory’. Cognitive Systems Research 9 (1): 33–51.
Bar-Tal, D. (2000) Shared Beliefs in a Society: Social Psychological Analysis. Thousand
Oaks, CA: Sage.
Bettencourt, L. M. A. (2009) ‘The Rules of Information Aggregation and Emergence
of Collective Intelligent Behavior’. Topics in Cognitive Science 1: 598–620.
Bickle, J. (2006) ‘Reducing Mind to Molecular Pathways: Explicating the
Reductionism Implicit in Current Cellular and Molecular Neuroscience’.
Synthese 151 (3): 411–434.
Bouchard, F., and Huneman, P., (eds) (2013) From Groups to Individuals: Evolution
and Emerging Individuality. Cambridge, MA: MIT Press.
Clark, A. (2008) Supersizing the Mind: Embodiment, Action, and Cognitive Extension.
New York: Oxford University Press.
Collins, B. E., and Guetzkow, H. (1964) A Social Psychology of Group Processes for
Decision-Making. New York: Wiley.
Durkheim, É. (1953) ‘Individual and Collective Representations’, trans. by D. F.
Pocock. In Sociology and Philosophy, 1–34. Glencoe, IL: Free Press.
Forsyth, D. (2006) Group Dynamics (5th edn). Belmont, CA: Wadsworth.
Gersick, C. G. (1988) ‘Time and Transition in Work Teams: Toward a New Model
of Group Development’. Academy of Management Journal 31: 9–41.
Giere, R. (2004) ‘The Problem of Agency in Scientific Distributed Cognitive
Systems’. Journal of Cognition and Culture 4 (3–4): 759–774.
Gilbert, M. (1989) On Social Facts. London: Routledge.
Gilbert, M. (2002) ‘Collective Guilt and Collective Guilt Feelings’. Journal of Ethics
6 (2): 115–143.
Gilbert, M. (2004) ‘Collective Epistemology’. Episteme 1 (2): 95–107.
Goldstone, R., and Gureckis, T. (2009) ‘Collective Behavior’. Topics in Cognitive
Science 1 (4): 412–438.
Goldstone, R., Jones, A., and Roberts, M. (2006) ‘Group Path Formation’. IEEE
Transactions on System, Man, and Cybernetics, Part A, 36: 611–620.
Haidt, J., Seder, P., and Kesebir, S. (2008) ‘Hive Psychology, Happiness, and Public
Policy’. Journal of Legal Studies 37: 133–156.
Hargadon, A. B., and Bechky, B. A. (2006) ‘When Collections of Creatives Become
Creative Collectives: A Field Study of Problem Solving at Work’. Organization
Science 17 (4): 484–500.
Harnad, S. (2005) ‘Distributed Processes, Distributed Cognizers and Collaborative
Cognition’. Pragmatics & Cognition 13 (3): 501–514.
Heylighen, F. (2011) ‘Conceptions of a Global Brain: An Historical Review’. In L.
E. Grinin, R. L. Carneiro, A. V. Korotayev and F. Spier (eds), Evolution: Cosmic,
Biological, and Social, 274–289. Volgograd: Uchitel.
Hinsz, V. B., Tindale, R. S., and Vollrath, D. A. (1997) ‘The Emerging
Conceptualization of Groups as Information Processors’. Psychological Bulletin
121 (1): 43–64.
Hölldobler, B., and Wilson, E. O. (2008) The Superorganism: The Beauty, Elegance,
and Strangeness of Insect Societies. New York: Norton.
Huebner, B. (2011) ‘Genuinely Collective Emotions’. European Journal for the
Philosophy of Science 1 (1): 89–118.
Huebner, B., Bruno, M., and Sarkissian, H. (2010) ‘What Does the Nation of China
Think About Phenomenal States?’ Review of Philosophy and Psychology 1: 225–243.
Hutchins, E. (1995) Cognition in the Wild. Cambridge, MA: MIT Press.
Janis, I. (1982) Groupthink (2nd edn). Boston: Wadsworth.
Larson, J. R., and Christensen, C. (1993) ‘Groups as Problem-solving Units:
Toward a New Meaning of Social Cognition’. British Journal of Social Psychology
32 (1): 5–30.
Lévy, P. (1999) Collective Intelligence: Mankind’s Emerging World in Cyberspace, trans.
by R. Bononno. New York: Perseus.
List, C., and Pettit, P. (2011) Group Agency: The Possibility, Design, and Status of
Corporate Agents. Oxford: Oxford University Press.
Ludwig, K. (2007) ‘Collective Intentional Behavior from the Standpoint of
Semantics’. Noûs 41: 355–393.
Marshall, J., and Franks, N. (2009) ‘Colony-level Cognition’. Current Biology 19
(10): R395–R396.
McGrath, J. E. (1984) Groups: Interaction and Performance. Englewood Cliffs:
Prentice-Hall.
Millard, A. J. (1990) Edison and the Business of Innovation. Baltimore: Johns Hopkins
University Press.
Mlot, N. J., Tovey, C. A., and Hu, D. L. (2011) ‘Fire Ants Self-assemble into
Waterproof Rafts to Survive Floods’. Proceedings of the National Academy of
Sciences of the U.S.A. 108 (19): 7669–7673.
Moussaid, M., Garnier, S., Theraulaz, G., and Helbing, D. (2009) ‘Collective
Information Processing and Pattern Formation in Swarms, Flocks, and Crowds’.
Topics in Cognitive Science 1 (3): 469–497.
Nagel, T. (1971) ‘Brain Bisection and the Unity of Consciousness’. Synthese 22:
396–413.
Paulus, P. B., and Brown, V. R. (2007) ‘Toward More Creative and Innovative Group
Idea Generation: A Cognitive-social Motivational Perspective of Brainstorming’.
Social and Personality Compass 1 (1): 248–265.
Poirier, P., and Chicoisne, G. (2006) ‘A Framework for Thinking about Distributed
Cognition’. Pragmatics & Cognition 14 (2): 215–234.
Runciman, D. (1997) Pluralism and the Personality of the State. Cambridge:
Cambridge University Press.
Rupert, R. (2005) ‘Minding One’s Cognitive Systems: When Does a Group of
Minds Constitute a Single Cognitive Unit?’ Episteme 1 (3): 177–188.
Rupert, R. (2011) ‘Empirical Arguments for Group Minds: A Critical Appraisal’.
Philosophy Compass 6 (9): 630–639.
Ryle, G. (1949) The Concept of Mind. London: Hutchinson.
Sandelands, L. E., and Stablein, R. E. (1987) ‘The Concept of Organization Mind’.
In S. Bacharach and N. DiTomasco (eds), Research in the Sociology of Organizations,
135–161. Greenwich, CT: JAI Press.
Sasaki, T., and Pratt, S. C. (2011) ‘Emergence of Group Rationality from Irrational
Individuals’. Behavioral Ecology 22 (2): 276–281.
Schmitt, F. (2003) ‘Joint Action: From Individualism to Supraindividualism’.
In F. Schmitt (ed.), Socializing Metaphysics, 129–166. Oxford: Rowman and
Littlefield.
Simon, H. A. (1962) ‘The Architecture of Complexity’. Proceedings of the American
Philosophical Society 106: 467–482.
Smaldino, P. (forthcoming) ‘The Cultural Evolution of Emergent Group-level
Traits’. Behavioral and Brain Sciences.
Smith, E. R., Seger, C. R., and Mackie, D. M. (2007) ‘Can Emotions be Truly Group
Level? Evidence for Four Conceptual Criteria’. Journal of Personality and Social
Psychology 93: 431–446.
Stahl, G. (2006) Group Cognition: Computer Support for Building Collaborative
Knowledge. Cambridge, MA: MIT Press.
Stasser, G., and Titus, W. (2003) ‘Hidden Profiles: A Brief History’. Psychological
Inquiry 14 (3–4): 304–313.
Surowiecki, J. (2004) The Wisdom of Crowds. New York: Anchor.
Tajfel, H. (1978) Differentiation Between Social Groups: Studies in the Social Psychology
of Intergroup Relations. London: Academic Press.
Theiner, G. (2013) ‘Transactive Memory Systems: A Mechanistic Analysis of
Emergent Group Memory’. Review of Philosophy and Psychology 4 (1): 65–89.
Theiner, G. (2013) ‘Onwards and Upwards with the Extended Mind: From
Individual to Collective Epistemic Action’. In L. Caporael, J. Griesemer and
W. Wimsatt (eds) Developing Scaffolds. Vienna Series in Theoretical Biology,
191–208. Cambridge, MA: MIT Press.
Theiner, G., Allen, C., and Goldstone, R. (2010) ‘Recognizing Group Cognition’.
Cognitive Systems Research 11: 378–395.
Theiner, G., and O’Connor, T. (2010) ‘The Emergence of Group Cognition’. In A.
Corradini and T. O’Connor (eds), Emergence in Science and Philosophy, 78–117.
New York: Routledge.
Walsh, J. P., and Ungson, G. R. (1991) ‘Organizational Memory’. Academy of
Management Review 16: 57–91.
Wegner, D. M. (1987) ‘Transactive Memory: A Contemporary Analysis of the
Group Mind’. In B. Mullen and G. R. Goethals (eds), Theories of Group Behavior,
185–208. New York: Springer Verlag.
Weick, K., and Roberts, K. (1993) ‘Collective Mind in Organizations: Heedful
Interrelating on Flight Decks’. Administrative Science Quarterly 38: 357–381.
Wheeler, W. M. (1920) ‘The Termitodoxa, or Biology and Society’. Scientific Monthly
10: 113–124.
Wilson, E. O. (1971) The Insect Societies. Cambridge, MA: Belknap Press.
Wilson, D. S. (1997) ‘Incorporating Group Selection into the Adaptationist
Program: A Case Study Involving Human Decision Making’. In J. A. Simpson
and D. T. Kenrick (eds), Evolutionary Social Psychology. Mahwah, NJ: Erlbaum,
345–386.
Wilson, D. S. (2002) Darwin’s Cathedral: Evolution, Religion, and the Nature of
Society. Chicago: University of Chicago Press.
Wilson, R. (2001) ‘Group-level Cognition’. Philosophy of Science 68 (supp.):
262–273.
Wilson, R. (2004) Boundaries of the Mind: The Individual in the Fragile Sciences –
Cognition. Cambridge: Cambridge University Press.
Wimsatt, W. C. (1986) ‘Forms of Aggregativity’. In M. G. Grene, A. Donagan,
A. N. Perovich and M. V. Wedin (eds), Human Nature and Natural Knowledge,
259–291. Dordrecht: Reidel.
Wimsatt, W. C. (2006) ‘Reductionism and its Heuristics: Making Methodological
Reductionism Honest’. Synthese 151 (3): 445–475.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., and Malone, T. W. (2010)
‘Evidence for a Collective Intelligence Factor in the Performance of Human
Groups’. Science 330 (6004): 686–688.
Index

access internalism, 99, 113–20 121nn.18/22, 153, 154, 157,


acquaintance, 46, 51n.9, 162–6, 170, 159, 169, 175, 176, 176n.2,
173, 174, 176n.2, 178n.13, 290 178nn.9/12/15, 179n.16, 200, 282
action, potential, theory, 215–7, change blindness, 187–8
223n.3 clairvoyance, 101, 104, 106, 114,
active externalism, 83–4, 153, 165, 120n.9
168, 171, 174, 175, ch.8 passim cognition,
adverbialism, 38, 43, 44, 164, 166 boundaries, 185, 201–2, 313, 317,
architecture, 318–9
massively modular, 245, 259n.7 central, 244–5
Armstrong, David, 5, 51n.5, 120n.3 collective, 302, 304–12, 317, 319
attention, 106, 125, 192, 193, 195, distributed, 302, 304, 309, 312–18,
311 319
autonomy, extended, 79, 83, 87, 91, 95, 200–2,
of special sciences, 146–8, 149n.14 319
of concepts, 248, 250–2, 259n.9 emergent, 316–8, 319n.7
awareness, conscious, 285, 287–90, group, ch.15 passim
295n.18 hive, 302–9
singular, 306, 318
Bayesian, epistemology, 57, 244 cognitive,
Bennett, Karen, 32 neuroscience, 222, 227, 229, 262–3,
blindsight, 102, 103–8, 109, 117–19, 264–6, 269, 271, 276n.3, 319
120n.19, 287–9 phenomenology, 196
BOLD signal, 264–71, 275 science, 102, 116, Part II passim eps.
ch.9
Carnap, Rudolf, 73n.6 compatibilism, 61
categorization, 61, 237n.11, 244, 311 completeness of the physical, 22,
causal, 30–1,
exclusion, 22–3, 29–33, 35, computation, 40, 102, 110, 116, 136,
36nn.8/9, 133, 210–1 144–5, 193, 247, 310–11
overdetermination, 30–3, 35, computer, 128, 131, 138, 140–1, 144,
36nn.7/9/10 262
power, ch.7 passim conceivability, 4, 7, 9–10, 15, 19n.9,
causal theory of reference, 68–9, 22–9, 35, 170, 179n.16
73n.12, concept,
causation, 29–33, 36nn.8/10, 81, 174, mental, 5–8, 12–17, 18n.2, 46
175, 196, 203, 235n.5 natural kind, 33, 34, 51n.9, 69–70,
Chalmers, David, 22, 23–9, 31, 78, 168, 201, 229, 235n.3
34, 35n.1, 36nn.3/5/6, 71, philosophical, 54, 65
73n.13, 74nn.18/19, 83–4, opaque, 10, 15
86, 90, 95nn.6/9, 120nn.1/4, transparent, 10, 15

323
324 Index

concept – continued substance, 12, 21, 39


translucent, 15–16
functional, 13, 16 Ebbinghaus illusion, 232–3, 236 n.10
phenomenal, 105 epiphenomenalism, 22, 279, 280–6,
conceptual analysis, ch.14 passim
intuitive, 54, 55, 59, 62–71, 74 n.18 ethics, 158, 226
pragmatic, 55, 58–67, 70, 72, 73 Evans, Gareth, 68, 69, 110, 253–4
nn.7/9/10, 74nn.15/16/17 expectation, 191, 198
naturalized, 55, 59, 68–70 explanation,
consciousness, chemical, 28, 254, 301
phenomenal, 22–9, 36n.5, 98–9, mechanistic, 38, 45, 50n.1, 129–36,
102, 105, 108–11, 113–20, 141–2, 146, 148, 149, 201–2
120n.1, 153–4, 160, 174, 187–91, naturalistic, 39, 44–6, 50, ch.10
233, 262, 309 passim, ch.11 passim
perceptual, 280, 284, 287, 288, 293 extended mind, ch.5 passim, see also
imagistic, 283–93 extended cognition
access, 105–7, 117, 233 extromission, 162–3, 165, 167, 173
metacognitive, 105–7, 118
constitution, 27 false positive, 269–70, 272, 273, 275,
contextualism, 71 276n.5
corkscrew, 129–36, 141–2, 145, 149 n.9 free-energy principle, 194
counterfactuals, 32, 86, 98, 100, 212, fMRI, 262, 263, 269–71
215, 216, 251–2 Fodor, Jerry, 34, 36n.11, 68, 69, 96n.7,
120n.2, 125, 126, 146, 175,
data, first-person, 196–9, 178n.15, 188, 223, 244–5, 262,
Davidson, Donald, 39, 109, 176n.2 function, biological, 146, 155, 160–1,
defeater, 173, 174, 177n.7
undercutting, 161 functionalism, analytic, 4–8, 11–15, 34
rebutting, 161
demonstrative, general linear model, 265–6
knowledge, 288–9 Generality Constraint, 253–5
reference, 78, 290 Gettier cases, 63, 67
thought, 280, 284, 287, 293 Goldilocks principle, 210
Dennett, Daniel, 68, 102, 195, 197, Grice, Paul, 212
199, 203, 282, 288, 289, 296 n.22 group, agency, minds, 301–19, ch.15
Descartes, Rene, ch.1 passim, 39, 40 passim
descriptive theory of reference, 54,
55, 63–5, 73n.12 habituation, 256
designator, hallucination, 6, 43–4, 45, 48–9, 154,
non-rigid, 5 155, 157, 162, 164–6, 176n.3,
rigid, 9, 34, 94 Haslanger, Sally, 60
determinism, 60–65 Hempel, Carl, 217–18
disjunctivism, 43–4, 45, 162, hierarchy of thought, 242, 243–7,
Dretske, Fred, 39, 62, 120nn.2/3/4, 255–8, 303
121n.18, 153–4, 155, 160, 161–2, Hodgkin–Huxley model, 215–17,
171, 173–6, 176n.1, 177n.7, 219–20, 223n.3
178n.15, 188, 235n.2
dualism, 158, 177–8n.8, 318 identity theory, 39, 40, 50n.3, 156,
property, 12, ch.1 passim, 51n.3 170, 178n.15
Index 325

imagery, Lewis, David, 5–7, 18nn.3/5/6, 30–2,


conscious, 280, 285–92, 295n.14, 69, 71, 73n.13, 176n.2, 213, 218
296n.27 Leibniz, Gottfried W., 24–7, 34,
experimental, ch.13 passim 35n.2, 36n.4
mental, ch.14 passim
reported, 279, 280–1, 294n.6 materialism, 4, 8, 23–9, 157–8, 161,
representational, 286, 291, 293, 169, 170–2, 174, 177n.7, 179n.16
296n.27 type-B, 28–9
inattentional blindness, 38, ch.3 measurement, 186, 196–9, 207, 212
sec.1 passim, 106, 187 mechanism, 31–3, 34, 77–9, 83, 87,
individualism, 81, 301 104, ch.7 passim, 197–8, 200–2,
induction, 104, 109, 145, 243 215–6, 223n.3, 219, 282–3,
inscrutability, 279, 280–7, 289 314–5, 316
intensions, 58, 73n.4 memory, 79, 90, 10 0, 101, 119, 131, 138,
primary, 23–26 144, 2 0 0–1, 242–3, 302, 319n.8
secondary, 23–26 episodic, 278, 291–2
intentionalism, 111–12, 120n.4, iconic, 187–8
121nn.18/21/22 sensory, 187–91
strong, 155 mentalism, ch.6 passim, 208, 217
tracking, ch.8 passim phenomenal, ch.6, sec.3, 4 passim
introspection, 113–19, 153, 175, 177, access, 105
198, ch. 11 passim, 282–4, 293n.5 introspective, 115–16
Inverted Earth, 160–4, 178n.10 metaphysics of mind, Part 1 passim
Millikan, Ruth, 56, 62, 72n.1, 120n.2,
Jackson, Frank, 62–6, 70, 71, 72, 230, 236n.8
73n.14, 74nn.15/18, 82, 91, 164 mind-body problem, ch.1 passim,
justification, 159, 167, 290–3 ch.2 passim, 126
epistemic, 36n.12, ch.6 passim mindreading, 263–4, 268
non-inferential, 103 mirror neuron, 269
models, 35, 145, 193–5, 218–23, 244,
Kim, Jaegwon, 22, 31, 36n.8, 126, 245, 247, 254, 262, 244–7, 304
130, 133, 213, 217 modularity, 34, 244–5, 259n.7,
kinds, 267–8, 275, 317
mental, 34, 35, 40, 42, 48 monism,
physical, 21–4, 30–1, 36n.8 Russellian, 23–4, 36n.3
psychological, 22–3, 29–31, 34, Müller-Lyer illusion, 232, 236n.10
36n.8, 255–8 mutual manipulability, 201
knowledge, tacit, 102, 110–1, 116
Kornblith, Hilary, 68 Nagel, Thomas, 18, 283, 319n.1
Kriegel, Uriah, 34, 46–8, 51nn.8/10, natural selection, 194, 314
52n.15, 119, 153, 156, 157, 159, necessity,
167, 168, 176nn.2/4, 177nn.6/8, metaphysical, 26–35, 90, 158–9
178n.9 nomological, 27–33, 35, 83, 156
Kripke, Saul, 21, 28, 68, 69, 73n.12 neuroimagery, ch.13 passim
Kuhn, Thomas, 57 neurophenomenology, 196
neuroscience, 11, 16–17, 19n.19, 56,
law, 144, 176, 185, 200, 210–11, 214,
neurophysiology, 167–8 215–7, 221–2, 223n.3, 227–9,
psychophysical, 163–7, 177n.8 245, 262–6, 269, 271–6, 319
Newtonian mechanics, 219
nominalism, 154
null hypothesis, ch.13 passim

observer-relativity, 130
Occam’s Razor, 28, 31
ontological commitment, ch.10 passim

Papineau, David, 19nn.10/13, 62
parity principle, 87, 200
perception, 13, 24, 38, 42, 44, 107, 111–12, 119, 154, 162, 193, 227–9, 235n.2, 237n.11, 243–5, 249–50, 290, 295n.21, 296n.23, 316
perceptual defence, 197
persistence,
  informational, 189–90
  visible, 189
PET, 264
phenomenal,
  character, ch.6 passim, 161, 169–70
  content, 188
  externalism, ch.8 passim
  intentionality program, 153–4, 175
  internalism, ch.8 passim
  localism, 172–5
phenomenology, 4, 34, 121n.19, ch.8 passim, 197
  cognitive, 196
  perceptual, 227
philosophy,
  of biology, 200, 202
  of science, ch.9 passim, ch.10 passim
physicalism, ch.1 passim, ch.2 passim, ch.3 passim, 262, 302, 306, 318
  a posteriori, 8–15
  a priori, 5–9
  non-reductive, 21, 31–2, 50n.3, 210–11
pragmatic representation, ch.11 passim
properties,
  causal, 94, 213–20
  emergent, 302, 318
  extrinsic, 5, 12
  functional, 11–13, 16–17
  intrinsic, 112, 155, 159, 164, 166, 172
  mental, 11–17, 80–1, 125–6, 302
  neurophysiological, 12–13, 16–17
  perceptible, 166
  phenomenal, 111–12, ch.8 passim
  physical, 11, 13, 39, 126
  relational, ch.8 passim, 250
  spatial, 154, 159
psychology,
  developmental, 107
  folk, 56, 228, 234
  Gestalt, 307
  social, 303
Putnam, Hilary, 73n.12, 80, 83, 95n.10, 125, 126, 128, 174, 209, 210, 221

Quine, Willard V. O., 68, 110, 203, 228

rationalism,
  modal, 35, 179n.16
rationality, 154, 176n.2, 193, 304
realism,
  naïve, 51n.9, 153, 162–8, 173, 175–6, 178n.13, 179nn.16/17
realization, 27–8, 34–5, 82, 85, 86, 91–4, 313
  dimensioned view, 132–4, 141
  flat view, 131–3
  multiple, 40, ch.7 passim
reasoning, 103, 105, ch.12 passim
reduction, 10, 19n.10, 49, 112, 125–6, 146–8, 156, 158, 176n.1, 196, 203, 224
  theoretical, 35, 222
reductionism, 112, 125–6, 146–7, 176n.1, 217, 318
reductive externalism, ch.8 passim
reliabilism, 100–1, 114–5, 120nn.3/7, 291
Russell, Bertrand, 23–4, 64, 167

Schwitzgebel, Eric, 196, 198, 199, 280–6, 291–2, 293nn.4/5, 294nn.6/8/9, 295nn.12/14/21
science,
  higher-level, 33, 147
  special, 33, 35, ch.7 passim, 262
scientific theories,
  axiomatic view, 208, 217–23, 223nn.1/5
  semantic view, ch.10 passim
Searle, John, 34, 230
semantic,
  externalism, 13–15
  internalism, 45
  theories, 41, ch.4 passim
sense datum, 42, 163–6, 173–5, 177n.8, 178n.13
Shoemaker, Sydney, 91, 92, 94, 117, 129, 132, 153, 169, 170–1, 175, 282, 283
signal detection theory, 197–8
significance testing, ch.13 passim
Sperling, George, 188–90
Stanley, Jason, 71
Strawson, Peter, 64, 121n.20
super-blindsight, 106–7, 117–20, 120n.14, 287–9
supervenience, 35, 49, 77, 81–2, 85–93, 95n.11, 99, 153, 159, 163, 165, 172–3, 178n.13
Swampman, 63, 64, 160

testimony, 100–1, 104, 107, 119, 212
thought experiment, 34, 99, 160
Twin Earth, 63, 73n.12, 77, 95n.8, 112, 179n.16
two-dimensional semantics, 35

van Fraassen, Bas, 211, 219–20
verbialism, ch.3 passim
vitalism, 301–2, 318
