Mark Sprevak
School of Philosophy, Psychology and Language, University of Edinburgh, UK
and
Jesper Kallestrup
School of Philosophy, Psychology and Language, University of Edinburgh, UK
Selection and editorial matter © Mark Sprevak and Jesper Kallestrup 2014
Chapters © Individual authors 2014
Softcover reprint of the hardcover 1st edition 2014 (ISBN 978-1-137-28671-0)
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2014 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN 978-1-137-28672-7 ISBN 978-1-137-28673-4 (eBook)
DOI 10.1057/9781137286734
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
Contents
Preface viii
Notes on Contributors x
5 Entangled Externalisms 77
Mark Sprevak and Jesper Kallestrup
Index 323
Series Editors’ Preface
The aim of the New Waves in Philosophy series was to gather the young and
up-and-coming scholars in philosophy to give their view of the subject
now and in the years to come and to serve a documentary purpose –
that is, ‘this is what they said then, and this is what happened’. These
volumes provide a snapshot of cutting-edge research that will be of
vital interest to researchers and students working in all subject areas of
philosophy. Our goal was to have a New Waves volume in every one of
the main areas of philosophy, and with this volume on the philosophy
of mind, we believe that this goal has been achieved. Accordingly, this
volume is the final book in this series. The principles that underlie the
New Waves in Philosophy series will live on, however, in the new Palgrave
Innovations in Philosophy series.
Preface
Notes on Contributors
since 2012, and he was awarded the Stanton Prize by the Society for
Philosophy and Psychology in 2013.
Philip Goff
that plays the pain role in humans and can infer from this information
that Jennifer is in pain. In either case, the mental facts can be deduced
from the physical facts.
The second meditation provides the resources for a decisive refutation of both of these forms of analytic functionalism. By the end of the
second meditation I have doubted the existence of my body and my
brain and of the entire physical world around me. For all I know for
certain, my apparent experience of all these things might be an especially vivid hallucination instigated by an omnipotent evil demon. This
demon might have brought me into existence just a moment ago – with
false memories of a long history and expectations of a similar future –
and may destroy me a moment hence. I discover that the only thing the
demon cannot be deceiving me about is my own existence as a thinking
thing: no matter how much the demon is deceiving me, I must exist as
a thinking thing in order to be deceived.
At the end of this guided meditation, when I have doubted the existence of anything physical whilst at the same time enjoying the certain
knowledge that I exist as a thinking thing, I find I am conceiving of
myself as a pure and lonely thinker: a thing that has existence only in the
present moment, and that has no characteristics other than its present
mode of thought and experience.4 The fact that I can conceive of myself
as a pure and lonely thinker is inconsistent with the analytic functionalist analysis of mental concepts. For the straightforward analytic
functionalist, it is a priori that something has a given mental state if
and only if it has the higher-order state of having some other state that
plays the relevant causal role. However, a pure and lonely thinker has
no states other than the mental states themselves: its mental states are
not realized in anything more fundamental. If straightforward analytic
functionalism is true, a pure and lonely thinker is inconceivable. And
yet a pure and lonely thinker is not inconceivable; the second meditation guides us to its conception.
For the Australian analytic functionalist, it is a priori that something
is in pain if and only if it has the state that plays the pain role in its
population. But a pure and lonely thinker does not have a population;
it is alone in its world. If Australian analytic functionalism were true, a
pure and lonely thinker would be inconceivable. Yet by the end of the
second meditation we end up conceiving of one.
Lewis does suggest at one point that the population relevant to determining the application of mental concepts might be the concept user’s
population rather than the population of the creature the concept
is being applied to.5 However, when I reach the end of the second
The Cartesian Argument against Physicalism
conceiving of a scenario in which nothing plays the pain role and yet
‘pain’ still has application.
In none of this discussion have we moved from the epistemological
to the metaphysical. Analytic functionalists make certain claims about
mental concepts, claims that have implications for what it is coherent
to suppose. Those claims are inconsistent with the state of conceiving
we end up in at the end of the second meditation. We are able to
refute analytic functionalism by refuting its epistemological elements.
Descartes admits in the second meditation that physical things – ‘these
very things which I am supposing to be nothing, because they are
unknown to me’ – may ‘in reality be identical with the “I” of which I
am aware’.7 The leap from the epistemological to the metaphysical must
wait until the sixth meditation.
2.1 Premise 1
When I was a first-year philosophy undergraduate, I was taught that
premise 1 of this argument could be swiftly refuted with the counterexample of water existing independently of H2O. It seems that we can
conceive of a scenario in which water exists in the absence of H2O – for
example, a scenario in which experiments reveal water to have some
other chemical composition. Yet if we infer from this the real possibility
of water existing in the absence of H2O, we are quickly led to the non-identity of water and H2O, contrary to what is in fact the case.
This rejection of premise 1 is far too quick. Descartes doesn’t say that
any old conceiving implies possibility, only that a clear and distinct
conception implies possibility. I take it that whatever else having a clear
and distinct conception involves, it involves understanding what you’re
conceiving of. Suppose I think of electric charge as ‘that thing Dave (my
physicist chum) was talking about the other night’ (where I use this
description as a rigid designator) but have zero understanding of the
defining characteristics of negative charge. It is clear that such a conception of negative charge is not clear and distinct. I can refer to negative
charge, but there is a clear sense in which I don’t know what it is: I have
no idea what it is for something to be negatively charged. I don’t have
the understanding of the nature of negative charge that, let us suppose,
my physicist chum Dave has. Although I can involve negative charge in
what I am conceiving of, to the extent that I do I bring opacity into my
conception.
Such opacity brings in its wake coherent conceivability without possibility. I can coherently conceive of all sorts of scenarios in which ‘negative charge’ features – I might suppose that negative charge is what
underlies a wizard’s ability to teleport – without this implying that
negative charge really could be as I am supposing. My ignorance of the
nature of negative charge licences a conceptual free-for-all.
Our concept ‘water’ is also opaque in this sense. For something to be
water is for it to be H2O. But this is not apparent or a priori accessible to
2.2 Premise 2
When I reach the end of the second meditation, when I have stripped
away everything it is possible to doubt and alighted upon the certain
knowledge of my existence as a thinking/experiencing thing, I end up
conceiving of my mind existing in the absence of anything physical. But
is this conception clear and distinct? If either the concept of my mind or
the general concept of the physical is not fully transparent, the resulting
conception will fail to be clear and distinct.
Arnauld complained that Descartes had not demonstrated that our
concepts of body and mind were adequate. Certainly they seem to reveal
something of the nature of the substance they denote, but how can we
know that they reveal its entire nature? Arnauld supports his argument
by means of an analogy. Two things are worth noting about Arnauld’s
analogy: (i) it involves properties rather than substances (as Descartes
notes in his reply);11 (ii) it involves subtle a priori knowledge concerning
those properties:
... what then am I? A thing that thinks. What is that? A thing that
doubts, understands, affirms, denies, is willing, is unwilling, and
also imagines and has sensory perceptions [by ‘sensory perceptions’
Who doubts that you are thinking? What we are unclear about, what
we are looking for, is that inner substance of yours whose property is
to think. Your conclusion should be related to this inquiry, and should
tell us not that you are a thinking thing, but what sort of thing this
‘you’ who thinks really is. If we are asking about wine, and looking
for the kind of knowledge which is superior to common knowledge,
it will hardly be enough for you to say ‘wine is a liquid thing, which
is compressed from grapes, white or red, sweet, intoxicating’ and so
on. You will have to attempt to investigate and somehow explain its
internal substance, showing how it can be seen to be manufactured
from spirits, tartar, the distillate, and other ingredients mixed together
in such and such quantities and proportions. Similarly, given that you
are looking for knowledge of yourself which is superior to common
knowledge (that is, the kind of knowledge we have had up till now),
you must see that it is certainly not enough for you to announce that
you are a thing that thinks and doubts and understands, etc. You
should carefully scrutinise yourself and conduct, as it were, a kind
of chemical investigation of yourself, if you are to succeed in uncov-
ering and explaining to us your internal substance.15
A. I can coherently suppose that nothing exists other than myself and
my conscious experience, from which I can infer that analytic functionalism is false.
B. I am having a rich and substantive conception of my nature, from
which I can infer that the semantic externalist model of mental
concepts favoured by most contemporary physicalists is false.
physical brain properties such that their nature can be entirely captured by
neuroscience or by neuroscience in conjunction with more basic physical
sciences.19 At any rate, if we can get an argument that refutes the kind of
‘physicalist’ who believes that the nature of mental properties can be fully
revealed by the physical sciences, we have a significant argument. I therefore stipulate that by ‘neurophysiological properties’ I mean those properties which are transparently revealed to us by brain science (or brain science in conjunction with more basic sciences of matter – see note 19).
We have therefore demonstrated that all the concepts involved in the
conception referred to in premise 2* are transparent. The physicalist
might continue to insist that the conception we reach at the end of
meditation 2 is in some way obscure, confused or incoherent. But she is
obliged to show this, and until she does we are entitled to suppose that
it is clear and distinct, as indeed it seems to be.
I take it that premise 3* is almost entirely uncontroversial, and hence
we have a sound argument, from the resources of the Meditations, for the
falsity of physicalism understood as the view that mental (or mental*)
properties are either (i) identical with properties the nature of which is
entirely revealed to us by neurophysiology or (ii) identical with functional properties that are realized by properties the nature of which is
entirely revealed to us by neurophysiology:
Notes
1. To make the claim slightly more carefully: for each proposition that you can
grasp, you would be able to work out its truth value.
2. Perhaps someone who had never seen red or tasted a lemon would not have
a mental concept of what it’s like to see red or taste a lemon. But it seems
that even if you did have a full concept of, say, what it’s like to taste a lemon,
perhaps gained through tasting lemons whilst blindfolded, you would not be
able to know from a neurophysiological description of a lemon experience
that it satisfied that concept.
3. Armstrong (1968) and Lewis (1966, 1970, 1980, 1994).
4. Descartes classes sensory experiences as a kind of thought.
5. Lewis (1980).
6. Lewis (1980).
7. Descartes (1645, 18).
References
Armstrong, D. (1968) A Materialist Theory of Mind. London: Routledge and Kegan
Paul.
Bayne, T., and Montague, M. (eds) (2011) Cognitive Phenomenology. New York:
Oxford University Press.
Descartes, R. (1645[1996]) Meditations on First Philosophy. Reprinted in Meditations
on First Philosophy, rev. edn, J. Cottingham (ed.), Cambridge: Cambridge
University Press.
Goff, P. (n.d.) ‘Consciousness and Fundamental Reality’. Manuscript.
Goff, P., and D. Papineau (forthcoming) ‘What’s Wrong with Strong Necessities?’
Philosophical Studies.
Kriegel, U. (2013) Phenomenal Intentionality. New York: Oxford University Press.
Lewis, D. (1966) ‘An Argument for the Identity Theory’. Journal of Philosophy,
63(1), 17–25.
Eric Funkhouser
This is true to a large extent. But I caution against going too far in
the other direction. Empirical data alone typically do not settle kind
questions, but neither does a priori speculation alone. In the next two
sections I offer ‘big picture’ – yet also, I think, effective – criticisms of the two major a priori–based arguments from the last couple of decades: the zombie argument and the exclusion argument.
The gist of the argument is as follows. If we had all the physical facts
before us and we were ideally rational, we could conceive of the physical
facts obtaining but not phenomenal consciousness (or, at least, we could
not rule this out). If this is ideally conceivable, it is possible in the same sense, 1-possibility, in which it is possible that water is not H2O. Namely,
if we were to consider the imagined world actual, ‘water is not H2O’
would be true. But because there is no appearance/reality distinction
for phenomenal consciousness, its primary intension is the same as its
secondary intension. Unlike the case of water, if P&~Q is 1-possible,
then it is also 2-possible. (Or Russellian monism is true. However, I treat
this as more of an appended qualification – the thrust of the argument is
for dualism.) But then the physical truths do not metaphysically neces-
for thinking that water metaphysically necessitates H2O (e.g., that water
is identical to H2O, on the assumption that such an identity must be
metaphysically necessary) on largely empirical grounds – combined with
a priori principles of good (scientific) reasoning – and without knowl-
edge of (or optimism for) any conceptual entailment. We then work
backwards – in this special case in which the primary and secondary
intensions are assumed to be identical – to derive a claim about what is
either 1-possible or ideally conceivable. In this case, we accept E because
we accept F and deny an appearance/reality distinction.
If we begin with an empirically informed conviction that P&~Q is not
2-possible and we assume that the primary and secondary intensions of
Q are identical, we ought to conclude that either P&~Q is not 1-possible
or that P&~Q is not ideally conceivable. We just considered the possi-
bility that P&~Q is ideally conceivable but not 1-possible. The important
point here is that if this is the case, we have an explanation of the failure
of conceivability to track possibility. If we begin with a conviction that
P&~Q is not 2-possible, the lack of an appearance/reality distinction
explains why it is not 1-possible either. And when investigating contingent kinds that seem like appropriate objects of scientific investigation,
why not put greater stock in our empirically grounded speculations (e.g.,
that P&~Q is not 2-possible) than we do in our ability to track possibility
with conceivability (or for that matter, in claims about what is ideally
conceivable)? We can ask which is more unlikely – that this is not a
metaphysical necessity or that here conceivability fails to track possibility? We know that exceptions to the conceivability-possibility thesis
would be unexpected in general. But if this is the particular exception,
then there is an explanation as to why it is not 1-possible. Of course, we
might still wonder why P&~Q is ideally conceivable. For this reason, I
prefer going a different route.
Rather than argue that phenomenal consciousness is a principled
exception to conceivability tracking possibility, we could argue that the
sameness of primary and secondary intensions explains the ideal inconceivability of P&~Q. This is an appealing alternative, as it preserves the
connection between ideal conceivability and 1-possibility. As before, we
start with our conviction that P&~Q is not 2-possible. Given the sameness of primary and secondary intensions, we conclude that it is not
1-possible either. But we still have a firm belief that ideal conceivability
tracks 1-possibility. We then deny D and conclude that P&~Q must not
be ideally conceivable after all. This is not such a bad conclusion, as
ideal conceivability is something of an epistemic pipe dream anyway –
for example, Goldbach’s conjecture, for all I know, could be ideally
A Call for Modesty
attitudes like belief and desire. Such exclusion arguments have this
form:
(O1) If c1 had happened without c2, e would still have happened: (c1 & ~c2) ☐→ e, and
(O2) if c2 had happened without c1, e would still have happened: (c2 & ~c1) ☐→ e. (Bennett 2003, 476)
This has the result that only what I have called independent overdetermination is overdetermination at all. The rest, such as mental causation
on the non-reductive physicalist’s account, avoids the exclusion argument. But the counterfactual test, as well as the particularly nuanced
way that she argues we should evaluate these counterfactuals, is simply
tailored to capture the good/bad, non-coincidental/coincidental distinction. Her test likely gets the results right – like a Lewisian account of
causation it is designed just for that task – but it does not draw our
attention to what is most fundamental. It is better to simply point out
the shared mechanisms and metaphysical necessities that show mental
and physical causal co-occurrence to be unproblematic. This is the real
explanation. The contrived counterfactual test is correct to the extent
that it indicates this more fundamental explanation. It is not because
the non-reductive physicalist’s mental and physical causes fail O1 or O2
that they are not bad overdeterminers.
Notes
1. Chalmers (2010, 152).
2. This is the version as found in The Monadology, §17 (Leibniz 1991, 70). A very
similar version of this argument is also presented in his New Essays, 66–67.
References
Bennett, K. (2003) ‘Why the Exclusion Problem Seems Intractable, and How, Just
Maybe, to Tract It’. Noûs, 37(3), 471–497.
Bennett, K. (2008) ‘Exclusion Again’. In Being Reduced: New Essays on Reduction,
Explanation, Causation. Kallestrup and Hohwy (eds), New York: Oxford
University Press.
Chalmers, D. (1996) The Conscious Mind. New York: Oxford University Press.
Chalmers, D. (2010) The Character of Consciousness. New York: Oxford University
Press.
Verbs and Minds
activities.3 But I also distinguish two distinct tasks within the project of
naturalization. One is to explain content using only naturalistic ingredients (e.g., Dretske 1988, 1995); another is to articulate the ontological
scaffolding under any naturalistic theory of content (and consciousness,
if it is not already accounted for within the semantic task). The semantic
task has been treated as the only task once physicalism of some sort is
accepted. But physicalism in general has not been fully articulated in
a critical sense, to the detriment of our theories of content; the metaphysical task is to complete the job.
In the first section, I show how neglecting the metaphysical task has
hampered theorizing about the mind. In the second section, I show how
the verbialist answer alters our approach to the semantic task. In the
third section, I sketch a method for addressing the semantic task within
the verbialist framework.
the point of holding a relational view of the attitudes, after all, is that
these elements can be freely recombined without changing their identity qua attitude or proposition. But a relational view requires relata,
and if mental symbols are not objects then we need particular activities as relata that are manipulated (e.g., believed, desired and so on for
each role in one’s mental economy). But to manipulate an activity is to
modify it – to change how, when, or where it occurs – and these changes
are typically type-relevant. For example, modifying the flow of water by
opening or closing a valve, or by changing the temperature, results at
the very least in a distinct species (or determinate) of flowing from what
was occurring before. These differences are marked in language in ways
that include but are not limited to adverbial modifiers. So if attitudes
are activities – for example, if believing is a complex doing, a functional
role – and mental symbols are also activities, then the attitude/propositional-attitude relation is not adequately seen as involving independently recombinable elements. Assuming that manipulating an activity
doesn’t invariably yield a different kind of occurrence altogether – that
is, assuming activity kinds are sufficiently robust to allow for some
continuity through change – the relation plausibly is, or often is, one of
genera/species (or determinable/determinate), such that individuating
the attitude is necessarily part of individuating the propositional-attitude.6 Thus, one can have a relational view of the attitudes if mental
symbols are object-like, or a non-relational view if they are activities.
But what one cannot do is pretend that a relational view of the attitudes
is neutral regarding the ontological category of mental representations.7
Activities are not objects.
Standard naturalistic theories of content do treat mental symbols as
object-like – they assume the continuity of type through change that
is normal for objects but not for activities. The vehicles of content are
often described as neural firing patterns that occur, in easy cases and
initially spontaneously, in response to real external objects or states-of-affairs. But when the firing pattern is embedded in a circuit of internal
activities leading to behaviour, the new connections to other activities
cannot alter the firing pattern’s response profile to external objects in
any type-relevant way. The semantic theory must maintain that the
result of embedding an activity into other activities makes no semantic
difference: once an O-indicator, always an O-indicator. This presumed
identity of spontaneous-indication activity type with indication-in-a-control-circuit activity type makes it possible to derive the semantic
value assigned to the latter from the indication value assigned to the
former. This is plausible if one thinks of firing patterns as object-like in
Carrie Figdor
(are ‘unowned’) at all, and are related to them in some other way. These
distinct occurrence conditions are distinguished in ordinary language
when verbs are used transitively (direct object: The farmer smells the
rotten corn), intransitively (no direct object: The rotten corn smells) or
in a linking manner (subject complement: The corn smells fresh). The
same verbs can be used to pick out activities with different occurrence
conditions on different occasions of use.
These different occurrence conditions entail that the ontological
category of activities cross-cuts the available categories of first-order
predicate logic (variable and n-place predicate, usually taken to range
over objects and properties/relations respectively). This logical diaspora
does not entail that the ontological category is illegitimate (and the fact
that we do not use predicates like ‘is-pegasizing’ shows that the ontological category of object is implicitly legitimate). To the contrary, in the
sciences, and in mechanistic explanation in particular, the category of
activity usefully groups phenomena that play explanatory roles at least
as essential as those of objects. This justifies favouring unity of explanatory role over unity of formal logical role, especially for the purpose of
naturalization.
Verbialism does not restrict the types of occurrence conditions that
mental activity kinds may involve. For example, think may not have one
kind of occurrence condition, and perceive or experience may pick out activities like rotate or hit depending on context.9 It does claim that general
kinds of activities (e.g., perceiving) can be modified in various cognitively
relevant ways, which adverbialists posited as special cases of general kinds
(e.g., perceiving-redly). These content-modified special cases may involve
implicit or explicit reference to external objects if the general kind allows
them (e.g., thinking) or requires them (seeing, used transitively) but not
if the general kind precludes them (e.g., hallucinating). This occurrence-condition neutrality of verbialism explains why it is consistent with
adverbialism that being appeared-to F-ly could be a way of representing
that something is F (Siegel 2013; although the adverbialist would call it
a way of representing-F-ly). If content is accuracy conditions, where to be
accurate is to correctly represent some aspect of the external world, there
is no reason why adverbial content might not be determined by causal
relations to external objects: for, from the verbialist perspective, it depends
on the nature of the general activity kinds and how they are modified. For
the same reason, verbialism is neutral regarding disjunctivism, semantic
internalism and externalism, and the bounds of cognition.
Second, verbialism directs us to examine which modifications of
general activities yield contentful special cases – that is, the mental
4 Concluding remarks
Saying what the mind is not, as physicalists have done, does not suffice
for a firm ontological framework for naturalization. We also need to say
what it is. Verbialism provides this positive account. Moreover, once
we pay attention to the nature of activities, we discover problems in
our current theories of mind and new ways to approach the semantic
task. Verbialism leaves to further empirical and philosophical investigation many of the details of understanding activity kinds and mental
activity kinds. But the success of this research depends on getting the
basic metaphysics right.13
Notes
1. The term ‘occurrent’ (Simons 1987, 2000) is familiar in metaphysics, but in
this context I follow Machamer, Darden and Craver (2000) and use ‘activity’
to refer to activities, events, processes, performances, actions and any other
item in the general metaphysical category whose standard contrast class is
that of continuants or objects. (I use the term ‘process’ or ‘event’ in the same
general sense for variety.) This chapter is neutral on whether four-dimensionalism is true for objects; in any case, I agree with MDC that mechanistic
explanations quantify over both objects and activities.
2. The term ‘state’ does not denote a stable ontological category (Steward 1997;
Marcus 2006, 2009). But while it has a connotation of ‘unchanging’ or ‘undynamic’, it is used in the sciences to denote equilibria, which require maintenance in open systems, or idealized momentary time slices of dynamic
phenomena. I use ‘state’ when customary usage demands it but only as an
ontologically neutral term equivalent to ‘phenomenon’.
3. The language here is intended to be neutral among all views which attempt to
naturalize the mind by linking the mental to the physical with some degree
of modal strength. What is ruled out is any view in which the mind has
no necessary connection of any sort to the physical. I define non-reductive
11. Lakoff and Johnson (1980) claim that many folk explanations utilize a
limited set of primitives, including activity primitives derived from bodily
and cognitive activities. Their work is obviously relevant here, but I deny
their assumption that the extensions are metaphorical. Scientific explanations and hypotheses obey similar intelligibility constraints, but the metaphysics of activities shows that they can also be literally true.
12. Although here I discuss only kinds or types, naturalization is often discussed
in terms of tokens: Does a mental token fall under an F-ly representational
type because it stands in a causal relation to F-things (to instantiations of
F-ness)? On my view, it is one thing to individuate a kind or type and another
to determine how a token may meet the conditions for falling under the
type. (One can become an American citizen in more than one way.) Thus,
what experiencing and experiencing trumpetly are as kinds and what it is
for a token occurrence to be of the experiencing-trumpetly type are separate
issues.
13. Many thanks to Mark Sprevak, Justin Fisher, Phil Woodward, Colin Klein,
Uriah Kriegel and Mazviita Chirimuuta for comments on and responses to
drafts of this chapter.
References
Anderson, J. R. (2007) How Can the Human Mind Occur in the Physical Universe?
Oxford: Oxford University Press.
Bechtel, W. (2005) ‘The Challenge of Characterizing Operations in the Mechanisms
Underlying Behavior’. Journal of the Experimental Analysis of Behavior, 84,
313–325.
Bechtel, W. (2008) Mental Mechanisms. New York: Erlbaum.
Campbell, J. (2002) Reference and Consciousness. Oxford: Clarendon Press.
Crane, T. (2011) ‘The Problem of Perception’. In The Stanford Encyclopedia of
Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2011/
entries/perception-problem/.
Davidson, D. (1967) ‘The Logical Form of Action Sentences’. In The Logic of Decision
and Action, N. Rescher (ed.), Pittsburgh: University of Pittsburgh Press.
Davidson, D. (1970) ‘Events as Particulars’. Noûs, 4(1), 25–32.
Dretske, F. (1988) Explaining Behavior. Cambridge, MA: MIT Press.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Figdor, C. (n.d.) Making Physicalism Matter. Manuscript.
Field, H. (1978) ‘Mental Representation’. Erkenntnis, 13(1), 9–61. Reprinted in vol.
1 of Readings in the Philosophy of Psychology, N. Block (ed.), Cambridge, MA:
Harvard University Press, 1980.
Jackson, F. (1977) Perception: A Representative Theory. Cambridge: Cambridge
University Press.
Kriegel, U. (2007) ‘The Dispensability of (Merely) Intentional Objects’. Philosophical
Studies, 141, 79–95.
Kriegel, U. (2012) The Sources of Intentionality. New York: Oxford University
Press.
Lakoff, G., and M. Johnson (1980) Metaphors We Live By. Chicago: University of
Chicago Press.
Meanings and Methodologies 55
[Figure: a branching diagram dividing theory construction into Free-Standing Theory Construction (the ‘no’ branch) and Conceptually Anchored Theory Construction (the ‘yes’ branch).]
proposed explication can’t depart too far, lest it lose its claim to capture
the same thing the pre-existing concept was supposed to capture. This
opens a question: which features of a pre-existing concept must a proposed
explication retain if it is to count as a good explication of that concept?
One tempting answer is to say that an explication should retain the
extension of the explicated concept – the set of things to which that
concept could be correctly applied. In what follows I often abbreviate
‘correct application conditions’ for a concept to the ‘meaning’ of that
concept. However, it is worth emphasizing that this chapter focuses
upon purely extensional aspects of concept meaning. Some people think
there are finer-grained, or ‘intensional’, aspects of concept meaning – for
example, aspects upon which HESPERUS and PHOSPHORUS differ in meaning.
Such ‘intensional’ aspects are beyond the scope of this chapter.4 It will be
hard enough to determine how various methodologies relate to exten-
sional aspects of meaning without worrying about further intensional
aspects.
Unfortunately, there are at least two compelling reasons not just to
say that the goal of explication is to preserve the extension or meaning
of explicated concepts. First, it is controversial which theory of concept
meaning is correct. Different semantic theories say that the meanings
of our concepts depend on different factors, and hence these theories
would identify different features of our concepts as the ones that would
need to be preserved if our explications are to preserve concept meaning.
Second, even after we adopt a particular theory of concept meaning, the
question why we should want to seek explications that preserve this
sort of meaning will remain. Might it not be the case that we should
sometimes adopt semantically revisionary explications – explications that
are worth adopting despite the fact that they require us to change what
some of our concepts mean?
We can forestall these worries by considering specific proposals
regarding which features should be preserved in explicating a concept.
Then we can ask which (if any) of these specific proposals preserve the
features that are determinative of concept meaning. (Our answer will
depend, of course, upon which semantic theory we think is correct.) And
we can ask which (if any) of these specific proposals provide explications
that are worth adopting, even if (at least according to some semantic
theories) doing so requires semantic revision. I give special attention to
three specific proposals:
I say a great deal elsewhere about the various particular ways in which
one could and should develop this sort of methodology. But we can get
a good sense of the general way in which Pragmatic Conceptual Analysis
works without going into excessive detail.
As an example let us consider a classic philosophical question: is free
action compatible with determinism? The answer to this question depends,
of course, upon which possible actions are to be counted as falling under
our concept FREE ACTION. This is something that Pragmatic Conceptual
Analysis can help determine.
As with other methodologies, Pragmatic Conceptual Analysis may
naturally be divided into two steps. In the first step the desiderata that
will constrain our choice of explications are articulated. For Pragmatic
Conceptual Analysis, the primary desideratum embodies a job description
that outlines the regular ways in which our use of a concept has delivered
benefits. For present purposes, we may define ‘benefit’
as anything the person using the concept in question has practical
reason to pursue.7 I leave it to other philosophers to determine exactly
what the pursuitworthy benefits are, but I presume that there are at
least some uncontroversial examples of these, including many instances
of achieving happiness or satisfaction and many instances of avoiding
pain, injury or death. It is an empirical question how our concepts have
delivered such benefits, and hence it is an empirical question what job
description will be delivered by a reverse engineering analysis of our use
of a shared concept.
[It is] only sensible to seek a different but ‘nearby’ conception that
does, or does near enough, the job we give [to the concept being
analyzed] in governing what we care about, our personal relations,
our social institutions of reward and punishment, and the like, and
which is realized in the world. (Jackson 1998, 45)
I have just argued that we should not use our intuitions surrounding
a concept as the final arbiter in determining what explication of that
concept to adopt. But it is worth noting that there are at least three
limited ways in which intuitions might play a central role in philosoph-
ical analysis, even on my view.
First, as just noted, our intuitions surrounding a concept may serve
as a good source for initial hypotheses regarding the usefulness of
that concept, hence as a good starting point for Pragmatic Conceptual
Analysis. But they are just a starting point. When we discover that our
intuitions are mistaken regarding the useful work a concept has been
doing, then, on my view, we no longer have any reason to allow these
intuitions to continue to constrain our theorization.
Second, if we are to have reason to embrace the explications produced
by Pragmatic Conceptual Analysis, we will want to use a version of this
methodology which is defined in terms of a notion of benefit that we
have reason to pursue. For all I know, it may be that intuitions about
what sorts of things are worth pursuing will play a large role in helping
to choose this notion of pursuitworthy benefit. But notice that these
are intuitions about what consequences are worth pursuing rather than
about the concept(s) being analysed (e.g., FREE ACTION). This fact sets this
approach apart from Intuitive Conceptual Analysis.
Third, Pragmatic Conceptual Analysis draws upon empirical accounts
of how various concepts have regularly delivered benefits. In producing
such empirical accounts, we might draw upon the sorts of intuitions
scientists use to determine which empirical generalizations are supported
by empirical evidence. These are quite different from the intuitions
called upon by familiar forms of Intuitive Conceptual Analysis.
Hence, I stake out a fairly moderate position regarding the use of
intuitions in philosophy and allow for at least three limited ways in
which intuitions might be quite useful. However, I maintain that
Intuitive Conceptual Analysis is at best a starting point and that strong
empirical evidence about the actual usefulness of our concepts should
trump our intuitions surrounding those concepts. It is quite reason-
able to suspect that contemporary analytic philosophers have pushed
our intuitions far beyond the limits of their usefulness. Rather than
continue to pump intuitions about fantasy swampmen, esoteric trolley
problems or strange new variants of the Gettier problem, philosophers
would do well instead to seek good empirical evidence regarding the
sorts of useful work that our concepts have actually been doing for us
and then to seek explications that would allow our concepts to do this
work more efficiently.
68 Justin C. Fisher
6 Other methodologies
that a concept has regularly been delivering benefits in some other way
than what these cues initially suggest, we will have strong reason to
prefer explications that preserve these beneficial uses to explications
that sacrifice benefits to preserve these other cues. When we discover
that our metaphors or our linguistic practices are in tension with the
continued beneficial usage of our concepts, we should abandon the old
metaphors or linguistic practices, not the beneficial uses.
Similar considerations apply regarding any conceptual anchor one
might dream up. Suppose we are considering a version of Pragmatic
Conceptual Analysis defined in terms of a notion of benefits that we
have reason to pursue and suppose we are comparing this against some
other conceptually anchored methodology – call it M – which includes
other conceptual anchors besides pursuitworthy benefits and, hence,
sometimes offers explications that differ from those of Pragmatic
Conceptual Analysis. By definition, Pragmatic Conceptual Analysis
yields explications that do optimally well at preserving the regular
ways in which analysed concepts have delivered pursuitworthy benefits.
Since M sometimes offers other explications, M must sometimes call
upon us to sacrifice regular ways of achieving pursuitworthy benefits to
preserve whatever features M takes to determine concept meaning (intu-
itions, metaphors or whatever). But when push comes to shove like this,
Jackson’s advice clearly applies: it is ‘only sensible’ to employ Pragmatic
Conceptual Analysis rather than a competing methodology that would
ask us to sacrifice pursuitworthy benefits in order to achieve something
less worthy of pursuit.19
Notes
1. I think of concepts as mental particulars that play a folder-like information-
coordinating role in cognition. In this, I follow a rich tradition in cognitive
psychology and the philosophy of cognitive science. For a good introduction
to this tradition, see Margolis and Laurence (1999); for a view of concepts
very similar to my own, see Millikan (1998, 2000). This tradition may be
contrasted with an equally rich philosophical tradition that takes concepts
to be abstract entities.
2. My distinction between free-standing and conceptually anchored theory
construction closely mirrors Clark Glymour’s (2004) distinction between
Euclidean and Socratic approaches.
3. It may be difficult to draw a principled line between what is conceptually
anchored and what is free-standing. Even the most ‘free-standing’ of theory
builders usually intends her theories to be linked in some way to pre-existing
concepts, if only to those of observational evidence and simple logical rela-
tions. Such linkages may be taken to shed at least some light upon those
References
Appiah, A. (1986) ‘Truth Conditions: A Causal Theory’. In Language, Mind and
Logic, Thyssen Seminar Volume, Jeremy Butterfield (ed.), Cambridge: Cambridge
University Press, 25–45.
Blackburn, S. (2005) ‘Success Semantics’. In Ramsey’s Legacy, H. Lillehammer and
D. H. Mellor (eds.), New York: Oxford University Press.
Bovens, L. and Hartmann, S. (2003) Bayesian Epistemology. Oxford: Clarendon
Press.
Boyd, R. (1988) ‘How to Be a Moral Realist’. In Essays on Moral Realism,
G. Sayre-McCord (ed.), Ithaca, NY: Cornell University Press, 181–228.
Burge, T. (1979) ‘Individualism and the Mental’. In Studies in Metaphysics,
P. French, T. Uehling and H. Wettstein (eds.), Minneapolis: University of
Minnesota Press.
78 Mark Sprevak and Jesper Kallestrup
INDEPENDENCE
CE and VE are distinct claims that can be accepted or rejected
independently.
1 Content externalism
SUPERVENIENCE
Content-bearing mental states supervene on internal physical features
of individuals who are in those states.
But how exactly should the key notions of ‘supervenience’ and ‘internal’
be understood? Stalnaker (1989) and Jackson and Pettit (1993) emphasize
that narrow content should be understood as content shared between
internal physical duplicates who occupy the same world: an intrinsic
physical duplicate of me need not share my mental contents if located
in a world with deviant laws of nature or linguistic practices. To use the
analogy above, an intrinsic physical duplicate of a foot-shaped imprint
in our world is not itself a foot-shaped imprint if located in a possible
world where feet have abnormal shapes. If there is a viable notion of
narrow content, it is better to be intraworld narrow than interworld
narrow. So the pertinent notion of supervenience should be weak (indi-
vidual) supervenience, roughly:
SUPERVENIENCE*
States with mental content weakly supervene on internal, physical
features P iff, necessarily, if individual I1 is in a state S with content M,
then there are some P such that I1 has P and every other individual I2
that has P is in state S with M.
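SUPERVENIENCE* can also be put in explicitly quantified form. The following is only a sketch of one natural formalization; the predicate letters and the box operator for ‘necessarily’ are notational choices of mine, not the authors’:

```latex
% Weak (individual) supervenience of content on internal physical features:
% necessarily, if I1 is in state S with content M, then I1 has some internal
% physical features P such that every individual with P is in S with M.
\Box\,\forall I_{1}\,\bigl[\,S(I_{1},M)\rightarrow
  \exists P\,\bigl(P(I_{1})\wedge\forall I_{2}\,(P(I_{2})\rightarrow S(I_{2},M))\bigr)\bigr]
```

Because both quantifiers over individuals fall within the scope of a single necessity operator, the duplicates being compared always occupy the same world, which captures the intraworld reading recommended above.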
2 Vehicle externalism
ACTIVE
VE is true iff an external resource is active: the resource is coupled
to the agent by a two-way causal loop such that it plays an action-
guiding role for the agent in the here and now.
EXPLANATORY
VE is true iff an external resource is explanatorily ineliminable: one is
unable to explain the existence or character of one’s mental state/
process without making reference to that resource.
Even if one disagrees with Noë’s claim about perceptual experience, one
may nevertheless find EXPLANATORY appealing as a way of stating VE.
EXPLANATORY suggests that the fortunes of VE are tied to the success or
failure of various explanations of mental phenomena. If our explana-
tion of the mind turns out to appeal to extraneural elements, VE is true;
otherwise, VE is false. EXPLANATORY is different from ACTIVE. Suppose, for
the sake of argument, that reference to an external resource is inelimi-
nable from the explanation of the character of an agent’s mental life;
that is no guarantee that the same resource also plays an ‘active’ role for
the agent concerned. The resource may be a cause rather than an effect
Entangled Externalisms 85
for the agent, and intervening on the resource may fail to change the
agent’s behaviour in the here and now.
A third formulation of VE, suggested by Block (2005), uses the notion
of a minimal supervenience base:
MIN-SUPERVENIENCE
VE is true iff an external resource is part of the minimal supervenience
base for that mental state/process.
REALIZATION
VE is true iff a mental state/process of an agent is realized by the
conjunction of the agent’s neural activity and an external resource.
3 Assessing INDEPENDENCE
are missing, then the agent’s processing of vehicles with that content
would simply not occur.
We do not pursue this particular line any further. Instead, we argue
that, more surprisingly, the Demarcation Problem afflicts someone who
endorses both CI and VE. At first blush, the Demarcation Problem does
not seem to arise for this particular combination of views. After all, if
we draw the internal/external distinction around the skin and skull,
it looks as if the skin/skull boundary could do useful work in distin-
guishing the relevant features of the supervenience base. We said above
that CI involves the claim that mental content weakly supervenes on
internal physical features. To say that a physical feature is internal to
some individual is to say that it is located inside the skin and skull of
that individual. In contrast, VE says that an external resource is part of
the minimal supervenience base for the mental state/process in ques-
tion. To say that a resource is external to some individual is to say that
it resides outside the boundary of the individual’s skin and skull. So
it looks like a friend of CI and VE can avail herself of the skin/skull
boundary to solve the Demarcation Problem: the physical features that
play a role in individuating the content of mental states are internal to
the individual, but physical features pertaining to the vehicles of those
mental states include features external to the individual.
While the foregoing looks initially promising, a problem arises.
According to VI, the internal/external boundary can safely be drawn
around the skin and skull, but once VE is accepted, the boundary
between the cognitive system and the external environment may be
revised to include whatever external resource – notebook, iPhone or
what have you – as an integral part of the cognitive system. Importantly,
this consequence is explicitly accepted by content internalists. Here are
two illustrative passages from Chalmers and Jackson, who also both
endorse at least the possibility of cognitive or mental extension:
It may even be that in certain cases, epistemic [narrow] content can itself
be constituted by an organism’s proximal environment, in cases where
the proximal environment is regarded as part of the cognitive system: if a
subject’s notebook is taken to be part of a subject’s memory, for example
(see Clark and Chalmers 1998). Here, epistemic content remains internal
to a cognitive system; it is just that the skin is not a God-given boundary
of a cognitive system. (Chalmers 2002, fn 22)
... the live issue, and the issue on the table here, is whether or not
duplicates from the skin, doppelgangers, in our world might differ in
REALIZATION-CE
CE is true iff the property of having a content-bearing mental state/
process is widely realized by the individual’s neural activity and an
external resource.
REALIZATION-VE
VE is true iff the property of being in a content-bearing mental state/
process is radically widely realized by the individual’s neural activity
and an external resource.
4 Conclusion
Both advocates and critics of VE have assumed that CE and VE are logi-
cally independent. We have found this assumption to be problematic.
The relationship between the views is more complex than it first appears.
We have seen that the primary reason for this entanglement is
variation in stating VE. We have examined four ways of stating VE and
found that none offer straightforward grounds to accept INDEPENDENCE.
We wish to propose that a priority for future work on VE is the formula-
tion of an agreed statement of the view that can be used for evaluating
its place in the philosophical landscape.
Notes
1. The terminology owes much to Hurley (2010).
2. For more details, see Kallestrup (2011).
3. Hurley (1998) calls this the ‘Input-Output Picture’.
4. For the functionalist argument for VE, see Clark (2008), Sprevak (2009),
Wheeler (2010).
5. For responses along this line, see Adams and Aizawa (2007), Rupert (2004),
Sprevak (2009).
6. See also Chalmers (2002).
7. For a worry along these lines, see Fodor (2009) and Ladyman and Ross
(2010).
8. As the so-called slow-switching cases (Burge 1988) illustrate, environmental
changes will not immediately result in intentional changes. If you were to
be unwittingly transported to Twin Earth, you would begin to think twater
thoughts only after you sustain enough causal connections to XYZ (or to
other speakers who have interacted with XYZ). Your wide intentional behav-
iour would then change accordingly, e.g. you would reach for twater, whereas
on Earth, when you were thinking water thoughts, you would have reached
for water. Still, the physical movements of your arm would remain the
same.
9. See Clark and Chalmers (1998) and Sprevak (2009).
10. For duplicates with no mental content, see Putnam (1981, ch. 1).
11. Mackie (1965) proposed that talk of causes involves INUS conditions: insuf-
ficient but necessary parts of a condition which is itself unnecessary but suffi-
cient for the occurrence of the effect. The condition Block has in mind is
different in that a minimal supervenience base for a mental state is distinct
from whatever caused that state.
12. There are reasons independent of the debate over VE for thinking that the
internal/external distinction should not be drawn around the skin/skull.
Take the meningitis example of Farkas (2003). You and I both have symp-
toms typical of meningitis, but whereas mine are caused by meningitis, yours
are caused by a different bacterium. So while we are physically distinct from
the skin in, we nevertheless inhabit identical physical environments. Without our
knowledge, our token sentences containing ‘meningitis’ express distinct
propositions.
References
Adams, F., and K. Aizawa (2007) The Bounds of Cognition. Oxford: Blackwell.
Block, N. (2005) ‘Review of Alva Noë’s Action in Perception’. Journal of Philosophy,
102, 259–272.
Burge, T. (1988) ‘Individualism and Self-Knowledge’. Journal of Philosophy, 85,
649–663.
Burge, T. (2010) Origins of Objectivity. Oxford: Oxford University Press.
Chalmers, D. J. (2002) ‘The Components of Content’. In Philosophy of Mind:
Classical and Contemporary Readings, edited by D. J. Chalmers. Oxford: Oxford
University Press, 608–633.
Clark, A. (2008) Supersizing the Mind. Oxford: Oxford University Press.
Clark, A., and D. J. Chalmers (1998) ‘The Extended Mind’. Analysis, 58, 7–19.
Farkas, K. (2003) ‘What Is Externalism?’ Philosophical Studies, 112, 187–208.
Fodor, J. A. (2009) ‘Where Is My Mind?’ London Review of Books, 31, 13–15.
Hurley, S. (1998) Consciousness in Action. Cambridge, MA: Harvard University
Press.
Hurley, S. (2010) ‘Varieties of Externalism’. In The Extended Mind, edited by R.
Menary. Cambridge, MA: MIT Press, 101–153.
Jackson, F. (2003) ‘Narrow Content and Representationalism, or Twin Earth
Revisited’. Proceedings and Addresses of the American Philosophical Association,
77, 55–70.
Jackson, F., and P. Pettit (1993) ‘Some Content Is Narrow’. In Mental Causation,
edited by J. Heil and A. Mele. Oxford University Press, 259–282.
Kallestrup, J. (2011) Semantic Externalism. London: Routledge.
Ladyman, J., and D. Ross (2010) ‘The Alleged Coupling-Constitution Fallacy and
the Mature Sciences’. In The Extended Mind, edited by R. Menary. Cambridge,
MA: MIT Press, 155–165.
Mackie, J. L. (1965) ‘Causes and Conditions’. American Philosophical Quarterly, 2,
245–264.
Noë, A. (2007) ‘Magic Realism and the Limits of Intelligibility: What Makes Us
Conscious?’ Philosophical Perspectives, 21, 457–474.
Putnam, H. (1975) ‘The Meaning of “Meaning”’. In Mind, Language and Reality,
Philosophical Papers, vol. 2. Cambridge: Cambridge University Press, 215–271.
Putnam, H. (1981) Reason, Truth and History. Cambridge: Cambridge University
Press.
Rupert, R. D. (2004) ‘Challenges to the Hypothesis of Extended Cognition’.
Journal of Philosophy, 101, 389–428.
Shoemaker, S. (1984) ‘Some Varieties of Functionalism’. In Identity, Cause and
Mind. Cambridge: Cambridge University Press, 261–286.
Shoemaker, S. (2007) Physical Realization. Oxford: Clarendon Press.
The Phenomenal Basis of Epistemic Justification 99
The Question: What are the non-epistemic facts that determine the
epistemic facts about which doxastic attitudes one has justification
to hold?
These cases have a common structure: in each case, we vary the facts
about the reliability of the subject’s doxastic dispositions, but we do
not thereby vary the ways in which the subject has justification to form
beliefs so long as we hold fixed the facts about the subject’s mental states.
Moreover, this common structure suggests a common explanation:
namely, that epistemic justification is not determined by facts about the
reliability of the connections between the subject’s mental states and
the external world but rather by facts about the subject’s mental states
themselves.
Mentalism is a prominent alternative to reliabilism, according to
which epistemic facts about which doxastic attitudes one has justifica-
tion to hold are determined by non-epistemic facts about one’s mental
states:
about the external world. This is not to prejudge questions about the
structure of the justification that perceptual experience provides. On one
view, perceptual experience provides immediate, non-inferential justifi-
cation for beliefs about the external world (Pryor 2000). On another
view, perceptual experience provides justification for beliefs about the
external world in a way that is inferentially mediated by justification for
beliefs about the reliability of perceptual experience (Wright 2004). But
even on the second view, perceptual experience plays a foundational
role in providing immediate, non-inferential justification for beliefs
about which perceptual experiences one is having at any given time.
So on either view, one’s justification for beliefs about the external world
has its source in their relations to perceptual experience and not solely
in their relations to other beliefs.
Moreover, it is extremely plausible that perceptual experience plays
this foundational epistemic role in virtue of its phenomenal character.
It is because perceptual experience has the phenomenal character of
confronting one with objects and properties in the world around one that
it justifies forming beliefs about those objects and properties. This point
is most vividly illustrated by reflecting on cases in which the phenom-
enal character of perceptual experience goes missing – most notably in
the empirical phenomenon of blindsight (Weiskrantz 1997).
Patients with blindsight lose conscious visual experience in ‘blind’
regions of the visual field owing to lesions in the visual cortex. As a result,
they do not initiate spontaneous reasoning, action or verbal reports
directed towards stimuli in the blind field, but they are nevertheless
reliable in discriminating stimuli in the blind field under forced choice
conditions. For example, when asked to guess whether a presented item
is an X or an O, patients are able to report correctly in a high propor-
tion of trials. What explains this reliability is the fact that perceptual
information from stimuli in the blind field is represented and processed,
although it does not surface in conscious experience.
Does unconscious perceptual information in blindsight provide a source
of justification for beliefs about stimuli in the blind field? Intuitively, it
does not. After all, blindsighted subjects are not at all disposed to use
unconscious perceptual information in forming beliefs about stimuli
in the blind field. Instead, they tend to regard their reports in forced
choice tasks as mere guesswork and express surprise when informed of
their reliability. Moreover, this seems perfectly reasonable. Blindsight
is not plausibly regarded as a cognitive deficit in which subjects are in
possession of perceptual evidence that justifies forming beliefs about
the blind field, although they are cognitively disabled from using it in
104 Declan Smithies
there is something it is like for me to see that there is a cup on the desk.
To solve this problem, we need the notion of a phenomenally individu-
ated mental state – that is, a type of mental state that is individuated
by its phenomenal character in the sense that all and only tokens of
that type have the same phenomenal character. Factive mental states
are phenomenally conscious, but they are not phenomenally individu-
ated, since not all mental states with the same phenomenal character
are factive mental states.15
The second problem is that phenomenal consciousness is not neces-
sary for a mental state to play a role in determining epistemic justifica-
tion. After all, beliefs play an epistemic role in justifying other beliefs.
Indeed, Davidson (1986) went as far as to claim that nothing can justify
a belief except another belief. This is surely an overreaction, since beliefs
can also be justified by perceptual experiences, which are distinct from
the beliefs they justify. Yet it is surely an overreaction in the opposite
direction to claim that beliefs can never be justified by other beliefs.
Nevertheless, beliefs are not phenomenally conscious states: they are
disposed to cause phenomenally conscious states of judgment, but these
dispositions need not be manifested for beliefs to play an epistemic role.
To illustrate the point, suppose you observe that the streets are wet and
infer that it has been raining. Your justification to draw this conclu-
sion depends on all sorts of background beliefs about the relative prob-
ability of various hypotheses conditional on the streets being wet. More
generally, which conclusions one has inductive justification to draw
from observed evidence is a matter that depends upon vast amounts of
background information that is represented unconsciously in the belief
system; not all of this can be brought into consciousness in the process
of drawing a conclusion.
Any plausible answer to the generalization question must therefore be
permissive enough to include beliefs while also being restrictive enough
to exclude subdoxastic mental representations, such as unconscious
perceptual information in blindsight. What is needed is an account of
what beliefs and experiences have in common in virtue of which they
play their epistemic role. However, many philosophers are pessimistic
about the prospects for giving a unified account of the mental that
includes beliefs as well as experiences. Thus, Rorty (1979, 22) writes,
‘The attempt to hitch pains and beliefs together seems ad hoc – they
don’t seem to have anything in common except our refusal to call them
“physical”.’
In my view, however, this pessimism can be resisted. The key is to
recognize two distinct but related senses in which a mental state can be
section is that beliefs and other mental states play an epistemic role only
if they are introspectively accessible and hence phenomenally individu-
ated. This provides another more theoretical line of argument for the
phenomenal individuation of belief.
about rhubarb. But the strongest theoretical motivation for the simple
theory of introspection is that it is needed in order to explain the truth
of access internalism.
Assuming that some mental states are introspectively accessible, the
question arises, which ones? This is, in effect, a generalization question
about introspection:
she has a reliable disposition to form true beliefs about her unconscious
visual representations, but this is not sufficient to make her beliefs
introspectively justified. By analogy, the super-blindsighter has a reli-
able disposition to form true beliefs about stimuli in the blind field, but
this is not sufficient to make them perceptually justified. So why should
we suppose that the hyper-blindsighter’s beliefs about her unconscious
visual representations are any more justified than the super-blindsight-
er’s beliefs about objects in the blind field? Therefore, metacognitive
consciousness is not sufficient for introspective accessibility.
In response to the generalization question, I propose the following
thesis:
Notes
1. The distinction between hard and easy problems was introduced by Chalmers
(1995), but the program of understanding the mind and our knowledge of
the external world without reference to phenomenal consciousness goes
back at least as far as Ryle (1949).
2. This is perhaps most clearly evident in the attempts to ‘naturalize’ intention-
ality in the work of Dretske (1981), Stalnaker (1984), Millikan (1984) and
Fodor (1987).
3. This is a defining feature of the ‘reliabilist’ tradition in epistemology that
includes the work of Armstrong (1968), Goldman (1979), Dretske (1981) and
Nozick (1981).
4. This is one consequence of ‘representationalism’ or ‘intentionalism’ in philos-
ophy of mind. Reductive representationalists, including Tye (1995), Dretske
(1995) and Lycan (1996), claim that the problem of explaining phenom-
enal consciousness is made easier by its connections with mental repre-
sentation, while non-reductive representationalists, including Horgan and
Tienson (2002) and Chalmers (2004), claim that the problem of explaining
mental representation is made harder by its connections with phenomenal
consciousness.
5. Other epistemologists who emphasize the role of perceptual experience in
explaining our knowledge of the external world include McDowell (1994),
Brewer (1999) and Pryor (2000).
6. See Smithies (2012a) for further discussion.
7. Reliabilist theories of justification are proposed by Goldman (1979), Sosa
(2003) and Bergmann (2006), although I cannot discuss the specific details
of their views here.
8. This is a variation on Cohen’s (1984) ‘new evil demon problem’.
9. The clairvoyance cases were originally proposed by BonJour (1980).
10. Proponents of mentalism include Conee and Feldman (2001) and Wedgwood
(2002). I focus on ‘current time-slice’ versions of mentalism rather than
‘historical’ versions: i.e., one’s mental states at a time determine which
doxastic attitudes one has justification to hold at that time.
11. Williamson (2000) endorses a factive version of mentalism on which one’s
evidence – and so which doxastic attitudes one has justification to hold – is
determined by one’s knowledge, which he claims to be the most general kind
of factive mental state.
12. Stich defines subdoxastic states as ‘psychological states that play a role in the
proximate causal history of beliefs, though they are not beliefs themselves’
(1978, 499). We can add the further stipulation that no subdoxastic states are
phenomenally conscious states.
13. See Smithies (2011a, 2011b) and Siegel and Silins (2014).
The Phenomenal Basis of Epistemic Justification 121
14. According to Block, the super-blindsighter is ‘trained to prompt himself at
will, guessing without being told to guess’ (1997, 385), but let us suppose
instead that he forms beliefs spontaneously without any need for
self-prompting.
15. Object-involving mental states raise similar problems and can be treated in
much the same way as factive-mental states. I plan to discuss this in more
detail in future.
16. See Smithies (2012a, 2013) for a more detailed discussion and defence of this
account of the individuation of belief and judgment.
17. See Smithies (2013) for an overview and further references.
18. Proponents of intentionalism include Dretske (1995), Tye (1995), Lycan
(1996), Siewert (1998), Horgan and Tienson (2002) and Chalmers (2004).
19. The terminology of ‘content-specific’ and ‘attitude-specific’ phenomenal
properties is borrowed from Ole Koksvik (2011); see also Horgan and Tienson
(2002) for a related distinction between the phenomenology of intentional
content and the phenomenology of attitude type.
20. See Strawson (1994), Peacocke (1998), Siewert (1998), Horgan and Tienson
(2002) and Pitt (2004).
21. See Pautz (Ch. 8 in this volume) for discussion. I should note that while I
find this assumption plausible, I am not independently committed to it, so
my commitment to narrow intentionalism is conditional on the truth of this
assumption.
22. See Horgan and Tienson (2002) and Chalmers (2004) for versions of narrow
intentionalism on which some intentional properties are wide and Farkas
(2008) for a more uncompromising view on which all intentional properties
are narrow.
23. Compare Audi (2001) for a related proposal and Williamson (2007) for crit-
ical discussion. I plan to discuss this proposal in more detail elsewhere.
24. This chapter reworks some of the central ideas in my Ph.D. dissertation
(Smithies 2006) and draws on themes that I have developed in a series of
papers and plan to bring together in a monograph for Oxford University
Press with the provisional title ‘The Epistemic Role of Consciousness’. I have
presented these ideas at several venues over the past few years, including
ANU, Dubrovnik, Harvard, Melbourne, Ohio State, Fribourg, Northwestern,
MIT and the Pacific APA, as well as the Online Philosophy Conference for New
Waves in Philosophy of Mind. I am grateful for feedback on all of those occa-
sions and especially to John Campbell, David Chalmers, Elijah Chudnoff,
Terry Horgan, Geoff Lee, Susanna Siegel, Charles Siewert, Nico Silins and
Daniel Stoljar.
References
Armstrong, David (1968) A Materialist Theory of the Mind. London: Routledge.
Audi, Robert (2001) ‘An Internalist Theory of Normative Grounds’. Philosophical
Topics, 23, 31–45.
Bergmann, Michael (2006) Justification without Awareness: A Defense of Epistemic
Externalism. New York: Oxford University Press.
Block, Ned (1997) ‘On a Confusion about a Function of Consciousness’. In The
Nature of Consciousness: Philosophical Debates, edited by N. Block, O. Flanagan
and G. Guzeldere. Cambridge, MA: MIT Press.
122 Declan Smithies
Different structures can have the same function. The wings and feet
of insects, birds and bats have different structural properties, yet they
perform the same functions. Many important concepts and explana-
tions in the special sciences depend on the idea that the same func-
tion can be performed by different structures. For instance, in biology,
although both homologous and analogous structures of a given type
have the same function, only homologous structures of that type have
a common evolutionary history. These observations undergird the
concepts of homologous and analogous structures and the distinc-
tion between them: we cannot make sense of this important biological
distinction in any other way. Similar considerations seem to be true of
psychology. Different structures might well have the same psychological
function, particularly across species. The eye of an octopus might be
quite different from the eye of a human being, although both have the
same function.
Making sense of these phenomena is central to the discussion of
Multiple Realizability (MR). Originally, philosophical attention to MR
was focused on issues in the philosophy of mind; more recently, philos-
ophers have realized that MR is an important issue in the metaphysics
of science, particularly in the special sciences.
To a first approximation, a property P is multiply realizable if and only
if there are multiple properties P1, ... , Pn, each one of which can realize P,
and where P, P1, P2, ... , Pn are all distinct from one another. The idea that
mental properties are multiply realizable was introduced in the philos-
ophy of mind in the early functionalist writings of Putnam and Fodor
(Fodor 1968; Putnam 1960, 1967a). Since then, MR has been an impor-
tant consideration in favour of antireductionism in psychology and
other special sciences (e.g., Fodor 1974, 1997). Initially, the reductionist
126 Gualtiero Piccinini and Corey J. Maley
the property that is putatively realized by the realizers must be the same
property, or else there wouldn’t be any one property that is multiply
realized. Second, the putative realizers must be relevantly different from
one another, or else they wouldn’t constitute multiple realizations of the
same property.
Unfortunately, there is little consensus on what counts as realizing
the same property or what counts as being relevantly different realizers
of that same property (cf. Sullivan 2008; Weiskopf 2011, among others).
To make matters worse, supporters of MR often talk about MR of func-
tional properties without clarifying which notion of function is in play
and whether the many disparate examples of putative MR are instances
of the same phenomenon or different phenomena. Thus, the notion of
function is often vague, and there is a tacit assumption that there is one
variety of MR. As a consequence, critics of MR have put pressure on the
canonical examples of MR and the intuitions behind them.
Three observations will help motivate our account. First, the proper-
ties (causal powers) of objects can be described with higher or lower
resolution – in other words, properties can be described in more specific
or more general ways (cf. Bechtel and Mundale 1999). When using a
higher-level description with high enough resolution, the set of causal
powers picked out by that description may be so specific that it has
only one lower-level realizer, so it may not be multiply realizable. When
using a higher-level description with lower resolution, the set of causal
powers picked out by that description might have many lower-level real-
izers but perhaps only trivially so; its multiple realizability might be an
artefact of a higher-level description construed so broadly that it has no
useful role to play in scientific taxonomy or explanation.
Consider keeping time. What counts as a clock? By what mechanism
does it keep time? How precise does it have to be? If we answer these
questions liberally enough, almost anything counts as a clock. The
‘property’ of keeping time might be multiply realizable but trivially so.
If we answer these questions more restrictively, however, the property
we pick out might be realized by only a specific kind of clock or perhaps
only one particular clock. Then, the property of keeping time would not
be multiply realizable. Can properties be specified so that they turn out
to be multiply realizable in a non-trivial way?
A second important observation is that things are similar and different
in many ways, not all of which are relevant to MR. For two things to
realize the same property in different ways, it is not enough that they
are different in just some respect or other. The way they are different
might be irrelevant: they might realize the same high-level property
mechanisms are the genuine kinds, whereas the putative property that
they realize needs to be eliminated in favour of those different kinds. For
example, one should eliminate the general kind corkscrew in favour of,
say, the more specific kinds winged corkscrew and waiter’s corkscrew.
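The idea of one functional kind with several mechanistically distinct realizers can be pictured as an interface with multiple implementations. The following toy sketch is ours, with purely illustrative class and method names:

```python
from abc import ABC, abstractmethod

class Corkscrew(ABC):
    """Functional kind: anything with the capacity to remove corks."""
    @abstractmethod
    def remove_cork(self, bottle: str) -> str: ...

class WaitersCorkscrew(Corkscrew):
    """Realizer 1: folding fulcrum levers the worm (and cork) upward."""
    def remove_cork(self, bottle: str) -> str:
        return f"cork levered out of {bottle}"

class WingedCorkscrew(Corkscrew):
    """Realizer 2: rack and pinion convert wing rotation into lift."""
    def remove_cork(self, bottle: str) -> str:
        return f"cork cranked out of {bottle}"
```

On the eliminativist proposal just described, one would in effect keep only the concrete classes and drop the abstract base.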
MR has at least two sources. Each source is sufficient to give rise to MR,
but the two sources may also be combined to form a composite form of
MR.
and the remaining leg may be used to connect the centre of the cross
to the centre of the tabletop. The result of this different organization is
still a table.
For this proposal to have bite, we need to say something about when
two organizations of the same components are different. Relevant
differences include spatial, temporal, operational and causal differ-
ences. Spatial differences are differences in the way the components
are spatially arranged. Temporal differences are differences in the way
the components’ operations are sequenced. Operational differences
are differences in the operations needed to exhibit a capacity. Finally,
causal differences are differences in the components’ causal powers
that contribute to the capacity and the way such causal powers affect
one another. Two organizations are relevantly different just in case
they differ in some combination of the following ways: their components
are spatially arranged in different ways, perform different operations, or
perform the same operations in different orders, such that the causal
powers that contribute to the capacity, or the way those causal powers
affect one another, differ.
MR1 is ubiquitous in computer science. Consider two programs
(running on the same computer) for multiplying very large integers,
stored as arrays of bits. The first program uses a simple algorithm, such
as what children learn in school, and the second uses a more sophisti-
cated (and faster) algorithm, such as the Fast Fourier Transform. These
programs compute the same function (i.e., they multiply two integers)
using the same hardware components (memory registers, processor,
etc.). But the temporal organization of the components mandated by
the two programs differs considerably: many children could understand
the first, but understanding the second requires non-trivial mathemat-
ical training. Thus, the processes generated by the two programs count
as two different realizations of the operation of multiplication.
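A minimal sketch of this contrast (our code, not the authors'): the text pairs the schoolbook algorithm with an FFT-based one; for brevity we pair it here with Karatsuba's divide-and-conquer algorithm instead, another faster method. Both functions compute the same products of non-negative integers through differently organized sequences of steps:

```python
def schoolbook_mul(x: int, y: int) -> int:
    """Long multiplication as taught in school: one partial product per digit of y."""
    result = 0
    shift = 0
    while y > 0:
        digit = y % 10                       # current digit of the multiplier
        result += digit * x * (10 ** shift)  # add the shifted partial product
        y //= 10
        shift += 1
    return result

def karatsuba_mul(x: int, y: int) -> int:
    """Divide-and-conquer multiplication: same function, different organization of steps."""
    if x < 10 or y < 10:
        return x * y                         # base case: a single-digit operand
    n = max(len(str(x)), len(str(y))) // 2
    base = 10 ** n
    a, b = divmod(x, base)                   # x = a * base + b
    c, d = divmod(y, base)                   # y = c * base + d
    ac = karatsuba_mul(a, c)
    bd = karatsuba_mul(b, d)
    mid = karatsuba_mul(a + b, c + d) - ac - bd
    return ac * base * base + mid * base + bd
```

Run on the same inputs, the two procedures return the same result while performing different operations in different orders, which is just what MR1 requires.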
The notion of MR1 allows us to make one of Shapiro’s conclusions
more precise. Shapiro (2000) is right that components made of different
materials (e.g., aluminum vs steel) need not count as different realiza-
tions of the same property (e.g., lifting corks out of bottles) because they
contribute the same property (e.g., rigidity) that effectively screens off
the difference in materials. But this is true only if the different sets of
components are organized in the same way. It is important to realize
that if two sets of components that are made of different materials (or
even the same material) give rise to the same functional property by
contributing the same properties through different functional organiza-
tions, those are multiple realizations of the same property.
The Metaphysics of Mind 139
S2, can be organized in another way, O2, different from O1, and yet still
form a whole that exhibits the same property.
MR3 is probably the typical case of MR and the one that applies to
most of the standard examples described by philosophers. One much-
discussed example that we’ve already mentioned is the corkscrew.
Proponents of both the flat and dimensioned views usually agree that a
waiter’s corkscrew and a winged corkscrew are multiple realizations of the
kind corkscrew. The components of the two corkscrews are different, as is
the organization of those components: a waiter’s corkscrew has a folding
piece of metal that serves as a fulcrum, which is sometimes hinged, and
often doubles as a bottle opener; a winged corkscrew has a rack and
pinion connecting its levers to the shaft of the screw (or worm). So both
the components differ and their organizations differ.
Eyes – another much-discussed case – follow a similar pattern. Several
authors have noted that there are many different ways in which compo-
nents can be organized to form eyes (e.g., Shapiro 2000), and many
different kinds of components that can be so organized (e.g., Aizawa and
Gillett 2011). The same pattern can be found in many other examples,
such as engines, mousetraps, rifles, sanders and so on. We’ll mention
just one more.
Computer science provides examples of systems that exhibit MR3.
In some computers, the logic circuits are all realized using only NAND
gates, which, because they are functionally complete, can implement
all other gates. In other computers, the logic circuits are realized using
only NOR gates. In still other computers, AND, OR and NOT gates might
realize the logic circuits. The same computer design can be realized by
different technologies and can be organized to perform the same compu-
tations in different ways. Computers are anomalous because their satis-
factions of MR1 and MR2 are independent of one another (more on this
below), whereas in most cases, the two are mixed inextricably because
the different sets of components that form two different realizations can
exhibit the same capacity only by being organized in different ways.
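The functional completeness claim is easy to verify in a few lines. In this sketch (function names are ours), each standard gate is built twice, once from NAND alone and once from NOR alone:

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def nor(a: bool, b: bool) -> bool:
    return not (a or b)

# The standard gates, realized from NAND alone ...
def not_from_nand(a): return nand(a, a)
def and_from_nand(a, b): return nand(nand(a, b), nand(a, b))
def or_from_nand(a, b): return nand(nand(a, a), nand(b, b))

# ... and the same truth functions, realized from NOR alone.
def not_from_nor(a): return nor(a, a)
def or_from_nor(a, b): return nor(nor(a, b), nor(a, b))
def and_from_nor(a, b): return nor(nor(a, a), nor(b, b))
```

Each pair (for example, `or_from_nand` and `or_from_nor`) computes the same truth function from different components organized in different ways: two realizations of the same logical capacity.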
1. Q(N) + R(N) and Q(M) + R(M) are at the mechanistic level immediately
below T;
2. One of the following is satisfied:
a. [MR1] Q(N) = Q(M) but R(N) ≠ R(M)
b. [MR2] Q(N) ≠ Q(M) but R(N) = R(M)
c. [MR3] Q(N) ≠ Q(M) and R(N) ≠ R(M).
legged stool versus a four-legged stool versus a five-legged stool ... versus
an n-legged stool. Are these cases of MR? For Aizawa and Gillett, they are.
But there is no independent reason, besides Aizawa and Gillett’s account
of MR, to think so. Sure, there are differences between stools with a
different number of legs. But they do not require significantly different
mechanistic explanations, because they all rely on the same kinds of
components organized in (roughly) the same way: an n-legged stool has
approximately 1/nth of its weight supported by each leg, supposing the
legs are distributed equidistantly on the outside edge of the seat. Thus,
this is not a case of MR, and our account of MR accommodates this
fact. By the same token, so-called MR by compensatory adjustments
(Aizawa 2012), in which quantitative changes in the properties of some
components are compensated by quantitative changes in the properties
of other components, is not MR properly so called.
While some lower-level differences are clear cases of MR, other lower-
level differences are clearly not cases of MR. As it often happens, between
the clear cases at the two extremes of a continuum there is a grey area.
Our account works well for the clear cases in which MR occurs or fails to
occur, and that’s all that we hoped to accomplish.
or by atoms with the same properties organized in different ways (MR1)
or both (MR3). The properties of atoms may be realized by subatomic
particles with different properties organized in the same way (MR2) or by
particles with the same properties organized in different ways (MR1) or
both (MR3).
Another example is a neuron’s capacity to fire, or generate an action
potential. Is this multiply realized? No, because neural firing is explained
by a single kind of mechanism: the movement of ions into and out of the
axon through ion channels. Are ion channels multiply realized? Yes, at
least in some cases. For
gated channels and the calcium-activated channels, and the capacity
to selectively allow ions to pass through the channel is explained by
different mechanisms in each case. Because they have different compo-
nents (one has a voltage ‘sensor’, the other has a calcium ‘sensor’)
organized in different ways, this is a case of MR3. Now take the voltage-
gated potassium ion channels. Are they multiply realized? It seems not,
although there are variations: there are differences in the molecules that
constitute the structure allowing these channels to inactivate, but they
seem to operate in the same way (although research into these structures
is a hot topic in neuroscience as of this writing; see Jensen et al. [2012]
for just one example).
Where does MR stop? Either at a level where there is no MR, because
there is only one kind of component and one kind of organization that
realizes a certain property, or at the level of the smallest physical compo-
nents (if there is one).
Computing systems are an especially good case study for MR. Computers
exhibit MR of computed function by algorithm, of algorithm by program
(using different programming languages), of program by memory loca-
tions, of memory locations (and processing of programs) by architec-
ture, of architecture by technology.
As we pointed out above, computing systems exhibit all forms of
MR. You can realize the same computation type using the same compo-
nent types arranged in different ways (MR1), different component types
arranged in the same way (MR2), as well as different component types
arranged in different ways (MR3). When most systems exhibit MR3,
it’s because different components can only exhibit the same capacity
by being organized in different ways. By contrast, computing systems
are such that the same components can be organized in different ways
Shapiro (2000, 2004) claims that functional properties do not enter into
laws other than dull, analytic laws like ‘all corkscrews have the function
of removing corks’. But worrying about laws may be beside the point.
Laws do not seem to be of primary (or even of any) importance to many
special sciences; hence the move from models of scientific explanation
such as the deductive-nomological and inductive-nomological models
to models of explanation specific to the special sciences (Craver 2007;
Salmon 1989). The domains of the special sciences abound in functional
properties, however, so it is important to see where (and how) they fit.
First, as Shapiro notes, functional properties do enter into explana-
tions; in particular, they often enter into non-analytic generalizations.
For human artefacts, we start with a goal, and we try to build something
8 Conclusion
Notes
1. We are going to assume that everything is physical and leave angels aside.
2. The thesis that most programs are realized by most physical systems is
discussed and refuted in Piccinini (2010).
3. For an account of functions properly so called – teleological functions – and
their relations to capacities or causal powers, see Garson and Piccinini (2013)
and Maley and Piccinini (forthcoming).
4. There may well be more to properties than causal powers – e.g., properties
may have a qualitative aspect (Heil 2003) – but we ignore this possibility
here.
5. Heil argues that MR is a phenomenon pertaining to high-level predicates in
virtue of different underlying properties of the entities to which the predi-
cates apply. Strictly speaking, for Heil there are no high-level properties and
a fortiori no high-level properties that are multiply realizable.
6. Unfortunately we lack the space to further articulate and defend our egali-
tarian ontology. We hope to do so in future work.
7. Thanks to Eric Funkhouser for raising this potential objection.
8. We hope to articulate this sketch of an ontology of composition at greater
length in future work.
9. What about components of different size, such as two levers from two winged
corkscrews, one of which is twice the size of the other? In this case, we may
have to adjust the scale of the components before recombining them. See
Wimsatt (2002) for a detailed discussion of functional equivalence, isomor-
phism and similarity.
10. Thanks to Sarah K. Robins for suggesting this example.
11. Thanks to Bill Bechtel for helpful comments on this point.
12. What about malfunctioning mousetraps? They belong to a type that catches
mice. What about putative mousetraps that are so ill designed that they can
never catch any mice? They are not truly mousetraps. For more on this kind
of case, see Maley and Piccinini (forthcoming).
13. Some of these points are made independently by Alyssa Ney (2010).
14. Yet another form of autonomy is the lack of direct constraints between high-
er-level explanations and lower-level ones. As one of us argued elsewhere
(Piccinini and Craver 2011), this kind of autonomy fails, too.
15. Some of the ideas behind this paper were sketched by Piccinini around 2005,
mostly in reaction to reading recent work by Shapiro (2000, 2004). Thanks
to Mark Sprevak for inviting us to submit this paper and organizing the New
Waves in Philosophy of Mind Online Conference. Thanks to Bill Bechtel,
Erik Funkhouser, Carl Gillett, Alyssa Ney, Tom Polger, Sarah K. Robins, Kevin
Ryan, Paul Schweizer, Larry Shapiro, Dan Weiskopf, our audience at the 2013
SSPP meeting and especially Ken Aizawa for helpful comments on a previous
version.
References
Aizawa, K. (2012) ‘Multiple Realization by Compensatory Differences’. European
Journal for Philosophy of Science, 3(1), 1–18.
Aizawa, K., and C. Gillett (2009) ‘The (Multiple) Realization of Psychological
and Other Properties in the Sciences’. Mind and Language, 24(2), 181–208.
doi:10.1111/j.1468-0017.2008.01359.x.
Aizawa, K., and C. Gillett (2011) ‘The Autonomy of Psychology in the Age of
Neuroscience’. In Causality in the Sciences, eds, P. M. Illari, F. Russo and J.
Williamson, 202–223. Oxford University Press.
Bechtel, W., and J. Mundale (1999) ‘Multiple Realizability Revisited: Linking
Cognitive and Neural States’. Philosophy of Science, 66(2), 175–207.
Bickle, J. (2003) Philosophy and Neuroscience: A Ruthlessly Reductive Approach.
Dordrecht: Kluwer.
Block, N. J., and J. A. Fodor (1972) ‘What Psychological States Are Not’.
Philosophical Review, 81(2), 159–181.
Buhr, E. D., and J. S. Takahashi (2013) ‘Molecular Components of the
Mammalian Circadian Clock’. In Circadian Clocks, vol. 217, eds, A. Kramer
and M. Merrow, 3–27. Berlin, Heidelberg: Springer Berlin Heidelberg.
doi:10.1007/978-3-642-25950-0_1.
Couch, M. (2005). ‘Functional Properties and Convergence in Biology’. Philosophy
of Science, 72, 1041–1051.
Craver, C. (2007) Explaining the Brain. Oxford: Oxford University Press.
Fodor, J. A. (1968) Psychological Explanation. New York: Random House.
Fodor, J. A. (1974) ‘Special Sciences (or, The disunity of science as a working
hypothesis)’. Synthese, 28(2), 97–115, doi:10.1007/BF00485230.
Fodor, J. A. (1997) ‘Special Sciences: Still Autonomous after All These Years’. Noûs,
31 (suppl.: Philosophical Perspectives, 11), 149–163.
Garson, J., and G. Piccinini (2013) ‘Functions Must Be Performed at Appropriate
Rates in Appropriate Situations’. British Journal for the Philosophy of Science. doi:
10.1093/bjps/axs041.
Gillett, C. (2002) ‘The Dimensions of Realization: A Critique of the Standard
View’. Analysis, 62, 316–323.
Gillett, C. (2003) ‘The Metaphysics of Realization, Multiple Realizability, and the
Special Sciences’. Journal of Philosophy, 100(11), 591–603.
Gillett, C. (2010) ‘Moving beyond the Subset Model of Realization: The Problem
of Qualitative Distinctness in the Metaphysics of Science’. Synthese, 177,
165–192.
Heil, J. (2003) From an Ontological Point of View. Oxford: Oxford University Press.
Jensen, M. O., V. Jogini, D. W. Borhani, A. E. Leffler, R. O. Dror, and D. E. Shaw
(2012) ‘Mechanism of Voltage Gating in Potassium Channels’. Science,
336(6078), 229, doi:10.1126/science.1216533.
Keeley, B. L. (2000) ‘Shocking Lessons from Electric Fish: The Theory and Practice
of Multiple Realization’. Philosophy of Science, 67(3), 444–465.
Kim, J. (1992) ‘Multiple Realization and the Metaphysics of Reduction’. Philosophy
and Phenomenological Research, 52(1), 1–26.
Kim, J. (1998) Mind in a Physical World. Cambridge, MA: MIT Press.
Klein, C. (2008) ‘An Ideal Solution to Disputes about Multiply Realized Kinds’.
Philosophical Studies, 140(2), 161–177.
154 Adam Pautz
theorists not only reject reductive externalist theories; they often reject
all reductive approaches, adopting instead what I call a consciousness
first approach: phenomenal consciousness is not something that must
be reductively explained in other terms (e.g., tracking plus cognitive/
rational accessibility) but rather a starting point from which to explain
other things (e.g., cognition, rationality, value).
My sympathies lie with phenomenal internalism and the phenom-
enal intentionality program.2 In particular, I defend an internalist, neo-
Galilean (or ‘Edenic’) version of ‘intentionalism’ (Pautz 2006; Chalmers
2006). But here my aim is negative. I criticize three main armchair
arguments against rival reductive externalist theories. Externalists like
Dretske, Lycan and Tye have raised their own objections to such argu-
ments. But I show that they do not quite hit the nail on the head; I iden-
tify what I take to be the real problems, arguing that the much-discussed
armchair arguments are in fact without merit. The moral: the case for
phenomenal internalism must depend on empirical arguments.
304–305). (As we saw, this follows from the spatial datum and the
existence of properties.)
4. But, ex hypothesi, the BIV bears no dyadic physical-functional (e.g.,
tracking) relation to this property.
5. Conclusion: phenomenal internalism implies that the phenomenal
representation relation is a non-physical and (presumably) primi-
tive relation between individuals and being round and other sensible
properties.
Externalists like Tye and Dretske have been much too concessive in
granting that phenomenal internalism enjoys pre-theoretical support at
all. The real problem is that this is not so. A little reflection, indeed the
whole history of human thought on perception, shows that phenom-
enal externalism is not absurd at all; indeed, if anything, it is pre-theo-
retically quite plausible.
For instance, many have accepted naive realism. On this view, the
sensible qualities, or ‘qualia’, are really out in the world: colours, sound
qualities, tastes and so on. For instance, when you view a tomato and
have phenomenal property R, the red quality you are directly acquainted
with is really ‘spread out’ on its surface. The distinctive claim of naive
realism is that, at least in ‘veridical’ cases, some ordinary (non-blurry, etc.)
phenomenal properties such as R are grounded in nothing but standing
in a relation of direct acquaintance to a concrete state, or condition,
involving the instantiation of sensible properties (colours, shapes) by
a mind-independent physical object. (These sensible properties include
viewpoint-relative properties such as being elliptical from here – ‘objective
looks’.)
To handle hallucination, naive realists might appeal to non-normal
objects such as sense data or Meinongian objects, so that R is always
grounded in acquaintance with objects. Alternatively, they might accept
a more extreme ‘disjunctivism’ (I think the best version is ‘primitivist
disjunctivism’, discussed in Pautz 2010a, 275). So hallucination doesn’t
undermine naive realism.
Naive realism is an example of a relational (act-object) theory of phenom-
enology: some phenomenal properties are, in some cases, grounded in
direct acquaintance with concrete objects and states wholly distinct
from perceivers.
Naive realists have taken different views on the physical basis of
acquaintance with the world. Pre-modern thinkers such as Plato, Euclid
and Ptolemy accepted the extromission theory: we become acquainted
with the world by way of rays emanating from the eye (perhaps with
infinite velocity, like gravity in Newton’s theory). Thus in his Optika
Euclid wrote:
Rays [proceed] from the eye [and] those things are seen upon which
the visual rays fall and those things are not seen upon which the
visual rays do not fall.
sense data? On one version, sense data occupy a separate, private two-
dimensional mental space. Even on this version they are wholly distinct
from the subject (soul or brain) that observes them. On the different
version defended by Price (1954, vii–viii) and Jackson (1977, 102), they
are three-dimensional objects in public physical space alongside physical
objects. Think of them as projections of the brain. On this view, even
though a sense datum exists in public space, only the subject of the brain
that causes it to come into existence there can be acquainted with it. So
the sense datum theory is a relational theory like naive realism, with
sense data as simulacra for physical objects.
Since sense data are wholly distinct from subjects, this theory is also
externalist. Suppose as before that Harold has an ‘intrinsic duplicate’,
Twin Harold, on inverted earth. Exactly what that means depends on
the correct ontology of the human subject. If Harold and Twin Harold
are physical things like brains or bodies (even if they bear non-physical
acquaintance relations to non-physical sense data), then they are
intrinsic duplicates in that these brains or bodies share all of their
intrinsic properties. Alternatively, if they are simple, non-physical souls,
they are intrinsic duplicates because these souls have exactly the same
intrinsic properties, like being happy or thinking.
Either way, just like naive realism and tracking intentionalism, the sense
datum theory implies that Harold and Twin Harold can differ phenom-
enally, contrary to phenomenal internalism. For suppose that Harold
and Twin Harold live under different (‘inverted’) psychophysical laws
connecting brain states with acquaintance with sense data, so that while
Harold is acquainted with a blue sense datum on looking at the sky, Twin
Harold is acquainted with a yellow one. Alternatively, suppose that they
live under the same psychophysical laws but these laws are probabilistic,
so that even though Harold and Twin Harold undergo an intrinsically
identical brain state, this same brain state happens to cause them to be
acquainted with these qualitatively different sense data.
Then although Harold and Twin Harold are intrinsic duplicates, they
differ phenomenally. On the sense datum theory, just as on naive realism
and tracking intentionalism, the phenomenal difference between these
two brains or souls is not an intrinsic difference. Instead, it is a purely
relational, extrinsic difference: they bear the acquaintance relation to
different sense data wholly distinct from them. It is like the difference
between sitting next to Mary and sitting next to Jane. Indeed, the whole
point of the sense datum theory is that phenomenal differences are rela-
tional differences, contrary to rival ‘non-relational’ views such as ‘adver-
bialism’. On the sense datum theory, this purely relational difference
The Real Trouble with Armchair Arguments 165
scientific facts about the role of the brain in enabling conscious experi-
ence. Because of the seductive simple empirical argument, they have
become totally convinced of phenomenal internalism: phenomenal
differences do require intrinsic differences inside the head. Because their
confident belief in phenomenal internalism has become so ingrained,
they mistakenly take it to be something that is obvious or self-evident
on a moment’s reflection. But really it is just a high-level empirical belief,
one that became widely accepted in the history of human thought only
after detailed empirical investigation.
Now the phenomenal internalist might naturally respond, ‘OK,
the armchair argument from the internalist intuition fails, but why
can’t I just directly rely on the simple empirical argument to undermine
reductive externalist theories like tracking intentionalism and naive
realism?’
My focus here is on armchair arguments, but let me address this ques-
tion. I favour certain empirical arguments (see Section 8), but I think
that this very simple empirical argument fails. The quick way to see this
is to note that it is equally true that ‘for every change in thoughts about
natural kinds, there is a measurable change in the brain’ (to appropriate
Prinz’s language). But this does not entail that natural kind thoughts are
fixed by intrinsic neural properties: content externalism means this is
not the case. The inference is equally fallacious concerning phenomenal
states.11
The fallacy is obvious: the mere fact that it is nomically necessary
in actual humans that phenomenal differences are correlated with
intrinsic neural differences doesn’t mean that this is metaphysically
necessary. What Kriegel calls ‘the laws of neurophysiology’ might
(like typical special science laws) obtain only relative to a background
condition, one not satisfied in the case of the BIV. (For instance, a
brain state might result in an experience of round only if it normally
tracks round objects.) Indeed, Prinz is wrong that the simple correla-
tional data even raise the probability of phenomenal internalism over
phenomenal externalism, since all phenomenal changes are correlated
with both changes in intrinsic neural states and changes in externally-
determined content (e.g., when you go from seeing yellow to seeing
blue, you go from a neural state that normally tracks yellow objects to
a neural state that normally tracks blue objects). So the simple correla-
tional data alone are entirely neutral between phenomenal internalism
and phenomenal externalism (naive realism, tracking intentionalism,
active externalism).
So the argument from the internalist intuition fails. But perhaps all is
not lost for the armchair enthusiasts. Another argument is available:
the argument from possibility intuitions. The argument specifically targets
reductive materialist externalist theories, like tracking intentionalism. (It
doesn’t work against dualist externalist theories; see note 13.) Chalmers,
Loar, Shoemaker and Levine suggest the argument but do not clearly
distinguish it from the argument from the internalist intuition.12 So let
me explain the difference.
Again, consider tracking intentionalism. On tracking intentionalism,
having R entails the obtaining of a certain wide (non-intrinsic) physical
condition: having a state that under biologically normal conditions
tracks – and thereby represents – the instantiation of redness (on this
view, a reflectance property) and roundness in the external world.
Having the experience in the absence of the wide physical condition is
metaphysically impossible.
Therefore, to refute such a reductive externalist theory, it would be
enough to establish from the armchair the mere possibility of having R in
the absence of the relevant wide physical condition. For instance, it would
be enough to show that in some possible world a BIV intrinsic duplicate of
oneself has R. It would also be enough to show that certain spectrum inver-
sion scenarios are possible (more on this below). The argument from possi-
bility intuitions merely relies on such possibility claims. Thus it differs
from the argument from the internalist intuition, which by contrast relies
on a much stronger necessitation claim to the effect that every possible
intrinsic duplicate of oneself (e.g., every possible brain in a vat duplicate)
in every possible world is a phenomenal duplicate of oneself.13
For example:
But then these other arguments would be doing all the justificatory
work.
Materialists cannot use possibility intuitions against externalist mate-
rialism for another reason. I call it the bad lot problem. Consider an
analogy: if you believe that the weatherman is wrong in his predic-
tions about wind conditions half the time (these predictions form a ‘bad
lot’), you should put hardly any stock in any of his predictions about
wind conditions. But the materialist believes that our antimaterialist
possibility intuitions about the relationship between the phenomenal
and the physical also form a ‘bad lot’: whatever version of materialism
turns out to be true, intuitions in this group must generally be false (e.g.,
if internalist materialism is true, all contrary possibility intuitions are
false). So if you accept materialism, you must say that not only do they
provide equal justification against internalist materialism and exter-
nalist materialism (the parity problem); they are also not to be trusted at
all (the bad lot problem).
beliefs might be generally reliable. But that is hard to come by. (A mate-
rialist cannot comfortably accept ‘revelation’: that we ‘immediately
grasp’ the full essential nature of phenomenal properties just by being
acquainted with them and can tell that those essential natures don’t
involve the past or future.) Absent such a theory, maybe we should be
sceptical about our intuition favouring phenomenal localism.
Third, Tye and Dretske might explain away our localist ‘intuition’ as
follows: since we do not have to look to the past or future to know
that we have certain phenomenal properties now – one need only intro-
spect – we might erroneously conclude that they are temporally local.
To see that this inference is erroneous, consider another case: my three-
year-old daughter can immediately tell just by looking that something
is a heart, without knowing about its evolutionary history. But being
a heart is a historical, non-local property: if an intrinsic duplicate of
the heart formed by chance in a swamp, it would not also be a heart,
because it would lack the right evolutionary history – it would be a ‘fake
heart’. The general point: you can know something without knowing
all its a posteriori consequences. Likewise, maybe on Dretske and Tye’s
historical externalism my daughter (or an adult sceptical of evolution)
can immediately know about her phenomenal properties, even if she
doesn’t know about her evolutionary history.
Notes
1. For reductive externalism, see Dretske (1995), Lycan (2001), Tye (2000), Noë
(2004) and Fish (2009, 153). I will not explain ‘reductive’ here. See Sider
(2011, 116–132) for clarification and defence of a general reductionism about
the manifest image. See §2 of this chapter for a case for reduction over alter-
natives (e.g., basic grounding relations).
2. See Kriegel (2011), Horgan and Tienson (2002), Loar (2003) and Mendelovici
(2010). Pautz (2013) defends in detail the following ‘consciousness-first’
picture: Consciousness grounds rationality because it is implicated in basic
epistemic norms. (For a related view, see Smithies, Ch. 6 of this volume.)
In turn, the facts about rationality help to constitutively determine belief
and desire (Davidson, Lewis). So consciousness also ultimately grounds belief
and desire. Chalmers (2012, 467) briefly defends a related two-stage view on
which acquaintance grounds normative inferential connections and these in
turn pin down content.
3. Another well-known argument for phenomenal externalism starts with what
I have elsewhere (2007, 251) called the properties version of the ‘transparency
observation’ (see, e.g., Tye [forthcoming b]). But I think this ‘transparency
observation’ (unlike the ‘spatial datum’) is far from pre-theoretically obvious,
due to problems (not considered by Tye) concerning hallucination, many-
property situations and a priori constraints on attentive awareness (Pautz
2007, 517, 522 and n. 12).
4. See, e.g., Kriegel (2011, 167) and Prinz (2012, 286).
5. The bad-off BIV has no visual receptor system or motor output system (just
the central nervous system). Granted, if the BIV were suitably connected to
a human body, its current neural state would track round things and cause
round-appropriate behavioural movements. Could the BIV’s standing in this
counterfactual relation to roundness constitute its standing in the phenom-
enal representation relation to being round rather than to any other shape
(e.g., being square)? No, for by differently hooking up the BIV to the world
and a motor output system, we could get its brain state to be caused by (say)
square things and to cause square-appropriate behaviour.
References
Block, N. (1990) ‘Inverted Earth’. Philosophical Perspectives, 4, 53–79.
Burge, T. (2003) ‘Phenomenality and Reference: Reply to Loar’. In M. Hahn and B.
Ramberg (eds), Reflections and Replies. Cambridge, MA: MIT Press.
Byrne, A. (2003) ‘Color and Similarity’. Philosophy and Phenomenological Research,
66, 641–665.
Chalmers, D. (2004) ‘The Representational Character of Experience’. In B. Leiter
(ed.), The Future for Philosophy. Oxford: Oxford University Press.
Chalmers, D. (2006) ‘Perception and the Fall from Eden’. In T. Szabo Gendler and
J. Hawthorne (eds), Perceptual Experience. Oxford: Oxford University Press.
Chalmers, D. (2009) ‘The Two-Dimensional Argument against Materialism’. In
B. McLaughlin and S. Walter (eds), The Oxford Handbook of Philosophy of Mind.
Oxford: Oxford University Press.
Chalmers, D. (2012) Constructing the World. Oxford: Oxford University Press.
Devitt, M. and Sterelny, K. (1987) Language and Reality. Cambridge, MA: MIT
Press.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Field, H. (2001) Truth in the Absence of Fact. Oxford: Oxford University Press.
Fish, W. (2009) Perception, Hallucination and Illusion. Oxford: Oxford University
Press.
Fodor, J. (1991) ‘A Modal Argument for Narrow Content’. Journal of Philosophy,
88, 5–26.
Hawthorne, J. (2004) ‘Why Humeans Are Out of their Minds’. Noûs, 38,
351–358.
Horgan, T., and J. Tienson. (2002) ‘The Intentionality of Phenomenology and the
Phenomenology of Intentionality’. In D. Chalmers (ed.), Philosophy of Mind:
Classical and Contemporary Readings. Oxford: Oxford University Press.
180 Adam Pautz
186 Elizabeth Irvine
even the apparently ‘good’ cases can often be seriously flawed. This
problem shows that doing empirically informed philosophy is not as easy
as it looks. On closer inspection of many cases where empirical work is
used to support a philosophical theory, discussions of empirical research
are either irrelevant to the philosophical theory or distinction, or they
show it to be fundamentally misguided or uninteresting.
The second problem is that deriving a philosophical account of a mental
phenomenon, such as perception or consciousness, from empirical data
often (though not always) makes the philosophical account little more
than a vague version of currently accepted scientific theories. Within
science, vague theories are seen as inadequate. Typically, these
philosophical accounts also attempt to provide a unified theory of wide
explanatory scope, yet science tolerates very few such ‘grand unified’
theories. If philosophers aim to contribute to the mind/brain sciences
by developing unifying but (largely) non-predictive theories, there is a
real question about how scientifically useful such work is.
It is suggested below that these problems are serious and fairly
common ones within at least some approaches to interdisciplinary work
across philosophy and the mind/brain sciences. Instead of throwing in
the towel and either retreating to the armchair or the local cognitive
science department, I later propose an alternative, and hopefully more
productive, philosophical way to engage with empirical work.
in the type of information found in these two stores and the different
decay rates of this information, leads to specific kinds of errors in the
Sperling paradigm. When subjects are cued after the visible analogue
store has decayed (location information gone) but while the post-
categorical store is still available (identity information still available),
subjects make ‘location errors’. In these cases, subjects correctly identify
some letters, but since location information is no longer available, the
letters come from non-cued rows. Significantly, informational persist-
ence is a type of visual memory; it tells us about the short-term storage of
different types of visual information but not about the content or dura-
tion of visual experiences. As Luck and Hollingworth (2008) state: ‘the
partial-report technique does not measure directly the visible aspect of
visual sensory memory, but rather that information persists after stimulus
offset’ (16; original italics).
These facts pose several problems for accounts that equate the contents
of rich phenomenal consciousness with the contents of sensory memory.
First, several authors have treated visible and informational persistence as
different aspects of the same (conscious) phenomenon (e.g., Block 2007,
488–491; Tye 2006, 511–513), but this is a misleading conflation of
two very different phenomena. Visible and informational persistence are
investigated with different experimental paradigms, concern different
parts of the visual system, have different temporal properties, and relate
either to early visual processing and visual experience or to short-term
visual memory. The Sperling paradigm, and the general phenomenon
of informational persistence, are not measures of what is experienced,
only of what is ‘remembered’ and reported when the display is no longer
present. Using informational persistence (memory) to make claims about
the contents of visual experience is not accepted within current psycho-
logical theories, so neither should it be in philosophical accounts.
Second, even if we could make inferences from informational persist-
ence to the contents of phenomenal consciousness, the properties of
the visible analogue and post-categorical stores are not consistent
with the properties typically attributed to phenomenal conscious-
ness. Information in these stores is deeply processed, up to the level of
object identity. Subjects are therefore not reading off the letter identities
from a conscious perception of detailed but non-conceptual letter-like
shapes. They are simply reporting the letter identities
that are already processed and stored; this type of sensory memory is
certainly not ‘iconic’. Further, subjects do not report a changing expe-
rience as different types of information degrade, as one might expect
if the contents of sensory memory are the contents of phenomenal
Problems and Possibilities 191
is that they are often of the form of ‘grand unifying’ theories that
attempt to account for a wide range of phenomena (e.g., the theories
of consciousness noted above). When compared with other inadequate
scientific theories, philosophical theories can therefore play a small but
useful role in science: suggesting new frameworks to account for (and
sometimes unify) a range of phenomena. However, a proliferation of
these kinds of theories is not necessarily helpful.
This is largely because there are very few accepted theories in science
that unify and explain a wide range of phenomena but fail to exhibit the
typical properties of ‘good’ theories noted above. The theory of natural
selection is a standard example; its explanatory scope is massive, but
without the addition of context-specific knowledge and assumptions,
it cannot make (quantitative) predictions, and the boundaries of the
theory are still being worked out for specific cases (e.g., what proportion
of evolution is really due to natural selection). As these kinds of
non-predictive, ‘grand unifying’ theories do not possess the standard properties
of scientific theories, very few of them are tolerated for long or seen as
useful explanatory frameworks in the first place. For example, Friston’s
‘free-energy’ principle (e.g., Friston and Stephan 2007; Friston et al.
2006) is potentially a powerful ‘grand unifying’ theory for cognition and
animal behaviour in general. However, it appears to be taken seriously
only in relevant research communities when empirical support is given
for details of its implementation in specific cases (e.g., see Clark 2013 for
a thorough overview of recent work with much the same viewpoint).
On a smaller scale, there are frequent complaints about the proliferation of
models providing a ‘proof of concept’ for a particular explanatory frame-
work. This occurs when a model or framework is shown to account for
core, simple phenomena in a post hoc way. These models often lack the
properties of ‘good’ scientific models, as they are often fairly unspecified
or not well worked out and so do not provide clear, testable predictions
about any other cases (e.g., in decision making; see Glöckner and Betsch
2011). Clearly, an explanatory framework has to start somewhere, but
it will not be particularly useful if it goes undeveloped and untested, as
many proofs of concept appear to do.
Philosophers aiming to give ‘grand unifying’ theories of mental/
cognitive phenomena need to be mindful of these facts. While there
does seem to be a shortage of theoretical work in cognitive science (a
gap philosophers could try to fill), the scientific value of adding more
loosely specified, wide-scope theories into the literature is not obvious.
‘Good’ theoretical work needs to be highly empirically sensitive, and
theories need to be good scientific theories – at the very least being able
1.3 Summary
I have not argued that all empirically informed or empirically based
philosophical accounts face the problem of making inappropriate
theoretical distinctions or of providing nothing more than inadequate
scientific theories; it is only that within my experience these seem
to be common problems facing empirically informed philosophy of
mind and philosophy of cognitive science. In part, I suspect that this is
because much of (interdisciplinary) philosophy of mind and cognitive
science is focused on answering constitutive questions: what is percep-
tion or attention or consciousness, what are their functions, and how
do they relate to other mental/cognitive/psychological states. Given
the differences between philosophy of mind and contemporary cogni-
tive science, it is hardly surprising that, as philosophical distinctions
do not always map onto scientific ones, many philosophers are left to
force a fit (often without realizing it), treat (constitutive) philosophical
questions as ones to be answered a priori, or fall into the bit of cogni-
tive science that they are best trained to do (interpreting experimental
results into newish conceptual frameworks but not engaging in full-on
scientific research).
The problems outlined above are not irresolvable. I suggest below that
there are other ways – as yet underutilized and potentially invaluable –
of pursuing interdisciplinary work. These methods are based on the idea
of treating mental/cognitive/psychological properties as any other high-
level property of a complex biological system and trying to figure out
the best way to investigate them. Rather than ask ‘What is (philosophi-
cally interesting phenomenon) X?’ or ‘Does X have (philosophically
interesting) property Y?’, we can instead ask ‘What methods can we use
to investigate X (and whether X has property Y)?’ and perhaps learn
some surprising things in the process, both about how science works
and about the phenomena we’re interested in. While an approach based
on philosophy of science is sometimes found in philosophy of cognitive
science, I suggest that it deserves to be taken more seriously.
and talk about how to interpret them is that there really is a fact of the
matter about what mental state a subject is in at a particular point in
time. Given this assumption and the problems identified above, subjects
are now sometimes given training on how to figure out what they
really are experiencing and how best to report their mental states (e.g.,
Overgaard et al. 2006). However, there are other ways of thinking about
what first-person data are about. Dennett (2003, 2005, 2007) claims that
first-person reports tell us of individuals’ beliefs about their experiences
or mental states. In contrast, Piccinini (2010) suggests that they can
instead be seen as direct reports about their experiences or mental states.
The problem, as noted above, is that we are very unsure how to
map measures of cognitive processes, such as reports, to mental states.
Perhaps mental states just are whatever we think they are or perhaps
there are facts of the matter about what mental states we have at any
one time that do not necessarily match our reports about them. If the
latter is true, we don’t know how much of a grasp of our mental life we
really have (Schwitzgebel), and even if we did have a good knowledge of
it, this doesn’t necessarily translate into first-person data being faithful
reflections of our mental states (e.g., due to response bias). What the
relation is between first-person measures and mental states, as well as
the implications this has for what we think mental states are, are ques-
tions that are central to the way we interpret first-person data and thus
deserve further serious discussion.
The aim of this section has been to show that the methodological
questions associated with the use of first-person measures – including
how to deal with bias, how these measures are generated and what they
are measures of – are more serious than is often acknowledged. While some
of these points could of course arise in philosophy of cognitive science,
they often do not. The approach outlined above favours an investi-
gation of fundamental questions about scientific measurement and
the interpretation of data that underlie any scientific enterprise. This
focus on general methodological questions, rather than on issues that
are specific to particular research areas, is a definite advantage of the
approach. Instead of having to respond to long-standing intuitions and
assumptions (e.g., first-person reports are largely correct, direct read-
outs of private mental states), a focus on scientific methodology offers
a new and potentially insightful approach to this important range of
questions. A philosophy-of-science approach is not the only way into
these questions, but it does highlight many essential methodological
questions that are often bypassed or are simply not part of the current
dialectic.
So far, this may seem to affect only how we talk about cognition, not
what it really is. In this way of thinking, empirical questions have no
import for metaphysical questions. Yet similar work in philosophy of
biology, particularly related to complex multi-level systems, suggests
that just as there are multiple ways of carving up mechanisms, there
are multiple ways of carving the same process up into ‘real’ ontological
chunks. Ontological pluralism, accepting that there are multiple and
equally viable ways of carving up the same bit of reality into ontological
types or kinds, is not necessarily a radical position; rather, it is one that
naturally arises from the consideration of scientific methods (see, e.g.,
Boyd 1999; Wilson 2005; Craver 2009; Dupré 1993).
While this flies in the face of much debate about this issue, it seems
that ideas from philosophy of science, properly applied, have much
potential to calm similar debates within philosophy of mind and cogni-
tive science. This is so because – as has become clear in biology and is
now reflected in philosophy of biology – complex biological systems
are not the sort of thing that supports exception-free generalizations
or clear-cut dichotomies. Neither is it profitable to model biological
systems in only one way or carve up a causal landscape in only one way.
Reflection on biology itself has generated philosophical positions of
experimental, cross-level, conceptual and theoretical pluralism because
these are the best ways we have of understanding biological systems
(e.g., Kellert et al. 2006).
These are also reflected in metaphysical positions about biological
systems; current science is structured in a pluralist way not just because
we aren’t very good at it but because sharp, generalizable, context-in-
dependent boundaries are not a feature of biological systems. They are
not something we can ever expect to find. We, as evolved cognizing
beings, are clearly complex biological systems, so insights from philos-
ophy of biology have obvious application in philosophy of mind and
cognitive science. Future projects would include seeing how far similar
tactics work in other debates (e.g., related topics in embodied cogni-
tion, whether cognition involves representations, dual systems theories
vs sensorimotor theories of perception and so on).
3 Conclusions
Note
1. This is true across the board; see, e.g., Kessel et al. (2008) on interdiscipli-
nary research across health and the social sciences and McCauley and Bechtel
(2001) and Churchland (1993) on cross-level research in psychology and into
philosophy.
References
Adams, F., and Aizawa, K. (2001) ‘The Bounds of Cognition’. Philosophical
Psychology, 14, 43–64.
Adams, F., and Aizawa, K. (2008). The Bounds of Cognition. Oxford: Blackwell.
Anderson, J. R., and Lebiere, C. (2003) ‘The Newell Test for a Theory of Cognition’.
Behavioral and Brain Sciences, 26(5), 587–601.
Baars, B. J. (1997) ‘In the Theatre of Consciousness: Global Workspace Theory, a
Rigorous Scientific Theory of Consciousness’. Journal of Consciousness Studies,
4(4), 292–309.
Bayne, T., and Spener, M. (2010) ‘Introspective Humility’. Philosophical Issues,
20(1), 1–22.
Bechtel, W. (2009) ‘Constructing a Philosophy of Science of Cognitive Science’.
Topics in Cognitive Science, 1(3), 548–569.
Bechtel, W. P., and McCauley, R. N. (1999) ‘Heuristic Identity Theory (or back
to the future): The Mind-body Problem Against the Background of Research
Strategies in Cognitive Neuroscience’. In M. Hahn and S. C. Stoness (eds),
Proceedings of the 21st Annual Meeting of the Cognitive Science Society. New York:
Erlbaum.
Block, N. (2007) ‘Consciousness, Accessibility, and the Mesh between Psychology
and Neuroscience’. Behavioral and Brain Sciences, 30(5–6), 481–499; discussion
499–548.
Boyd, R. (1999) ‘Homeostasis, Species, and Higher Taxa’. In R. A. Wilson (ed.),
Species: New Interdisciplinary Essays, 141–185. Cambridge, MA: MIT Press.
Brook, A. (2009) ‘Introduction: Philosophy in Philosophy of Cognitive Science’.
Topics in Cognitive Science, 1(2), 216–230.
Bruner, J. S., and Postman, L. (1949) ‘Perception, Cognition, and Behavior’.
Journal of Personality, 18, 14–31.
Bruner, J. S., and Postman, L. (1947) ‘Emotional Selectivity in Perception and
Reaction’. Journal of Personality, 16(1), 69–77.
Burge, T. (2010) Origins of Objectivity. Oxford: Oxford University Press.
Burge, T. (2011) ‘Disjunctivism Again’. Philosophical Explorations, 14(1), 43–80.
Castelhano, M. S., and Henderson, J. M. (2008) ‘The Influence of Color on
the Activation of Scene Gist’. Journal of Experimental Psychology: Human
Perception and Performance, 34, 660–675.
Chemero, A., and Silberstein, M. (2008) ‘Defending Extended Cognition’. In B. C.
Love, K. McRae and V. M. Sloutsky (eds), Proceedings of the 30th Annual Meeting
of the Cognitive Science Society, 129–134.
Churchland, P. S. (1993) ‘The Co-evolutionary Research Ideology’. In A. Goldman
(ed.), Readings in Philosophy and Cognitive Science, 745–767. Cambridge, MA:
MIT Press.
Clark, A. (2013) ‘Whatever Next? Predictive Brains, Situated Agents, and the
Future of Cognitive Science’. Behavioral and Brain Sciences, 36, 181–253.
Clark, A., and Chalmers, D. J. (1998) ‘The Extended Mind’. Analysis, 58(1), 7–19.
Craver, C. F. (2007a) ‘Constitutive Explanatory Relevance’. Journal of Philosophical
Research, 32, 3–20.
Craver, C. F. (2007b) Explaining the Brain: Mechanisms and the Mosaic Unity of
Neuroscience. Oxford: Oxford University Press.
Irvine, E. (2012) ‘Old Problems with New Measures in the Science of Consciousness’.
British Journal for the Philosophy of Science, 63, 627–648.
Jacob, P., and de Vignemont, F. (2010) ‘Spatial Coordinates and Phenomenology
in the Two-visual Systems Model’. In N. Gangopadhyay, M. Madary and F.
Spicer (eds), Perception, Action and Consciousness, 125–144. Oxford: Oxford
University Press.
Kaplan, D. M. (2012) ‘How to Demarcate the Boundaries of Cognition’. Biology
and Philosophy, 27, 545–570.
Kellert, S. H., Longino, H. E., and Waters, C. K. (eds) (2006) Scientific Pluralism.
Minneapolis: University of Minnesota Press.
Kessel, F., Rosenfield, P., and Anderson, N. (eds) (2008) Interdisciplinary Research:
Case Studies from Health and Social Science. Oxford: Oxford University Press.
Kouider, S., De Gardelle, V., Sackur, J., and Dupoux, E. (2010) ‘How Rich is
Consciousness? The Partial Awareness Hypothesis’. Trends in Cognitive Sciences,
14(7), 301–307.
Lau, H. C. (2006) ‘Are We Studying Consciousness Yet?’ Journal of Consciousness
Studies, 13(4), 94–112.
Loftus, G., and Irwin, D. (1998) ‘On the Relations among Different Measures of
Visible and Informational Persistence’. Cognitive Psychology, 35, 135–199.
Luck, S. J., and Hollingworth, A. (2008) Visual Memory. Oxford: Oxford University
Press.
Machamer, P., and Sullivan, J. (2001) ‘Levelling Reduction’. PhilSci Archive, http://
philsci-archive.pitt.edu/386/.
Machamer, P., Darden, L., and Craver, C. F. (2000) ‘Thinking about Mechanisms’.
Philosophy of Science, 67, 1–25.
Mack, A., and Rock, I. (1998) ‘Inattentional Blindness’. Current Directions in
Psychological Science, 12, 180–184.
McCauley, R. N., and Bechtel, W. P. (2001) ‘Explanatory Pluralism and Heuristic
Identity Theory’. Theory and Psychology, 11(6), 736–760.
McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T.
T., Seidenberg, M. S., and Smith, L. B. (2010) ‘Letting Structure Emerge:
Connectionist and Dynamical Systems Approaches to Cognition’. Trends in
Cognitive Sciences, 14(8), 348–356.
McClelland, J. L., Plaut, D. C., Gotts, S. J., and Maia, T. V. (2003) ‘Developing
a Domain-general Framework for Cognition: What is the Best Approach?’
Behavioral and Brain Sciences, 26(5), 611–614.
McDowell, J. (2010) ‘Tyler Burge on Disjunctivism’. Philosophical Explorations,
13(3), 243–255.
Oliva, A. (2005) ‘Gist of the Scene’. In L. Itti, G. Rees and J. K. Tsotsos (eds),
Neurobiology of Attention, 251–256. San Diego: Elsevier.
O’Regan, J. K., and Noë, A. (2001) ‘A Sensorimotor Account of Vision and Visual
Consciousness’. Behavioral and Brain Sciences, 24(5), 939–973; discussion
973–1031.
Overgaard, M., Rote, J., Mouridsen, K., and Ramsøy, T. Z. (2006) ‘Is Conscious
Perception Gradual or Dichotomous? A Comparison of Report Methodologies
during a Visual Task, Consciousness and Cognition, 15(4), 700–708.
Piccinini, G. (2005) ‘Data from Introspective Reports’. Journal of Consciousness
Studies, 10(9), 141–156.
Problems and Possibilities 207
Psychological Explanation 209
1. (a) The square peg failed to pass through the hole because its
cross-section was wider than the diameter of the hole.
(b) The peg failed to pass through the hole because [extremely long
description of atomic movements].
Many have the strong intuition that the first sentence in each pair is
a better explanation than the second. This is true, note, even though the
truth of the second sentence guarantees the truth of the first. I want to
take that intuition for granted and explore two different stories about
why this might be the case.
There is a well-loved account, tracing back at least to Hilary Putnam,
of the superiority of some explanations. Explanation (1a), Putnam
claimed, is clearly better because:
4. (c) Esther ran because she was scared of the small flying thing.
This is both true and more general than (4a); nevertheless, it is an infe-
rior explanation if Esther is scared only of bees but indifferent to flies.
Rather, it is proportionality between higher-level cause and effect that
picks out the most explanatory of the causally relevant properties.
Call someone who adopts this view a literalist about explanatory
goodness. Literalism says that good explanations are superior to rivals
because they pick out a property that their rivals don’t and that this
property bears the right sort of relationship to the explanandum. Our
best explanations are thus ontologically committing. If a term ϕ appears
in the best explanation of some phenomenon, then we are, all things
being equal, justified in believing that ϕ refers to some unique property.
Hence the term ‘literalism’: one can read off the ontological
commitments from a good explanation largely by taking it literally and
supposing that each term ϕ really is meant to refer to a corresponding
property or entity.1
The literalist view is widely accepted in philosophy of mind. It has
been a particular comfort to non-reductive physicalists. The fact that
(4b) is inferior to (4a) suggests that even were psychology to be reduced
to neuroscience, the resulting neural explanations would be inferior
to the psychological ones because they would no longer refer to the
most commensurate high-level properties. Further, the explanatory
superiority of proportionate properties might lead us to suppose that
we have a solution to the hoary causal exclusion argument. The causal
exclusion argument says, in simplified terms, that mental and physical
properties must (if distinct) compete for causal influence and that a
plausible physicalism should force us to assign causal priority to the
physical ones. Not so, literalism responds: both properties are causally
relevant, but only the higher-level one counts as the cause. It does so
It is not necessarily the case that all the terms used to describe people
are matched by measurable attributes – e.g., ego strength, extrasensory
perception, and dogmatism. Another possibility is that a measure may
concern a mixture of attributes rather than only one attribute. This
frequently occurs in questionnaire measures of ‘adjustment,’ which
tend to contain items relating to a number of separable attributes.
Although such conglomerate measures sometimes are partly justifiable
on practical grounds, the use of such conglomerate measures offers a
poor foundation for psychological science. (Nunnally 1967, 3)
3 A diagnosis
What’s the lesson from all of this? One could, I suppose, use it to defend
a crude sort of old-fashioned reductionism. That is, one could argue that
all mental predicates are simply derived quantities and that the only real
causal properties are the physical ones and the properties that are iden-
tical to them. (Indeed, much of the above was inspired by Kim’s remarks
about second-order descriptions in science [in ch. 4 of Kim 1998] and
could be thought of as one way of unpacking them.) I think, though,
that we can draw another, deeper conclusion. The real question is why
literalism seems so plausible even if it’s problematic, especially to natu-
ralistically minded philosophers of mind. Here, I think, I can offer a
diagnosis.
the world: that depends, at the very least, on the intended model-world
mapping.7
In addition to fitting the apparent practice of science, the semantic
view also provides a neat account of the role of mathematics in
science. Mathematics is something we use to reason about the models.
Mathematics is not a part of any theory but is available to all. Thus, as
van Fraassen puts it, physics first sets up a framework of models and
then, having done so, ‘The theoretical reasoning of the physicist is
viewed as ordinary mathematical reasoning concerning this framework’
(van Fraassen 1970, 338).
With that in mind, consider mathematically complex claims, like the
Hodgkin-Huxley equation, or mathematically complex expressions, like
the one describing GNa:
GNa = gNa(max)m³h
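To make the notation concrete, the expression can be computed directly once values for the maximum conductance and the gating variables m (sodium activation) and h (sodium inactivation) are supplied. The sketch below is my own illustration, not anything from Hodgkin and Huxley; the function name and the sample gating values are invented for the example, though 120 mS/cm² is the standard textbook value for the maximum sodium conductance:

```python
# Illustrative sketch of the sodium-conductance expression GNa = gNa(max) * m^3 * h.
# m is the sodium activation gating variable and h the inactivation gating
# variable; both are dimensionless values between 0 and 1.

def sodium_conductance(g_na_max, m, h):
    """Sodium conductance given the maximum conductance and gating variables."""
    return g_na_max * m ** 3 * h

# 120 mS/cm^2 is Hodgkin and Huxley's standard maximum sodium conductance;
# the gating values here are arbitrary illustrative numbers.
print(sodium_conductance(120.0, m=0.5, h=0.6))  # 9.0 (= 120 * 0.125 * 0.6)
```

The point of the expression, for present purposes, is only that the conductance is a simple product of a fixed maximum and two time-varying gating terms.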
4 Going further
explanation shows several reasons why this simple view must be aban-
doned. Good explanations often involve abstract redescriptions of
specific, lower-order properties. These redescriptions are required for
pragmatic reasons, not for ontological ones. This in turn fits well with
the semantic view of theories, which carefully separates the language in
which models are specified from the models themselves and the model-
world relationships asserted by the theory.
I want to conclude by considering ways in which the abandonment
of literalism might matter for philosophy of mind. I have argued else-
where that once we break the link between theories and the language
in which they are formulated, traditional arguments for multiple realiz-
ability fail.8 This is so because traditional arguments for multiple realiz-
ability suppose that the only explanations available to physics are those
that describe atoms and their motions in tedious detail. The idea that
physics describes only mereological simples is almost unavoidable on
the axiomatic view, for reasons outlined above. It is also patently absurd:
physicists spend most of their time trying to give high-level abstract
explanations of physical phenomena. Once we realize this, as well as the
role of model redescription in science, multiple realizability becomes
difficult to motivate.
Indeed, I think there’s a more general point that can be made here
about the individuation of scientific disciplines. There has been an
assumption that scientific disciplines are individuated by their domains:
that is, what’s characteristic about physics or biology is primarily the set
of things that fall under their laws. This view is again almost unavoid-
able on the axiomatic view: the domain of a science just is the domain of
its quantifiers. This in turn leads to the hierarchical, striated view of reality
made famous by Oppenheim and Putnam (1958). On such a view, each
scientific discipline corresponds to a distinct level of reality. Again, a
metaphysical point grows out of a substantive view in philosophy of
science. If my argument is right, however, we should be wary of this
view of the world. Sciences have more descriptive flexibility than the
philosopher of mind tends to ascribe to them, and there is no reason
why scientific disciplines must carve the world into non-overlapping
spheres of influence.
The semantic view of theories permits an alternative view of discipli-
nary individuation: what I’ll call (with some trepidation) a paradigm-based
view. Every discipline or subdiscipline starts with a set of characteristic
phenomena that it tries to explain: living things for biology, minds for
psychology, nerves for neuroscience, lenses for optics and so on. The
investigation of characteristic phenomena often hinges on creating
222 Colin Klein
Notes
1. Note that one could hold a much stricter version of literalism, on which a
predicate is ontologically committing only if it is ineliminable or just in case it
appears in the best overall axiomatization of the phenomena. I do not focus on
these formulations for two reasons. First, in practice nobody actually adheres
to this standard, because figuring out whether a predicate is ineliminable or
part of the best axiomatization tout court is too difficult a task. If we held
ourselves to such a high standard, the game would be up from the beginning:
no one should have confidence that their predicates refer, and so literalism
would be a straw man. Second, formulations of the requirement in terms of
axiomatization or the eliminability of predicates are so obviously derived
from the axiomatic view of theories that the considerations presented in §4 will
apply directly. Thanks to Mark Sprevak for pressing me on this point.
2. See esp. Bontly (2005, 343). I am indebted to Bontly’s article for prompting
many of the reflections in this section.
3. Here, some care is needed. It has recently become fashionable to claim that
the Hodgkin-Huxley equation does not explain anything but merely describes
the shape of the action potential (Craver 2007, ch. 3). It is true that insofar as
the above is explanatory, it is not because it constitutes a deduction from the
more general laws postulated by Hodgkin and Huxley. Rather, (6) is explana-
tory because it details some facts about the mechanism that underlies the
action potential and then uses facts about that mechanism to explain the
threshold. It does not detail the mechanisms by which the voltage-gated ion
channels work; to the extent that the detailing of those mechanisms was part
of neuroscientists’ shared explanatory interests, Hodgkin and Huxley fell
short of explaining everything there was to explain about the action poten-
tial. But that does not mean that the equations they experimentally derived
were not themselves explanatory of some phenomena. Thanks to Carl Craver
for helpful discussion on this point.
4. See Nagel (1961) for a classic statement and Suppe (1989) for a contemporary
reconstruction and discussion.
5. See ch. 2 of Suppe (1989) for an extended discussion of problems with the
axiomatic account. The essays in Salmon (1998a), esp. Salmon (1998b), also
References
Bontly, T. (2005) ‘Proportionality, causation, and exclusion’. Philosophia 32(1),
331–348.
Craver, C. (2007) Explaining the Brain. New York: Oxford University Press.
Fodor, J. (1997) ‘Special sciences: Still autonomous after all these years’.
Philosophical Perspectives: Mind, Causation, and World 11, 149–163.
Giere, R. N. (1988) Explaining Science: A Cognitive Approach. Chicago: University
of Chicago Press.
Godfrey-Smith, P. (2006) ‘The strategy of model-based science’. Biology and
Philosophy 21, 725–740.
Grice, H. P. (1989) Studies in the Way of Words. Cambridge, MA: Harvard University
Press.
Halvorson, H. (2012) ‘What scientific theories could not be’. Philosophy of Science
79, 183–206.
Kim, J. (1998) Mind in a Physical World. Cambridge, MA: MIT Press.
Klein, C. (2009) ‘Reduction without reductionism: A defence of Nagel on
connectability’. Philosophical Quarterly 59(234), 39–53.
——. (2013) ‘Multiple realizability and the semantic view of theories’. Philosophical
Studies, 163(3), 683–695.
Kuffler, S. W., J. G. Nicholls and A. R. Martin. (1984) From Neuron to Brain: A
Cellular Approach to the Function of the Nervous System. 2nd edn. Sunderland,
MA: Sinauer.
Lewis, D. (1970) ‘How to define theoretical terms’. Journal of Philosophy, 63(13),
427–446.
——. (1986) ‘Causal explanation’. In Philosophical Papers, vol. 2. New York: Oxford
University Press.
Lloyd, E. (1994) The Structure and Confirmation of Evolutionary Theory. Princeton,
NJ: Princeton University Press.
Nagel, E. (1961) The Structure of Science: Problems in the Logic of Scientific Explanation.
New York: Harcourt, Brace and World.
Nunnally, J. (1967) Psychometric Theory. New York: McGraw-Hill.
Oppenheim, P., and H. Putnam. (1958) ‘Unity of science as a working hypoth-
esis’. Minnesota Studies in the Philosophy of Science 2, 3–36.
Putnam, H. (1975) ‘Philosophy and our mental life’. In Mind, Language and Reality.
London: Cambridge University Press.
Salmon, W. (1998a) Causality and Explanation. New York: Oxford University
Press.
——. (1998b) ‘Deductivism visited and revisited’. In Salmon, Causality and
Explanation, 142–177. New York: Oxford University Press.
Suppe, F. (1989) The Semantic Conception of Theories and Scientific Realism.
Champaign: University of Illinois Press.
Suppes, P. (1967) ‘What is a scientific theory?’. In S. Morgenbesser (ed.), Philosophy
of Science Today. New York: Basic Books.
van Fraassen, B. (1970) ‘On the extension of Beth’s semantics of physical theo-
ries’. Philosophy of Science 37(3), 325–338.
——. (1980) The Scientific Image. New York: Oxford University Press.
——. (1989) Laws and Symmetry. New York: Oxford University Press.
Yablo, S. (1992) ‘Mental causation’. Philosophical Review 101(2), 245–280.
11
Naturalizing Action Theory
Bence Nanay
I admit to naturalism and even glory in it. This means banishing the
dream of a first philosophy and pursuing philosophy rather as a part
of one’s system of the world, continuous with the rest of science.
(Quine 1984, 430–431)
replace them with concepts that do pick out natural kinds.3 Science can
tell us what this new concept should be.
I talked about the importance of empirical findings in naturalized
action theory: empirical findings constrain the philosophical theories of
action we can plausibly hold. But the interaction between philosophy
and the empirical sciences is bidirectional. The philosophical hypoth-
eses and theories, as a result of being empirically informed, should be
specific enough to be falsified or verified by further empirical studies.
Psychologists and neuroscientists often accuse philosophers in general,
and philosophers of mind in particular, of providing theories that are
too general and abstract to be of any use to the empirical sciences.
Philosophers of a non-naturalistic creed are of course free to offer such
theories, but if we want to preserve the naturalistic insight that
philosophy should be continuous with the empirical sciences, such a
disconnect is not permissible. Thus, naturalistic philosophy needs to give
exact, testable hypotheses that psychologists as well as cognitive
neuroscientists
of action can engage with. Naturalized action theory, besides using
empirical studies, could also be used for future empirical research. This
is the only sense in which the ‘integration of the philosophical with the
scientific’ that Brand talked about does not become a mere slogan. This
is the methodology that has been used by more and more philosophers
of perception (I won’t pretend that it has been used by all), and given
the extremely rich body of empirical research, especially in the cognitive
neuroscience of action,4 more and more philosophers of action should
use the same methodology.
This may sound like a manifesto about how nice naturalized action
theory would be. But the aim of this section is to argue that it is difficult
to see how naturalized action theory can be avoided. The sketch of the
argument is the following: pragmatic representations, the mental states
that make actions actions, are not normally accessible to introspection.
So we have no other option but to turn to the empirical sciences if we
want to characterize and analyse them.
2 Pragmatic representations
the bodily movement in these two cases is the same, whatever it is that
makes the difference, it seems to be a plausible assumption that what
makes actions actions is a mental state that triggers, guides or maybe
accompanies the bodily movements. If bodily movements are triggered
(or guided or accompanied) by mental states of a certain kind, they
qualify as actions. If they are not, they are mere bodily movements.5
The big question is of course what mental states are the ones that trigger
(or guide or accompany) actions. There is no consensus about what
these mental antecedents of actions are supposed to be. Whatever they
are, they seem to be representational states that attribute properties the
representation of which is necessary for the performance of the action.
They guide, and sometimes even monitor, our bodily movements. Myles
Brand called mental states of this kind ‘immediate intentions’ (Brand
1984), Kent Bach calls them ‘executive representations’ (Bach 1978),
John Searle ‘intentions-in-action’ (Searle 1983), Ruth Millikan ‘goal state
representation’ (Millikan 2004, ch. 16) and Marc Jeannerod ‘represen-
tation of goals for actions’ or ‘visuomotor representations’ (Jeannerod
1994, §5; Jeannerod 1997; Jacob and Jeannerod 2003, 202–204). I called
them ‘action-oriented perceptual states’ (Nanay 2012a) or ‘action-guiding
perceptual representations’ (Nanay 2011).6 Here I just use the placeholder
term ‘the immediate mental antecedent of actions’.
I use the term ‘the immediate mental antecedent of actions’ as a place-
holder for the mental state that makes actions actions, that is present when
our bodily movement counts as action but is absent in the case of reflexes
and other mere bodily movements. Thus, we can talk about the ‘immediate
mental antecedents of actions’ in the case of all actions. Intentional actions
have immediate mental antecedents, but so do non-intentional actions.
Autonomous intentional actions have immediate mental antecedents as
much as non-autonomous actions (see Velleman 2000; Hornsby 2004).
As immediate mental antecedents of action are what make actions
actions, understanding the nature of these mental states is a task for
philosophers of action that is logically prior to all other questions in
action theory.
In order to even set out to answer questions like ‘What makes actions
intentional?’ or ‘What makes actions autonomous?’ one needs to have an
answer to the question ‘What makes actions actions?’ The way to answer
this question is to describe the immediate mental antecedents of action.
Many philosophers of action distinguish between two different
components of the immediate mental antecedent of actions. Kent Bach
differentiates ‘receptive representations’ and ‘effective representations’
that together make up ‘executive representations’, which is his label for
the immediate mental antecedent of action (Bach 1978, see esp. 366).
Myles Brand talks about the cognitive and the conative components of
But after having practised our throws a couple of times with the goggles
on, we are asked to take off the goggles and perform the task without
them. Now we experience the same phenomenon again: when we first
attempt to throw the ball towards the basket without the goggles, we
miss it; after several attempts, we manage to throw it as we did before
putting on the goggles.
I would like to focus on this change in our perception and action
after taking off the goggles. At the beginning of the learning process, my
pragmatic representation is clearly different from my pragmatic repre-
sentation at the end, when I can successfully throw the ball into the
basket. My pragmatic representation changes during this process; it is
this change that allows me to perform the action successfully at the
end of the process. The mental state that guides my action at the end of
the process does so much more efficiently than the one that guides my
action at the beginning.
Here is how we can make sense of this phenomenon: our pragmatic
representation attributes a certain location property to the basket, which
enables and guides us to execute the action of throwing the ball in the
basket. Our conscious perceptual experience attributes another location
property to the basket. During the process of perceptual learning, the
former representation changes, but the latter does not.
Similar results are documented in the case of a number of optical
illusions that mislead our perceptual experience but not our pragmatic
representation. One such example is the three-dimensional Ebbinghaus
illusion. The two-dimensional Ebbinghaus illusion is a simple optical
illusion. A circle that is surrounded by smaller circles looks larger than a
circle of the same size that is surrounded by larger circles. The three-di-
mensional Ebbinghaus illusion reproduces this illusion in space: a poker
chip surrounded by smaller poker chips appears to be larger than a poker
chip of the same diameter surrounded by larger ones. The surprising
finding is that although our perceptual experience is incorrect – we
experience the first chip to be larger than the second one – if we are
asked to pick up one of the chips, our grip size is hardly influenced by
the illusion (Aglioti et al. 1995, see also Milner and Goodale 1995, ch.
6; Goodale and Milner 2004). Similar results can be reproduced in the
case of other optical illusions, like the Müller-Lyer illusion (Goodale and
Humphrey 1998; Gentilucci et al. 1996; Daprati and Gentilucci 1997;
Bruno 2001), the Kanizsa compression illusion (Bruno and Bernardis
2002), the dot-in-frame illusion (Bridgeman et al. 1997), the Ponzo illu-
sion (Jackson and Shaw 2000; Gonzalez et al. 2008) and the ‘hollow face
illusion’ (Króliczak et al. 2006).10
real work is in figuring out what these representations are, what proper-
ties they represent objects as having, how they interact or fail to interact
with the rest of our mind, etc. These are things that conceptual analysis
is unlikely to be able to do.
Hence, it seems that the only way to find out more about pragmatic
representations is by means of empirical research. We have no other
option but to turn to the empirical sciences if we want to characterize
and analyse them. As pragmatic representations are the representational
components of what makes actions actions, this means that we have no
other option but to turn to the empirical sciences if we want to under-
stand what actions are.12 Relying on empirical evidence is not a nice,
optional feature of action theory: it is the only way action theory can
proceed.13
Notes
1. A notable exception is the recent philosophical literature on the ‘illusion
of free will’: the sense of agency and conscious will (see, e.g., Libet 1985;
Wegner 2002; Haggard and Clark 2003; Pacherie 2007). It is important to
acknowledge that experimental philosophers do use empirical data on our
intuitions about actions and our way of talking about them. But even experi-
mental philosophers of action tend to ignore empirical findings about action
itself (as opposed to our intuitions about it).
2. One important example comes from Fred Dretske’s work. The original link
between perception and knowledge is at least partly due to the works of
Fred Dretske over the decades (starting with Dretske 1969). Dretske’s recent
writings, however, turn the established connection between perception
and knowledge on its head. He is interested in what we perceive, and some
of the considerations he uses in order to answer this question are about
what we know (see Dretske 2007, 2010). Dretske’s work exemplifies a more
general point about the shift of emphasis in contemporary philosophy of
perception.
3. I am using here the widely accepted way of referring to natural kinds as the
real joints of nature because it is a convenient rhetorical device, but I have
my reservations about the very concept, for a variety of reasons (see Nanay
2010b, 2013c).
4. The literature is too large to survey, but an important and philosophically
sensitive example is Jeannerod (1997).
5. Theories of ‘agent causation’ deny this and claim that what distinguishes
actions and bodily movements is that the former are caused by the agent
herself (and not a specific mental state of her). I leave these accounts aside
because of the various criticisms of the very idea of agent causation (see
Pereboom 2004 for a summary).
6. This list is supposed to be representative, not complete. Another important
concept that may also be listed here is John Perry’s concept of ‘belief-how’
(Israel et al. 1993; Perry 2001).
7. Sometimes they don’t consist of both; the actions of Anarchic Hand Syndrome
patients could, e.g., be analysed as actions where the representational
component of the immediate mental antecedent of actions is present but
the ‘moving to act’ component is absent: what moves these patients to act is
something external. The same may be true of some actions of healthy adult
humans as well. Elsewhere, I call actions where one of the two components
of the immediate mental antecedents of action is missing ‘semi-actions’
(Nanay 2013a).
8. Are these all the properties pragmatic representations represent? Can they
also represent the goal state of the action? Some philosophers take the repre-
sentational component of the immediate mental antecedent to be the repre-
sentation of the goal of the action (see, e.g., Millikan 2004; Butterfill and
Sinigaglia [Forthcoming]). I myself think that while these goal states can be
represented, they do not need to be represented in order for the action to be
performed. But the argument I present in the next section can be adjusted
to apply to this ‘goal-state-representation’ way of thinking about pragmatic
representations as well.
9. This interactive demonstration can be found in a number of science exhibi-
tions. I first saw it at the San Francisco Exploratorium. See also Held (1965)
for the same phenomenon in an experimental context.
10. I focus on the three-dimensional Ebbinghaus illusion because of the simplicity
of the results, but it needs to be noted that the experimental conditions of
this experiment have been criticized recently. The main line of criticism is
that the experimental design of the grasping experiment is very different
from that of the perceptual judgment experiment. When the subjects grasp
the middle chip, there is only one middle chip, surrounded by either smaller
or larger chips. When they are judging the size of the middle chip, however,
they are comparing two chips – one surrounded by smaller chips, the other by
larger ones (Pavani et al. 1999; Franz 2001, 2003; Franz et al. 2000, 2003, see
also Gillam 1998; Vishton 2004; Vishton and Fabre 2003 – but see Haffenden
and Goodale 1998; Haffenden et al. 2001 for a response). See Briscoe (2008)
for a good philosophically sensitive overview on this question. I focus on
the three-dimensional Ebbinghaus experiment in spite of these worries, but
those who are moved by the considerations of Franz et al. can substitute
some other visual illusion – viz., the Müller-Lyer illusion, the Ponzo illusion,
the hollow face illusion or the Kanizsa compression illusion – where there is
evidence that the illusion influences our perceptual judgments but not our
perceptually guided actions.
11. There is a lot of empirical data in favour of the existence of two more or less
separate visual subsystems that may explain the presence of these two different
representations here (Milner and Goodale 1995; Goodale and Milner 2004;
Jacob and Jeannerod 2003; Jeannerod 1997). The dorsal visual subsystem is
(normally) unconscious and is responsible for the perceptual guidance of our
actions. The ventral visual subsystem, in contrast, is (normally) conscious
and is responsible for categorization and identification. I do not want to rely
on this distinction in my argument (partly because of the emerging evidence
of the interactions between the two subsystems, partly because of the debate
about whether and to what extent the dorsal stream needs to be uncon-
scious; see Dehaene et al. 1998; Clark 2001; Brogaard 2011, forthcoming;
Briscoe 2008, 2009; Milner and Goodale 2008; Jeannerod and Jacob 2004;
Goodale 2011; Clark 2009; Kravitz et al. 2011). But one consequence of my
argument is that if we are to naturalize action theory, the empirical data on
dorsal perception will be of special importance.
12. I have been focusing on the representational component of the mental ante-
cedent of actions – pragmatic representations – and have argued that they are
not normally accessible to introspection. But it may be worth noting that,
arguably, the other, ‘moving us to act’ or ‘conative’ component of the mental
antecedent of actions is not normally accessible to introspection either. Here
is a nice literary example by Robert Musil:
I have never caught myself in the act of willing. It was always the case that
I saw only the thought – for example when I’m lying on one side in bed: now
you ought to turn yourself over. This thought goes marching on in a state
of complete equality with a whole set of other ones: for example, your foot
is starting to feel stiff, the pillow is getting hot, etc. It is still a proper act of
reflection; but it is still far from breaking out into a deed. On the contrary,
I confirm with a certain consternation that, despite these thoughts, I still
haven’t turned over. As I admonish myself that I ought to do so and see
that this does not happen, something akin to depression takes possession of
me, albeit a depression that is at once scornful and resigned. And then, all
of a sudden, and always in an unguarded moment, I turn over. As I do so,
the first thing that I am conscious of is the movement as it is actually being
performed, and frequently a memory that this started out from some part
of the body or other, from the feet, for example, that moved a little, or were
unconsciously shifted, from where they had been lying, and that they then
drew all the rest after them. (Robert Musil, Diaries [New York: Basic Books,
1999], 101; see also Goldie 2004, 97–98)
13. Various bits of the arguments in this chapter are also presented in Nanay
(2013a), esp. in chs 2 and 4. This work was supported by the EU FP7 CIG
grant PCIG09-GA-2011-293818 and the FWO Odysseus grant G.0020.12N. I
presented a very early version of this paper at the 2011 APA Pacific Division
Meeting in San Francisco. I am grateful for comments from Kent Bach, Keith
Lehrer, Ian Phillips, Olle Blomberg, Emanuele Podio, Declan Smithies, and the
editors of this volume.
References
Aglioti, S., DeSouza, J. F. X., and Goodale, M. A. (1995) ‘Size-contrast Illusions
Deceive the Eye But Not the Hand’. Current Biology, 5, 679–685.
Bach, Kent (1978) ‘A Representational Theory of Action’. Philosophical Studies,
34, 361–379.
Bayne, Tim (2009) ‘Perception and the Reach of Phenomenal Content’.
Philosophical Quarterly, 59, 385–404.
Block, Ned (1995) ‘On a Confusion about a Function of Consciousness’. Behavioral
and Brain Sciences, 18, 227–247.
Brand, Myles (1979) ‘The Fundamental Question of Action Theory’. Nous, 13,
131–151.
Brand, Myles (1984) Intending and Acting. Cambridge, MA: MIT Press.
Bridgeman, B., Peery, S., and Anand, S. (1997) ‘Interaction of Cognitive and
Sensorimotor Maps of Visual Space’. Perception & Psychophysics, 59, 456–459.
Briscoe, R. (2008) ‘Another Look at the Two Visual Systems Hypothesis’. Journal
of Consciousness Studies, 15, 35–62.
Briscoe, R. (2009) ‘Egocentric Spatial Representation in Action and Perception’.
Philosophy and Phenomenological Research, 79, 423–460.
Brogaard, B. (2011) ‘Are there Unconscious Perceptual Processes?’ Consciousness
and Cognition, 20, 449–463.
Brogaard, B. (Forthcoming) ‘Unconscious Vision for Action Versus Conscious
Vision for Action?’ Journal of Philosophy.
Bruno, Nicola (2001) ‘When Does Action Resist Visual Illusions?’ Trends in
Cognitive Sciences, 5, 385–388.
Bruno, Nicola, and Bernardis, Paolo (2002) ‘Dissociating Perception and Action in
Kanizsa’s Compression Illusion’. Psychonomic Bulletin & Review, 9, 723–730.
Butterfill, S., and Sinigaglia, C. (Forthcoming) ‘Intention and Motor Representation
in Purposive Action’. Philosophy and Phenomenological Research.
Clark, A. (2001) ‘Visual Experience and Motor Action: Are the Bonds Too Tight?’
Philosophical Review, 110, 495–519.
Clark, A. (2009) ‘Perception, Action, and Experience: Unraveling the Golden
Braid’. Neuropsychologia, 47, 1460–1468.
Daprati, E., and Gentilucci, M. (1997) ‘Grasping an Illusion’. Neuropsychologia,
35, 1577–1582.
Dehaene, S., Naccache, L., Le Clec’H, G., Koechlin, E., Mueller, M., Dehaene-
Lambertz, G., van de Moortele, P. F., and Le Bihan, D. (1998) ‘Imaging
Unconscious Semantic Priming’. Nature, 395, 597–600.
Dretske, Fred (1969) Seeing and Knowing. London: Routledge.
Dretske, Fred (2007) ‘What Change Blindness Teaches about Consciousness’.
Philosophical Perspectives, 21, 215–230.
Dretske, Fred (2010) ‘On What We See’. In B. Nanay (ed.) Perceiving the World.
New York: Oxford University Press.
Franz, V. (2001) ‘Action Does Not Resist Visual Illusions’. Trends in Cognitive
Sciences, 5, 457–459.
Franz, V. (2003) ‘Manual Size Estimation: A Neuropsychological Measure of
Perception?’ Experimental Brain Research, 151, 471–477.
Franz, V. H., Bülthoff, H. H., and Fahle, M. (2003) ‘Grasp Effects of the Ebbinghaus
Illusion: Obstacle Avoidance Is Not the Explanation’. Experimental Brain
Research, 149, 470–477.
Franz, V., and Gegenfurtner, K. (2008) ‘Grasping Visual Illusions: Consistent Data
and No Dissociation’. Cognitive Neuropsychology, 25, 920–950.
Franz, V., Gegenfurtner, K., Bülthoff, H., and Fahle, M. (2000) ‘Grasping Visual
Illusions: No Evidence for a Dissociation Between Perception and Action’.
Psychological Science, 11, 20–25.
Gentilucci, M., Chieffi, S., Daprati, E., Saetti, M. C., and Toni, I. (1996) ‘Visual
Illusion and Action’. Neuropsychologia, 34, 369–376.
Gentilucci, M., Daprati, E., Toni, I., Chieffi, S., and Saetti, M. C. (1995)
‘Unconscious Updating of Grasp Motor Program’. Experimental Brain Research,
105, 291–303.
Gillam, Barbara (1998) ‘Illusions at Century’s End’. In Hochberg, Julian (ed.)
Perception and Cognition at Century’s End, 95–136. San Diego: Academic Press.
Naturalizing Action Theory 239
The Architecture of Higher Thought 243
well stocked with intermediate systems of this type (Carey 2009), which
are not captured by a simple binary distinction. Call this the Richness
Problem: we need our taxonomy to make enough distinctions for all the
systems we find.
More worrisome is that sensorimotor systems themselves may func-
tion by making use of what appear to be some of these very same higher
processes (Rock 1983). Consider categorization. There is no clear and
unambiguous notion of what a categorization process is, except that it
either assigns an individual to a category (‘a is F’) or makes a distinction
between two or more individuals or categories (‘these Fs are G, those
Fs are H’). But many cognitive systems do this. Vision is composed of
a hierarchy of categorization devices for responding to edges, textures,
motion and whole objects of various sorts, and the language faculty
categorizes incoming sounds as having various abstract phrasal bounda-
ries, determines the phonetic, syntactic and semantic classes that words
belong to and so on. Indeed, almost every cognitive process can be seen
as involving categorization in this sense (van Gelder 1993).
Similarly for reasoning. It is widely assumed that perceptual systems
carry out complex inferences formally equivalent to reasoning from the
evidence given at their inputs to the conclusions that they produce as
output, as in models that depict visual analysis as a process involving
Bayesian updating (Knill and Richards 1996). Language comprehen-
sion systems must recover the structure of a sentence from fragmentary
bits of evidence, a task that has essentially the form of a miniature
abductive inference problem. So the mere existence of a certain type of
process cannot draw the needed distinction, since these processes may
occur in both higher and lower forms.2 Call this the Collapse Problem:
the difference between higher and lower faculties must be a stable and
principled one.
A related way of drawing the higher/lower distinction says that lower
cognition takes place within modular systems, while higher cognition
is non-modular. This too requires more precision, since there is a range
of different notions of modularity one might adopt (Coltheart 1999).
In Fodor’s original (1983) discussion of modularity, the line between
modular and non-modular systems coincides roughly with the line
between higher and lower systems: modular systems are, de facto, the
peripheral input and output systems (plus language), while central
cognition – the home of the folk-psychologically individuated propo-
sitional attitudes – is the paradigm of higher cognition. But unlike on
the simple account, there can be a whole sequence of modular processes
that take place before arriving at central cognition.
1. representational abstraction
2. causal autonomy
3. free recombination
here, but I can wish for one and wonder how to make it. In the stronger
form, we can entertain concepts of things even despite the fact that
their referents do not and perhaps cannot exist. Unicorns, perfect circles
and gods do not exist, hence our concepts of these things are necessarily
causally isolated from them. Even so, they have been the subjects of an
enormous amount of cogitation. Concepts enable our representational
powers to persist across absences.
Second, even when the referent of a concept is causally present and
impinging on the senses, the way in which we represent and reason
about it is in principle independent of our ongoing interactions with it
and the world. We may decide to think about the cat that we are looking
at or may decide not to. Even if we do think about it, the way we are
thinking about it may depart from the way we perceive it to be (since
appearances can be deceiving) or believe it to be (if we are reasoning
counterfactually).
What these two characteristics show is that the conceptual system
is to some degree causally autonomous from how the world affects us
perceptually. This idea has been expressed by saying that concepts are
under endogenous or organismic control (Prinz 2002). This autonomy
may be automatic, or it may be intentionally engaged and directed. For
an example of automatic or unintentional disengagement, consider the
phenomenon known as ‘mind wandering’, in which cognition proceeds
according to its own internally driven schedule despite what the crea-
ture is perceiving. When there is no particular goal being pursued and
attention is not captured by any specific object, trains of thought stop
and start based only on our own idiosyncratic associations, interests,
dreams, memories and needs. These also intrude into otherwise organ-
ized and focused cognitive processes, depending on one’s ability to
inhibit them. This is an uncontrolled form of causal autonomy.
More sophisticated forms of causal autonomy involve the creature’s
being able to direct its own thoughts; for example, to think about quad-
ratic equations rather than pie because that is what the creature wants
to do. Deciding to think about one thing rather than another is a form
of intentional mental action. Hence it is one typical feature of higher
cognitive processes that they can be under intentional control or, more
broadly, can be subject to metacognitive governance. Metacognition
involves being able to represent and direct the course of our psycho-
logical states – a form of causal autonomy.
The capacity to deal with absences through causally autonomous
cognition involves ‘detached’ representations (Gärdenfors 1996; Sterelny
2003), which can be manipulated independently of the creature’s
252 Daniel A. Weiskopf
4 Conclusions
One might, not unreasonably, have supposed that the very idea of
dividing cognitive capacities into higher and lower reflects a kind of
philosophical atavism. Neither the casual way in which the distinc-
tion is made by psychologists nor the poverty of standard attempts to
unpack it inspires hope. However, it turns out that not only can we
make a rather finely structured set of distinctions that capture many
of the relevant phenomena, but these distinctions also seem to correspond
to genuinely explanatory psychological categories. Seeing organisms
through this functional lens, then, allows us to gain empirical purchase
on significant developmental, evolutionary and ethological questions
concerning their cognitive structure.11
Notes
1. For discussion of the Aristotelean divisions and related ancient notions, see
Danziger (1997, 21–35).
2. This point is made by Stich (1978), who attempts to distinguish doxastic
states and processes from subdoxastic ones on the grounds that the former
are inferentially integrated and conscious. Inferential integration, although
not consciousness, appears as one of the criteria for higher cognition on the
account developed here.
3. So Benton (1991) comments that historically the prefrontal region was
thought to provide ‘the neural substrate of complex mental processes such as
abstract reasoning, foresight, planning capacity, self-awareness, empathy, and
the elaboration and modulation of emotional reactions’ (3). For the history of
ideas about ‘association cortex’ more generally and about the localization of
intellectual function, see Finger (1994), chs 21 and 22.
4. Danziger (1997, 66–84) provides an excellent survey of the emergence of the
modern term ‘intelligence’ and the debates concerning its proper application
to animal cognition and behaviour.
5. For a reconstruction of the phylogeny of human cognition that depicts it as
emerging from a graded series of broadly holistic modifications to the brain
occurring at many levels simultaneously, see Sherwood, Subiaul and Zawidzki
(2008).
References
Allen, C., and Hauser, M. (1991). Concept attribution in nonhuman animals:
Theoretical and methodological problems in ascribing complex mental proc-
esses. Philosophy of Science 58: 221–240.
Allen, J. S. (2009). The Lives of the Brain. Cambridge, MA: Harvard University
Press.
Amati, D., and Shallice, T. (2007). On the emergence of modern humans. Cognition
103: 358–385.
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of
the brain. Behavioral and Brain Sciences 33: 245–313.
Barrett, H. C. (2012). A hierarchical model of the evolution of human brain special-
izations. Proceedings of the National Academy of Sciences 109: 10733–10740.
Beck, J. (2013). The generality constraint and the structure of thought. Mind 121:
563–600.
Benton, A. L. (1991). The prefrontal region: Its early history. In H. Levin, H.
Eisenberg and A. Benton (eds), Frontal Lobe Function and Dysfunction (3–12).
Oxford: Oxford University Press.
Camp, E. (2009). Putting thoughts to work: Concepts, systematicity, and stimu-
lus-independence. Philosophy and Phenomenological Research 78: 275–311.
Carey, S. (2009). The Origin of Concepts. Oxford: Oxford University Press.
Carruthers, P. (2004). On being simple-minded. American Philosophical Quarterly
41: 205–222.
——. (2006). The Architecture of the Mind. Oxford: Oxford University Press.
Christensen, W. D., and Hooker, C. A. (2000). An interactivist-constructivist
approach to intelligence: Self-directed anticipative learning. Philosophical
Psychology 13: 7–45.
Coltheart, M. (1999). Modularity and cognition. Trends in Cognitive Science 3:
115–120.
Corballis, M. C. (2011). The Recursive Mind: The Origins of Human Language,
Thought, and Civilization. Princeton, NJ: Princeton University Press.
Danziger, K. (1997). Naming the Mind: How Psychology Found its Language. London:
Sage.
Elman, J. L. (1993). Learning and development in neural networks: The impor-
tance of starting small. Cognition 48: 71–99.
Evans, N., and Levinson, S. (2009). The myth of language universals: Language
diversity and its importance for cognitive science. Behavioral and Brain Sciences
32: 429–492.
Everett, D. (2005). Cultural constraints on grammar and cognition in Pirahã.
Current Anthropology 46: 621–646.
Finger, S. (1994). Origins of Neuroscience: A History of Explorations into Brain
Function. Oxford: Oxford University Press.
Gärdenfors, P. (1996). Cued and detached representations in animal cognition.
Behavioral Processes 35: 263–273.
Significance Testing in Neuroimagery 263
2 Klein’s argument
Step 1
1. Any change induced in a variable of a causally dense system causes a
change in all the other variables.
2. The brain is a causally dense system.
3. An experimental task induces a change in the BOLD signal in the brain
areas and voxels functionally involved in completing this task.
4. Hence, whether or not an area or set of voxels A is functionally involved
in completing task T, completing T induces a change in the BOLD
signal in A.
5. Hence, changes in the BOLD signal cannot support functional
hypotheses.
in changes in the other nodes of a causally dense system. For the sake of
the argument, I overlook this complication in what follows.
Let’s now turn to Premise 2. When the brain is viewed as a system,
the variables can be specified at several levels of aggregation: neurons,
columns, voxels (a typical voxel contains more than 5 million neurons,
according to Logothetis 2008) and brain areas of various sizes. Since this
chapter is concerned with neuroimagery, voxels and brain areas are the
appropriate levels of brain organization. So the variables that constitute
the brain viewed as a system are the BOLD signals in the voxels or brain
areas of interest.
It is not clear how causally dense the whole brain is. At the level of
neurons, neurons often form modules: they are causally connected with a
small number of nearby neurons and project to only a few other parts of
the brain. At a higher level of organization, smaller brain areas within
lobes, gyri or sulci seem to be massively
interconnected, as is illustrated by Van Essen’s famous functional map
of the primate visual cortex (Van Essen, Anderson and Felleman 1992).
Sporns’s network analysis supports the same conclusion (Sporns 2010).
On the other hand, Sporns’s analysis also suggests that different areas
(lobes, gyri, sulci, etc.) of the brain tend to have a modular architecture.
Be that as it may, for the sake of the argument we accept Premise 2 for the
time being, but we revisit this issue in the last section of this chapter.
Premise 3 takes it for granted that specific brain areas – in contrast to
the whole brain – are functionally involved in completing a task. Thus,
mindreading does not involve the whole brain but a specific network of
brain areas. Furthermore, it takes for granted that the involvement of a
brain area in completing an experimental task results in a change in the
BOLD signal.
Conclusion 4 follows from the three premises. Experimental tasks
cause a change in the BOLD signal in the brain areas involved in
completing them (Premise 3). Because the brain is a causally dense
system (and the parameters of the causal relations have the right
values), the change in the BOLD signal in these brain areas and in
the voxels that compose them causes a change in the BOLD signal in
the other brain areas and their voxels (Premises 1 and 3). As a result,
experimental tasks typically cause a change in the BOLD signal in all
brain areas and voxels, whether or not these areas play a causal role in
completing these tasks. If Conclusion 4 is correct, then a change in the
BOLD signal during an experimental task provides no evidence that this
area is causally involved in completing this task and thus provides no
evidence about its function (Conclusion 5).
Step 2
1. An empirical result is statistically significant if and only if the rele-
vant p-value is below the significance level.
2. The significance level is set at a particular value to limit the long-term
rate of false positives when the null hypothesis is true.
3. Any experimental task causes a change in the BOLD signal of any brain
area or voxel.
4. Hence, in significance tests in fMRI-based studies, the significance
level should be set at 0.
5. Hence, all changes in the BOLD signal should be treated as being
significant.
describes the justification for setting the significance level at any partic-
ular value. Its value determines the probability of committing a false
positive (rejecting a true null hypothesis) if the null hypothesis is true:
if it is set at .05, in the long run the null hypothesis will be rejected
erroneously in 5 per cent of the cases where it is true. Thus, the signifi-
cance level determines the upper bound of the rate of false positives. If
all null hypotheses that are tested are false, this rate is 0; if all of them
are true, this rate is 5 per cent. The value of the significance level is
determined on the basis of pragmatic considerations (Machery [n.d.]):
it is the largest risk of committing a false positive if the null hypothesis
is true that one finds acceptable, which depends on the possible conse-
quences of committing a false positive. Premise 3 has been established
by the first step of Klein’s argument.
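Machery’s gloss on the significance level (set at .05, a true null hypothesis is erroneously rejected in roughly 5 per cent of cases in the long run) can be illustrated with a small simulation. This is a sketch only: the two-sample z-test, sample sizes and number of runs are illustrative assumptions, not anything from the chapter.

```python
import math

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05            # the conventional significance level
n_runs, n = 10_000, 100
rejections = 0

for _ in range(n_runs):
    # The point null hypothesis is true: both conditions share the same mean.
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    # Two-sided z-test for a difference in means (variance is 1 by construction).
    z = (a.mean() - b.mean()) / math.sqrt(2.0 / n)
    p = math.erfc(abs(z) / math.sqrt(2.0))
    if p < alpha:
        rejections += 1  # a false positive: a true null hypothesis was rejected

rate = rejections / n_runs
print(f"long-run false-positive rate: {rate:.3f}")  # hovers around alpha = 0.05
```

The observed rejection rate settles near .05, the upper bound the text describes; if every tested null hypothesis were false instead, the rate of false positives would be 0.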
Conclusion 4 follows from Premises 2 and 3: if any task causes a
change in the BOLD signal in any brain area, then the null hypothesis is
never true, and one cannot commit a false positive. As a consequence,
the significance level should be set at 0. But if this is the case, there is no
room for distinguishing statistically significant changes in the BOLD level
from mere changes in the BOLD level, and the response to the first step of
the argument is mistaken.
2.4 Moral
This argument seems devastating. Most studies in neuroimagery rely
on null hypothesis significance testing to test cognitive-neuroscientific
hypotheses about the functions of brain areas or networks. But this
argument seems to show that when the statistical hypotheses derived
from functional hypotheses are tested by means of significance tests,
data obtained by means of fMRI (and many other neuroimagery tech-
niques) provide no evidence for these functional hypotheses. Should
most imagery-based research in cognitive neuroscience be discarded?
for instance, the range null hypothesis that the value of a parameter is
nearly the same in two conditions or the range null hypothesis that the
correlation between two parameters is nearly equal to 0. The range null
hypothesis is rejected (equivalently, the alternative hypothesis is
accepted that, for example, the difference between the value of a
parameter in two conditions, or the correlation between two parameters,
is larger than a trivial value) if and only if the probability of
obtaining a statistic of a given size or a larger one, conditional on
the truth of the point null hypothesis, is below the significance level
and thus is very low.
The upper bound of the long-term error rate of false positives when
the range null hypothesis is true (where a false positive consists in
rejecting a true range null hypothesis) is equal to the power of the test
when the latter is computed assuming a trivial effect size. For instance,
if the power of the test, so computed, is equal to .10, in at most 10 per
cent of the cases where the range null hypothesis is true, the p-value,
computed with respect to the point null hypothesis, is below the signifi-
cance level, the range null hypothesis will be rejected, and a false posi-
tive will occur.
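This bound can be checked with a companion simulation (again a sketch: the ‘trivial’ true effect of 0.1 standard deviations and the other settings are assumptions, chosen so that the power computed at the trivial effect size comes out near the .10 of the example above):

```python
import math

import numpy as np

rng = np.random.default_rng(1)
alpha, n_runs, n = 0.05, 10_000, 100
delta = 0.1             # a trivial true effect: the range null hypothesis is TRUE
z_crit = 1.96           # two-sided critical value at alpha = 0.05

# Analytic power of the two-sample z-test at this trivial effect size.
shift = delta / math.sqrt(2.0 / n)
power = 0.5 * math.erfc((z_crit - shift) / math.sqrt(2.0)) \
      + 0.5 * math.erfc((z_crit + shift) / math.sqrt(2.0))

rejections = 0
for _ in range(n_runs):
    a = rng.normal(delta, 1.0, n)  # condition carrying the trivial effect
    b = rng.normal(0.0, 1.0, n)
    z = (a.mean() - b.mean()) / math.sqrt(2.0 / n)
    if abs(z) > z_crit:
        rejections += 1  # a false positive with respect to the RANGE null

rate = rejections / n_runs
print(f"power at trivial effect: {power:.3f}; observed rejection rate: {rate:.3f}")
```

The long-run rate at which the (true) range null hypothesis is rejected tracks the power computed at the trivial effect size, roughly .10 in this configuration, exactly the upper bound described in the text.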
So the point of null hypothesis significance testing when the null
hypothesis is bound to be false is obviously not to reject the null
hypothesis and accept the contradictory alternative hypothesis; no test
is needed for this. Rather, the point of significance testing is to reject
a range null hypothesis, for example, that the difference between the
value of a parameter in two conditions is trivial and to accept a contra-
dictory alternative hypothesis – for instance, that this difference is larger
than a trivial value.
We can even go one step further: when the point null hypothesis is
bound to be false, null hypothesis significance testing can be used not
only to accept the alternative hypothesis that, for example, the differ-
ence between the value of a parameter in two conditions is larger than
a trivial value but also to accept the alternative hypothesis that, for
example, this difference is larger than a specific value, provided that the
power of the test, when it is computed assuming an effect size of this
value, is small (say, about .05).
Methodologists have long understood that significance tests are used
to reject a range null hypothesis when the point null hypothesis is bound
to be false, as happens in observational studies and, if Meehl is right,
in experimental studies. Thus, Good (1983, 62) writes that ‘we wish to
test whether the hypothesis is in some sense approximately true’, while
Binder (1963, 110) asserts that ‘[a]lthough we may specify a point null
the system is brought to produce this effect. Their task is to identify the
variables that are causally responsible for producing it. The discussion of
null hypothesis significance testing in Section 3 casts light on the use of
significance tests to complete this task.
If the system is causally dense, the point null hypothesis is bound to
be false for the variables that are not responsible for the effect of interest
since their values change as a result of the causally responsible variable
producing the effect of interest. However, a range null hypothesis to the
effect that this change is small – or at least much smaller than the one
expected for the causally responsible variable – may well be true of them.
When this is the case, null hypothesis significance testing can be used to
determine, not whether the point null hypothesis is false – it is false – but
whether the range null hypothesis is false. And scientists will care about
whether the range null hypothesis is false of some variable because this will
occur only if the variable is causally responsible for the effect (or perhaps it
is only likely to occur if the variable is causally responsible for the effect).
When is the change in value of the variables that are not respon-
sible for the effect (V1 ... Vn) small or at least much smaller than the one
expected for the causally responsible variable VC? While this situation
may not occur in all causally dense systems, it occurs in at least the
following circumstance. Suppose that V1 is causally influenced by VC
and V2 and that V2 is not causally influenced (either directly or indi-
rectly) by VC. Suppose also that the value of V1 is an increasing function
of the values of VC and V2. Because only the value of VC changes as a
result of the experimental manipulation, the change in V1 will plausibly
be smaller than what it would have been if it were causally involved in
producing the effect of interest. The further apart VC and V1, the more
likely it is that the change in value of V1 depends on variables that are
not influenced by VC. So whether the change in value of the variables
that are not responsible for the effect of interest is small or at least much
smaller than the one expected for the causally responsible variable
depends on at least two related factors: the causal density of the system
and the distance between the causally responsible variables and the vari-
ables that are not causally responsible.
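The role of causal distance can be made vivid with a minimal linear sketch. Everything here is a hypothetical illustration, not Machery’s model: the geometric attenuation, the 0.5 transmission weight and the function name `downstream_changes` are all assumptions.

```python
# A toy linear causal chain VC -> V1 -> V2 -> ... in which each link
# transmits only a fraction of an induced change. The 0.5 weight and the
# chain depth are illustrative assumptions.
def downstream_changes(delta_vc, weight=0.5, depth=4):
    """Change inherited by each variable downstream of VC when only VC
    is experimentally shifted by delta_vc."""
    changes, change = [], delta_vc
    for _ in range(depth):
        change *= weight  # each causal link attenuates the change
        changes.append(change)
    return changes

print(downstream_changes(1.0))  # [0.5, 0.25, 0.125, 0.0625]
```

Variables far from VC inherit an ever-smaller share of the induced change, which is why a range null hypothesis stating that the change in a remote variable is small may well be true even in a connected system.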
functionally involved in the task but not of the voxels or brain areas not
functionally involved in the task, and null hypothesis significance testing
can be used to identify those voxels or brain areas for which the range
null hypothesis is false. If one rejects the range null hypothesis when the
probability of obtaining a particular statistic or a more extreme one condi-
tional on the point null hypothesis is below the significance level, then
one will commit at most a small number of false positives (where a false
positive consists in rejecting a true range null hypothesis).
Several considerations support the claim that a range null hypoth-
esis is likely to be true of those voxels and brain areas not functionally
involved in the task of interest. First, as we saw in Section 2, the causal
density of the brain may be only moderate since, at least at the level of
gyri and sulci, the organization of the brain is in part modular (Sporns
2010). If the brain is modular to a significant extent, there are few direct
connections between brain areas and between voxels that belong to
different modules, and indirect connections between areas belonging
to distinct modules are long. As a result, the change in the BOLD signal
incidentally elicited is likely to be small or at least smaller than the one
induced in the voxel or brain area functionally involved in the task.
Second, as also noted in Section 2, statistical tests bear not on the
absolute value of the change in the BOLD signal but on the difference between
the change induced by the task of interest and by the control task. While
unlikely to be null, this difference is likely to be small for the areas that
are not functionally involved or, at any rate, much smaller than the one
expected for the voxels or areas functionally involved in the task.
4.3 Upshot
Why do cognitive neuroscientists use significance tests to provide
evidence for and against functional hypotheses, while the relevant null
hypotheses are bound to be false? The reason is that cognitive neuro-
scientists are not interested in rejecting point null hypotheses, as Klein
erroneously believes, but rather in rejecting range null hypotheses
stating that the effect of interest (e.g., a difference in the BOLD signal
between two conditions) is small. If they reject the range null hypoth-
esis when the probability of obtaining a statistic of a given size or a
larger one conditional on the truth of the point null hypothesis is below
the significance level, they will commit at most only a small number
of false positives (equal to the power of the test computed assuming
this small effect size). That is, in the long run they will rarely reject true
range null hypotheses. Because range null hypotheses are likely to be
false if and only if the relevant functional hypotheses are true, rejection
276 Edouard Machery
5 Conclusion
Notes
1. The seductive allure of neuroimagery may have been overstated (Farah and
Hook 2013). In particular, Gruber and Dickerson (2012) failed to replicate
McCabe and Castel (2008).
2. For discussion of Meehl’s argument and of null hypothesis significance testing
in general, see Machery (n.d.).
3. For further discussion of inferential methods in cognitive neuroscience, see
Machery (2012, forthcoming).
4. For the sake of presentation, I simplify Klein’s argument and address its
substance rather than its specific wording. Nothing of importance has been
lost in my reformulation.
5. In particular, to control for false positives, it is often required that the p-value
be below the significance level in a specific number of adjacent voxels before
rejecting the null hypothesis for these voxels.
6. Event-related designs are also typically used instead of block designs.
7. I owe this point to Richard Scheines (annual meeting of the Philosophy of
Science Association, San Diego, November 2012).
8. Of course, that’s not to say that there is no problem at all with neuroimagery;
see, e.g., Machery (2012, forthcoming).
References
Binder, A. (1963) ‘Further Considerations on Testing the Null Hypothesis and the
Strategy and Tactics of Investigating Theoretical Models’. Psychological Review,
70, 107–115.
Block, N. (2007) ‘Overflow, Access, and Attention’. Behavioral and Brain Sciences,
30, 530–548.
Byrne, A. (2011) ‘Knowing that I Am Thinking’. In A. Hatzimoysis (ed.) Self-
Knowledge. Oxford: Oxford University Press.
Cattaneo, L., and G. Rizzolatti. (2009) ‘The Mirror Neuron System’. Archives of
Neurology, 66, 557.
Farah, M. J., and C. J. Hook. (2013) ‘The Seductive Allure of “Seductive Allure”’.
Perspectives on Psychological Science, 8, 88–90.
Fodor, J. A. (1968) ‘The Appeal to Tacit Knowledge in Psychological Explanation’.
Journal of Philosophy, 65, 627–640.
Fodor, J. A. (1974) ‘Special Sciences (or: The Disunity of Science as a Working
Hypothesis)’. Synthese, 28, 97–115.
Good, I. J. (1983) Good Thinking: The Foundations of Probability and its Applications.
Minneapolis: University of Minnesota Press.
Greenwald, A. G. (1975) ‘Consequences of Prejudice Against the Null Hypothesis’.
Psychological Bulletin, 82, 1–20.
Gruber, D., and J. A. Dickerson. (2012) ‘Persuasive Images in Popular Science:
Testing Judgments of Scientific Reasoning and Credibility’. Public Understanding
of Science, 21, 938–948.
Hardcastle, V. G., and C. M. Stewart. (2002) ‘What Do Brain Data Really Show?’
Philosophy of Science, 69, 72–82.
Kanwisher, N., J. McDermott, and M. M. Chun. (1997) ‘The Fusiform Face Area: A
Module in Human Extrastriate Cortex Specialized for Face Perception’. Journal
of Neuroscience, 17, 4302–4311.
Keehner, M., and M. H. Fischer. (2011) ‘Naive Realism in Public Perceptions of
Neuroimages’. Nature Reviews Neuroscience, 12, 118–165.
Klein, C. (2010) ‘Images Are Not the Evidence in Neuroimaging’. British Journal for
the Philosophy of Science, 61, 265–278.
Logothetis, N. K. (2008) ‘What We Can Do and What We Cannot Do with fMRI’.
Nature, 453, 869–878.
Machery, E. (2012) ‘Dissociations in Neuropsychology and Cognitive
Neuroscience’. Philosophy of Science, 79, 490–518.
Machery, E. (forthcoming) ‘In Defense of Reverse Inference’. British Journal for the
Philosophy of Science.
Machery, E. (n.d). ‘Evidence and Cognition’. Manuscript.
McCabe, D. P., and A. D. Castel (2008) ‘Seeing Is Believing: The Effect of Brain
Images on Judgments of Scientific Reasoning’. Cognition, 107, 343–352.
Meehl, P. E. (1967) ‘Theory Testing in Psychology and Physics: A Methodological
Paradox’. Philosophy of Science, 34, 103–115.
Ramsey, W., S. Stich, and J. Garon (1990) ‘Connectionism, Eliminativism and the
Future of Folk Psychology’. Philosophical Perspectives, 4, 499–533.
Roskies, A. (2010) ‘Saving Subtraction: A Reply to Van Orden and Paap’. British
Journal for the Philosophy of Science, 61, 635–665.
Saxe, R., and A. Wexler (2005) ‘Making Sense of Another Mind: The Role of the
Right Temporo-parietal Junction’. Neuropsychologia, 43, 1391–1399.
Sporns, O. (2010) Networks of the Brain. Cambridge, MA: MIT Press.
Van Essen, D. C., C. H. Anderson, and D. J. Felleman (1992) ‘Information
Processing in the Primate Visual System: An Integrated Systems Perspective’.
Science, 255, 419–423.
Weisberg, D. S., F. C. Keil, J. Goodstein, E. Rawson, and J. R. Gray (2008) ‘The
Seductive Allure of Neuroscience Explanations’. Journal of Cognitive Neuroscience,
20, 470–477.
14
Lack of Imagination: Individual
Differences in Mental Imagery and
the Significance of Consciousness
Ian Phillips
Lack of Imagination 279
This cannot be the whole story, however. For there remains the appar-
ently puzzling failure of conscious, experiential imagery to correlate with
objective task performance. In light of this failure, it may seem that
the import of individual differences in imagery is that conscious imagery
does lack a useful function. Moreover, generalizing, we might conclude
by adopting the increasingly widespread view that consciousness per se
lacks a useful function and is best conceived of as an evolutionary span-
drel (e.g., Blakemore 2005). Sections 3 and 4 offer rejoinders to these
two lines of thought. In Section 3, I argue that even if conscious imagery
does lack a useful function, nothing follows about the significance of
consciousness in general. I demonstrate this by arguing that on one
leading account of the significance of perceptual consciousness (namely,
as a condition of demonstrative thought), mental imagery could not
share the same significance. In Section 4, I return to the question of the
significance of conscious imagery. I argue that whilst objective data from
mental imagery tasks do plausibly establish the presence or absence of
imagery in the representational sense, it is not obvious that such data
do settle the presence or absence of imagery in the experiential sense.
Instead, the differences between conscious imagers and non-imagers
may emerge only when we consider exclusively personal-level differ-
ences in the way in which imagers perform imagery tasks: differences in
the personal-level genesis, justification and self-understanding of their
performances.
It may seem that we can imagine particulars that we have not previ-
ously encountered in perceptual experience. However, it is more plau-
sible to think that we are able to recombine features we have previously
experienced in a purely general way. Thus, in the absence of prior
perceptual acquaintance, there will be no possibility of our imagining
qualitatively identical but numerically distinct individuals across imagi-
native episodes, independent of a stipulative act of propositional imagi-
nation. Similarly, there will be no possibility of imagining one but not
the other of two identical twins, neither of whom one has perceptually
encountered, independent of an act of stipulative imagination. In short,
conscious imagery never serves to enable demonstrative reference not
already enabled by perceptual experience. A plausible account of the
explanatory role of conscious experience in relation to perception is thus
not available as an account of the significance of conscious imagery. The
absence of a significant explanatory role for conscious imagery can now
be seen to be consistent with a significant explanatory role for conscious
perceptual experience. In consequence, even if we refuse to recognize a
useful function for conscious imagery, it will not follow that conscious-
ness in general lacks significance. In the next and final section I return
to the antecedent of this conditional; namely, the issue of whether we
really should think of conscious imagery as lacking a useful function.
5 Conclusion
Notes
1. Galton’s work on mental imagery is summarized in Galton (1883/1907,
57–128). See also James (1890), which credits Fechner with the initial recognition of ‘great personal diversity’ in imagery (vol. II, ch. 18, 50–1). It should
be noted that Galton’s specific suggestion that ‘men of science’ are typically
poor imagers is neither properly supported by his data nor likely true (see
Brewer and Schommer-Aikins 2006).
2. For mental rotation tasks, see Shepard and Metzler (1971) and Shepard and
Cooper (1982). Thomas (2012) contains an excellent introductory supple-
ment. For visual scanning tasks, see Kosslyn (1973), Kosslyn et al. (1978),
Kosslyn (1980), Finke and Pinker (1982) and Borst et al. (2006). See the next
section for references to work on individual differences.
3. See studies based on Betts’s (1909) Questionnaire upon Mental Imagery
(QMI), revised by Sheehan (1967), and Marks’s (1973) Vividness of Visual
Imagery Questionnaire (VVIQ).
4. For criticism of Schwitzgebel on this score, see Humphrey (2011).
5. For Schwitzgebel, this is part of a larger theme concerning the unreliability of
naive introspection. However, it is worth noting that even if we agreed with
Schwitzgebel that underlying differences on a Galtonian scale were implau-
sible, we would not need to blame introspection per se. Another possibility
is that there is wide variation in our understanding of the concept of visual
imagery (cf. Flew 1956; Thomas 1989). Thus, two subjects accurately intro-
specting the same kind of experience might differ as to whether they think
such experience counts as visual imagery, just as, notoriously, two people
might differ in their understanding of what counts as arthritis. Schwitzgebel
shows awareness of this concern but, for reasons he doesn’t make explicit,
‘doubt[s] that the optimist about introspective accuracy can find much
consolation’ in it (2011, 53). These issues are closely connected to long-
congenitally blind; e.g., Marmor and Zaback (1976), Carpenter and Eisenberg
(1978), Kerr (1983), Zimler and Keenan (1983).
12. Schwitzgebel notes similar arguments go back at least to Angell 1910.
13. This is how Thomas (unpublished) describes the view of Marks (1986, 237).
Cf. discussions of Anton’s syndrome and anosognosia more generally.
14. In fact, it is not just that Schwitzgebel gives us little clue as to why the relevant
population should be so strongly and consistently committed to enormous
mistakes about their inner lives. Schwitzgebel does not explicitly indicate
who he thinks is wrong; i.e., what a ‘normal’ stream of conscious imagery
consists of. It is most natural to think that his view is that we all have some
modest degree of imagery and so error is especially pronounced amongst
professed super-imagers (since they do no better than ‘normal’ imagers) and
professed non-imagers (since they do no worse).
15. For this reference and several others, I am much indebted to Thomas (n.d.),
as well as to the hugely helpful Thomas (2012).
16. Although clear advocates of the ‘no function’ view are thin on the ground,
the issue was the source of a heated controversy in scientific circles a century
ago. In addition to Thorndike and Winch, see Fernald (1912, 135–138). There
is also the infamous case of the behaviourist Watson who seems to have been
motivated to deny his own mental imagery, declaring imagery ‘a mental
luxury (even if it really exists) without any functional significance’ (1913,
175). For discussion, see Faw (2009, 7–10) and Thomas (2012).
17. E.g., Marks writes, ‘Imagery, by definition, is a mental experience and verbal
reports therefore provide a necessary, albeit fallible, source of evidence’ (1983,
245). See also Richardson (1969). Thomas (2012) also mentions McKellar
(1957) and Finke (1989).
18. An early example is Neisser (1970), who proposes a distinction between
‘imagery as an experience’ and ‘imagery as a process’. Thomas (2012), whom
I follow here, puts the distinction in terms of experiential and representational
notions of mental imagery. Thomas himself does not endorse the distinc-
tion and indeed elsewhere suggests that ‘It is of the very nature of imagery
to be conscious’ (2003, §3.3). As Bence Nanay pointed out to me, a notion
of imagery which does not imply conscious awareness is plausibly at play in
van Leeuwen 2011, as well as explicitly in Nanay (2010).
19. That said, one’s own conscious mental imagery may bias one’s position in
the debate (Reisberg et al. 2003).
20. Having made his distinction between imagery as experience and imagery as
process, Neisser seems happy to embrace this consequence.
21. Schwitzgebel also argues that resolving the puzzle of individual differences
along the lines developed in this section will force us to think of everyone’s
underlying imagery (conscious or not) as equivalent in detail to that reported
in ‘the grandest self-assessments’. But if that were so, Schwitzgebel suggests
that ‘it is surprising that we don’t all perform substantially better on mental
rotation tasks, visual memory tasks, and the like’ (2011, 51). This objection
is problematic. The grandest self-assessments typically compare imagery to
ordinary perception. In this light, we ought to ask: how well should we expect
subjects to do in rotation or memory tasks equipped with imagery as rich as
perception? Yet then we need to ask: how rich is that? This is a famously
controversial issue, an issue where the gap between our objective capacities
References
Abelson, R. P. (1979) ‘Imagining the Purpose of Imagery’. Behavioral and Brain Sciences 2: 548–549.
Jelinek, L., Randjbar, S., Kellner, M., Untiedt, A., Volkert, J., Muhtz, C., and
Moritz, S. (2010) ‘Intrusive Memories and Modality-Specific Mental Imagery
in Posttraumatic Stress Disorder’. Zeitschrift für Psychologie/Journal of Psychology
218 (2): 64–70.
Kanai, R., and Rees, G. (2011) ‘The Structural Basis of Inter-individual Differences
in Human Behaviour and Cognition’. Nature Reviews Neuroscience 12: 231–242.
Katz, A. (1983) ‘What Does It Mean to be a High Imager?’ In J. Yuille (ed.) Imagery,
Memory and Cognition. Hillsdale, NJ: Erlbaum.
Kaufmann, G. (1981) ‘What Is Wrong with Imagery Questionnaires?’ Scandinavian
Journal of Psychology 22: 59–64.
Kerr, N. (1983) ‘The Role of Vision in ‘Visual Imagery’ Experiments: Evidence
from the Congenitally Blind’. Journal of Experimental Psychology: General 112:
265–277.
Kerr, N. H., and Neisser, U. (1983) ‘Mental Images of Concealed Objects: New
Evidence’. Journal of Experimental Psychology: Learning, Memory and Cognition 9:
212–221.
Kosslyn, S. M. (1973) ‘Scanning Visual Images: Some Structural Implications’.
Perception and Psychophysics 14 (1): 90–94.
Kosslyn, S. M. (1980) Image and Mind. Cambridge, MA: Harvard University Press.
Kosslyn, S. M. (1994) Image and Brain: The Resolution of the Imagery Debate.
Cambridge, MA: MIT Press.
Kosslyn, S. M., Ball, T. M., and Reiser, B. J. (1978) ‘Visual Images Preserve Metric
Spatial Information: Evidence from Studies of Image Scanning’. Journal of
Experimental Psychology: Human Perception and Performance 4: 47–60.
Kosslyn, S. M., Brunn, J., Cave, K., and Wallach, R. (1985) ‘Individual Differences
in Mental Imagery Ability: A Computational Analysis’. Cognition 18: 195–243.
Kozhevnikov, M., Kosslyn, S., and Shephard, J. (2005) ‘Spatial Versus Object
Visualizers: A New Characterization of Visual Cognitive Style’. Memory &
Cognition 33: 710–726.
Levine, D. N., Warach, J., and Farah, M. (1985) ‘Two Visual Systems in Mental
Imagery: Dissociation of ‘What’ and ‘Where’ in Imagery Disorder Due to
Bilateral Posterior Cerebral Lesions’. Neurology 35: 1010–1018.
Marks, D. F. (1973) ‘Visual Imagery Differences in the Recall of Pictures’. British Journal of Psychology 64: 17–24.
Marks, D. F. (1983) ‘Mental Imagery and Consciousness: A Theoretical Review’.
In A. A. Sheikh (ed.) Imagery: Current Theory, Research and Applications, 96–130.
New York: Wiley.
Marks, D. F. (1986) ‘The Neuropsychology of Imagery’. In D. F. Marks (ed.) Theories
of Image Formation, 225–241. New York: Brandon House.
Marmor, G. S., and Zaback, L. A. (1976) ‘Mental Rotation by the Blind: Does
Mental Rotation Depend on Visual Imagery?’ Journal of Experimental Psychology:
Human Perception and Performance 2: 515–521.
Martin, M. G. F. (2006) ‘On Being Alienated’. In J. Hawthorne and T. Gendler
(eds) Perceptual Experience, 354–410. Oxford: Oxford University Press.
McKellar, P. (1957) Imagination and Thinking. London: Cohen and West.
McKelvie, S. J. (1995) ‘The VVIQ as a Psychometric Test of Individual Differences
in Visual Imagery Vividness: A Critical Quantitative Review and Plea for
Direction’. Journal of Mental Imagery 19: 1–106.
Nagel, T. (1974) ‘What Is it Like to Be a Bat?’ Philosophical Review 83: 435–456.
302 Georg Theiner
for food, allocating resources and selecting appropriate nest sites, often
using close to optimal strategies (Sasaki and Pratt 2011). The collective
behaviour of the hive is also a multilevel phenomenon insofar as the
adaptive success of the hive crucially depends on complex feedback
mechanisms that socially mediate the behaviour of individual ants
(Moussaid et al. 2009). But importantly, at least in the present context,
we do not consider what single ants do (e.g., foraging, scouting, signal-
ling) as manifestations of a singular mind. This differs from the multi-
level conception of the GMT: for example, in Gilbert’s account, what
people think is a constitutive aspect of their role as members of a plural
subject. In contrast, our ground for attributing group-only cognition to
the hive lies in specific feats of collective information processing (e.g.,
sorting, reaching consensus, optimizing, obeying specific rationality
principles) of which single ants or bees are congenitally incapable.
The absence of singular minds in individual ants or bees raises the
question whether the cognitive abilities of hives should really count as
genuine cases of group cognition. Eusocial species are traditionally char-
acterized by a strict reproductive division of labour (e.g., with sterile
work castes), overlapping generations living together in the colony at
any given time and cooperative brood care (Wilson 1971). Because of
this highly specialized division of biological labour, it has been argued
that single bees or ants function more like body parts of a larger, func-
tionally integrated unit which has many characteristics of a normal
biological organism (Hölldobler and Wilson 2008). Rather than speak
of a group mind, it may thus be more appropriate to consider a beehive
as a special kind of singular mind, albeit one that is spatially distributed
over many (insect) bodies. Let us reserve the term hive cognition for this
limiting case of group cognition, to distinguish it from collective cogni-
tion according to the multilevel conception.
The distinction between hive cognition and collective cognition can
be brought out further with a fascinating study of ‘colony-level cogni-
tion’ in ants and honeybees (Marshall et al. 2009). Using mathematical
models of optimal decision making, Marshall and colleagues discovered
striking information-processing parallels between the migration deci-
sions made by house-hunting colonies and the neural decision making
which occurs in the primate visual cortex during motion discrimination
tasks. In both systems, different subpopulations act as integrators of noisy
information about available decision alternatives, both rely on quorum
sensing, and both can vary their decision thresholds in response to
speed-accuracy trade-offs. Because of the underlying functional analogy
between individual bees and single neurons, which clearly lack a mind
A Beginner’s Guide to Group Minds 305
that is, the neural machinery of singular minds. But now, we have seen
that there are also important information-theoretic commonalities
between beehives and human markets. Clearly, there is a tension. If
the collective information-processing performed by beehives does not
amount to group cognition because of its commonalities with brain-
bound singular cognition, then by the same token the collective infor-
mation processing performed by markets should not count as group
cognition either, because of its commonalities with that of beehives.
Perhaps the way out of this conundrum is to concede that the intel-
ligence of markets is indeed more closely comparable to that of a hive
than to that of a group, even though the markets we usually speak of are
obviously collections of people with singular minds. Still, for certain
taxonomic or explanatory purposes that are explicitly driven by, for
example, information-theoretic concerns, we may decide to bracket
that difference. Yet another question remains: what’s so special about
the information processing of neural networks in the brain that it
earns the privilege to be called ‘singular’ cognition? If physicalism is
true about the human mind and mental states or processes are at least
token-identical with neural states or processes, shouldn’t we conclude
that singular cognition is really just another name for hive cognition
performed by the brain?
Alternatively, we could try to discount beehives and markets as cases
of group cognition by refining the concept of a social group. Collections
of individuals come in many forms, but not all of them are sufficiently
integrated to constitute a group (Forsyth 2006). The task at hand, then,
would be to shore up the concept of a social group in a way that is
narrow enough to exclude beehives and markets (not to mention brains)
but broad enough to accommodate all forms of what we have dubbed
collective cognition. Perhaps this can be done; but even so, there remains
a potential pitfall I’d like to point out. Consider, for the sake of illustra-
tion, Gilbert’s (1989) aforementioned analysis of collective intention-
ality, which rests on a fairly idiosyncratic conception of what constitutes
a group. For Gilbert, the concept of a social group really is the concept
of a plural subject. Her account of group formation involves two steps
(ibid., ch. 4): (1) the individuals who are bound to become members of
a group must be conditionally committed to joining forces in doing X
‘as a body’; (2) they must mutually express, in conditions of common knowledge (albeit not necessarily verbally), their commitment to doing
so. Provided that both conditions are satisfied, a joint commitment to
X-ing as a body has been generated, and a plural subject has thereby
come into existence. Plainly, since beehives and markets do not form
plural subjects in this sense, they aren’t genuine Gilbertian groups and,
a fortiori, they cannot possess a group mind.
Whatever the intrinsic merits of Gilbert’s account, the peculiar notion
of a group on which it rests is clearly too narrow to serve as a common
denominator for a comprehensive taxonomy of group cognition. First,
we should acknowledge that the notion of a group, as it is used in
the social sciences, is a complex theoretical term that involves several
different dimensions, such as the modes of social interactions, goals,
social interdependence, organizational structure and social cohesion (cf.
Forsyth 2006, 10–14). Hence it is doubtful that there can be a single
definition of the term group that fits all of its uses. Second, it is a mistake
to think that the relevant psychological aspects of group cognition are
always directly linked to the social factors which determine whether
some collection of individuals does or does not constitute a group. This
is because the degree of ‘groupness’ that is achieved by a collective is not
always directly proportional to its level of cognitive performance.
An interesting study which brings out this point nicely is Weick and
Roberts’s (1993) analysis of so-called high-reliability organizations (such
as aircraft carriers) that require nearly error-free operations around the
clock so as to avoid catastrophic outcomes. Seeking to avoid the mistake
of traditional versions of the GMT, in which ‘the development of mind
is confounded with the development of the group’ (ibid., 374), Weick
and Roberts formulate an original concept of collective mind that can be
applied to but is conceptually disentangled from that of an organization
or of a social system more generally. Following the work of Asch (1952),
Weick and Roberts conceive of an organization as a social system that
emerges from but at the same time shapes and constrains the actions of
individual agents who themselves understand that the system consists of
their interdependent actions and who subordinate their actions accord-
ingly. Like Asch’s work, their conception reflects a recurring theme of
Gestalt psychology that the whole is not only greater than the sum of
its parts but that the nature of the whole can alter the behaviour of its
parts. Building on the concept of heed in the work of Ryle (1949), Weick
and Roberts’s concept of collective mind is then defined in terms of the
amount of heed that is contained in the social patterns by which the
individual actors’ contributions, representations and subordinations are
interrelated. The basic idea of their approach is to explain variations in
organizational performance, such as the likelihood of severe accidents,
in terms of relative variations of collective mind.
Among other things, their analysis reveals the possibility and signifi-
cance of double dissociations between the degree of ‘groupness’ and the
GMT in this distinctive sense is that the steady stream of creative inven-
tions that are legally attributed to Edison should in fact be attributed to
the collaborative cogitations of the engineers who worked with Edison
at Menlo Park (Millard 1990). This collaborative type of group cognition is also known as socially distributed cognition (e.g., Wegner 1987;
Hutchins 1995; Wilson 2002; Stahl 2006). Socially distributed cognition
is neither a singular nor a collective cognitive phenomenon in the clas-
sical sense, both of which put mind and cognition within the purview
of (singular or collective) individuals; rather, it lies somewhere between.
In what sense, then, does it support the GMT? Before we can answer this
question, we must further clarify the relationship between group minds
and group cognition more generally.
Current proponents of the GMT deliberately avoid the group mind idiom.
This is at least partly because our everyday concept of a mind is closely
associated with the possession of consciousness and a privileged first-
person awareness of one’s mental life. But what is it like to be a group?
Can groups experience the collective equivalent of a headache? If we
apply the ‘headache criterion’ for the existence of minds (Harnad 2005),
it seems implausible that groups can have minds (let alone that we would know about it). Experimental evidence suggests that people readily ascribe the
functional components of agency to a collective entity like Google but
balk at the idea that groups can have phenomenally conscious mental
states (Huebner, Bruno and Sarkissian 2010). Thus, speaking of ‘group
minds’ tout court blurs the distinction between science and cybernetic
fantasies about a technologically driven emergence of collective forms
of consciousness (Heylighen 2011).
The absence of phenomenal consciousness or any other property
X that is deemed to be a central feature of human minds invites the
popular objection that groups cannot have minds because they lack X.3
There are two standard responses to this objection. For one, it seems
suspiciously anthropocentric to insist that our criteria for what consti-
tutes a mind must exactly match those we take to indicate a human
mind. Attributions of minds that stop short of having the full gamut of
properties that human minds have are commonplace. Newborn human
infants, individuals with severe psychological impairments, non-human
animals, divine creatures and certain kinds of machines are widely
recognized as possessing minds of a certain sort, manifesting some but
not all of the psychological states or abilities characteristic of minds of
normally functioning human adults. The burden of proof, then, lies with
those who insist that attributions of mentality must be an all-or-nothing
affair and do not admit of degrees. Alternatively, it remains open to
proponents of group cognition to settle for the thesis that groups can
constitute collective cognitive systems instead. The explicit use of a
theoretical term that is borrowed from contemporary cognitive science
has the advantage of avoiding the busy associations of our vernacular
conception of minds while leaving us with a wide enough variety of
psychological predicates that have been fruitfully used to characterize
the operation of cognitive systems. In line with the studies cited at the
beginning, a ‘big-tent’ approach to cognition would encompass familiar
folk-psychological predicates, as in discussions of collective belief,
intention or agency, the ascription of psychological capacities such as
memory, decision making or general intelligence, or more theoretically
driven notions of cognition, such as the generation and coherent use
of representations, information processing, adaptive problem solving
or sense making (cf. Theiner and O’Connor 2010; Theiner, Allen and
Goldstone 2010).
Adopting the second response, one may still rightfully ask whether
there isn’t a non-arbitrary threshold that groups must cross in order to
constitute a genuine collective cognitive system. To take an extreme case,
might we consider a group that exhibits only a single type of psycholog-
ical property as a ‘minimal’ cognitive system, as has been suggested
by Wilson (2004, 290)? Against the validity of such a minimalist crite-
rion, Rupert (2005, n. 4) has objected that minimal-minded groups that
otherwise fail to meet the majority of independently established diag-
nostic features of minds would not warrant a realist construal of the
GMT. In a recent paper, Rupert (2011) returns to this issue in the context
of discussing an argument for group cognition previously proposed by
Theiner, Allen and Goldstone (2010). That argument was based, among
other things, on a study of collective path formation in which people
had to travel to a number of randomly selected destinations in a virtual
environment while minimizing travel costs (Goldstone and Roberts
2006). The study had revealed that the emerging trail system reflects
a compromise between people going to the destinations where they
wanted to go and going where others have previously travelled. As an
individually unintended side effect, the group as a whole was frequently
able to solve the problem of finding, at least approximately, the path
that connects the set of destinations using the minimal amount of total
path length. An intriguing feature of the computational model that
was used to analyze the experimental data is that similar processes may
moral judgments and negotiate over limited resources. It turns out that a
single c-factor extracted from the overall performance of each group was
the best predictor of how the same group solved an unrelated criterion
task (such as playing checkers or solving an architectural design task).
Importantly, the suggested c-factor is not strongly correlated with the
average or maximum individual intelligence of group members. Instead,
it is correlated with the average social sensitivity of group members, the
equality in distribution of conversational turn taking and the propor-
tion of females in the group (although the last factor appeared to be
mediated by social sensitivity). Other factors that are often considered
important determinants of group behaviour – group cohesion, personal
motivation and member satisfaction – did not play a significant role.
As the authors suggest, the study provides evidence that ‘the collective
intelligence of the group as a whole has predictive power above and
beyond what can be explained by knowing the abilities of the individual
group members’ (687). The upshot of these considerations, then, is that
we should not quell appeals to group cognition prematurely on the basis
of questionable intuitions about intelligence.
of the raft now trap an even bigger layer of air, thereby enhancing the
natural water repellency of their bodies; in addition, the trapped air
allows the bottom ants to breathe and adds buoyancy to the raft. Mlot
and colleagues show that what looks like a fine example of cooperative
behaviour is in fact an emergent result of a ‘random walk’ process on the
level of individual ants who either turn around when they arrive at the
edge of the raft or are forced down by other ants pushing from behind.
What lessons can we draw from this study of collective animal behav-
iour? Surely the term ‘ant raft’ ought to qualify as a collective noun in
at least something like the same sense in which ‘Edison’ does. Let us, therefore, temporarily bracket the contentious notion of cognition and
consider what it means for the behaviour of individuals and collectives
to become ‘metaphysically entwined’ in a socially distributed fashion.
A philosophically revealing gloss of ants in raft formation is to say that
they are causally coupled so as to form an integrated system with functional
gains. The three main aspects of this characterization can be unpacked as
follows. (1) Two (or more) elements are causally coupled just in case there
are reliable, two-way causal connections between them. For instance,
the ants need to be capable of reversibly attaching to each other and
also of climbing on top of one another so that ants at the edge can
be coerced into ‘cooperative’ behaviour. (2) Two (or more) coupled
elements form an integrated system in situations in which they operate
as a single causal whole – with causes affecting the resultant system as
a whole and the activities of that system as a whole producing certain
effects. A specific example of this would be the enhanced water repel-
lency of ants in raft formation, which changes their fluid dynamics. (3)
An integratively coupled system shows functional gain just when it either
(a) enhances the existing functions of its coupled parts or (b) manifests
novel functions as a whole relative to those possessed by any of its parts.
An example of (a) would be the raft-building talents of individual ants.
An example of (b) would be the colony’s capacity to stay afloat as a
unit, so that the colony can survive. Note that the functional ascrip-
tions manifest in (a) and (b) are ‘metaphysically entwined’: ant colonies
would not survive without the raft-building behaviours of their members,
and individual ants which do not become parts of an ant raft would not
be building rafts. This makes (a) an instance of a socially manifested
trait in the sense of Wilson, whereas (b) is an instance of a genuine
group-level trait. Rather than be conceived as two opposing forces, they
stand in a mutually reinforcing relationship as real and causally relevant
features of a multilevel system. Let’s now apply this characterization to
the analysis of socially distributed human cognition.
The goal of this section is to refine the sense in which socially distrib-
uted cognition is an emergent group-level phenomenon, albeit in a
way that straddles Wilson’s distinction between multilevel and group-
only conceptions of the GMT.6 I illustrate my analysis with reference
to Larson and Christensen’s (1993) cognitivist analysis of groups as
problem-solving units. As they explicitly state, ‘[w]e refer to group-level
cognitive activity as social cognition, a term that we apply collectively
to those social processes involved in the acquisition, storage, transmis-
sion, manipulation, and use of information for the purpose of creating a
group-level intellective product. In this context, the word ‘social’ is used
to denote how cognition is accomplished, not its content’ (5). Later, they
add that ‘at the group level of analysis, cognition is a social phenom-
enon’ (6). Using a generic information-processing model of problem
solving that has been used to explain the actions of individuals, they
detail a large number of social-interactional processes that ‘help account
for a parallel category of group-level action – the generation of group
decisions and group problem solutions’ (7). For instance, groups first
must identify and conceptualize the problems they have to solve; during
the acquisition stage, groups must decide how to distribute their atten-
tion to certain kinds of information, which is helped by discussing the
informational needs of the group, allocating members’ cognitive and
material resources or lending assistance and backing one another up
as information is being gathered (14). The group-cognitive functions
that are fulfilled by discussions serve to bring problem-relevant infor-
mation to light, influence individual cognitive processes and serve as
mechanisms by which members’ perceptions, judgments and opinions
are combined to generate a single group solution (22).
Intuitively, what makes us refer to the above analysis as an emergent
case of problem solving is the fact that the observed group outcome
does not simply result from the unstructured aggregation of individual
cognition but depends on an organized division of cognitive labour
among its members. More precisely, we can say that the sense in which
socially distributed cognition is emergent can be conceived of as a failure
of ‘aggregativity’ in the sense of Wimsatt (1986).7 A cognitive property P of a group S is aggregative with respect to a decomposition of S into its members just in case P(S) is invariant under the following four conditions: (1) intersubstitutability of members, (2) qualitative similarity under a change in the number of members, (3) stability under decomposition and reaggregation of members and (4) no cooperative or inhibitory
A Beginner’s Guide to Group Minds 317
One might object against group cognition on the grounds that assembly bonus effects
are relatively rare. But this objection misses the point of our analysis.
The reason why socially distributed cognition can be said to be emer-
gent does not rest on the assumption that group cognition must always
yield optimal results or that group minds must always know more than
the sum of individual minds. Instead, it is meant to indicate the level
of influence that organizational structures and interactive group proc-
esses have on the collaborative production of a group-level cognitive
outcome. In principle, a collaborating group could significantly under-
perform the sum of its parts (e.g., based on a comparison with aggregate
data from nominal groups), and yet its outcome would be classified as
emergent in the sense we have outlined.8
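The failure-of-aggregativity criterion can be made concrete with a toy computation (a hypothetical sketch with made-up numbers, not drawn from the chapter): a purely additive group product is invariant when members’ roles are rearranged, whereas a product generated by an organized, order-dependent division of labour is not, and so fails to be aggregative.

```python
# Toy illustration of Wimsatt-style (non-)aggregativity.
# All numbers and functions here are hypothetical, for illustration only.

def aggregative_output(skills):
    """Pure aggregation: the group product is just the sum of
    individual contributions, so organization is irrelevant."""
    return sum(skills)

def pipeline_output(skills):
    """Organized division of labour: each member transforms the
    running result, so the outcome depends on who does what, and when.
    This violates intersubstitutability (condition 1) and the
    no-interaction requirement (condition 4)."""
    result = 0
    for skill in skills:
        result = result * 2 + skill  # order-dependent interaction
    return result

team = [1, 2, 3]
rearranged = [3, 2, 1]  # same members, roles swapped

print(aggregative_output(team), aggregative_output(rearranged))  # 6 6
print(pipeline_output(team), pipeline_output(rearranged))        # 11 17
```

Because the pipeline outcome changes when members are intersubstituted, the group-level property fails aggregativity and counts as emergent in the sense outlined above, whether or not the group outperforms the sum of its parts.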
5 Final thoughts
Notes
1. For a classical discussion of this assumption in the context of split-brain
patients, see Nagel (1971).
2. For historical overviews, see, e.g., Allport (1968), Runciman (1997), Bar-Tal
(2000, ch. 2) and Wilson (2004, ch. 11).
3. Variations of this objection are discussed, among other places, by Sandelands
and Stablein (1987, 149), Gilbert (2004, 10), Rupert (2005, §§1–2), Harnad
(2005, passim), and Giere (2004, 768–772).
4. My tripartite distinction closely mirrors the discussion of socially distributed
remembering by Barnier et al. (2008, esp. 37–38).
5. For an argument in favour of group level emotions in this sense, see Huebner
(2011).
6. For a theoretical comparison of several different ways in which group cogni-
tion is said to be emergent, see Theiner and O’Connor (2010).
7. For an application of Wimsatt’s classification to distributed cognition, see
also Poirier and Chicoisne (2006); for an application to cultural evolution, see
Smaldino (forthcoming).
8. For a detailed discussion of this point in the context of group memory, see
Theiner (2013).
9. I would like to thank Bryce Huebner and Orestis Palermos for helpful
comments on an earlier version of this chapter.
References
Asch, S. (1952) Social Psychology. Englewood Cliffs, NJ: Prentice Hall.
Barnier, A. J., Sutton, J., Harris, C. B., and Wilson, R. A. (2008) ‘A Conceptual
and Empirical Framework for the Social Distribution of Cognition: The Case of
Memory’. Cognitive Systems Research 9 (1): 33–51.
Bar-Tal, D. (2000) Shared Beliefs in a Society: Social Psychological Analysis. Thousand
Oaks, CA: Sage.
320 Georg Theiner
Smith, E. R., Seger, C. R., and Mackie, D. M. (2007) ‘Can Emotions be Truly Group
Level? Evidence for Four Conceptual Criteria’. Journal of Personality and Social
Psychology 93: 431–446.
Stahl, G. (2006) Group Cognition: Computer Support for Building Collaborative
Knowledge. Cambridge, MA: MIT Press.
Stasser, G., and Titus, W. (2003) ‘Hidden Profiles: A Brief History’. Psychological
Inquiry 14 (3–4): 304–313.
Surowiecki, J. (2004) The Wisdom of Crowds. New York: Anchor.
Tajfel, H. (1978) Differentiation Between Social Groups: Studies in the Social Psychology
of Intergroup Relations. London: Academic Press.
Theiner, G. (2013) ‘Transactive Memory Systems: A Mechanistic Analysis of
Emergent Group Memory’. Review of Philosophy and Psychology 4 (1): 65–89.
Theiner, G. (2013) ‘Onwards and Upwards with the Extended Mind: From
Individual to Collective Epistemic Action’. In L. Caporael, J. Griesemer and
W. Wimsatt (eds) Developing Scaffolds. Vienna Series in Theoretical Biology,
191–208. Cambridge, MA: MIT Press.
Theiner, G., Allen, C., and Goldstone, R. (2010) ‘Recognizing Group Cognition’.
Cognitive Systems Research 11: 378–395.
Theiner, G., and O’Connor, T. (2010) ‘The Emergence of Group Cognition’. In A.
Corradini and T. O’Connor (eds), Emergence in Science and Philosophy, 78–117.
New York: Routledge.
Walsh, J. P., and Ungson, G. R. (1991) ‘Organizational Memory’. Academy of
Management Review 16: 57–91.
Wegner, D. M. (1987) ‘Transactive Memory: A Contemporary Analysis of the
Group Mind’. In B. Mullen and G. R. Goethals (eds), Theories of Group Behavior,
185–208. New York: Springer-Verlag.
Weick, K., and Roberts, K. (1993) ‘Collective Mind in Organizations: Heedful
Interrelating on Flight Decks’. Administrative Science Quarterly 38: 357–381.
Wheeler, W. M. (1920) ‘The Termitodoxa, or Biology and Society’. Scientific Monthly
10: 113–124.
Wilson, E. O. (1971) The Insect Societies. Cambridge, MA: Belknap Press.
Wilson, D. S. (1997) ‘Incorporating Group Selection into the Adaptationist
Program: A Case Study Involving Human Decision Making’. In J. A. Simpson
and D. T. Kenrick (eds), Evolutionary Social Psychology. Mahwah, NJ: Erlbaum,
345–386.
Wilson, D. S. (2002) Darwin’s Cathedral: Evolution, Religion, and the Nature of
Society. Chicago: University of Chicago Press.
Wilson, R. (2001) ‘Group-level Cognition’. Philosophy of Science 68 (supp.):
262–273.
Wilson, R. (2004) Boundaries of the Mind: The Individual in the Fragile Sciences –
Cognition. Cambridge: Cambridge University Press.
Wimsatt, W. C. (1986) ‘Forms of Aggregativity’. In M. G. Grene, A. Donagan,
A. N. Perovich and M. V. Wedin (eds), Human Nature and Natural Knowledge,
259–291. Dordrecht: Reidel.
Wimsatt, W. C. (2006) ‘Reductionism and its Heuristics: Making Methodological
Reductionism Honest’. Synthese 151 (3): 445–475.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., and Malone, T. W. (2010)
‘Evidence for a Collective Intelligence Factor in the Performance of Human
Groups’. Science 330 (6004): 686–688.