The Implicit Mind
Cognitive Architecture, the Self, and Ethics
MICHAEL BROWNSTEIN
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
Printed by Sheridan Books, Inc., United States of America
CONTENTS
Acknowledgments
1. Introduction
PART TWO SELF
8. Conclusion
ACKNOWLEDGMENTS
I’m so grateful for the generous, supportive, and incisive feedback I’ve received
over the years, on this and related projects, from Alex Acs, Marianna Alessandri,
Louise Antony, Mahzarin Banaji, Michael Brent, Lawrence Buell, Brent Cebul,
Jason D’Cruz, James Dow, Yarrow Dunham, Ellen Fridland, Katie Gasdaglis,
Bertram Gawronski, Maggie Gram, Daniel Harris, Sally Haslanger, Jules Holroyd,
Bryce Huebner, Zachary Irving, Gabbrielle Johnson, Eric Katz, Sean Kelly, Julian
Kiverstein, Joshua Knobe, Victor Kumar, Benedek Kurdi, Calvin Lai, Carole Lee,
Edouard Machery, Eric Mandelbaum, Kenny Marotta, Christia Mercer, Eliot
Michaelson, John Morrison, Myrto Mylopoulos, Brian Nosek, Keith Payne,
Jonathan Phillips, Jeremy Pober, Emily Remus, Luis Rivera, Laurie Rudman, Hagop
Sarkissian, Eric Schwitzgebel, Claire Seiler, Rena Seltzer, Paschal Sheeran, Robin
Scheffler, David Shoemaker, Susanna Siegel, Nico Silins, Holly Smith, Chandra
Sripada, Alex Stehn, Shannon Sullivan, Joseph Sweetman, Virginia Valian, Natalia
Washington, Thomas Webb, Alison Wylie, Sunny Yang, and Robin Zheng.
Many thanks, too, to Peter Ohlin, Lacey Davidson, Mary Becker, and two anony-
mous reviewers for Oxford University Press. You have helped to improve this book
in countless ways. Taylor Carman and John Christman gracefully tolerated ham-
handed earlier attempts to work out the ideas contained herein. Thank you to both,
now that I’ve worked it all out flawlessly . . .
I owe extra-special thanks to Daniel Kelly, Tamar Gendler, and Jennifer Saul,
each of whom is an inspiration and a role model. And I owe extra-special super-
duper thanks to Alex Madva, my one-in-a-million collaborator. You flipped-turned
my life upside down, so just take a minute, sit right there, and consider yourself the
prince of this affair.
Mom, Dad, Erick, and Carrie: I can’t promise it’s worth looking, but you’ll find
yourselves in here. Thank you for unwavering love and support.
Above all, my deepest thanks to Reine, Leda, Iggy, and Minerva. I love you,
I love you, I love you, and I love you too.
Parts of the book were conceived, written, and revised with support—for
which I’m very grateful—from the American Council of Learned Societies, the
American Academy of Arts and Sciences, and the Leverhulme Trust. Some of the
material in this book has appeared in print previously in an earlier form. In par-
ticular, some of the examples in Chapter 1 were presented in Brownstein (2014)
and Brownstein and Madva (2012a,b). Chapter 2 is a much-expanded version of
part of Brownstein and Madva (2012b). Also in Chapter 2, an earlier version of
§4 appeared in Brownstein (2017) and an earlier version of part of §4.2 appeared
in Brownstein and Madva (2012a). Parts of §3 and §5 in Chapter 3 are derived
from Brownstein and Madva (2012a,b), and parts of §6 were published in a differ-
ent form in Madva and Brownstein (2016). Parts of Chapters 4 and 5 are derived
from Brownstein (2014). In Chapter 4 as well, an earlier version of §4.1 appeared
in Brownstein and Madva (2012a), and an earlier version of §4.2 appeared in
Brownstein (2014). Some parts of Chapter 7 are derived from Brownstein (2016)
and a few passages of the Appendix are derived from Brownstein (2015). I’m grate-
ful to these presses and to my co-author, Alex Madva, for permission to edit and
develop our collaborative work here.
1
Introduction
Victims of violent assault sometimes say, after the fact, that “something just felt
wrong” about the person walking on the other side of the street. Or offering to help
carry the groceries into their apartment. Or hanging out in the empty hallway. But
to their great regret, they dismissed these feelings, thinking that they were just being
paranoid or suspicious. In The Gift of Fear, Gavin de Becker argues that the most
important thing people can do to avoid becoming victims of assault is to trust their
intuition when something about a person or situation seems amiss. He writes:
A woman is waiting for an elevator, and when the doors open she sees a
man inside who causes her apprehension. Since she is not usually afraid,
it may be the late hour, his size, the way he looks at her, the rate of attacks
in the neighborhood, an article she read a year ago—it doesn’t matter why.
The point is, she gets a feeling of fear. How does she respond to nature’s
strongest survival signal? She suppresses it, telling herself: “I’m not going
to live like that; I’m not going to insult this guy by letting the door close
in his face.” When the fear doesn’t go away, she tells herself not to be so
silly, and she gets into the elevator. Now, which is sillier: waiting a moment
for the next elevator, or getting into a soundproofed steel chamber with a
stranger she is afraid of? (1998, 30–31)
De Becker offers trainings promising to teach people how to notice their often
very subtle feelings of fear and unease—their “Pre-Incident Indicators”—in
potentially dangerous situations. These indicators, he argues, are responsive to
nonverbal signals of what other people are thinking or planning. For example,
we may feel unease when another’s “micro-expression,” like a quick sideways
glance, or rapid eye-blinking, or slightly downturned lips, signals that person’s
intentions, even though we might not notice such cues consciously. De Becker’s
trainings have been adapted for police officers, who also often say, after vio-
lent encounters, that they could tell that something was wrong in a situation,
but they ignored those feelings because they didn’t seem justified at the time.
This approach has been influential. De Becker designed the MOSAIC Threat
Assessment System that is used by many police departments to screen threats of
spousal abuse, and is also used to screen threats to members of the US Congress,
the Central Intelligence Agency, and federal justices, including the justices of the
Supreme Court.1
But there is a problem with de Becker’s advice. Stack his recommendations
up against research on “implicit bias” and a dilemma emerges. Roughly speaking,
implicit biases are evaluative thoughts and feelings about social groups that can
contribute to discriminatory behavior even in the absence of explicitly prejudiced
motivations.2 These thoughts and feelings are captured by “indirect” measures of
attitudes. Such measures are indirect in the sense that they avoid asking people
about their feelings or thoughts directly. Instead, on the most commonly used indi-
rect measure—the “Implicit Association Test” (IAT; Greenwald et al., 1998)—par-
ticipants are asked to sort names, images, or words that reflect social identity as
quickly as possible. What emerges is thought to reflect mental associations between
social groups and concepts like “good,” “bad,” “violent,” “lazy,” “athletic,” and so on.
Such associations result at least in part from common stereotypes found in con-
temporary societies about members of these groups. On the black–white race IAT,
most white people (more than 70%) demonstrate negative implicit attitudes toward
blacks, and roughly 40% of black participants do too.3 Moreover, tests like the IAT
sometimes predict biased behavior, and in some contexts, they do so better than
traditional self-report measures.4
Consider, for example, research on “shooter bias.” In a computer simulation, par-
ticipants are quickly shown images of black and white men holding either guns or
harmless objects like cell phones. They are told to “shoot” all and only those people
depicted holding guns. The results are unsettling. Participants are more likely to
shoot an unarmed black man than an unarmed white man and are more likely to fail
to shoot an armed white man than an armed black man (Correll et al., 2002; Mekawi
and Bresin, 2015). Measures of implicit bias like the IAT can predict these results.
People who demonstrate strong implicit racial biases (in particular, strong implicit
associations between “black” and “weapons”) are more likely to make these race-
based mistakes than people who demonstrate weaker or no implicit racial biases
(Glaser and Knowles, 2008). These findings are ominous in light of continued and
recent police shootings of unarmed black men in the United States. And there are
1. See http://gavindebecker.com/main/.
2. There is much debate about how to define implicit bias. See Brownstein and Saul (2016) for discussion.
3. See Nosek et al. (2002, 2007), Ashburn-Nardo et al. (2003), and Dasgupta (2004).
4. See Nosek et al. (2007) and Greenwald et al. (2009). See the Appendix for a more in-depth discussion of the IAT and behavioral prediction. Readers unfamiliar with the IAT may want to skip ahead to the Appendix to read the section explaining how the test works.
[T]here are very few African-American men who haven’t had the experi-
ence of walking across the street and hearing the locks click on the doors
of cars. That happens [sic] to me, at least before I was a senator. There are
very few African-Americans who haven’t had the experience of getting on
an elevator and a woman clutching her purse nervously and holding her
breath until she had a chance to get off. That happens often.7
5. As I discuss in the Appendix, no one psychological study is sufficient for concluding the existence of some psychological phenomenon. And, sometimes, many studies appearing to demonstrate some phenomenon can all turn out to be false positives or otherwise flawed, as the so-called replication crisis in psychology has shown. So I do not mean to suggest that the sheer number of studies on implicit bias is definitive of the existence of implicit bias. But the breadth of a research program does count as evidence, however defeasible, of the program’s core findings.
6. Evidence for the fact that shooter bias involves acting on the basis of subtle feelings of fear stems in part from the fact that shooter bias can be mitigated by practicing the plan “If I see a black face, I will think ‘safe!’ ” (Stewart and Payne, 2008). But planning to think “quick!” or “accurate!” doesn’t have the same effect on shooter bias (see Chapter 7). Note also the intersection of race and gender in this stream of research, as all shooter bias studies use male targets only (so far as I know). White Americans’ implicit associations with black women are likely to be different than their implicit associations with black men.
7. See http://www.huffingtonpost.com/2013/07/19/obama-racial-profiling_n_3624881.html.
us, and also true that these very same feelings can be profoundly affected by prej-
udice and stereotypes. Our Pre-Incident Indicators might be a valuable source of
attunement to the world, in other words, but they might also be a tragic source of
moral and rational failing.8 This is a grave point, particularly given de Becker’s rec-
ommendations to police officers to trust their intuition about potential criminal
suspects.
The juxtaposition of de Becker’s recommendations with research on implicit
bias illustrates a tension that is replicated across the sciences of the mind. On
the one hand, our spontaneous inclinations and dispositions are often attuned
to subtle elements of the world around us, sometimes even more so than our
reasoned judgments. Converging research in philosophy and psychology sug-
gests that these unreasoned (but not unreasonable) inclinations—our instincts,
intuitions, gut feelings, sixth senses, heuristics, snap judgments, and so on—
play key roles in thought and action and have the potential to have moral and
rational credibility. On the other hand, one might read the headline of the past
seventy-five years of research in cognitive and social psychology as putting us
“on notice” about the moral and rational failings of our spontaneous inclina-
tions and dispositions. From impulsivity, to “moral myopia,” to implicit bias, it
seems as if we must be constantly on guard against the dangers of our “mere”
inclinations.
The central contention of this book is that understanding these “two faces of
spontaneity”—its virtues and vices—requires understanding what I call the
“implicit mind.”9 In turn, understanding the implicit mind requires considering
three sets of questions. The first set focuses on the architecture of the implicit mind
itself, the second on the relationship between the implicit mind and the self, and the
third on the ethics of spontaneity.
First, what kinds of mental states make up the implicit mind? Are both “virtue
cases” and “vice cases” of spontaneity products of one and the same mental system?
What kind of cognitive structure do these states have, if so? Are implicit mental
states basic stimulus-response reflexes? Are they mere associations? Or are they
beliefs?
Second, how should we relate to our spontaneous inclinations and dispositions?
Are they “ours,” in the sense that they reflect on our character? Commonly we think
of our beliefs, hopes, values, desires, and so on in these terms, as “person-level”
states. Do implicit mental states reflect on who we are in this sense? Relatedly, under
8. For related discussion, in which our spontaneous reactions are framed as sometimes at once epistemically rational and morally faulty, see Gendler (2011), and see also replies from Egan (2011) and Madva (2016a).
9. I will speak often of “spontaneity,” but I do not mean it in the Kantian sense of the mind making active contributions to the “synthesis” of perceptual experience. I use the term in the folk sense, to refer to actions not preceded by deliberation, planning, or self-focused thought.
what conditions are we responsible for spontaneous and impulsive actions, that is,
those that result from the workings of the implicit mind?
And, finally, how can we improve our implicit minds? What can we do to
increase the chances of our spontaneous inclinations and dispositions acting as
reliable indicators rather than as conduits for bias and prejudice? It is tempting to
think that one can simply reflect carefully upon which spontaneous inclinations to
trust and which to avoid. But in real-time, moment-to-moment life, when decisions
and actions unfold quickly, this is often not plausible. How then can we act on our
immediate inclinations while minimizing the risk of doing something irrational or
immoral? How can we enjoy the virtues of spontaneity without succumbing to its
vices? A poignant formulation of this last question surfaced in an unlikely place
in 2012. In the climactic scene of Warner Bros.’ The Lego Movie (Li et al., 2014),
with the bad guys closing in and the moment of truth arriving, the old wise man
Vitruvius paradoxically exhorts his protégé, “Trust your instincts . . . unless your
instincts are terrible!”
Indeed. A plethora of research in the sciences of the mind shows that very
often we cannot but trust our instincts and that in many cases we should, unless
our instincts are terrible, which the sciences of the mind have also shown that they
often are.
10. I borrow this example from Hubert Dreyfus and Sean Kelly (2007), who are in turn influenced by Maurice Merleau-Ponty (1962/2002). I presented this example, along with the example of distance-standing discussed later, with Alex Madva, in Brownstein and Madva (2012b).
Vir Heroicus Sublimis, Newman’s largest painting at the time of its completion, is meant to overwhelm the senses. Viewers may be inclined to
step back from it to see it all at once, but Newman instructed precisely
the opposite. When the painting was first exhibited, in 1951 . . . Newman
tacked to the wall a notice that read, “There is a tendency to look at large
pictures from a distance. The large pictures in this exhibition are intended
to be seen from a short distance.”11
11. Abstract Expressionist New York, Museum of Modern Art, New York, 2010–2011.
exemplary action. Peter Railton describes skilled spontaneous action in the arts and
in conversation:
12. Quoted in Zadie Smith’s memoriam on David Foster Wallace. See http://fivedials.com/fiction/zadie-smith/.
13. Quoted in Beilock (2010, 224). See Chapter 6 for a more in-depth discussion of these examples.
14. See my “Rationalizing Flow” (Brownstein, 2014).
15. That is, a task that one subjectively perceives to be worthwhile. Perhaps, though, flow can sometimes also be found in activities one believes to be trivial (e.g., doing crossword puzzles). If so, this
and writers sometimes talk about being in the flow in this sense. Those who have
experienced something like it probably know that flow is a fragile state. Phone calls
and police sirens can take one out of the flow, as can self-reflection. According to
Csikszentmihalyi, happiness itself is a flowlike state, and the fragility of flow explains
why happiness is elusive.
Other examples are found in research suggesting that our spontaneous inclina-
tions are central to some forms of prosocial behavior. This opens the door to consid-
ering the role of spontaneity in ethics. For example, in some situations, people are
more cooperative and less selfish when they act quickly, without thinking carefully.
David Rand and colleagues (2012), for instance, asked people in groups of four to
play a public goods game. All the participants were given the same amount of money
and were then asked if they would like to contribute any of that money to a common
pool. Whatever was contributed to the common pool, they were told, would then
be doubled by the experimenter and distributed evenly among all four participants.
It turns out that people contribute more to the common pool when they make fast,
spontaneous decisions. The same is true when spontaneous decision-making is
manipulated; people contribute more to the common pool when they are given less
time to decide what to do. This doesn’t just reflect subjects’ calculating their returns
poorly when they don’t have time to think. In economic games in which cooperative
behavior is optimal, such as the prisoner’s dilemma, people act more cooperatively
when they act spontaneously. This is true in one-shot games and iterated games, in
online experiments and in face-to-face interactions, in time-pressure manipulation
experiments and in conceptual priming experiments, in which subjects act more
charitably after writing about times when their intuition led them in the right direc-
tion compared with times when careful reasoning led them in the right direction
(Rand et al., 2012).16
Consider also the concept of “interpersonal fluency,” a kind of ethical expertise
in having and acting upon prosocial spontaneous inclinations.17 Then President-
Elect Obama exhibited interpersonal fluency during his botched attempt in 2009
to take the Oath of Office. In front of millions of viewers, Obama and Chief Justice
of the Supreme Court John Roberts both fumbled the lines of the oath, opening the
possibility of a disastrously awkward moment.18 But after hesitating for a moment,
suggests that the subjective perception of worth can conflict with one’s beliefs about which activities
are worthwhile. See Chapter 4 for related discussion. Thanks to Reine Hewitt for this suggestion.
16. These experiments are different from those used to defend so-called Unconscious Thought Advantage (UTA) theory. UTA experiments have typically been underpowered and have not been replicated. For a critique of UTA, see Nieuwenstein et al. (2015).
17. See Brownstein and Madva (2012a) for discussion of the following example. See Madva (2012) for discussion of the concept of interpersonal fluency.
18. For a clip of the event and an “analysis” of the miscues by CNN’s Jeanne Moos, see http://www.youtube.com/watch?v=EyYdZrGLRDs.
Obama smiled widely and nodded slightly to Roberts, as if to say, “It’s okay, go
on.” This gesture received little explicit attention, but it defused the awkwardness
of the moment, making it possible for the ceremony to go on in a positive atmosphere. Despite his nervousness and mistakes, Obama’s social fluency was on display.
Moreover, most of us know people with similar abilities (aka “people skills”). These
skills require real-time, fluid spontaneity, and they lead to socially valuable ends
(Manzini et al., 2009).19
Similar ideas are taken up in theories of virtue ethics. Aristotle argued that the
virtuous person will feel fear, anger, pity, courage, and other emotions “at the right
time, about the right thing, towards the right people, for the right end, and in the
right way” (2000, 1106b17–20). To become virtuous in this way, Aristotle stressed,
requires training. Like the athlete who practices her swing over and over and over
again, the virtuous person must practice feeling and acting on her inclinations over
and over and over again in order to get them right. This is why childhood is so crucial
in Aristotle’s conception of the cultivation of virtue. Getting the right habits of char-
acter at a young age enables us to be virtuous adults. The result is a picture of virtue
as a kind of skilled and flexible habit, such that the ethical phronimos—the person
of practical wisdom, who acts habitually yet ethically—is able to come up with the
right response in all kinds of situations spontaneously. Contemporary virtue ethi-
cists have voiced similar ideas. Julia Annas writes, “People who perform brave actions
often say afterwards that they simply registered that the person needed to be rescued,
so they rescued them; even when prodded to come up further with reasons why
they did the brave thing, they often do not mention virtue, or bravery” (2008, 22).
Courageous people, in other words, immediately recognize a situation as calling for
a brave act, and act bravely without hesitation. Sometimes these virtuous impulses
run counter to what we think we ought to do. Recall Wesley Autrey, the New York
City “Subway Hero”: Autrey saved the life of an epileptic man who, in the midst of
a seizure, had fallen onto the tracks. Autrey jumped onto the tracks and held the
man down while the train passed just inches overhead. Afterward, Autrey reported
“just reacting” to the situation. But he might well have also thought, after the fact,
that he acted recklessly, particularly given that his children had been standing on the
platform with him. If so, Autrey’s case would resemble those described by Nomy
19. Indeed, research suggests that interpersonal fluency can be impaired in much the same way as other skills. Lonely people, Megan Knowles and colleagues (2015) find, perform worse than nonlonely people on social sensitivity tasks when those tasks are described as diagnostic of social aptitude, but not when the same tasks are described as diagnostic of academic aptitude. In fact, lonely people often outperform nonlonely people on these tasks when the tasks are described as diagnostic of academic aptitude. This is arguably a form of stereotype threat, the phenomenon of degraded performance on a task when the group of which one is a member is stereotyped as poor at that activity and that stereotype has been made salient.
Arpaly (2004) as cases of “inverse akrasia,” in which a person acts virtuously despite
her best judgment.
The pride of place given to spontaneity is not limited to Western virtue ethical
theories either. Confucian and Daoist ethics place great emphasis on wu-wei, or a
state of effortless action. For one who is in wu-wei, “proper and effective conduct
follows as automatically as the body gives in to the seductive rhythm of a song”
(Slingerland, 2014, 7). Being in wu-wei, in other words, enables one to be simul-
taneously spontaneous and virtuous. Or as Edward Slingerland puts it, “The goal
is to acquire the ability to move through the physical and social world in a manner
that is completely spontaneous and yet fully in harmony with the proper order of
the natural and human worlds” (2014, 14). Early Chinese scholars differed on how
to achieve wu-wei. Confucius, for example, emphasized an exacting and difficult set
of rituals—akin in some ways to the kind of training Aristotle stressed—while later
thinkers such as Laozi reacted against this approach and recommended instead let-
ting go of any effort to learn how to act effortlessly.
But for all of this, spontaneity has many profound and dangerous vices, and it is
surely wrong to think that acting with abandon, just as such, promotes a well-led life.
This is the problem with the way in which these ideas have percolated into the con-
temporary American imagination, from Obi-Wan Kenobi teaching Luke Skywalker
to “trust your feelings,” to the Nike corporation motivating athletes to “just do it,”
to Malcolm Gladwell (2007) touting “the power of thinking without thinking.”
One could easily come away from these venues thinking all too straightforwardly, “I
should just stop thinking so much and trust all my instincts!”
In some ways, we all already know this would be a terrible idea. An understand-
ing of the dangers of acting on whims and gut feelings is embedded in common
folk attitudes (e.g., are there any parents who haven’t implored their misbehaving
kid to “think before you act”?). It is also embedded in Western cultural and phil-
osophical history (e.g., in Plato’s tripartite account of the soul). Sentiments like
Kahlil Gibran’s—“Your soul is oftentimes a battlefield, upon which your reason and
your judgment wage war against your passion and your appetite” (1923, 50)—are
also familiar. These sentiments have been taken up in toned-down form by research
psychologists too. For example, Keith Payne and Daryl Cameron (2010) channel
Hobbes in characterizing our implicit attitudes as “nasty, brutish, and short-sighted,”
in contrast to our reasoned beliefs and judgments.
Perhaps the most familiar worry is with simply being impulsive. Indeed, we often
call a nondeliberative action spontaneous when it has a good outcome but impul-
sive when it has a bad outcome.20 The costs of impulsivity are well documented.
Walter Mischel’s “marshmallow experiments” are a locus classicus. Beginning with a
group of three- to six-year-olds at the Bing Nursery School at Stanford University in
20. See the discussion in Chapter 5 of experimental research on folk attitudes toward the “deep self.”
the late 1960s, Mischel and colleagues began to test how children delay gratification
by offering them a choice: one marshmallow now or two in fifteen minutes—but
only if the child could wait the fifteen minutes to eat the first marshmallow. What’s
striking about this study only emerged over time: the children who waited for two
marshmallows were better at coping with and responding to stress more than ten
years later, and even had higher SAT scores when they applied to college (Mischel
et al., 1988).
The marshmallow experiments helped give rise to an entire subfield of research
psychology focused on self-control and self-regulation. The objects of study in this
field are familiar. For example, the four chapters dedicated to “Common Problems
with Self-Regulation” in Kathleen Vohs and Roy Baumeister’s Handbook of Self-
Regulation (2011) focus on substance addictions, impulsive eating, impulsive shop-
ping, and attention-deficit/hyperactivity disorder. These are all problems with
impulsivity, and some in the field simply define good self-regulation as the inhibition
of impulses (e.g., Mischel et al., 1988). The consequences of uncontrolled impulses
can be huge, of course (but so too can the consequences of conceptualizing all impulses as things to be controlled, as I discuss later).21
Problems like these with impulsivity are compounded, moreover, when one’s
deliberative capacities are diminished. Behavioral economists have shown, for
example, that people are much more likely to impulsively cheat, even if they iden-
tify themselves as honest people, if they are cognitively depleted (Gino et al., 2011).
This finding is an example of a broad phenomenon known as “ego depletion”
(Baumeister et al., 1998).22 The key idea is that self-control is a limited resource,
akin to a muscle, and that when we stifle impulses or force ourselves to focus on
difficult tasks, we eventually tire out. Our self-regulatory resources being depleted,
we then exhibit poorer than usual self-control on subsequent tasks that require self-
control. Once depleted (by, e.g., suppressing stray thoughts while doing arithme-
tic), individuals are more prone to break their diets (Vohs and Heatherton, 2000),
to drink excessively, even when anticipating a driving test (Muraven et al., 2002),
21. A case in point (of the idea that it is problematic to conceptualize all impulses as things to be controlled) is the marshmallow experiment itself, for which various explanations are available, only some of which have to do with impulsivity. While the most commonly cited explanation of Mischel’s findings focuses on the impulsivity of the children who couldn’t wait for a second marshmallow, another possibility is that the marshmallow experiment gauges the degree to which children trust the experimenter and have faith in a reliable world. See, for instance, Kidd and colleagues (2013). Thanks to Robin Scheffler for pointing out this possibility to me. Of course, the degree to which one trusts others may itself be a product of one’s spontaneous reactions.
22. There is increasing concern about research on ego depletion, however. One recent meta-analysis found zero ego depletion effects across more than two thousand subjects (Hagger et al., 2016). But even some who have taken the reproducibility crisis the most seriously, such as Michael Inzlicht, continue to believe in the basic ego depletion effect. See http://soccco.uni-koeln.de/cscm-2016-debate.html and see the Appendix for discussion of reproducibility in psychological science.
and to feel and express anger (Stucke and Baumeister, 2006). In cases like these,
it seems clear that we would be better at reining in our problematic impulses if our
deliberative capacities weren’t depleted.
In addition to undermining well-being, spontaneity can lead us to act irrationally. For example, in the mid-1990s, AIDS activist and artist Barton Benes caused
controversy with an exhibition called Lethal Weapons, which showed at museums
like the Art Institute of Chicago and the Smithsonian. The exhibition consisted of
ordinary objects filled with Benes’s or others’ HIV-infected blood. In an inter-
view, Benes explained how he came up with the idea for Lethal Weapons.23 He had
experienced the worst of the early days of the AIDS epidemic in the United States,
watching nearly half his friends die of the disease. Benes recalled the rampant fear
of HIV-positive blood and how doctors and nurses would avoid contact with him.
(Benes was HIV-positive, and like most people at the time, he knew that HIV
affected patients’ blood, but didn’t know much else.) While chopping parsley in his
apartment one night, Benes accidentally cut his finger. On seeing his own blood on
his hands, Benes felt a rush of fear. “I was terrified I would infect myself,” he thought.
The fear persisted long enough for Benes to find and wear a pair of rubber gloves,
knowing all the while that his fear of contaminating himself made no sense, since
he already had the disease. Later, Benes saw the experience of irrationally fearing
infection from his own blood as signifying the power of the social fears and cultural
associations with HIV and blood. These fears and associations resided within him
too, despite his own disavowal of them. Lethal Weapons was meant to force these
fears and associations into public view.
Benes’s reaction was spontaneous and irrational. It was irrational in that it
conflicted with what he himself believed (i.e., that he couldn’t infect himself
with HIV). In this sense, Benes’s reaction is structurally similar to other “belief-
discordant” behaviors, in which we react to situations one way, despite seeming
to believe that we ought to react another way. Tamar Szabó Gendler has discussed
a number of vivid examples, including people who hesitate to eat a piece of fudge
molded to resemble feces, despite acknowledging that the ugly fudge has the same
ingredients as a regular piece they had just eaten (2008a, 636); sports fans who
scream at their televisions, despite acknowledging that their shouts can’t tran-
scend space and time in order to affect the game’s outcome (2008b, 553, 559);
and shoppers who are more willing to purchase a scarf that costs $9.99 than one
that costs $10.00, despite not caring about saving a penny (2008a, 662). One of
Gendler’s most striking examples is adapted from the “problem of the precipice”
discussed by Hume, Pascal, and Montaigne. The modern-day equivalent of what
these early modern philosophers imagined exists today in Grand Canyon National
Park (among other places, such as the Willis [Sears] Tower in Chicago). It is a
glass walkway extending seventy feet out from the rim of the canyon called the
“Skywalk.”
Tourists who venture out on the Skywalk act terrified. (Search for “Grand Canyon
Skywalk” on YouTube for evidence.) They shake, act giddy, and walk in that slow,
overly careful way that dogs do when they cross subway grates. But as Gendler
makes clear, there is something strange about these behaviors. Presumably the tour-
ists on the Skywalk believe that they are safe. Would they really venture onto it if they
didn’t? Wouldn’t they accept a bet that they’re not going to fall through the platform?
Wouldn’t you?
Research by Fiery Cushman and colleagues (2012) illuminates similar phenom-
ena in controlled laboratory settings. Diabolically, they ask subjects to perform pre-
tend horrible actions, like smashing a fake hand with a hammer and smacking a
toy baby on a table. Participants in these studies acknowledge that their actions are
pretend; they clearly know that the hand and the baby are just plastic. But they still
find the actions far more aversive than equally unharmful analogous actions, like
smashing a nail with a hammer or smacking a broom on a table.
Some of these examples lead to worse outcomes than others. Nothing particularly
bad happens if you shake and tremble on the Skywalk despite believing that you’re safe.
Even if nothing particularly bad happens as a result, however, the skywalker’s fear and
trembling are irrational from her own perspective. Whatever is guiding her fearful reac-
tion conflicts with her apparent belief that she’s safe. One way to put this is that the
skywalker’s reaction renders her “internally disharmonious” (Gendler, 2008b). Her
action-guiding faculties are out of sync with her truth-taking faculties, causing dishar-
mony within her. Plato might call this a failure to be just, for the just person is one who
“puts himself in order, harmonizes . . . himself . . . [and] becomes entirely one, moderate
and harmonious” (Republic, 443de in Plato, 380 bce/1992; quoted in Gendler, 2008b,
572). Whether or not failing to be just is the right way to put it, I suspect that most
people can relate to the skywalker’s disharmony as some kind of shortcoming, albeit a
minor one.
In other cases, of course, irrational belief-discordant behavior can lead to very
bad outcomes. Arguably this is what Benes’s exhibition was meant to illustrate. The
history (and persistence) of horrific social reactions to HIV and AIDS was very
likely driven, at least in part, by the irrational fears of ordinary people. While HIV
can be lethal, so too can mass fear. Gendler discusses research on implicit bias in a
similar vein. Not only do implicit biases cause us to act irrationally, from the perspec-
tive of our own genuinely held beliefs, but they also cause us to fail to live up to our
moral ends.
A plethora of research on the “moral myopia” (Greene and Haidt, 2002) of spon-
taneous judgment supports a similar conclusion. Consider perceptions of “moral
luck.” Most people believe that there is a significant moral difference between
intended and accidental actions, such that accidental harms ought to be judged or
punished more leniently than intended harms (or not at all).24 But when the out-
comes of accidents are bad enough, people often feel and act contrary to this belief,
and seem to do so on the basis of strong, and seemingly irrational, gut feelings.
Justin Martin and Cushman (2016) describe the case of Cynthia Garcia-Cisneros,
who drove through a pile of leaves, only to find out later (from watching the news
on TV) that two children had been hiding under the pile. The children were tragi-
cally killed. Martin and Cushman described this case to participants in a study and
found that 94% of participants thought that Garcia-Cisneros should be punished.
But when another group heard the same story, except they were told that the bumps
Garcia-Cisneros felt under her car turned out to be sticks, 85% thought that she
deserved no punishment. This seems irrational, given that it was a matter of pure
bad luck that children were hiding in the leaves. Moreover, even though this particu-
lar study focused on participants’ considered verbal judgments, it seems likely that
those judgments were the product of brute moral intuitions. When experimental
participants are placed under cognitive load, and are thus less able to make delib-
erative judgments (compared to when they are not under load), they tend to judge
accidental harms more harshly, as if those harms were caused intentionally (Buon
et al., 2013). This is not due to a failure to detect harm doers’ innocent intentions.
Rather, it seems that people rely on an intuitive “bad outcome = punishment” heu-
ristic when they are under load.25
Experimental manipulations of all sorts create analogous effects, in the sense of
showing how easily our intuitive nondeliberative judgments and actions can “get
it wrong,” morally speaking. People tend to judge minor negative actions more
harshly when they are primed to feel disgust, for example (Wheatley and Haidt,
2005). Worse still, judges in Israel were shown to render harsher verdicts before
lunch than after (Danziger et al., 2011). Both of these are examples of delibera-
tive judgments going awry when agents’ (sometimes literal) gut feelings get things
wrong. As Mahzarin Banaji (2013) put it, summarizing a half century of research on
“bounded rationality” like this, we are smart compared with some species, but not
smart enough compared with our own standards of rationality and morality.26 Often
we fail to live up to these standards because we are the kinds of creatures for whom
spontaneity can be a potent vice.
24. Although see Velleman (2015) for a discussion of interesting cross-cultural data suggesting that not all cultures distinguish between intended and accidental harms.
25. Martin and Cushman (2016) argue that judgments about moral luck are actually rationally adaptive. See also Kumar (2017) for thoughtful discussion.
26. In other words, we are error-prone in surprising ways, by our own standards of rationality and morality, for example, even when we are experts in a domain or when we have perfectly good intentions. See Kahneman (2003) for a summary of the many “deficits” of spontaneous and impulsive decision-making. For related discussion of the moral pitfalls of acting on the basis of heuristic judgments, see Sunstein (2005). For a brief reply to Sunstein that recommends a research framework similar in some respects to what I present in this book, see Bartsch and Wright (2005).
2. Implicitness
I have suggested that understanding these virtues and vices of spontaneity requires
understanding the implicit mind. But what does “implicit” mean? Ordinarily, for
something to be implicit is, roughly speaking, for it to be communicated indirectly.27
When a friend repeatedly fails to return your calls, disrespect may be implicit in her
inaction. If she returns your calls, but just to tell you that you’re not worth her time,
then her disrespect is explicit. The disrespect is obvious, intended, and aboveboard.
You don’t have to infer it from anything.28 Similarly, if I won’t make eye contact with
you, my unfocused gaze suggests that perhaps I feel uncomfortable. My discomfort
is implicit in my lack of eye contact. Or imagine a terrific painter. Her deep under-
standing of color, texture, shape, and so on may be implicit in her work. We infer
her skill from what she does on the canvas, notwithstanding whatever she might or
might not say explicitly about her paintings.
The distinction between explicit and implicit may be familiar in these examples,
but its nature is not clear. At least three senses seem to be implied when we say that
so-and-so’s feelings, thoughts, or skills are implicit in her behavior:
27. The Oxford English Dictionary offers two additional common uses: (1) essentially connected with (e.g., “the values implicit in the school ethos”); and (2) absolute (e.g., “an implicit faith in God”). See https://en.oxforddictionaries.com/definition/implicit.
28. I’m not referring to inference in any technical sense. Understanding explicit assertions, like “You’re not worth my time,” may require inference-making.
29. Of course, posture and eye contact and so on are forms of communication, and sometimes they can convey a message clearly.
feeling sad or angry without knowing it at the time? This way of thinking
might also elucidate cases in which people seemingly can’t articulate their
feelings or thoughts. Perhaps the painter can’t express her understanding
of how to paint simply because she is unaware of how she paints the way
she does. She isn’t choosing not to say. She can’t explain it because perhaps
it’s outside the ambit of her self-awareness.
Implicit = automatic: A final idea is that while explicit mental states are
under the agent’s control, implicit states are automatic. That I avoid eye
contact with you seems automatic in the sense that at no point did I form
the explicit intention to “avoid eye contact.” In the right circumstances,
with the right cues, it just happens. It might even be the case that if I set
myself the intention to make eye contact with you, I’ll end up avoiding
your eyes all the more, in just the way that you’ll automatically think of a
white bear if you try not to think of one (Wegner et al., 1987). Similarly,
the painter might be able to paint successfully only when “letting it flow,”
giving up the effort to enact a premeditated plan.
Each of these senses of implicitness draws upon a long history, running from
Plato’s and St. Augustine’s ideas about parts of the soul and self-knowledge, through
Descartes’s and Leibniz’s ideas about differences between human and nonhuman
animal minds, to contemporary dual-systems psychology.30 Often these senses of
what it is for something to be implicit overlap, for example, when I fail to articulate
a mood that I’m unaware of being in. I also take it that while implicit mental states
are often unarticulated, unconscious, and/or automatic, sometimes they are artic-
ulated, conscious, and, to some extent, controlled. (Similarly, in my view, explicit
mental states can sometimes be unarticulated, unconscious, and automatic.) For
these reasons, moving forward, I do not define implicit mental states as being unar-
ticulated, unconscious, or automatic, although they often have some combination
of these qualities. Rather, as I describe below (§§3.1 and 3.2), I define implicit men-
tal states in terms of their cognitive components and their distinctive place in psy-
chological architecture.31
What stands out about the history of ideas underlying implicitness, particularly
in the twentieth century, are the striking parallels between streams of research that
focused, more or less exclusively, on either the virtues or the vices of spontaneity.
Consider two examples.
30. For insightful discussion of the history of dichotomous views of the mind, see Frankish and Evans (2009).
31. I do, however, stress the automaticity of certain processes within implicit states. See Chapters 2 and 3. Note also that I do not offer a companion account of explicit mental states. Given that I argue, in Chapter 3, that implicit mental states are not beliefs, I am happy to consider explicit mental states beliefs, or belief-like, and to rely upon an established theory of belief. See Chapter 3.
Beginning roughly in the middle of the twentieth century, two bodies of research
focused on the phenomenon of, as Michael Polanyi put it, “knowing more than we
can tell.” In The Tacit Dimension (1966/2009), Polanyi argued that it is a pervasive
fact about human beings that we can do many things skillfully without being able
to articulate how we do them (cf. Walter Payton’s claim in §1). Human beings are
adept at recognizing faces, for example, but few of us can say anything the least bit
informative about how we do this. Polanyi’s focal examples were scientists, artists,
and athletes who exemplified certain abilities but nevertheless often lacked an artic-
ulate understanding of those abilities. Contemporary research in the tacit knowl-
edge tradition similarly focuses on skillful but unarticulated abilities, such as the
tacit knowledge of laboratory workers in paleontology labs (Wylie, 2015).
For Gilbert Ryle (1949) and for “anti-intellectualists” writing in his wake, the
reason that people often know more than they can tell is that knowing how to do
something (e.g., knowing how to paint) is irreducible to knowing a set of propo-
sitions (e.g., knowing that red mixed with white makes pink). Ryle’s concept of
“knowledge-how,” the distinct species of knowledge irreducible to “knowledge-
that,” exerts a greater influence on contemporary philosophy than Polonyi’s concept
of tacit knowledge. Perhaps this is in part due to Ryle’s stronger claim, namely, that
knowledge-how isn’t simply unarticulated, but is in fact unavailable for linguistic
articulation.
For most contemporary psychologists, however, the phrase “knowing more
than we can tell” is probably associated with Richard Nisbett and Timothy Wilson’s
classic paper, the title of which tellingly (though unintentionally, I assume) inverts
Polanyi’s phrase. In “Telling More than We Can Know: Verbal Reports on Mental
Processes” (1977), Nisbett and Wilson captured a less admirable side of what peo-
ple often do when they don’t know how or why they do things: they make shit up.
In the case of Nisbett and Wilson’s seminal study, people confabulated their reasons
for choosing a nightgown or pair of pantyhose from a set of options (see discussion
in Chapter 6). Implied in research like this on “choice blindness” and confabulation
is that these phenomena lead people to make decisions for suboptimal reasons. This
study and the large body of research it inspired, in other words, take the potential
gulf between the reasons for which we act and the reasons we articulate to explain
our actions to be a problem. A vice.
What in the tacit knowledge and knowledge-how traditions appears as skillful
activity without accompanying articulation appears in the confabulation literature
as a source of poor decision-making with confabulation covering its trail. Very sim-
ilar phenomena are described in these streams of research, the salient difference
between them being the normative status of the objects of study.
A second example stems from twentieth-century theories of learning. William
James’s (1890) writings on habit arguably instantiated a century-long focus on the
idea that learning involves the automatization of sequences of information. William
Bryan and Noble Harter’s (1899) study of telegraphers exemplified this idea. At
first, they observed, telegraphers had to search for the letters effortfully. As they
improved, they could search for whole words. Then they could search for phrases
and sentences, paying no attention to individual letters and words. Finally, they
could search for the meaning of the message. This idea—of learning by making auto-
matic more and more sequences of information—was taken up in several distinct
streams of research. In the 1960s, Arthur Reber developed an influential account
of “implicit learning,” the most well-known aspect of which was the artificial gram-
mar learning paradigm. What Reber (1967) showed was that participants who were
presented with strings of letters to memorize automatically absorbed grammatical
rules embedded in those strings (unbeknownst to them).32 In his account of “skilled
coping,” which he used to critique the idea of artificial intelligence, Hubert Dreyfus
(2002a,b, 2005, 2007a,b) described the development of expertise in strikingly simi-
lar ways to Bryan and Harter (1899). George Miller (1956), in his celebrated paper
on “the magical number seven, plus or minus two,” gave this process—of the autom-
atization of action through grouping sequences of action—the name “chunking.”
Some even applied the basic idea to an account of human progress. Alfred North
Whitehead wrote:
32. Reber (1989) called this implicit learning a form of tacit knowledge.
“overlearned” the location of the keys, many social judgments and behaviors are
the result of repeated exposure to common beliefs about social groups. The semi-
nal work of Russell Fazio and colleagues showing that attitudes about social groups
can be activated automatically was influenced by research in cognitive psychology
like Shiffrin and Schneider’s. Fazio’s (1995) “sequential priming” technique mea-
sures social attitudes by timing people’s reactions (or “response latencies”) to stereotypic words (e.g., “lazy,” “nurturing”) after exposing them to social group labels
(e.g., “black,” “women”). Most people are significantly faster to identify a word like
“lazy” in a word scramble after being exposed to the word “black” (compared with
“white”). A faster reaction of this kind is thought to indicate a relatively automatic
association between “lazy” and “black.” The strength of this association is taken to
reflect how well learned it is.
Here, too, the core idea—implicit learning understood in terms of automaticity—
gives rise to structurally similar research on the virtues and vices of spontaneity.
What follows, then, constitutes a hypothesis. The hypothesis is that we can come to
have a better understanding of the virtues and vices of spontaneity by thinking of
both as reflecting the implicit mind.
improving these states. Finally, in Chapter 8, I offer a short conclusion, noting what
I take to be the book’s most significant upshots as well as key unanswered questions,
and in the Appendix, I describe measures like the IAT in more detail and discuss
challenges facing IAT research, as well as challenges facing psychological science
more broadly.
3.1 Mind
I begin in Chapter 2 with a description of the cognitive components of the mental
states that are implicated in paradigmatic cases of spontaneous action. I describe
four components: the perception of a salient Feature in the ambient environment;
the experience of bodily Tension; a Behavioral response; and (in cases of success)
a felt sense of Alleviation.33 States with these “FTBA” components causally explain
many of our spontaneous inclinations. Identifying these components, inter alia,
draws bounds around the class of spontaneous actions with which I’ll be concerned.
First, I characterize the perception of salient features in the environment in
terms of their “imperatival quality.” Features direct agents to behave in a particu-
lar way immediately, in the sense that an imperative—for example, “Heads up!”—
commands a particular response. I offer an account of how ambient features of the
environment can be imperatival in this way. Second, I argue that the feelings that
are activated by an agent noticing a salient feature in the environment paradigmati-
cally emerge into an agent’s awareness as a subtle felt tension. I describe this feel-
ing of tension as “perceptual unquiet.” Felt tension in this sense can, but does not
necessarily, emerge into an agent’s focal awareness. I describe the physiological cor-
relates of felt tension, which are explicable in terms of the processes of low-level
affect that play a core role in guiding perception. Third, I argue that tension in this
sense inclines an agent toward a particular behavior. This is why it is “tension” rather
than simply affect; it is akin to feeling (perhaps subtly, perhaps strongly) pulled or
pushed into a particular course of action. At this stage in the discussion I do not
distinguish between actions, properly speaking, and behavior (but see Chapter 5).
The important claim at this point is that particular features and particular feelings
are tightly tied to particular behaviors. They are, as I discuss later, “co-activating.”
Features, tensions, and behaviors “cluster” together, in other words, and their tight
clustering provides much of their explanatory power. The fourth component of
FTBA states is “alleviation.” When a spontaneous action is successful, felt tension
subsides and the agent’s attention is freed up for the next task. Alleviation obtains
when an action is successfully completed, in other words, and it doesn’t when the
agent isn’t successful. My account of this process of alleviation is rooted in theories
33. My account of FTBA states is most directly inspired by Gendler’s (2008a,b) account of the
3.2 Self
How should we relate to our own implicit minds? This is a hard question given that
implicit attitudes are unlike ordinary “personal” states like beliefs, hopes, and so
on. But implicit attitudes are also not mere forces that act upon us. This ambiguity
renders questions of praise, blame, and responsibility for spontaneity hard to assess.
For example, do people deserve blame for acting on the basis of implicit biases, even
when they disavow those biases and work to change them? Analogous questions are
no less pressing in the case of virtuous spontaneity. When the athlete or the artist
or the ethical phronimos performs well, she often seems to do so automatically and
with little self-understanding. Recall Walter Payton’s statement that he “just acts”
on the field. Payton illuminates the way in which some spontaneous actions seem
to flow through us, in a way that seems distinct from both “activity” and “passivity”
in the philosophy of action.
In Chapter 4, I adopt an attributionist approach to the relationship between
action and the self. Attributionism is broadly concerned with understanding the
conditions under which actions (or attitudes) reflect upon the self.34 Attributionist
theories typically argue that actions can reflect upon the self even when those
actions are outside an agent’s control, awareness, and even, in some cases, reasons-
responsiveness. This makes attributionism well suited to the idea that spontaneous
action might reflect upon who one is. I argue for a “care-based” conception of attri-
butionism, according to which actions that reflect upon the self are those that reflect
upon something the agent cares about. More specifically, I argue for an affective
account of caring, according to which cares are tightly connected to dispositions to
34. My focus will be on attributability for actions, not attitudes as such.
have particular kinds of complex feelings. To care about something, on this view, is
to be affectively and motivationally tethered to it. To care about the Philadelphia
Phillies, for instance, is to be disposed to be moved in various ways when they win
or lose. My care-based account of attributability is very inclusive. I argue that a great
many of our actions—including those caused by our implicit attitudes—reflect
upon our cares. The result is that implicit attitudes are indeed “ours,” and as such
they open us to “aretaic” appraisals—evaluations of us in our capacity as agents.
It is important to note that my focus in Chapter 4 is not on making a metaphysical
claim about the ultimate nature of the self. The self composed of one’s cares is not a
thing inside one’s body; it is not the Freudian unconscious. Rather, it is a concept to be
used for distinguishing actions that are mine, in the attributability sense, from actions
that are not mine. My use of the term “self” will therefore depart from classical philo-
sophical conceptions of the self, as found in Hume, for instance. Moreover, in the sense
I will use it, the self is typically far from unified. It does not offer a conclusive “final say”
about who one is. I clarify the relationship of the self in this sense to what some theo-
rists call the “deep self.”
The upshot of Chapter 4 is that many more actions, and kinds of action, open us to
aretaic appraisal than other theorists have suggested. Spontaneous actions that conflict
with what we believe and intend to do, that unfold outside our focal awareness and
even outside our direct self-control, still signal to others what kinds of people we are.
Chapter 5 puts these claims into dialogue with received theories of attributionism and
responsibility. What does it mean to say that we are open to praise and blame for actions
that we can’t control? What are the implications of thinking of the self as fundamentally
disunified? In answering these kinds of questions, I respond to potential objections to
my admittedly unusual conception of actions that reflect on the self and situate it with
respect to more familiar ways of thinking about agency and responsibility.
3.3 Ethics
In the broadest terms, Chapters 2 and 3 offer a characterization of the implicit
mind, and Chapters 4 and 5 argue that the implicit mind is a relatively “personal”
phenomenon, in the sense that it reflects upon us as agents. Chapters 6 and 7 then
investigate the ethics of spontaneity by considering both what works to change our
implicit attitudes and how to best conceptualize the most effective approaches.
In Chapter 6, I argue that deliberation is neither necessary nor always warranted
in the effort to improve the ethical standing of our implicit attitudes. In part this
stems from the fact that often we cannot deliberately choose to be spontaneous.
In some cases, this is obvious, as the irony of the Slate headline “Hillary Clinton
Hatches Plan to Be More Spontaneous” makes clear.35 Even in one’s own life,
reports_she_will_show_morehumor_and_heart.html.
when we encounter the right cues, and they do not require an agent’s awareness or
endorsement. Huang and Bargh draw from some of the same research I discussed
earlier, but cast it in terms that sever the sprigs of human action from human agents.
They focus on studies, as they describe them, that highlight “external influences that
override internal sources of control such as self-values and personality” and models
that “[eliminate] the need for an agentic ‘self ’ in the selection of all behavioral and
judgmental responses” (2014, 122–123).
One way to summarize this is to say, borrowing an apt phrase from Kim Sterelny
(2001), that we are “Nike organisms.” Much of what we do, we just do. This is not
to say that our actions are simple or divorced from long processes of enculturation
and learning. But when we act, we very often act spontaneously. One question at the
heart of this book is whether being Nike organisms is thoroughly ominous in the
way that Huang and Bargh and many others suggest.
A broad account of spontaneity—focused on both its virtues and vices—resists
this kind of ominous interpretation. The result is a book that is pluralist in spirit and
strategy. It is pluralist about what makes a particular kind of mental state a potentially
good guide for action. One way to put this is that mental states can be “intelligent”
without displaying the hallmarks of rational intelligence (such as aptness to inferen-
tial processing and sensitivity to logic, as I discuss in Chapter 3). It is also pluralist
about what makes an action “mine” and about the conditions of praise and blame
for actions. On the view I develop, the self may be conflicted and fractured, because
we often care about incompatible things. Folk practices of praise and blame mirror
this fact. And, finally, the book is pluralist about sound routes to ethical action. The
sorts of self-cultivation techniques I discuss are thoroughly context-bound; what
works in one situation may backfire in another. This means that there may be no
substantive, universally applicable principles for how to cultivate our spontaneous
inclinations or for when to trust them. But this does not foreclose the possibility of
improving them.
Overall, this pluralism may make for a messy story. Nevertheless, the payoffs
are worthwhile. No one (to my knowledge) has offered a broad account of the
structure of implicit attitudes, how they relate to the self, and how we can most
effectively regulate them.36 This account, in turn, contributes to our progress on a
number of pressing questions in the philosophical vicinity, such as how to concep-
tualize implicit/automatic/unconscious/“System 1”-type mental systems and how
to appropriately praise and blame agents for their spontaneous inclinations. Most
broadly, it promises to demonstrate how we might enjoy the virtues of spontaneity
while minimizing its vices.
36. Thus while I will consider many friendly and rival theories to the various parts of my account,
The progress I make can be measured, perhaps, along two concurrent levels,
one broad and the other narrow.37 The broad level is represented by my claim that
understanding spontaneity’s virtues and vices requires understanding what I call
the implicit mind. The narrow level is represented by my core specific claims about
implicit attitudes, their place in relation to the self, and the ethics of spontaneity.
I hope that these narrow claims promote the broad claim. However, it’s possible that
I’m right on the broad level, even if some of the specific views I advance turn out to
be mistaken, for either conceptual or empirical reasons (or both). I’ll be satisfied by
simply moving the conversation forward here. The concept of the implicit mind is
here to stay, which makes the effort of trying to understand it worthwhile.
37. I borrow this broad-and-narrow framework for thinking about the promise of a philosophical project from Andy Clark (2015).
PA RT O N E
MIND
2
In 2003, near the Brackettville Station north of the Rio Grande, a writer for the
New Yorker named Jeff Tietz followed US Border Patrol agent Mike McCarson
as he “cut sign” across the desert. “Cutting sign” means tracking human or animal
movement across terrain. McCarson’s job was to cut sign of people trying to cross
the desert that separates Mexico from the United States. His jurisdiction covered
twenty-five hundred square miles. To cover that much land, McCarson had to move
quickly and efficiently. Tietz describes his experience:
was a cow path. But McCarson liked some physical attribute, and the rela-
tive arrangement and general positioning of the impressions. Maybe he
could have stood there and parsed factors—probably, although you never
stop on live trails—but he wouldn’t have had any words to describe them,
because there are no words. (Tietz, 2004, 99).
Cutting sign isn’t entirely spontaneous. McCarson’s skill is highly cultivated and,
moreover, it is deployed in service of an explicit (and perhaps objectionable) goal.1
Yet Tietz’s description of McCarson highlights key features of unplanned, on-the-
fly decision-making. In particular, the description contains four features that point
toward the cognitive and affective components of the mental states that I will argue
are distinctive of the implicit mind.
First, McCarson “sees” what to do in virtue of extremely subtle signs in the envi-
ronment, like barely visible pressings in the dirt. Moreover, he has minimal, if any,
ability to articulate what it is he sees or what any particular sign means. This does
not mean that McCarson’s perceptual expertise is magic, despite the implication
of Tietz’s term “sorcerous disturbance.” McCarson’s is not another myth of the
chicken-sexer, whom once upon a time philosophers thought could tell male from
female chicks through some act of seeming divination. Rather, McCarson notes
clues and sees evidence, but he does so at a level of sensitivity far greater than that of almost anyone else.
Second, McCarson’s perceptual expertise is affective, and in a distinctive way. He
seems to “see by feeling,” saying nothing more than that he “likes” the way something
in the terrain looks. Like the cues McCarson notices in the environment, however,
these action-guiding feelings are extremely subtle. They are a far cry from full-blown
emotions. Tietz’s term “perceptual unquiet” captures this nicely. On seeing the right
kind of disturbance, McCarson seems simply pulled one way or the other.
This experience of being pulled one way or the other leads to the third feature
of spontaneity that this vignette illuminates. McCarson’s particular perceptual and
affective experiences are tied to particular behavioral reactions. On the one hand,
McCarson just reacts, almost reflexively, upon seeing and feeling the right things.
On the other hand, though, his reactions are anything but random. They are coordi-
nated and attuned to his environment, even though McCarson never stops to “parse
factors.” That is, McCarson cuts sign nondeliberatively, yet expertly.
This is made possible by the fourth key feature, which is illuminated by Tietz’s
own experience of seeing sign. On the fly, spontaneous action involves distinctive
ways of learning. McCarson’s expertise has been honed over years through feedback
from trial and error, a process the beginning of which Tietz just barely experiences.
1 I find McCarson’s skill impressive, and, as I discuss, it exemplifies features of the implicit mind. However, I find the way he uses this skill—to enforce the immigration policies of the United States by hunting desperate people as if they were a kind of prey—deeply unsettling.
My account of the implicit mind begins with a description of these four com-
ponents of unplanned spontaneous inclinations. These are (1) noticing a salient
Feature in the ambient environment; (2) feeling an immediate, directed, and affec-
tive Tension; (3) reacting Behaviorally; and (4) moving toward Alleviation of that
tension in such a way that one’s spontaneous reactions can improve over time.
Noticing a salient feature (F), in other words, sets a relatively automatic process in
motion, involving co-activating particular feelings (T) and behaviors (B) that either
will or will not diminish over time (A), depending on the success of the action.
This temporal interplay of FTBA relata is dynamic, in the sense that every compo-
nent of the system affects, and is affected by, every other component.2 Crucially, this
dynamic process involves adjustments on the basis of feedback, such that agents’
FTBA reactions can improve over time.
By describing these components, I carve off the kinds of spontaneous actions
they lead to from more deliberative, reflective forms of action. The kinds of sponta-
neous actions with which I’m concerned, in other words, paradigmatically involve
mental states with the components I describe here. These components, and their
particular interrelationships, make up a rough functional profile.
I do not offer what is needed for this conclusion in this chapter alone. In this
chapter, I describe the FTBA components of spontaneous inclinations. These are
the psychological components of the phenomenon of interest in this book, in
other words. I describe how they interact and change over time. I also ground these
descriptions in relevant philosophical and psychological literature, noting alterna-
tive interpretations of the phenomena I describe. I do not, however, offer an argu-
ment in this chapter for their psychological nature. I do this in the next chapter,
where I argue that these FTBA components make up a particular kind of unified
mental state, which I call an implicit attitude. It could be, given the description I give
in this chapter alone, that the FTBA relata represent a collection of co-occurring
thoughts, feelings, and behavior. The argument against this possibility—that is,
the argument for the claim that these FTBA components make up a unified men-
tal state—comes in the next chapter, where I locate states with these components
in cognitive architecture. This and the next chapter combined, then, give the argu-
ment for a capacious conception of implicit attitudes as states that have these FTBA
components.3
2 Clark (1999) offers the example of returning serve in tennis to illustrate dynamic systems: “The location[s] of the ball and the other player (and perhaps your partner, in doubles) are constantly changing, and simultaneously, you are moving and acting, which is affecting the other players, and so on. Put simply . . . [in dynamic systems] . . . ‘everything is simultaneously affecting everything else’ (Port and Van Gelder, 1995, 23).”
3 For ease of exposition, I will occasionally in this chapter distinguish states with FTBA components from an agent’s reflective beliefs and judgments. I beg the reader’s indulgence for doing so, as my argument that states with FTBA components are not beliefs comes in Chapter 3.
While I do not focus on dual-systems theories as such, I do think that it would be
nearly impossible to understand the human mind without distinguishing between
our spontaneous inclinations and dispositions, on the one hand, and our more
deliberative and reflective attitudes, on the other.5 This and the next chapter com-
bined aim to capture the distinctiveness of the first of these fundamental features
of human minds, the unplanned and immediate thoughts, feelings, and actions that
pervade our lives.
1. Features
Features are properties of the objects of perception that make certain possibilities
for action attractive. When something is seen as a feature, it has an imperatival qual-
ity. Features command action, acting as imperatives for agents, in a sense I try to
clarify in this section. In the next section (§2), I discuss the crucial affective element
of this command. Here I aim to do three things. First, I briefly clarify what I mean
by saying that agents perceive stimuli in the ambient environment in an imperatival way (i.e., perceive stimuli as features). In other words, I characterize the role of features, which is to tie perception closely to action (§1.1). Second, I argue that stimuli in the ambient environment are registered as features in virtue of the agent’s goals and cares, as well as the context (§1.2). Third, I consider what it is like, in perceptual experience, to encounter features (§1.3).
4 See Fridland (2015) for a thoughtful discussion of the penetration of spontaneous and automatic features of the mind by reflective states and vice versa.
5 The FTBA components I describe clearly share key elements with so-called System 1 processes. Should my account of spontaneous thought and action turn out, inter alia, to clarify dual-systems theories, so much the better.
7 See Noë (2006), Kelly (2005, 2010), Nanay (2013), and Orlandi (2014).
Suppose all the objects in my world have little labels on them telling me
what I ought to do in response to them. My bed says “to-be-slept-in,”
and my chair says “to-be-sat-upon.” All the radios say “to-be-turned-
on.” . . . [But t]his is different from seeing the object and then seeing an
imperative attached to the object. When I am inclined, my attention is nec-
essarily drawn to certain features of the object that appear to me as practi-
cally salient. If I have an inclination to turn on radios, this must involve
seeing radios as to-be-turned-on in virtue of certain features of radios and
of the action . . . [and such] features [must be] practically salient from the
perspective of one who has this disposition. (2009, 252)
account of cares as dispositions in a related sense. The relevant point here, though,
is that the museumgoer’s noticing the sign arises in virtue of some practical connec-
tion she has to it, through her desires, her goals, or, even in the absence of these, her
dispositional cares. If she is not so connected to the sign in a practical sense, she will
likely not notice it at all or she’ll see it in barer representational terms (i.e., it won’t
be imperatival).8
Context also plays a key role in determining whether and when stimuli become
features for agents. In a chaotic space, the museumgoer may be less likely to notice
the restroom sign. If she is with small children, who frequently need to go, she
might be more likely to notice it. These sorts of contextual effects are pervasive
in the empirical literature. Indirect measures of attitudes, like the IAT, are highly
context-sensitive (e.g., Wittenbrink et al., 2001; Blair, 2002; see the discussion in
Chapter 7). I return to this example in more depth in §5.9
8 These observations are consistent with findings that there are tight interactions between the neural systems subserving “endogenous” attention—attention activated in a top-down manner by internal states like goals and desires—and “exogenous” attention—attention activated in a bottom-up manner by environmental stimuli. See Corbetta and Shulman (2002) and Wu (2014b). Note also that Erik Rietveld and Fabio Paglieri (2012) present evidence suggesting that UB is the result of damage to the neural regions subserving endogenously driven attention. This, too, suggests that in the ordinary case features become imperatival for agents in virtue of connections between stimuli and agents’ goals and cares (due to interaction between the neural systems subserving endogenous and exogenous attention).
9 Jennifer Eberhardt and colleagues make a similar point in discussing the ways in which implicit bias affects perception, writing that “bidirectional associations operate as visual tuning devices by determining the perceptual relevance of stimuli in the physical environment” (2004, 877). See §5 for more on FTBA states and implicit bias. By “bidirectional,” Eberhardt means that social groups and concepts prime each other. For example, exposure to black faces increases the perceptual salience of crime-related objects, and exposure to images of crime-related objects also increases the perceptual salience of black faces (Eberhardt et al., 2004). For a fascinating discussion of the neural implementation of the way in which task-relevant intention affects the perception of features, see Wu (2015).
10 Thomas Stoffregen (2003), for example, argues that affordances are not just embedded “in” environments; they are also properties of “animal-environment systems.” Chemero (2009) claims that affordances are relations between aspects of agents, which he calls “abilities,” and features of “whole” situations (which include agents). Michael Turvey and colleagues (1981) suggest that affordances are dispositional properties of objects complemented by dispositional properties of agents, which they call “effectivities,” which are in turn explained by niche-specific ecological laws.
looking angry), and kinds (e.g., being a person or a bird). Rich property theorists
hold that we perceive these properties themselves, that is, not just by way of perceiv-
ing thin properties. For example, we may see a face as angry, rather than judging or
inferring anger from the shape, motion, and so on of the face. There are several rea-
sons to think that rich properties are part of perceptual experience.11 These include
the fact that the rich properties view is consistent with ordinary perceptual phenom-
enology, whereas the thin properties view must explain how agents get from being
presented solely with properties like shape, motion, and the like to seeming to see
anger in a face. The rich property view also appears to be supported by research in
developmental psychology suggesting that infants perceive causality (Carey, 2009).
Finally, the rich property view appears to explain perceptual learning, the effect of which is that things look different to you (in vision) if you’ve learned about them. It is plausible to think that a curveball, for example, looks one way to a Major
League Baseball batter and another way to a novice at the batting cage. And, further,
it seems plausible that this difference is due to the way in which the pitch looks hit-
table (or not), not due to changes in thin properties alone. Perceptual learning is
apropos in vice cases of spontaneity too, where, for example, the color of a person’s
skin comes to be automatically associated with certain traits. These associations are
culturally learned, of course, and the explanatory burden would seem to fall on the thin property view to explain how, when blackness, for example, becomes salient, it does so only through certain thin properties coming to be salient.
The rich property of features is the way in which they are perceived as to-be-φed.
Some theorists suggest that perceiving things in roughly this way—as imperatival—
precludes the possibility that we ever see things descriptively. Imperatives exhaust
perceptual experience, on this view. According to what I’ll call “Exhaust,” when you
see the restroom sign, you don’t see any thin properties like rectangular or white and
red. Instead, all that is presented in your perceptual experience is the action-direct-
ing content of the sign. Your experience is of something like a force compelling you
to head in the direction of the bathroom. Dreyfus’s (2002a,b, 2005, 2007a,b) influ-
ential interpretation of “skilled coping” seems to subscribe to Exhaust. On Dreyfus’s
view, which he in turn derives from Maurice Merleau-Ponty (1962/2012), at least
some kinds of action entail perceiving nothing but action-directing attractive and
repellent forces. For example, Dreyfus writes,
consider a tennis swing. If one is a beginner or is off one’s form one might
find oneself making an effort to keep one’s eye on the ball, keep the racket
perpendicular to the court, hit the ball squarely, etc. But if one is expert
at the game, things are going well, and one is absorbed in the game, what
one experiences is more like one’s arm going up and its being drawn to the appropriate position, the racket forming the optimal angle with the court—an angle one need not even be aware of—all this so as to complete the gestalt made up of the court, one’s running opponent, and the oncoming ball. One feels that one’s comportment was caused by the perceived conditions in such a way as to reduce a sense of deviation from some satisfactory gestalt. But that final gestalt need not be represented in one’s mind. Indeed, it is not something one could represent. One only senses when one is getting closer or further away from the optimum. (2002b, 378–379)
11 I present the following considerations—which are drawn largely from Siegel and Byrne (2016)—as prima facie reasons to support the rich properties view. For developed arguments in favor of this view, see Siegel (2010) and Nanay (2011).
As I interpret Dreyfus, his claim is that when one is coping intelligently with the fea-
tures of the environment that are relevant to one’s tasks—following signs, opening
doors, navigating around objects—one’s attention is absorbed by an action-guiding
feeling indicating whether things are going well or are not going well. This feeling
exhausts perceptual experience and keeps one on the rails of skilled action, so to
speak. What Dreyfus crucially denies, as I understand him, is any role to represen-
tational content, at least in skillful action-oriented perception, that describes the
world as being a certain way. Dreyfus claims that one only senses one’s distance from
the “optimum gestalt.”12
Exhaust is not entailed by my description of features. To perceive something as
to-be-φed does not preclude seeing that thing as a certain way. To see the painting
as to-be-backed-away-from does not preclude seeing it as rectangular, for example.
Moreover, while I do not want to deny that Exhaust describes a possible experience,
in which one is so absorbed in an activity that all one experiences are deviations and
corrections from a gestalt, I do not think Exhaust offers an informative description
of the perceptual content of paradigmatic cases of spontaneous activity. The basic
12 Of course, Dreyfus doesn’t think that the expert tennis player could play blindfolded. My interpretation of his view is that ordinary thin perceptual properties are registered in some way by the agent, but not represented in perceptual experience. I take Dreyfus’s view to be consistent with the “Zombie Hypothesis,” according to which perception-for-action and perception-for-experience are distinct (see footnote 13). But this is my gloss on Dreyfus and not his own endorsement. Dreyfus is, however, quite clear about eschewing all talk of representations in explaining skilled action (which may leave his view at odds with some formulations of the Zombie Hypothesis). His claim is not just that agents don’t represent the goal of their action consciously, but that the concept of a representation as it is typically understood in cognitive science is explanatorily inert. For example, the first sentence of Dreyfus (2002a) reads, “In this paper I want to show that two important components of intelligent behavior, learning, and skillful action, can be described and explained without recourse to mind or brain representations” (367). What I take Dreyfus to mean, in this and other work (though I note that he is not always consistent), is that in skilled action, perception is saturated by action-imperatives, and action-imperatives are conceptually distinct from mental or neural representational states (explicit or tacit) that could be understood as indicating that things are thus-and-so. In this sense, Dreyfus would use the term “content” differently from me, given that I defined perceptual content earlier in terms of mental representations.
13 Note what might be considered a spin-off of Exhaust, that whatever content there is in imperatival perception plays no role in guiding action. Consider briefly Milner and Goodale’s (1995, 2008) influential dual-streams account of vision, according to which visual processing is divided into two kinds, the ventral stream that informs conscious visual experience and the dorsal stream that directs motor action. Originally, Milner and Goodale supported a strong dissociation of the two streams. This entails what is known as the Zombie Hypothesis (Koch and Crick, 2001; Wu, 2014a): the dorsal stream provides no information contributing to conscious visual experience. Persuasive arguments have been given against the Zombie Hypothesis, however, and Milner and Goodale themselves have backed off from it, recognizing that the ventral and dorsal streams are, to some extent, integrated (Wu, 2014a). What this suggests is that at the level of physical implementation of visual processing, directive information for action in the dorsal stream influences what the objects of visual perception appear to be like, that is, the way vision describes them as being. Assuming the neural locations of the ventral and dorsal streams are correct, then the physical location of integration of information from the two streams might help settle the question of what gets represented in action-oriented perception.
14 Firestone and Scholl show that some of the experiments taken to support Shift have thus far failed to replicate. But Firestone and Scholl (2015) also report many successful replications, which they then reinterpret in terms that do not support Shift.
15 To establish an instance of cognitive penetration, one must hold constant the object being perceived, the perceiving conditions, the state of the agent, and the agent’s attentional focus (Macpherson, 2012). For an elaboration of the claim that one can accept a rich theory of perceptual content without thereby being committed to cognitive penetration or anti-modularism, see Siegel and Byrne (2016), Toribio (2015), and Newen (2016).
that this claim does not deny a role for descriptive content in perceptual experience,
pace Exhaust. And I have also argued that this claim does not entail a claim about
the cognitive penetration of perception. What remains to be seen, though, is how
perceiving features is tied to behavior. For it is one thing to see the big painting
as commanding a particular response and another to actually feel commanded to
respond in that way. Why, for example, does seeing blood as to-be-fled-from make
Benes flee? I now argue that, in paradigmatic cases of spontaneous behavior, affect
plays a crucial role.
2. Tension
Tension refers to the affective component of states with FTBA components, but it
is not a blanket term for any emotion. Tension has two key characteristics: (1) it is
“low-level”; and (2) it is “geared” toward particular behavioral responses.
The term “perceptual unquiet”—used by Tietz to describe the affective quality of McCarson’s perceptual experience—captures the low-level affective quality of tension. The museumgoer sees that things are not quite right, or, as McCarson says, “There’s somethin’ gone wrong there.” This is a kind of “unquiet” in
the sense that things seem out of equilibrium.16 In the context of discussing intui-
tive decision-making about how much money to give to charity, Railton describes
a similar phenomenon:
16 See Kelly (2005) for discussion of motor action and perception being drawn toward equilibrium.
context, positive and negative are not moral terms, but rather are more closely akin
to approach and avoid tendencies.
Second, consider the strength of a felt tension. The museumgoer might feel only
the slightest unease, perhaps in the form of a barely noticed gut feeling.17 This unease
might not enter into her focal awareness, but she feels it nevertheless, in just the
way that one can be in a mood—feeling grumpy or excitable or whatever—without
noticing it.18 Tension in this sense is marked by an array of bodily changes in an
agent’s autonomic nervous system, including changes in cardiopulmonary param-
eters, skin conductance, muscle tone, and endocrine and immune system activities
(Klaassen et al., 2010, 65; Barrett et al., 2007). These changes in the agent’s body
may be precipitating events of full-fledged emotions, but they need not. They are
most likely to become so in jarring or dangerous situations, such as standing on
glass-bottomed platforms and facing diseased blood. It is an open empirical ques-
tion when, or under what conditions, felt tensions emerge into focal awareness.
One possibility is that this emergence is tied to a degree of deviation from ordinary
experience. When the features one encounters are ordinary, one may be less likely
to notice them explicitly. Perhaps the degree to which one notices them is propor-
tional to the degree to which they are unexpected.
It is also important to note that affective and action-directive felt tensions need
not be the product of an evaluative judgment or be consistent with an agent’s evalu-
ative judgments. The museumgoer likely has no commitment to the correctness of
a claim that stepping back from the big painting is the correct thing to do.19 A candy
bar in the checkout lane of the grocery store might call to you, so to speak, pulling
you to grab it, despite your judging that you don’t want to buy it. The dissociation
of felt tension and evaluative judgment can go in the other direction too. You might
think that spoiling yourself a little with a spontaneous impulse purchase is a good
idea, but feel no inclination to buy the candy bar.20
17 James Woodward and John Allman (2007) propose that von Economo neurons (VENs), which are found in humans and great apes but not in other primates, may be integral in the processing underlying these experiences. They write, “The activation of the serotonin 2b receptor on VENs might be related to the capacity of the activity in the stomach and intestines to signal impending danger or punishment (literally ‘gut feelings’) and thus might be an opponent to the dopamine D3 signal of reward expectation. The outcome of these opponent processes could be an evaluation by the VEN of the relative likelihood of punishment versus reward and could contribute to ‘gut feelings’ or intuitive decision-making in a given behavioral context” (2007, 188).
18 On unconscious affect, see Berridge and Winkielman (2003) and Winkielman et al. (2005).
19 These affective states are also typically nonpropositional, and so differ from the concept-laden emotional evaluations that Richard Lazarus (1991) calls “appraisals.” See Colombetti (2007) and Prinz (2004); see also Chapter 4.
20 I take this example from Susanna Siegel (2014).
Research by Lisa Feldman Barrett can be understood as exploring the processes involved in the unfolding of felt tension. She argues that all visually perceived objects have “affective value” and that learning to see means “experiencing visual sensations as value added” (Barrett and Bar, 2009, 1327). Value is defined in terms
of an object’s ability to influence a person’s body, such as her breathing and heart
rate (Barrett and Bar, 2009, 1328). Objects can have positive or negative value,
indicating an approach or avoid response. In a novel way, Barrett’s research extends
models of associative reward-learning—in which the brain assigns values to actions
based on previous outcomes—to the role of affect in visual perception. In short,
when features of the environment become salient, the brain automatically makes
a prediction about their value for us by simulating our previous experience with
similar features (see §4). Barrett calls these predictions “affective representations”
that shape our visual experiences as well as guide immediate action.21 In her words:
When the brain detects visual sensations from the eye in the present
moment, and tries to interpret them by generating a prediction about
what those visual sensations refer to or stand for in the world, it uses not
only previously encountered patterns of sound, touch, smell and tastes, as
well as semantic knowledge. It also uses affective representations—prior
experiences of how those external sensations have influenced internal sen-
sations from the body. Often these representations reach subjective aware-
ness and are experienced as affective feelings, but they need not. (Barrett
and Bar, 2009, 1325)
21 Note that this picture of affective representation does not necessarily entail cognitive penetration of perception. For discussion of the relationship between predictive coding, reward learning, and cognitive penetration, see Macpherson (2016).
22 See footnote 13.
the affective significance of the apple but also prepares the perceiver to
act—to turn away from the apple, to pick it up and bite, or to ignore it,
respectively.23
Barrett’s claim that all visually perceived objects have affective value is consistent
with work on “micro-valences,” or the low-level affective significance with which
perceived entities can be tagged. Sophie Lebrecht and colleagues (2012) also pro-
pose that valence is an intrinsic component of all object perception. On their view,
perceptions of “everyday objects such as chairs and clocks possess a micro-valence
and so are either slightly preferred or anti-preferred” (2012, 2). Striking demonstrations of micro-valences are found in social cognition, for example, in Natasha Flannigan et al.’s (2013) demonstration of the evaluative significance of stereotypes. They find that men and women who are perceived in counterstereotypical
roles (e.g., men nurses, women pilots) are “implicitly bad”; that is, the sheer fact that
these stimuli are counterstereotypical leads individuals to implicitly dislike them.24
My claim, however, is not that all perception is affect-laden. An ultimate theory
of perception should clarify what role affect does and does not play in object rec-
ognition and action initiation. I cannot provide such a theory. It is clear that ordi-
nary cases of explicit dissociation between affect and perception, such as seeing the
candy bar but having no desire for it, don’t cut strongly one way or the other on the
question of the role of affect in perception. For the view of Barrett and colleagues
is that “the affect-cognition divide is grounded in phenomenology” (Duncan and
Barrett, 2007); thus even apparently affectless perception may be affectively laden
after all. Other cases of dissociation may be harder to explain, though. Patients with
severe memory deficits due to Korsakoff ’s syndrome, for example, appear to retain
affective reactions to melodies that they have previously heard, even if they can’t
recognize those melodies; similarly, while they cannot recall the biographical infor-
mation about “good” and “bad” people described in vignettes, they prefer the good
people to the bad people when asked twenty days later (Johnson et al., 1985).25
In this critical vein, it seems unclear, on the basis of the claim that there is no such
thing as affectless perception, how an agent who has “no real affective response” to
an apple because she is satiated (as Barrett and Bar describe in the earlier quote),
recognizes the object as an apple.
For my purposes, what I take Barrett and colleagues’ research to offer is a plau-
sible account of how specific affective values can combine with specific features
23 For more on “affective force,” see Varela and Depraz (2005, 65).
24 This and other findings regarding evaluative stereotypes are inconsistent with the view that “implicit stereotypes” and “implicit prejudices” represent separate constructs, reflecting separate mental processes and neural systems (Amodio and Devine, 2006, 2009; Amodio and Ratner, 2011). See Madva and Brownstein (2016) for critique of this view. See also §5.
25 Thanks to an anonymous reviewer for suggesting this example.
3. Behavior
Perceived features and felt tensions set a range of anatomical and bodily reactions
in motion, including limb movements, changes in posture, and vocalizations. In this
context, behavior refers to motor activity, rather than mental action. The anatomical
and bodily reactions to features and tension I describe feed not only into the explicit behavior of people like Benes, museumgoers, skywalkers, and the like, but also into marginal actions, like nonverbal communicative acts. I consider whether the motor routines set in motion by perceived features and felt tension rise to the level of full-fledged action in Chapters 4 and 5.27
26 I take this point about weapons bias from Madva and Brownstein (2016).
It is clear, though, that the ordinary bodily changes and movements associated
with the action-guiding states of agents in these cases are, at least minimally, integral
parts of coordinated response patterns.28 The perceptual and affective components
of agents’ automatic states in these cases are oriented toward action that will reduce
their subtle feelings of discomfort. Upon seeing the big painting, the museumgoer
feels that something is not quite right and moves in such a way as to retrieve equilib-
rium between herself and her environment.29 Agents’ spontaneous behavioral reac-
tions, in other words, are typically oriented toward the reduction of felt tension. I call
this state of reduced tension “alleviation” (§4). In a crude functional sense, allevia-
tion is akin to the process whereby a thermostat kicks the furnace off. The tempera-
ture that has been set on the thermostat is the optimum point for the system, and
deviations from this point trigger the system to pull the temperature back toward
the optimum, at which point the system rests. Alleviation is the “resting point” with
respect to spontaneous action. The particular bodily changes and motor routines
that unfold in a moment of spontaneity are coordinated to reach this point, which
is signaled by the reduction of felt tension (i.e., a returning to perceptual quiet, so
to speak). When this point is reached, the agent is freed to engage in other activities
and motor routines.
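In this crude functional sense, the analogy can be sketched as a setpoint system. The sketch below is a minimal illustration only; the temperatures, tolerance, and update dynamics are invented for the example:

```python
# A minimal sketch of the thermostat analogy: the system acts only when
# the reading deviates from its optimum and "rests" once the deviation
# is eliminated. All numbers here are illustrative.

def thermostat_step(current_temp, setpoint, furnace_on, tolerance=0.5):
    """Decide whether the furnace should run, given the current reading."""
    if current_temp < setpoint - tolerance:
        return True    # deviation from the optimum triggers corrective action
    if current_temp >= setpoint:
        return False   # optimum reached: the system rests (alleviation)
    return furnace_on  # within tolerance: continue the current behavior

temp, on = 17.0, False
trace = []
for _ in range(10):
    on = thermostat_step(temp, setpoint=20.0, furnace_on=on)
    temp += 1.0 if on else -0.5   # crude dynamics: heat up or drift down
    trace.append(temp)
```

Once the setpoint is reached the furnace shuts off and the temperature simply drifts until the deviation again exceeds the tolerance: the "resting point" of the system.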
The thermostat analogy is inapt, however, because the interactions of FTB relata
are temporally extended and influence each other reciprocally. The salience of a per-
ceived feature of the environment influences the duration and vivacity of tension,
27
As I do not distinguish in this chapter between behavior and action, I also do not distinguish
between mental activity and mental action. In Chapters 4 and 5, I address the conditions under which
behavior reflects upon the character of an agent. This is distinct from—albeit related to—the basic
conditions of agency. To foreshadow: I intend my account of actions that reflect on the character of an
agent to cover both motor and mental action. But here, in §3, I restrict my discussion to motor reac-
tions to perceived features and felt tension.
28
Woodward and Allman (2007) argue that insular cortex is the prime input into this process. They
write, “Just as the insula has the capacity to integrate a large array of complex gustatory experience into
visceral feelings leading to a decision to consume or regurgitate, so fronto-insular cortex integrates a
vast array of implicit social experiences into social intuitions leading to the enhancement or withdrawal
from social contact” (2007, 187).
29
Rietveld (2008) captures the directedness of action-guiding tension by distinguishing what he
calls “directed discontent” from “directed discomfort.” Directed discontent—which Rietveld derives
from an analysis of Ludwig Wittgenstein (1966)—is characterized by affective experiences of attrac-
tion or repulsion that unfold over time and are typically not reportable in propositional form (Klaassen
et al., 2010, 64). States of directed discontent are typically accompanied immediately by perceptions of
opportunities for action (or affordances). Directed discomfort, by contrast, characterizes a “raw, undif-
ferentiated rejection” of one’s situation, which does not give the agent an immediate sense of adequate
alternatives (Rietveld, 2008, 980). The kind of affect I am describing is akin to directed discontent, not
directed discomfort.
which in turn influences the strength of the impulse to act. The salience of features
and vivacity of tension are themselves influenced by how the agent does act. For example,
the degree of salience of Benes’s blood would be influenced by the state of his goals
and cares; if he had just finished watching a fear-inducing news report about HIV,
the sight of his own blood might dominate his attention, setting in motion a strongly
aversive feeling. This pronounced sense of tension would cause him to feel a rela-
tively stronger impulse to act. How he acts—whether he grabs a nearby towel to
stanch the bleeding from his cut or whether he screams and rushes to the sink—will
affect the duration of his tension, and in turn will affect the degree to which the
blood continues to act as an action-imperative to him. Similarly, the museumgoer
who is distracted, perhaps by overhearing a nearby conversation, may be slower to
step back from the large painting, in which case the impulse to step back may per-
sist. Or the museumgoer who encounters the large painting in a narrow hallway
may be unable to step far enough back, thus needing to find other ways to reduce
her felt tension, perhaps by turning around to ignore the painting or by moving on
to the next one. Conflicts in behavioral reactions to felt tensions are possible too.
The frightened moviegoer who simultaneously covers her eyes with her hands but
also peeks through spread fingers to see what’s happening may be reacting to two
opposed felt tensions. How these conflicting tensions are (or are not) resolved will
depend on the success of the agent’s behavioral reaction, as well as the changing
stimuli in the theater.30
Thus while the bodily movements and readjustments involved in spon-
taneity may be relatively isolated from the agent’s reflective beliefs and judg-
ments, they constantly recalibrate as FTB relata dynamically shift in response
to one another over tiny increments of time. This is the process I describe as
Alleviation.
4. Alleviation
States with FTBA components modify themselves by eliminating themselves. In
response to perceived features and felt tensions, agents act to alleviate that ten-
sion by changing their bodily orientation to the world. Gordon Moskowitz and
30
I thank an anonymous reviewer for this last example. The same reviewer asks about the case of
people who are drawn to the feeling of tension. Horror movie aficionados, for example, seem to enjoy a
kind of embodied tension, as perhaps do extreme sports junkies and daredevils. While the psychology
of agents like these is fascinating, I take them to be outliers. Their inclination toward felt tension illumi-
nates the paradigmatic case in which agents’ spontaneous behavioral reactions are aimed at reducing
felt tension. I also suspect that, in some cases at least, agents like these make explicit, unspontaneous
decisions to put themselves in a position in which they will feel extreme tension (e.g., making a delib-
erative decision to skydive). They might then represent an interesting case of discordance between
spontaneous and deliberative inclinations. See Chapter 6 for discussion of such cases.
Emily Balcetis (2014) express a similar notion when talking about goals more
broadly (and offering a critique of Huang and Bargh’s [2014] theory of “selfish
goals”):
[G]oals are not selfish but instead suicidal. Rather than attempting to pre-
serve themselves, goals seek to end themselves. To be sure, goals are strong;
they remain accessible as time passes (Bargh et al. 2001) and do not dissi-
pate if disrupted (e.g., Zeigarnik 1927/1934). However, goal striving decays
quickly after individuals attain their goals (e.g., Cesario et al. 2006; Förster
et al. 2007; Martin & Tesser 2009; Moskowitz 2002; Moskowitz et al. 2011;
Wicklund & Gollwitzer 1982). That is, goals die once completed. They do not
seek to selfishly propagate their own existence, but instead seem compelled to
work toward self-termination and, in so doing, deliver well-being and need-
fulfillment. (2014, 151)
In this sense, states with FTBA components are suicidal. How does this work?
As behavior unfolds, one’s felt sense of rightness or wrongness will change in turn,
depending on whether one's initial response did or did not alleviate that sense
of tension. These feelings suggest an improvement in one’s relation to a perceived envi-
ronmental feature or a relative failure to improve. The temporal unfolding and interplay
between behavior and tension can be represented as a dynamic feedback system. It is a
dynamic system in the sense, as I have said, that every component of the system affects,
and is affected by, every other component. It is a feedback system in the sense that,
ceteris paribus, the agent’s spontaneous inclinations change and improve over time
through a process of learning from rewards—reduced tension, enabling the agent to
notice new features and initiate new behavior—and punishments—persisting tension,
inhibiting the agent’s ability to move on to the next task. Feelings of (un)alleviated ten-
sion feed back into further behavior, as one continues to, say, crane one’s neck or shift
one’s posture in order to get the best view of the painting. This concept of dynamic
feedback is crucial for two reasons: (1) it shows how states with FTBA components
improve over time; and (2) as a result of (1), it shows how these states guide value-
driven action spontaneously, without requiring the agent to deliberate about what to
do. These states are self-alleviating.
Research in computational neuroscience illuminates how spontaneous incli-
nations like these may change and improve over time, both computationally and
neurologically. I refer to “prediction-error” models of learning (Crockett, 2013;
Cushman, 2013; Huebner, 2015; Seligman et al., 2013; Clark, 2015). Later I’ll
describe two distinct mechanisms for evaluative decision-making: “model-based”
and “model-free” systems (§4.1). My aim is not to provide anything like a com-
prehensive account of these systems, but rather to suggest that model-free systems
in particular provide a compelling framework for understanding how spontaneous
inclinations self-alleviate (i.e., change and improve over time without guidance from
an agent’s explicit goals or beliefs) (§4.2).31 This is not to say, however, that model-
free learning is invulnerable to error, shortsightedness, or bias (§4.3). Indeed, the
abilities and vulnerabilities of model-free learning make it a compelling candidate
for the learning mechanisms underlying both the virtues and vices of our spontane-
ous inclinations.
31
As I have already mentioned, my argument for why these processes do not issue in beliefs them-
selves comes in Chapter 3.
32
Parts of this section are adapted from Brownstein (2017).
33
I am indebted to Cushman for the metaphor of navigating through the city. Note that there are, of
course, important differences between maps and decision trees. Thanks to Eliot Michaelson for point-
ing this out.
does this based on calculations of the size of any discrepancy between how valuable
turning left was in the past and whether turning left this time turns out better than
expected, worse than expected, or as expected. Suppose that in the past, when the
agent turned left, the traffic was better than she expected. The agent’s “prior” in this
case for turning left would be relatively high. But suppose this time, the agent turns
left, and the traffic is worse than expected. This generates a discrepancy between
the agent’s past reinforcement history and her current experience, which is negative
given that turning left turned out worse than expected. This discrepancy will feed
into her future predictions; her prediction about the value of turning left at this
corner will now be lower than it previously was. Model-free systems rely on this
prediction-error signaling, basing new predictions on comparisons between past
reinforcement history and the agent’s current actions. While there is no obvious
folk-psychological analogue for model-free processing, the outputs of this system
are commonly described as gut feelings, spontaneous inclinations, and the like.34
This is because these kinds of judgments do not involve reasoning about alternative
possibilities,35 but rather they offer agents an immediate positive or negative sense
about what to do.
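The update just described can be written as a simple delta rule, in which the value of an action is nudged by the discrepancy between expected and experienced outcome. A minimal sketch, with invented numbers and learning rate:

```python
# Delta-rule update: V <- V + alpha * (outcome - V). The discrepancy
# (outcome - V) is the prediction error; it is negative when things turn
# out worse than expected. Values and learning rate are illustrative.

def update_value(value, outcome, learning_rate=0.3):
    prediction_error = outcome - value
    return value + learning_rate * prediction_error

# Turning left has been rewarding in the past, so its "prior" is high.
v_left = 0.8

# This time the traffic is worse than expected: a low outcome generates
# a negative prediction error, lowering the prediction for turning left.
v_left = update_value(v_left, outcome=0.2)
assert v_left < 0.8
```

Future choices informed by this value will reflect the negative experience without any model of why the traffic was bad; the system compares past reinforcement history with the current outcome and nothing more.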
It is important to note a distinction between two views about these kinds of
learning systems. A “single-system” view claims that all mental functioning can be
(or ultimately will be) explained via these mechanisms, including deliberative rea-
soning, conscious thought, and so on. This seems to be what Clark (2015) thinks.
By contrast, a “multiple-system” view (Crockett, 2013; Cushman, 2013; Railton,
2014; Huebner, 2016) aims to show how different mechanisms for evaluative learn-
ing underlie different kinds of behavior and judgment. The single-system view is
more ambitious than the multiple-system view. I adopt the latter.
Proponents of multiple-system views disagree about how many systems there
are, but they agree that, in Daniel Gilbert’s words, “there aren’t one.” For example,
in the domain of social cognition, Cushman (2013) holds that a prediction-error
system subserves representations of actions, but not of outcomes of actions. For
example, recall the work of Cushman and colleagues (2012) on pretend harm-
ful actions, discussed in Chapter 1. This research shows that people find pretend
harmful actions, like smacking a toy baby on a desk, more aversive than analogous
34
It is important to note that while model-free processing involves mental representations of the
expected outcomes of actions, it is not necessary that these representations be propositionally struc-
tured or otherwise formatted in belief-like ways. Indeed, some view prediction-error models as com-
prising a “store” of practical know-how, only on the basis of which we are able to form states like beliefs
and intentions. Railton, for example, argues that “it is thanks to our capacity for such nonpropositional,
bodily centered, grounded mental maps and expectations that we are able to connect human proposi-
tional thought to the world via de re and de se beliefs and intentions” (2014, 838). Of course, this view
of the relationship between model-free processing and belief depends upon what one takes beliefs to
be. I address this issue in Chapter 3.
35
But see §4.2 for a discussion of model-free systems and counterfactual reasoning.
harmless actions, like smacking a broom on a desk. Cushman (2013) suggests that
the key difference between these scenarios is that the first triggers model-free rep-
resentations of a harmful action, regardless of its consequences, while the second
does not trigger model-free representations of a harmful action, thus enabling
agents to represent the action’s outcome. Swinging something perceived as lifelike
(i.e., a toy baby), in other words, carries a negative value representation—a “gist”-
level impression—which has been refined over time via prediction-error feedback.
Swinging something perceived as an object doesn’t carry this kind of negative value
representation. This enables the broom-smacking to be represented in terms of
its outcome. Representations of outcomes, on Cushman’s view, are model-based.
These representations resemble models of decision trees, on the basis of which dif-
ferent actions, and the values of their outcomes, can be compared.36
Model-based systems are often described as flexible, but computationally costly,
while model-free systems are described as inflexible, but computationally cheap.
Navigating with a map enables flexible decision-making, in the sense that one can
shift strategies, envision sequences of choices several steps ahead, utilize “tower of
Hanoi”–like tactics of taking one step back in order to take two more forward, and
so on. But this sort of navigation is costly in the sense that it involves computing
many action–outcome pairs, the number of which expands exponentially even in
seemingly simple situations. On the other hand, navigating without a map, on the
basis of information provided by past experiences for each particular decision, is
comparatively inflexible. Model-free systems enable one to evaluate one’s current
action only, without considering options in light of alternatives or future conse-
quences. But navigating without a map is easy and cheap. The number of options
to compute is severely constrained, such that one can make on-the-fly decisions,
informed by past experience, without having to consult the map (and without risk-
ing tripping over one’s feet while one tries to read the map, so to speak).
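The tradeoff can be illustrated with a toy sketch. The states, transitions, and values below are invented, and the sketch is not a claim about any particular computational model; it only shows that the cached ("model-free") choice costs one lookup, while the tree-searching ("model-based") choice recursively evaluates action–outcome pairs:

```python
# Model-free: a flat table of cached action values, refined by experience.
cached_values = {"left": 0.62, "right": 0.40}

def model_free_choice(values):
    # Cheap and inflexible: one lookup, no lookahead over alternatives.
    return max(values, key=values.get)

# Model-based: a "map" of where actions lead, searched several steps ahead.
transitions = {
    ("home", "left"): "jam", ("home", "right"): "bridge",
    ("jam", "left"): "office", ("jam", "right"): "office",
    ("bridge", "left"): "office", ("bridge", "right"): "detour",
}
rewards = {"jam": -1.0, "bridge": 0.5, "office": 1.0, "detour": -0.5}

def tree_value(state, depth):
    # Costly and flexible: evaluate every action-outcome pair at every
    # step of lookahead (the cost grows rapidly with depth).
    if depth == 0 or not any(s == state for s, _ in transitions):
        return 0.0
    return max(rewards[transitions[(state, a)]]
               + tree_value(transitions[(state, a)], depth - 1)
               for a in ("left", "right"))

def model_based_choice(state, depth=2):
    return max(("left", "right"),
               key=lambda a: rewards[transitions[(state, a)]]
                             + tree_value(transitions[(state, a)], depth - 1))
```

Note that the habit-based table may recommend one action while two steps of lookahead recommend another; the systems can disagree precisely because one consults only past reinforcement and the other consults the map.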
It is important not to oversell the distinction between model-free and model-
based learning systems. There is increasing evidence to suggest mutual interaction.37
But if one accepts the aforementioned rough distinction—that model-based sys-
tems are flexible but costly and that model-free systems are inflexible but cheap—
then one might be disinclined to think that model-free learning provides an apt
framework for the mechanisms underlying the process of alleviation. For it is this
process, ex hypothesi, that accounts for the persistent recalibration and improve-
ment over time of spontaneous inclinations. An inflexible system for evaluative
36
My aim is not to endorse Cushman’s view per se, but to illustrate how a multiple-systems view
explains various kinds of judgment and behavior in terms of multiple mechanisms for evaluative
learning.
37
See Kishida et al. (2015) and Pezzulo et al. (2015) for evidence of information flow between
model-free and model-based evaluative learning systems. Kenneth Kishida and colleagues, for exam-
ple, offer evidence of fluctuations in dopamine concentration in the striatum in response to both actual
and counterfactual information.
learning seems like a bad model for the process I have called alleviation. But, pace
the common characterization of model-free learning systems as dumb and inflex-
ible, there is reason to think that this kind of system can support socially attuned,
experience-tested, and situationally flexible behavior.
For example, such a system may initially respond to the delicious taste
of a fine chocolate bar. But when this taste is repeatedly preceded by
38
Nathaniel Daw and colleagues (2011) do find, however, that statistical learning typically involves
the integration of predictions made by both model-based and model-free systems. To what extent the
integration of these systems is necessary for statistical learning and other spontaneous competencies is
an open and important empirical question.
39
So far as I know, Railton (2014) first discussed wide competence in spontaneous decision-
making in this sense.
seeing that chocolate bar’s label, the experience of seeing that label will
be treated as rewarding in itself—so long as the label remains a clear
signal that there is delicious chocolate on the way. Similarly, if every
trip to the chocolate shop leads to the purchase of that delicious choco-
late bar, entering the shop may come to predict the purchasing of the
chocolate bar, with the label that indicates the presence of delicious
chocolate; in which case entering the shop will come to be treated
as rewarding. And if every paycheck leads to a trip to the chocolate
shop . . . (2016, 55)40
41
My view is much indebted to Huebner (2016), who argues that implicit attitudes are constructed
by the aggregate "votes" cast by basic "Pavlovian" stimulus–reward associations, model-free reward
predictors, and model-based decision trees. Pavlovian stimulus–reward associations are distinguished
from model-free reward predictors in that the former passively bind innate responses to biologically
salient rewards, whereas the latter compute decisions based on the likelihood of outcomes and incor-
porate predictors of predictors of rewards. The view I present here is different from Huebner’s, how-
ever, as I argue that states with FTBA components—in particular the process of alleviation—are best
explained by model-free learning in particular. My view is that behavior is the result of the competition
between Pavlovian, model-free, and model-based systems. That is, behavior is the result of the com-
bined influence of our reflexive reactions to biologically salient stimuli (which are paradigmatically
subserved by Pavlovian mechanisms), our FTBA states (paradigmatically subserved by model-free
mechanisms), and our explicit attitudes (paradigmatically subserved by model-based mechanisms).
Now, this picture is surely too simplistic, given evidence of mutual influence between these systems
(see footnote 37). But to accept the mutual penetration of learning systems is not tantamount to adopt-
ing the view that these systems mutually constitute FTBA states. It is one thing to say that these states
paradigmatically involve model-free learning, which is in important ways influenced by other learning
systems. It is another thing to say that these states are produced by the competition between these
learning systems themselves. Architecturally, my somewhat loose way of thinking is that biologically
attuned reflexes are the paradigmatic causal outputs of Pavlovian mechanisms, FTBA states are the
paradigmatic outputs of model-free learning mechanism, and explicit attitudes are the paradigmatic
outputs of model-based learning mechanisms.
Note that Seligman and colleagues say that this learning process attunes individuals
to changing environments. This is saying more than that prediction-error learning
attunes agents to previously encountered regularities in the environment. Consider
the museumgoer, whose impulse to step back from the large painting isn’t fixed to
any one particular painting. Any painting of sufficient size, in the right context, will
trigger the same response. But what counts as sufficient size? Does the museum-
goer have a representation of a particular square footage in mind? Perhaps, but then
she would also need representations of metric units tied to behavioral routines for
paintings of every other size too, not to mention representations of size for every
other object she encounters in the environment, or at least for every object that
counts as a feature for her. This does not seem to be how things work. Instead, as
the museumgoer learns, she becomes more sensitive to reward-predicting stimuli,
which over time can come to stand in for other stimuli.
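In temporal-difference terms, a stimulus that reliably predicts reward can come to carry value itself, so that predictors of predictors (the shop that predicts the label that predicts the chocolate) are eventually treated as rewarding. A rough sketch; the chain, learning rate, and iteration count are invented for illustration:

```python
# Value propagates backward along a chain of predictors: each state is
# updated toward the value of the state it predicts (or toward the
# terminal reward). A TD(0)-style toy example with invented parameters.

chain = ["shop", "label", "chocolate"]   # each state predicts the next
values = {s: 0.0 for s in chain}
alpha, reward = 0.5, 1.0

for _ in range(20):                      # repeated trips to the shop
    for i, state in enumerate(chain):
        # The next state's current value (or the terminal reward) serves
        # as the target for the current predictor.
        target = reward if state == "chocolate" else values[chain[i + 1]]
        values[state] += alpha * (target - values[state])

# After repeated experience, the label, and then the shop, approach the
# value of the chocolate itself: reward predictors stand in for reward.
```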
The key point is that prediction-error models of learning like these help to show
how states with FTBA components can improve over time.42 Experiences of tension
initiate coordinated responses directed toward their own alleviation. In turn, token
42
This and the paragraphs following in this section are adapted from Brownstein and Madva (2012b).
43
This possibility of FTBA states failing even in stable and typical environments distinguishes them
from rote reflexes. See Chapter 3.
analogue to stepping back from the large painting should be.44 Perhaps repeated
failures to alleviate tension are additive, leading to a kind of (physiological and/or
metaphorical) hypertension.
Other kinds of failure reflect conflicts with agents’ reflective values and/or with
moral goods. As Huebner (2009, 2016) emphasizes, these learning systems will be
only as good as the environment in which they are trained.45 Prediction-error learn-
ing is vulnerable to small and unrepresentative samples of experience. A person who
lives in a world in which most high-status jobs are held by men and most low-status
jobs are held by women is likely to come to implicitly associate men with concepts
like competence, leadership, authority, and so on. This association will likely be rein-
forced by experience, not (rapidly or slowly) undermined by it. Similar outcomes
will happen in the case of merely perceived environmental regularities, for example,
when a person is bombarded with images in the media of black or Arab men acting
in violent ways. In cases like these, agents’ spontaneous social inclinations may still
be thought of as attuned to the social world, but to the wrong features of it.
I consider the significance of these kinds of successes and failures in Chapter 3,
where I argue that states with paradigmatic FTBA components are implicit atti-
tudes, a sui generis kind of mental state. My aim here has been to describe a plau-
sible model of the evaluative learning system underlying the process of alleviation,
that is, the process through which our spontaneous inclinations change over time
and (sometimes) attune us to the realities and demands of the local environment.
44
Thanks to Daniel Kelly for suggesting this metaphor.
45
But this is not to say that intra-individual differences don’t exist or don’t matter.
player’s behavioral reaction to these features and tensions—a hard charge in the
case of the dropshot—is, as I emphasized, highly context-sensitive. If she is up
40–love, the player’s felt tension might be relatively anemic, thus diminishing the
imperatival force of the dropshot, considering that she cares little about the point.
By comparison, at four games all and deuce in the third set, the dropshot will likely
trigger explicit tension, which the player is almost sure to notice explicitly, as she
is immediately inclined to give it everything she has. In other contexts, as in the
museumgoer case, the player’s goals and cares might be revealed to her through
these reactions; she might find herself charging harder than she realized or dogging
it when she believes she ought to be playing her hardest.
In all of these cases, the successes and failures of the player’s reaction feed back
into future FTBA reactions. The player who overreacts to dropshots, perhaps by
misperceiving the arc of the shot, feeling too much tension, or charging too hard,
such that she fails to have her racquet ready to play her own shot, is left with a dis-
crepancy between the expectation associated with these FTB reactions and the
actual outcome of her response. This discrepancy will update her priors for expecta-
tions of outcomes for future reactions, in much the same way that prediction-error
learning has been shown to underlie batters’ ability to predict where pitches will
cross the plate (Yarrow et al., 2009). And it will do so in a very finely individuated
way, such that a love–40 context might generate different expectations than a deuce
context. In the moment, the success or failure of her responses will or will not allevi-
ate tension, as the player either is able to move on to the next shot (A1: "ready!") or
dwells with the feeling of having misplayed the dropshot (A2: “damn!”). Of course,
the player’s improvement is not guaranteed. Her reactions may fail to self-alleviate
for a host of endogenous or exogenous reasons (as in the case of hearing someone
whisper to you; see §4.3). Perhaps the player feels too much tension because she
cares excessively about the point; maybe money is on the line, and her concern for
it interferes with the successful unfolding of her FTBA reaction. Perhaps she can-
not avoid ruminating on her fear of failure and, like Richie Tennenbaum, takes her
shoes off and simply gives up on the game (but hopefully not on life too).
Of course, different interpretations of this scenario are possible, some of which
are rivals and some of which are simply open questions. For example, as I discussed
earlier (§1.3), there are multiple possible interpretations of the perceptual content
of perceived features. Likewise, there are open questions about whether some forms
of perception are relatively less entwined with affect than what I have depicted, or
perhaps are entirely nonaffective (§2). Other, broader strategies for understanding
these cases are not consistent with my account, however. One is that the tennis play-
er’s reaction to the dropshot can be explained in terms of a cultivated disposition to
react to particular shots in particular ways. On this interpretation, there is no occur-
rent state, with FTBA components or otherwise, that explains her action. Another
rival interpretation is that the player’s action is best explained in terms of ordinary
belief–desire psychology. On this interpretation, upon seeing the oncoming ball
and judging it to be a dropshot, the player forms a set of beliefs, to the effect of “That
is a dropshot,” “One must charge to retrieve dropshots,” or the like, and in addition,
the player forms a desire (or accesses a standing desire), to the effect of “I want to
retrieve the dropshot” or “I want to win the point.” These beliefs and desires would
rationalize her action, with no need of mentioning special mental states with FTBA
components. These are the sort of rival interpretations of cases of spontaneity that
I consider in depth in the next chapter. I don’t deny that these rival interpretations
describe possible cases, but I deny that they offer the best explanation of the para-
digmatic cases I’ve described.
Much the same can be said in cases involving implicit bias. Consider a professor
evaluating a stack of CVs. A salient feature (F: “female name on CV!”) induces ten-
sion (T: “risky hire!”) and behavioral reactions (B: “place CV in low-quality pile!”),
which either do (A1: “next CV!”) or do not self-alleviate (A2: “something wrong!”).
Or consider a standard shooter bias case: a salient feature (F: “black face!”) induces
tension (T: “danger!”) and behavioral reactions (B: “shoot!”), which either do
(A1: “fear diminished!”) or do not self-alleviate (A2: “still afraid!”).
As in the analysis of the tennis player, there are many open questions and unsaid
details here. What exactly the CV evaluator or shooter bias test participant perceives
is not clear. Indirect measures like the IAT don’t typically distinguish behavior from
underlying mental processes, although some suggestive evidence regarding partici-
pants’ perceptual experience exists. Using eye-trackers, for example, Joshua Correll
and colleagues (2015) found that participants in a first-person shooter task require
greater visual clarity before responding when faced with counterstereotypic targets
(i.e., armed white targets and unarmed black targets) compared with stereotypic
targets (i.e., unarmed white targets and armed black targets). This suggests that
stereotypic targets are relatively imperative for these agents. The imperatival qual-
ity of the perceived feature in these trials appears to be stronger, in other words.
Whether this means that the feature is easier to see, or relatively more affective, or
something else is not clear from this experiment. Future research could perhaps
try to replicate Correll and colleagues’ results while testing whether salient feel-
ings of fear on stereotype-consistent trials moderate the outcome. Such a finding
would build on previous research showing that people can and do make use of their
implicit biases when directed to focus on their "gut feelings" (Jordan et al., 2007;
Ranganath et al., 2008; Richetin et al., 2007; Smith and Nosek, 2011).
This finding would speak to an open question in the literature on implicit
bias: how putatively “cold” affectless stereotypes relate to “hot” affective prejudices.
A number of researchers hold a “two-type” view of implicit biases, that is, that there
are fundamentally two kinds: implicit stereotypes and implicit evaluations (Correll
et al., 2007, 2015; Glaser, 1999; Glaser and Knowles, 2008; Stewart and Payne,
2008). On this view, a white person (for example) might have warm feelings toward
black people (i.e., positive implicit evaluations) but stereotype them as unintelligent
(Amodio and Hamilton, 2012). There is, however, both conceptual and empirical
evidence against the two-type view.46 For example, Jack Glaser (1999) and Bertram
Gawronski and colleagues (2008) found that retraining implicit stereotypes changes
agents’ implicit evaluations. David Amodio and Holly Hamilton’s (2012) own data
reflected a difficulty participants had in associating blacks with intelligence. This
suggests that, rather than tracking coldly cognitive stereotypes alone, their implicit
stereotyping measure tracks the insidious and plainly negative stereotype that black
people are unintelligent. This negative evaluative stereotype combines the perception
of features with particular affective responses in order to induce action; a
salient feature (F: “black face!”) induces tension (T: “danger!”) and specific behav-
ioral reactions (B: “shoot!”). Research supports similar conclusions in the case of
biased CV evaluations. Dan-Olof Rooth and colleagues, for example, found that
implicit work-performance stereotypes predicted real-world hiring discrimination
against both Arab Muslims (Rooth, 2010) and obese individuals (Agerström and
Rooth, 2011) in Sweden. Employers who associated these social groups with lazi-
ness and incompetence were less likely to contact job applicants from these groups
for an interview. In both cases, measures of evaluative stereotypes were used, and
these predicted hiring discrimination over and above explicit measures of attitudes
and stereotypes.
As in the case of the tennis player, these co-activating responses will vary with
agents’ cares and context. Joseph Cesario and Kai Jonas (2014) find, for example,
differential effects of priming white participants with the concept “young black
male” depending on the participants’ physical surroundings and how threatening
they perceive black males to be. Using as an outcome measure the activation of fight
versus flight words, they compared the effects of the prime on participants who
were seated in an open field, which would allow a flight response to a perceived
threat, with the effects of the prime on participants seated in a closed booth, which
would restrict a flight response. Cesario and Jonas write:
46. See Madva and Brownstein (2016) and brief discussion in the Appendix. “One-type” theorists hold that only one type of implicit mental state exists (Greenwald et al., 2002; Gawronski and Bodenhausen, 2006, 2011). Different one-type theories conceptualize the relationship between stereotypes and affect within implicit mental states in different ways.
47. For more on the context sensitivity of implicit mental states, see Chapter 7.
62 Mind
I discuss in the Appendix how findings like these speak to concerns about the
predictive validity of indirect measures of attitudes—like the IAT—as well as to
broader concerns about replicability in psychological science. The point here is that
familiar implicit biases are activated and affect agents’ behavior in the way that is
paradigmatic of our spontaneous inclinations.
This is true, too, of the ways in which implicit biases change over time, incorpo-
rating successes and failures through the feedback processes of alleviation. “Success”
and “failure” are tricky notions in this context, however. Within the narrow terms of
model-free learning, an agent’s spontaneous reaction to a situation will be success-
ful when her expectations match the outcome of her actions and her feelings of felt
tension subside, thus eliminating the FTBA response (§§4.2 and 4.3). A boy who
holds gender–math stereotypes and who also lives in an environment that appears
to confirm these stereotypes—for example, he might go to a school in which no
girls take or succeed in math classes, because of implicit or explicit social norms,
long-standing discriminatory organizations of career paths, and so on—may auto-
matically think of his father and not his mother when he needs help with his math
homework. In this context, this reaction will be rewarded, updating and strengthening
the boy’s prior association of males with math. The boy’s reaction might also
fail to be successful, in (at least) two different senses. His FTBA reaction might
go awry, as in the whispering case. He might misperceive a crucial detail of the math
problem, or over- or underreact affectively, or fail to behave at all. But in the second,
more weighty sense in this context, his FTBA reaction might be (or is!) an
epistemic and moral failure. His mother might be better suited to helping him with
his math homework than his father. More broadly, his spontaneous reaction fails to
track the truth about women’s mathematical ability as well as the overriding reasons
to treat people with equal respect.48 I discuss the relationship between successful
FTBA reactions and moral goods in Chapter 7.
Finally, as I said in the case of the tennis player, there are rival interpretations on
offer of the aforementioned claims. Those potentially most threatening to my
account interpret implicit biases in entirely different ways than I have, for example,
as ordinary beliefs rather than as implicit associations. I address these rival interpre-
tations in the next chapter.
48. Regarding his epistemic failure, I am assuming that the boy’s gender–math association is generalized to women’s innate mathematical ability. His gender–math association could be thought of as accurate, if applied more narrowly to the girls and women in his town, contingent on their being raised in this particular environment. But implicit attitudes aren’t sensitive to contingencies like this, as I discuss in Chapter 3. Regarding the boy’s moral failure, I am here assuming a kind of moral realism, such that his FTBA reactions are morally problematic regardless of his explicit moral beliefs.
6. Conclusion
The descriptions given in this chapter are meant to show how a range of seemingly
disparate examples of spontaneity can be identified in terms of their FTBA com-
ponents. I have not yet given an argument that these components are structured in
a unified way. I do so in the next chapter, where I consider cognitive architecture.
I now turn to my argument that states with co-activating FTBA components consti-
tute implicit attitudes. They are not akin to physiological reflexes, mere associations,
beliefs (whether conscious or nonconscious, truth-tracking or modular), or traits
of character.1 While they share a number of properties with what Gendler (2008a,b)
calls “aliefs,” implicit attitudes are different in crucial respects from these too.3

1. I use this term in the way that theorists of virtue use it, to describe patterns of action that are durable and arise in a variety of contexts, and are also connected to a suite of thoughts, feelings, etc. (e.g., Hursthouse, 1999).
Implicit Attitudes and the Architecture of the Mind 65
2. See Railton (2011) for discussion of “habitudes.” Pierre Bourdieu’s “habitus” presents a related option. Bourdieu defines the habitus as being composed of “[s]ystems of durable, transposable dispositions, structured structures predisposed to function as structuring structures, that is, as principles which generate and organize practices and representations that can be objectively adapted to their outcomes without presupposing a conscious aiming at ends or an express mastery of the operations necessary in order to attain them” (1990, 53). While I think Bourdieu’s concept of habitus has much to offer, I decline to adopt it, largely because I think it is too wedded to Bourdieu’s broader sociological theories.
3. See Gendler (2008b, 557, n. 5) for a similar point about the sui generis nature of alief.
4. See Evans and Frankish (2009) for more on this history.
5. Neil Levy (2014, 2016) argues that we cannot answer questions about the implicit mind—for example, whether we are responsible for implicit biases—by consulting our intuitions, because the way the implicit mind works is radically foreign to the perspective of folk psychology. While I am sympathetic with this point and think that our intuitions about implicit attitudes are likely to be challenged by empirical facts, I find more continuity between empirical research and folk psychology than Levy does.
the relationship between implicit attitudes and the self, and the ethics of sponta-
neity. But in this chapter, where my focus is on psychological architecture, I focus
elsewhere. Awareness and control can be understood as conditions under which
implicit attitudes sometimes operate. Instead of focusing on these conditions, my
aim is to provide an account of the psychological nature of implicit attitudes.6
Moreover, while there is research on the role of implicit cognition in relatively
banal contexts like brand preferences (e.g., Coke vs. Pepsi), by far the greatest focus
in the field is on pernicious and unwanted biases. One often sees the terms “implicit
attitudes” and “implicit bias” used interchangeably. This is perhaps the most obvious
sense in which my use of the term “implicit attitudes” is broader than the standard
usage. My focus is on the explanatory role of implicit attitudes in the context of
both the virtues and vices of spontaneity.
With these caveats in mind, I now move on to consider where to locate implicit
attitudes in the architecture of the mind. Given that these states are activated rela-
tively automatically when agents encounter the relevant cues, one possibility is that
they are basic stimulus-response reflexes.
1. Reflexes
One sense of “reflex” describes involuntary motor behavior.7 In this context, invol-
untariness often means “hard-wired.” For example, infants’ “primary reflexes”—
such as the palmar grasp reflex and the suckling reflex—are present from birth (or
before) and then disappear over time. Some somatic reflexes that are present at
birth, like the pupillary reflex, don’t disappear over time, while others not present
at birth emerge through development, such as the gag and patellar reflexes. All of
these reflexes are “hard-wired” in the sense that they involve no social learning. They
are, however, involuntary to various degrees. While infants will suckle at any stimuli
with the right properties, adults can control their somatic reflexes, albeit indirectly.
To control the patellar reflex, one can wear a stiff leg brace that prevents one’s leg
from moving. To control the pupillary reflex, one can put dilation drops in one’s
eyes. This suggests that involuntariness alone is not sufficient for carving reflexes off
from non-reflexes. Better are the following four properties.
Reflexes are (1) instantaneous, taking very little time to unfold. They are
(2) involuntary, not in a grand sense that threatens free will, but in the local sense
7. A different sense of “reflex” is used in physiology, in which “reflex arcs” describe the neural pathways that govern autonomic and somatic processes. These pathways are reflexive in the sense that they are physically located outside the brain. It is not this sense of reflexes that I have in mind when considering whether and how implicit attitudes are reflex-like states. What makes these states potentially reflex-like has little to do with their physical implementation in (or outside of) the brain.
that they are relatively isolated from the agent’s intentions, goals, or desires.8 Note
the term “relatively.” Reflexes are not isolated from an agent’s intentions and so on
in the sense that one’s intentions are utterly powerless over them. As I said, while
one cannot directly prevent one’s pupil from constricting in bright light, one can put
hydroxyamphetamine drops in one’s eyes to force the iris dilator muscles to relax.
A related way to put this is that reflexes are (3) relatively isolated from changes in
context. To put it bluntly, the patellar reflex doesn’t care if it’s morning or night, if
you’re alone or in the company of others. When the right nerve is thwacked, the
response unfolds. Of course, on a very capacious interpretation of context, one
could construe leg braces and hydroxyamphetamine drops as elements of the con-
text that affect how a reflex unfolds (or doesn’t). But in the narrower sense I mean it,
contextual factors affect behavioral events when they bear some relationship to the
meaning of the event for the agent. This need not be a conscious relation, nor one
involving the agent’s beliefs. Moderation by context can mean that some feature of
the situation is related to the agent’s goals or cares. Finally, reflexes are (4) ballistic.
Once a reflexive reaction starts, it unfolds until it’s complete (unless its unfolding is
otherwise prevented).
Implicit attitudes display some of these characteristics. They are instantaneous,
in the sense that they unfold rapidly, and they are involuntary, in the sense I just
described, namely, that they can be initiated without being intended, planned, or the
like. However, implicit attitudes are neither isolated from the agent’s context nor are
they ballistic.9
Recall that agents are constantly confronted by myriad stimuli, many of which
could in principle stand as features for them. The large painting stands as a fea-
ture for the museumgoer, but the light switch on the wall, ceteris paribus, doesn’t.
Benes’s blood sets off his reaction, but someone else’s blood probably wouldn’t. This
is because the restroom sign and Benes’s own blood are connected to the agent’s rel-
evant goals or cares, as I discussed in Chapter 2 (also see Chapter 4). Paradigmatic
cases of reflexive behavior lack this connection. Much as in the case of reflexes,
implicit attitudes are set in motion more or less instantaneously and automatically
when the agent encounters particular stimuli. But which stimuli set them in motion
has everything to do with other elements of the agent’s psychology (her goals, cares,
and her past value-encoding experiences). While the skywalker’s knee-knocking
and quivering might persist despite her beliefs, she does stay on the Skywalk after
all. Similarly, while implicit biases are recalcitrant to change by reason or willpower
8. Of course, I haven’t said anything to convince a determinist that free will is unthreatened in these cases. All I mean is that reflexes are involuntary in a sense distinct from the one raised by questions about free will.
9. For a broadly opposing view, focused on understanding intuitions as primitive and relatively automatic “alarm signals” that paradigmatically lead us astray, both prudentially and normatively, see Greene (2013).
alone, they are not altogether isolated from agents’ explicit attitudes and are in fact
quite malleable (see Chapters 6 and 7).
Psychological research on implicit attitudes also demonstrates that these states
are highly affected by context. Both physical and conceptual context cues shape
the way in which they unfold.10 For example, Mark Schaller and colleagues (2003)
show that the relative darkness or lightness of the room in which participants
sit shifts scores on implicit racial evaluations across several indirect measures,
including the IAT.11 Conceptual elements of context influence implicit attitudes
primarily in light of perceived group membership and social roles. Jamie Barden
and colleagues (2004), for example, varied the category membership of targets
by presenting the same individual in a prison context dressed as a prisoner and
dressed as a lawyer, and found that implicit evaluations of the person dressed
as a prisoner were considerably more negative. Similarly, Jason Mitchell and col-
leagues (2003) showed that implicit evaluations of the same individual—Michael
Jordan—depended on whether the individual was categorized by race or occupa-
tion. The agent’s own emotional state also affects the activation of implicit atti-
tudes. Nilanjana Dasgupta and colleagues (2009) found that participants who
were induced to feel disgust had more negative evaluations of gay people on an
IAT, although their implicit evaluations of Arabs remained unchanged. However,
participants who were induced to feel anger had more negative evaluations of
Arabs, while their evaluations of gay people remained unchanged.12 Context simi-
larly influences other cases. Being on the Skywalk on a blustery day might increase
a typical tourist’s aversive reaction, while taking a stroll when the wind is perfectly
still might make things slightly more tolerable.
Finally, implicit attitudes are not ballistic. This is because they involve dynamic
feedback learning, which enables the agent’s behavior to adjust on the fly as her
tension is or is not alleviated and as circumstances change. Some reflexes similarly
shift and adjust, such as pupillary dilation in a room with the lights being turned
off and on. But there is no learning going on there. Each reaction of the iris dilator
muscles to the brightness of the room is independent of the others. Moreover, the
dilator muscles will find the optimum point unless interrupted by further stimuli.
In the case of spontaneous action, however, the agent’s reaction is a reaction to felt
tension, and its alleviation (or lack of alleviation). The distance-stander adjusts how
far she stands from her interlocutor in respect of her feelings of tension and allevia-
tion. This is an integrated and ongoing process, not a series of one-and-done reflex-
ive reactions. This process is, as I discussed in the preceding chapter, what allows
10. What follows in this paragraph is adapted from Brownstein (2016). See Chapter 7 for further discussion of context and implicit attitude malleability.
11. Although I note the relatively small number of participants in this study (n = 52 in experiment 2). I note also that these results obtained only for subjects with chronic beliefs in a dangerous world.
12. See Gawronski and Sritharan (2010) for a summary and discussion of these data.
2. Associations
While the F, T, B, and A relata of implicit attitudes are associatively linked, these
states themselves are not mere associations. By “mere” associations, I mean mental
states that are activated and that change exclusively according to associative prin-
ciples (Buckner, 2011). Mere associations are wholly insensitive to the meaning of
words and images, as well as to the meaning of an agent’s other thoughts. They are
thoroughly “subpersonal.”
In the usual sense, associations are mental states that link two concepts, such as
SALT and PEPPER or DONALD and DUCK. But associations can also stand between
concepts and a valence—for example, in the association between SNAKE and
“bad”—as well as between propositions—for example, in the association between
“I don’t want to grow up” and “I’m a Toys R Us kid.” To say that these are asso-
ciations or, more precisely, to say that they have associative structure is to say that
thinking one concept, like SALT, will activate the associated concept, PEPPER.13 Why
does this happen? In these examples, it is not typically thought to be because the
subject has a structured thought with content like “Salt and pepper go together.”14
The proposition “Salt and pepper go together” reflects the way an agent takes the
world to be. Propositions like this often express one of an agent’s beliefs. An asso-
ciation of SALT with PEPPER, however, simply reflects the causal history of the agent’s
acquisition of the concepts SALT and PEPPER. More specifically, it reflects what
Hume (1738/1975) called contiguity: salt and pepper (either the concepts or the
actual substances) were frequently paired together in the agent’s learning history,
such that every time, or most times, the agent was presented with salt, she was also
presented with pepper. Eric Mandelbaum (2015b) distinguishes between mental
states with propositional and associative mental structure this way: “Saying that
someone has an associative thought GREEN/TOUCAN tells you something about the
causal and temporal sequences of the activation of concepts in one’s mind; saying
that someone has the thought THERE IS A GREEN TOUCAN tells you that a person
is predicating greenness of a particular toucan.” Hume’s other laws of associative
learning—resemblance and cause and effect—similarly produce mental asso-
ciations (i.e., mental states with associative structures). These ways of coming to
associate concepts, or concepts with valences, are well known to be exploited by
behaviorist theories of learning.
13. See Mandelbaum (2015b) for discussion of associative structure, compared with associative theories of learning and thinking.
14. For an exception, see Mitchell et al. (2009).
15. This is not to say, though, that states like belief can’t change due to a change in an agent’s associations. Culinary beliefs about seasonings are surely influenced or even driven by frequency of pairings. No claim of modularity is being made here.
implicit stereotyping. Smiling black faces elicit lower levels of implicit stereotyping
of black men as dangerous compared with angry black faces (for white subjects).
This change in implicit attitudes, due to changes in the perception of the emotional
expression of others’ faces, appears to be driven by sensitivity to the meaning of the
relationship between facial expression and danger (i.e., to the thought that people
who smile are less likely to be dangerous than people who look angry).16 Other
evidence for content-driven changes in implicit attitudes stems from research on
“halo” and “compensation” effects. These are adjustments along particular dimen-
sions that reflect quite a lot of complexity in agents’ social attitudes. “Benevolent
sexism” is a classic compensation effect, for example. Stereotyping women as
incompetent leads to greater feelings of warmth toward them (Dardenne et al.,
2007). Perceptions of competence and feelings of warmth are sometimes inversely
related like this (i.e., a compensation effect), but other times are positively related,
such as when liking one’s own group leads one to think of it as more competent
(i.e., a halo effect). These phenomena are found in research on implicit attitudes.
Rickard Carlsson and Fredrik Björklund (2010), for example, found evidence for
implicit compensation effects toward out-groups, but not toward in-groups. While
psychology students implicitly stereotyped lawyers as competent and cold, and
preschool teachers as incompetent and warm, preschool teachers implicitly stereo-
typed their own group as both warm and competent.
Other cases that I’ve discussed display similar patterns. Consider the skywalker’s
acrophobia. Exposure therapies appear to be effective for treating fear of heights.
Over time, simply being exposed to the Skywalk is likely to cause the agent to
cease her fear and trembling. This looks to be a case of extinguishing a HIGH UP–
DANGER association by repeatedly presenting one element of the association to the
agent without the other. But most exposure therapies don’t just expose the agent
to one element of the associative pair without the other. Rather, most also include
16. Alternative explanations are available here, however. One is that presentation of the stimuli changes subjects’ moods, and their implicit attitudes shift in response to their change in mood. In this case, the change wouldn’t be due to sensitivity to the meaning of the relationship between facial expression and danger. (Thanks to an anonymous reviewer for suggesting this possibility.) An associative explanation is possible here as well. Gawronski and colleagues (2017) argue that multilayer neural networks involving both excitatory and inhibitory links are capable of explaining these phenomena. This is an open question for future research, one which depends upon experimental designs that can isolate changes in implicit attitudes themselves rather than changes in controlled processes that affect the activation of implicit attitudes. New multinomial “process dissociation” models, which are used for breaking down the contributing factors to performance on tests such as the IAT, are promising for this purpose. Note also that Gawronski and colleagues’ proposal regarding multilayer neural networks departs from the basic claim that I am evaluating here, which is that implicit attitudes are mere associations. So if their interpretation of this kind of experiment were to be vindicated, it wouldn’t entail that implicit attitudes are mere associations.
cognitive elements, such as teaching the agent to replace problematic thoughts with
helpful ones. This looks to be a case of content-driven change.17
Of course, it’s difficult, if not impossible, to tell what’s happening on a case-
by-case basis. Apparent content-driven change, for example, can sometimes be
explained using associative tools. This is possible, for example, in cases in which
implicit attitudes appear to demonstrate “balance” and “cognitive dissonance”
effects. The thought “I am a good person” (or a GOOD–ME association), paired with
the thought “Bad people do bad things” (or a BAD PEOPLE–BAD THINGS association),
can shift one’s assessment of something one just did from bad to good, thus
“balancing” one’s identity concept with a positive self-assessment. This sort of effect
is found in implicit attitudes (Greenwald et al., 2002). Mandelbaum (2015b) argues
that this sort of effect is evidence that implicit attitudes cannot be associatively
structured, since it appears that these are content-driven transitions. But Anthony
Greenwald and colleagues (2002) offer a model purporting to explain precisely
these balance effects in implicit cognition using associative principles. More evi-
dence is needed to sort through these interpretations, particularly given the reliance
of each interpretation on behavioral measures, which are only proxies for changes in
the attitudes themselves.
That implicit attitudes appear to change in both content-driven and associative
ways is suggestive of my claim that these states are sui generis. All I hope to have
shown so far, though, is that these states are not mere associations, in the strong
sense that they are implemented exclusively by associative structures.18 Implicit
attitudes are sensitive to the meaning of words and images. Does this mean that
they are sensitive to the meaning of words and images in the way that beliefs
paradigmatically are?
3. Belief
In all of the cases I’ve discussed, it’s clear that agents have many beliefs about what
they’re doing that are relevant to explaining their actions. Benes believes that HIV is
carried in blood; the skywalker believes that she really is suspended over the Grand
Canyon; athletes believe that they are trying to score goals and hit winners; and
17. See Chapter 7 for more on cognitive therapies. Of course, acrophobia is complex, involving not only implicit attitudes, but also explicit reasoning. One possibility is that the counterconditioning involved in exposure therapy shifts the agent’s implicit attitude, while the cognitive elements of therapy affect the agent’s beliefs. This leaves open the possibility that the agent’s implicit attitude is functioning in a merely associative way, that is, in response to counterconditioning alone and not in response to the meaning of one’s desired thoughts about heights. More research is needed in order to learn how these kinds of interventions actually work, as I mentioned in footnote 16 as well. See also Chapter 6.
18. I give more detail in §3 about hypothesized conditions under which changes in attitudes are in fact content-driven changes.
so on. On closer inspection, though, in many of these cases, agents seem to lack
crucial beliefs about what they’re doing, or even seem to have mutually contradic-
tory beliefs. Ask the museumgoer what the optimal distance from which to view an
8′ × 18′ painting is, and your question will likely be met with a puzzled stare.19 And
Benes’s and the skywalker’s behaviors seem to conflict with other of their beliefs.
These agents seem to believe one thing—that my own blood can’t infect me with
a disease I already have, or that I am safe on the glass platform—but their behav-
ior suggests otherwise. These cases raise questions about belief-attribution—that
is, about determining what agents really believe—and questions about belief-
attribution depend in turn on questions about what beliefs are. So one way to deter-
mine whether implicit attitudes are beliefs is to consider whether the kinds of action
in which they are implicated are explainable in terms of the agents’ beliefs.
On the one hand, beliefs are something like what one takes to be true of the
world.20 On the other, beliefs are also thought to guide action, together with one’s
desires and ends. Belief-attribution in the case of spontaneous and impulsive action
becomes tricky on the basis of these two roles that states of belief are traditionally
thought to play in the governance of thought and action. Many people can relate to
the feeling of having beliefs but failing to live by them, particularly when they act
spontaneously or impulsively. One potential explanation for this is that beliefs are
truth-taking but not necessarily action-guiding. This explanation would be consistent
with a truth-taking view (a), given the following interpretive options in cases of
apparent belief–behavior discord:
(a) A truth-taking view, which attributes beliefs on the basis of agents’ reflective
judgments and avowals (Gendler, 2008a,b; Zimmerman, 2007; Brownstein
and Madva, 2012b)21
(b) An action-guiding view, which attributes beliefs on the basis of agents’ sponta-
neous actions and emotions (Hunter, 2011)
(c) A context-relative view, which takes both judgment and action to be relevant
to belief-attribution, and attributes to agents beliefs that vary across contexts
(Rowbottom, 2007)
(d) A contradictory-belief view, which takes both judgment and action to be inde-
pendently sufficient for belief-attribution, and attributes to agents contradictory
19. My point is not that beliefs must be articulable, but that an agent lacking an articulable answer to a question like this offers prima facie reason to suspect that the agent lacks the relevant belief.
20. See Gilbert (1991) for a psychological discussion of belief. See Schwitzgebel (2006/2010) for a review of contemporary philosophical approaches to belief. The following presentation and some of the analysis of interpretive options for belief-attribution are adapted from Brownstein and Madva (2012b).
21. By judgment, I mean what the agent takes to be true. While judgments are typically tied to avowals, I do not define judgment (or belief) in terms of avowals. See footnote 19.
beliefs (Egan, 2008; Gertler, 2011; Huddleston, 2012; Huebner, 2009; Muller
and Bashour, 2011)
(e) An indeterminacy view, which takes neither judgment nor action to be independently
sufficient, and attributes to agents no determinate belief at all, but rather
some “in-between” state.22
Each of these views fits more naturally with some cases than others, but (a), the truth-
taking view, which attributes belief on the basis of agents’ reflective judgments and
avowals, outperforms the alternatives in paradigmatic cases. First consider (b)–(e) in
the context of the skywalker case.
Proponents of (b), the action-guiding view, which attributes belief on the basis of
agents’ spontaneous actions and emotions, would have to argue that the skywalker
merely professed, wished, or imagined the platform to be safe. But in this case, if the
skywalker simply professed, wished, or imagined the platform to be safe, and thus failed
to believe that the platform was safe, she would have to be clinically ill to decide to walk
on it. Harboring any genuine doubt about the platform’s safety would keep most people
from going anywhere near it. Perhaps some agents venture onto the Skywalk in order to
look tough or to impress a date. In this case, perhaps the agent does indeed believe that
the platform is only probably safe and yet walks onto it anyway, due to the combination
of some other set of beliefs and desires. But this explanation can’t be generalized.23
Are the skywalker’s beliefs just (c) unstable across contexts? While it is surely the
case that agents’ beliefs sometimes flip-flop over time, the skywalker seems to treat
the platform as both safe and unsafe in the same context. Perhaps, then, she (d) both
believes and disbelieves that the Skywalk is safe. But in attributing contradictory
beliefs to her in this case, one seems to run up against Moore’s paradox, in the sense
that one cannot occurrently endorse and deny a proposition.24 Is the skywalker then
22
This sketch of the possible responses is drawn from Eric Schwitzgebel (2010), who points out
that a similar array of interpretive options arises in the literature on self-deception (see Deweese-Boyd,
2006/2008). Also see Gendler (2008a,b) for some discussion of historical predecessors of these
contemporary views.
23
See Gendler (2008a, 654–656). Another possibility, consistent with the action-guiding view, is
that the skywalker believes both that the platform is safe and that the platform is scary to experience.
But this seems to posit unnecessary extra beliefs in the skywalker’s mental economy. The skywalker’s
fear and trembling do not seem to be a reaction to a belief that being on the platform is scary. Rather,
they seem to be a reaction to being on the platform. In other words, it seems likely that the skywalker
does indeed believe that the platform is scary to experience, but this belief seems to be a result of the
skywalker’s fear-inducing experience, not a cause of it. Thanks to Lacey Davidson for suggesting this
possibility.
24
I take slight liberty with Moore’s paradox, which more accurately identifies a problem with asserting P and asserting disbelief in P. I take it that at least one reason this is problematic is that these assertions suggest that the agent occurrently believes and disbelieves the same thing. And this causes problems for belief-attribution; it seems impossible to know what the agent believes or it seems that the agent doesn’t have a relevant belief. One might avoid Moore’s paradox by positing unconscious contradictory beliefs. See below for unconscious belief approaches. But see also Huddleston (2012) and Muller and Bashour (2011).
25
For more on the indeterminacy view (e), see §4. See also Zimmerman (2007, 73–75).
26
Christopher Peacocke (2004, 254–257) similarly endorses what he calls the “belief-independence” of emotional responses, citing Gareth Evans’s (1982, 123–124) discussion of the belief-independence of perception (evident in perceptual illusions like the Müller-Lyer illusion). While Peacocke (2004) seems to defend (a), the truth-taking view of belief, he is sometimes cited as a defender of (b), an action-guiding view, because of his (1999, 242–243) discussion of a case akin to aversive racism. It is natural to interpret Peacocke’s considered position as privileging the role of judgment in belief attribution, while acknowledging that in some cases so many of an agent’s other decisions and actions may fail to cohere with her reflective judgments that it would be wrong to attribute the relevant belief to her.
Implicit Attitudes and the Architecture of the Mind 75
We should not confuse mere seemings with beliefs. Even if one knows that
the two lines in a Müller-Lyer figure are of the same length, it will still seem
to one that they differ in length. And as Bealer (1993) has pointed out, the
same point applies not only to perceptual but also to intellectual seemings,
[sic] it can still seem to one that the naïve axiom of set theory is true even
though one does not believe that it is true, because one knows that it leads
to a contradiction. (2011, 124)
Further reasons to support the truth-taking view of belief stem from common
reactions to dispositionally muddled agents.27 Although we might judge the sky-
walker to be phobic or lacking in (some ideal form of) self-control, we would not
impugn her with ignorance or irrationality. The skywalker is met with a different
kind of reaction than one who both believes and disbelieves P.
The same considerations apply to other relevant cases. Proponents of the action-
guiding view (b) with respect to cases of implicit bias would have to argue that an
agent’s egalitarian values aren’t really reflective of her beliefs. Perhaps in some cases
this is true. Perhaps some agents just say that they believe, for instance, that men and
women are equally qualified for a lab-manager position, but don’t really believe it.
But in many cases this is implausible. Activists and academics who have spent their
entire careers fighting for gender and racial equity have implicit biases. These agents
clearly genuinely believe in egalitarianism (broadly understood). As in the case
of the skywalker, these genuine beliefs also guide agents’ intentional actions. One
might, for instance, attend diversity trainings or read feminist philosophy. Similarly,
the concerns I raised earlier about the context-relative (c), contradictory-belief
(d), and indeterminacy (e) views obtain here too, regardless of whether the target
behavior is the skywalker’s trembling or the biased agent’s prejudiced evaluations
of CVs. This leaves the truth-taking view (a) again. Here, too, the way in which the
agent in question reacts to information is crucial. Paradigmatic implicit biases are
remarkably unaffected by evidence that contradicts them (see discussion below of
Tal Moran and Yoav Bar-Anan [2013] and Xiaoqing Hu et al. [2017]). Just as in the
skywalker’s case, implicit biases are yoked to how things seem to an agent, regardless
of what the agent judges to be true.
But this is too quick, one might argue, given the evidence that beliefs can be
unconscious, automatic, and unresponsive to an agent’s reflective attitudes and
deliberations (Fodor, 1983; Gilbert, 1991; Egan, 2008, 2011; Huebner, 2009;
Mandelbaum 2011). This evidence suggests a picture of “fragmented” or “compart-
mentalized” belief that supports the contradictory-belief view (d) in the kinds of
cases under discussion. Andy Egan offers a helpful sense of what a mind full of frag-
mented beliefs looks like. “Imagine two machines,” he suggests.
Both do some representing, and both do some acting on the basis of their
representations. One is a very fast, very powerful machine. It keeps all
of its stored information available all of the time, by just having one big,
master representation of the world, which it consults in planning all of its
27
See Zimmerman (2007).
A fragmented mind is like the second machine, and “we are, of course, machines of
the second kind,” Egan writes. If this is correct, then perhaps implicit attitudes may
be just one kind of belief, which guide particular bits of behavior but fail to respond
to the rest of what the agent knows and believes.
Our minds are like the second machine, but this doesn’t necessarily support a
contradictory-belief (d) interpretation of the present cases (i.e., that paradigmatic
implicit attitudes are beliefs). As I’ve presented them, implicit attitudes are belief-
like. They encode risk, reward, and value. They update in the face of an agent’s chang-
ing experiences and have the potential to provide normative guidance of action (see
§6). And they reflect some connection to an agent’s practical concerns (i.e., her
goals and cares). However, beliefs as such have additional properties.28 Most central
among these is the way in which beliefs communicate and integrate inferentially
with one another and with other mental states.
28
Railton makes a similar point in arguing that soldiers who are skilled in fearing the right
things—in “picking up” signs of danger—express belief-like states that are not, nevertheless, just
as such beliefs: “Well-attuned fear is not a mere belief about the evaluative landscape, and yet we
can speak of this fear, like belief, as ‘reasonable,’ ‘accurate,’ or ‘justified.’ It thus can form part of
the soldier’s ‘practical intelligence’ or ‘intuitive understanding,’ permitting apt translation from
perception to action, even when conscious deliberation is impossible and no rule can be found”
(2014, 841).
29
By inferences, I mean transitions between mental states in which the agent in some way takes the
content of one state (e.g., a premise) to support the content of another (e.g., a conclusion). I take this
to be roughly consistent with Crispin Wright’s view that inference is “the formation of acceptances for
reasons consisting of other acceptances” (2014, 36). For discussion of a comparatively less demanding
conception of inference than the one I use here, see Buckner (2017).
30
I note the relatively small number of participants in Moran and Bar-Anan’s (2013) study. I take
the conceptual replication of this study in Hu et al. (2017), discussed in the next paragraph, as some
reason for placing confidence in these findings.
conditional statements have the same logical form; they mean the same thing. But
there are cases in which it appears that implicit attitudes treat statements with minor
differences in form like this differently. This is particularly the case in the literature
on “implementation intentions,” or “if–then plans.”31 These sorts of plans appear to
respond to semantically identical but syntactically variable formulations differently.
For example, in a shooter bias scenario, rehearsing the plan “I will always shoot a
person I see with a gun” appears to have different effects on one’s behavior than
does rehearsing the plan “If I see a person with a gun, then I will shoot” (Mendoza
et al., 2010).
These points touch on the ongoing “associative” versus “propositional” debate
in research on implicit social cognition. Findings such as those reported by Moran
and Bar-Anan (2013) and Hu et al. (2017) have been taken to support the associa-
tive view, which is most prominently spelled out in the “Associative-Propositional
Evaluation” model (APE; Gawronski and Bodenhausen, 2006). Meanwhile, research
I discussed in the preceding section, suggesting that implicit attitudes process the
meaning of propositions, has been taken to support the propositional view (e.g.,
Mitchell et al., 2009; De Houwer, 2014). One reason this debate is ongoing is that
it is unclear how to interpret key findings. For example, Kurt Peters and Gawronski
(2011) find that the invalidation of evaluative information doesn’t produce disso-
ciation between implicit and explicit evaluations when the invalidating information
is presented during encoding. This suggests that implicit attitudes are processing
negation in the way that Madva (2016b) suggests they don’t. But when the invali-
dating information is presented after a delay, implicit and explicit attitudes do disso-
ciate. This complicates both the associative and propositional pictures. And it does
so in a way that suggests that neither picture has things quite right.32 In my view, this
supports Levy’s contention that while implicit attitudes integrate into patterns of
inference in some ways, they don’t do so in ordinary and fully belief-like ways.
Alternatively, one might bypass the claim about the difference between infer-
entially promiscuous and inferentially impoverished states by adopting a radically
expansive view of belief.33 The theory of “Spinozan Belief Fixation” (SBF), for exam-
ple, argues that minds like ours automatically believe everything to which they are
exposed (Huebner, 2009; Mandelbaum 2011, 2013, 2014, 2015b). SBF rejects the
claim that agents are capable of evaluating the truth of an idea before believing or
disbelieving it. Rather, as soon as an idea is encountered, it is believed. Mandelbaum
(2014) illustrates SBF vividly: it proposes that one cannot entertain or consider or
imagine or even encounter the proposition that “dogs are made out of paper” with-
out immediately and unavoidably believing that dogs are made out of paper. Inter
alia, SBF provides a doxastic interpretation of implicit biases, for as soon as one
31
See Chapter 7 for more in-depth discussion of implementation intentions.
32
See Gawronski, Brannon, and Bodenhausen (2017) for discussion.
33
What follows in this paragraph is adapted from Brownstein (2015).
encounters, for example, the stereotype that “women are bad at math,” one puta-
tively comes to believe that women are bad at math. The automaticity of believing
according to SBF explains why people are likely to have many contradictory beliefs.
In order to reject P, one must already believe P (Mandelbaum, 2014).
Proponents explain the unintuitive nature of SBF by claiming that most peo-
ple think they have only the beliefs that they consciously know they have. So, for
example, a scientifically-informed person presented with the claim that dinosaurs
and humans walked the earth at the same time will both believe that dinosaurs and
humans walked the earth at the same time and that dinosaurs and humans did not
walk the earth at the same time, but they will only think they have the latter belief
because it is consistent with what they consciously judge to be true.
But this doesn’t go far enough in explaining the unintuitive nature of SBF. At
the end of the day, SBF requires abandoning the entire normative profile of belief.
It scraps the truth-taking role of belief altogether. This in turn means that there is
no justified epistemic difference between believing P and considering, testing, or
rejecting P. For when we reject P, for instance, we come to believe not-P, and accord-
ing to SBF we will have done so just upon encountering or imagining not-P.
SBF asks a lot in terms of shifting the ordinary ways in which we think about
belief and rationality. More familiar conceptions of belief don’t shoulder nearly so
heavy a burden. If one accepts some existing account of belief that preserves belief’s
two principal roles—of action-guiding and truth-taking—then it appears that the
truth-taking view (a) of belief-attribution in the case of spontaneous and impulsive
action is still preferable to the other options. If the truth-taking view is best, then
these actions are not most persuasively explicable in terms of agents’ beliefs, since
the agents’ affective and behavioral reactions in these cases don’t reflect what they
take to be true. The upshot of this is that implicit attitudes are not beliefs.
4. Dispositions
In §3, I cited Eric Schwitzgebel (2010) as a defender of the indeterminacy view.
This view takes neither judgment nor action to be independently sufficient for
belief-attribution. Instead, in cases of conflict between apparent belief and behavior,
this view attributes to agents no determinate belief at all, but rather an “in-between”
state. Schwitzgebel (2010) explicitly calls this state “in-between belief.” This follows
from Schwitzgebel’s (2002, 2010) broader account of attitudes, in the philosophi-
cal sense, as dispositions. All propositional attitudes, on this view—beliefs, inten-
tions, and so on—are broad-based multitrack dispositions. To believe that plums
are good to eat, for example, is to be disposed to have particular thoughts and feel-
ings about eating plums and to behave in particular ways in particular situations.
These thoughts, feelings, and behaviors make up what Schwitzgebel calls a “disposi-
tional profile.” You are said to have an attitude, on this view, when your dispositional
profile matches what Schwitzgebel (2013) calls the relevant folk-psychological stereotype. That is, you are said to have the attitude of believing that plums are good
to eat if you feel, think, and do the things that ordinary people regard as broadly
characteristic of this belief.
This broad dispositional approach to attitudes—particularly to the attitude of
belief—is hotly contested (see, e.g., Ramsey, 1990; Carruthers, 2013). One core
challenge for it is to explain away what seems to be an important disanalogy between
trait-based explanations of action and mental state–based explanations of action.
As Peter Carruthers (2013) points out, traits are explanatory as generalizations. To
say that Fiona returned the money because she is honest is to appeal to something
Fiona would typically do (which perhaps matches a folk-psychological stereotype
for honesty). But mental states are explanatory as token causes (of behavior or
other mental states). To say that Fiona returned the money because she believes
that “honesty is the best policy” is to say that this token belief caused her action.
Schwitzgebel’s broad dispositional approach seems to elide this disanalogy (but for
replies see Schwitzgebel, 2002, 2013).
In §3, I noted several challenges this approach faces specifically in making sense
of cases of conflict between apparent belief and behavior. One challenge is that
the broad dispositional approach seems to defer the problem of belief-attribution
rather than solve it. Another is that it underestimates the intuitive weight of avowals,
for example, in the case of a social justice activist who explicitly decries racism but
nevertheless holds implicit racial biases. It seems appropriate to attribute the belief
that racism is wrong to her (which is not the same as saying that she is blameless; see
Chapters 4 and 5). In addition, because this approach takes folk-psychological ste-
reotypes as foundational for attitude-ascription, it must be silent on cases in which
folk-psychological stereotypes are lacking. This is precisely the case with implicit
attitudes. Our culture lacks a stereotypical pattern of thought, feeling, and action
against which an agent’s dispositional profile could be matched. Earlier I noted that
implicit attitudes are a version of what have historically been called animal spirits,
appetite, passion, association, habit, and spontaneity, but there is no obvious folk-
psychological stereotype for these either.
The problem with the dispositional approach, in this context, is that it ultimately
tells us that we cannot attribute an implicit attitude to an agent, not at least until
ordinary people recognize an implicit attitude to have signature features. But in
some cases, I think, science and philosophy can beat folk psychology in attitude-
ascription. Consider by analogy the question of ascribing beliefs to nonhuman
animals. As Carruthers (2013) puts it, there is good (scientific) reason to think
that apes share some basic concepts with human beings, concepts like grape and
ground. And so there is good reason to think that apes have the capacity to believe
that “there is a grape on the ground.” But my sense is that there is no discernible folk-
psychological stereotype for belief-attribution in apes. Here it seems obvious that
folk psychology is outperformed by science and philosophy for attitude-ascription.
34
Bar-Anan and Nosek use several criteria: “internal consistency, test-retest reliability, sensitivity
to known-groups effects, relations with other indirect measures of the same topic, relations with direct
measures of the same topic, relations with other criterion variables, psychometric qualities of single-
category measurement, ability to detect meaningful variance among people with nonextreme attitudes,
and robustness to the exclusion of misbehaving or well-behaving participants” (2004, 682). See the
Appendix for definitions and discussion.
35
See the Appendix.
5. Alief
If implicit attitudes are not reflexive processes, mere associations, beliefs, or dis-
positional traits, what are they? Perhaps they are what Gendler (2008a,b, 2011,
2012) calls “aliefs.” More primitive than belief, an alief is a relatively inflexible dis-
position to react automatically to an apparent stimulus with certain fixed affective
responses and behavioral inclinations (Gendler 2008b, 557–60). While aliefs are
said to dispose agents to react to stimuli in particular ways, aliefs are not proposed
to be dispositions in the sense of traits of character. Rather, they are proposed to
be a genuine mental state, one that deserves to be included in the taxonomy of the
human mind. In Gendler’s parlance, a firm believer that superstitions are bogus
may yet be an abiding aliever who cowers before black cats and sidewalk cracks. An
agent may be sincerely committed to anti-racist beliefs, but simultaneously harbor
racist aliefs. Indeed, Gendler proposes the concept of alief in order to make sense
of a wide swath of spontaneous and impulsive actions. She goes so far as to argue
that aliefs are primarily responsible for the “moment-by-moment management” of
behavior (2008a, 663). Moreover, she proposes an account of the content of alief
in terms of three tightly co-activating Representational, Affective, and Behavioral
components. This account of the RAB content of alief is clearly closely related to my
description of implicit attitudes.36
I will elaborate on the similarities between my account of implicit attitudes and
alief later. However, I will also argue that implicit attitudes are not aliefs.37
Gendler proposes the notion of alief in order to elucidate a set of cognitive
and behavioral phenomena that appear to be poorly explained by either an
agent’s beliefs (in concert with her desires) or her mere automatic reflexes. She
writes:
We often find ourselves in the following situation. Our beliefs and desires
mandate pursuing behaviour B and abstaining from behaviour A, but we
nonetheless find ourselves acting—or feeling a propensity to act—in A-like
ways. These tendencies towards A-like behaviours are highly recalcitrant,
persisting even in the face of our conscious reaffirmation that B-without-A
36
Note that Gendler uses the term “content” in an admittedly “idiosyncratic way,” referring to the
suite of RAB components of states of alief and leaving open whether aliefs are propositional or concep-
tual, as well as insisting that their content in some sense includes “affective states and behavioral dispo-
sitions” (2008a, 635, n. 4). This is a different usage of the term “content” than I deployed in Chapter 2.
37
Although it would not be far off to say that what I am calling implicit attitudes are aliefs but
that my characterization of alief differs from Gendler’s. In previous work (Brownstein and Madva,
2012a,b), I have presented my view as a revised account of the structure of alief. Ultimately, I think this
is a terminological distinction without much substance. “Alief” is a neologism, after all, and Gendler is
notably provisional in defining it. What matters ultimately is the structure of the state.
Aliefs are, Gendler argues, relations between an agent and a distinctive kind of inten-
tional content, with representational, affective, and behavioral (or RAB) components.
They involve “a cluster of dispositions to entertain simultaneously R-ish thoughts,
experience A, and engage in B” (2008a, 645). These components are associatively
linked and automatically co-activating. On the Skywalk, for instance, the mere percep-
tion of the steep drop “activates a set of affective response patterns (feelings of anxi-
ety) and motor routines (muscle contractions associated with hesitation and retreat)”
(2008a, 640). The RAB content of this alief is something like “Really high up, long long
way down. Not a safe place to be! Get off!” (2008a, 635). Likewise, the sight of feces-
shaped fudge “renders occurrent a belief-discordant alief with the content: ‘dog-feces,
disgusting, refuse-to-eat’ ” (2008a, 641).
Aliefs share an array of features. Gendler writes:
But as formulated, I do not think the concept of alief can fulfill the promise of
integrating belief-desire psychology with the dual-systems framework. One rea-
son for this is based on a concern about dual-systems theorizing. It is hard to find
a list of core features of System 1 states or processes that are not shared by some
or many putative System 2 states or processes. As in Hume’s example of hearing a
knock on the door causing me to believe that there is someone standing outside
it, beliefs can be formed and rendered occurrent automatically, without attention,
and so on.39 This may not be an insurmountable problem for dual-systems theo-
rists.40 Regardless, there is a broader interpretation of Kriegel’s point about inte-
grating the concept of alief with dual-systems theorizing. What is more broadly at
stake, I think, is whether the concept of alief is apt in the full suite of the virtues
and vices of spontaneity. Does alief—as formulated—provide a satisfying account
of the psychology of both poles of spontaneity? Note how this is not an exogenous
demand I am putting to the concept of alief. Rather, it follows from Gendler’s
formulation of the concept. If aliefs are causally implicated in much of moment-
to-moment behavior, then they must be operative in both Gendler-style cases of
38
For reviews of dual-systems theories, see Sloman (1996) and Evans and Frankish (2009). In the
popular press, see also Kahneman (2011).
39
Thanks to an anonymous reviewer for pushing me to clarify this and for reminding me of the
example. See Mandelbaum (2013) for a discussion.
40
Jonathan Evans and Keith Stanovich (2013) respond to counterexamples like this by presenting
the features of System 1/2 as “correlates” that should not be taken as necessary or sufficient. On the
value of setting out a list of features to characterize a mental kind, see Jerry Fodor’s (1983) discussion
of the nine features that he claims characterize modularity. There, Fodor articulates a cluster of traits
that define a type of mental system, rather than a type of mental state. A system, for Fodor, counts as
modular if it shares these features “to some interesting extent.”
Notice that Gendler cases are by and large characterised by their conserva-
tiveness; they are cases where behaviour is guided by habit, often in ways
which subvert or constrain the subject’s openness to possibilities. There is
a class of human behaviours at the opposite extreme: cases where, with-
out the mediation of conscious reasoning or exercise of will, we generate
creative solutions to practical or theoretical problems. Creative processes
involve unexpected, unconventional and fruitful associations between
representations and between representations and actions—without the
subjects generally having access to how that happens. We suggest that both
creative and habitual processes are triggered by an initial representation—
not always specifiable at the personal level and sometimes occurring
beyond conscious will—that leads, through a chain of barely conscious
associations, to various other states, the last of which can be either another
representative state or an action. While in Gendler cases the associations
lead to familiar patterns of thought and behaviour, in creative processes
they lead to unpredictable, novel outcomes. Paradigmatic instances of
those outcomes—jazz improvisations, free dance/speech performances—
are unlikely to derive from some kind of (creative) conceptual represen-
tation, since they exceed in speed and fineness of grain the resources of
conceptual thought. (2012, 797–798)
41
See Lewicki et al. (1987) for a related experiment.
card turns before they can say why they prefer to pick from the good decks. But after
about fifty turns, most participants can say that they prefer the good decks, even if
they aren’t sure why. And, most strikingly, after only about twenty turns, while most
participants do not report having any reason for distinguishing between the decks
(i.e., they don’t feel differently about them and say that they don’t see any difference
between them), most participants do have higher anticipatory skin conductance
responses before picking from the bad decks. This suggests that most participants
have some kind of implicit sensitivity to the difference between the decks before
they have any explicit awareness that there is such a difference. Nagel takes this to
mean that agents’ putative aliefs—represented outwardly by a small change in affect
(and measured by anticipatory skin conductance)—may be, in some cases, more
sensitive to reality than their explicit beliefs.
Gendler concedes in reply to these commentators that alief-like processes may
be implicated in creative and reality-sensitive behavior, and thus seems to allow
room for “intelligent aliefs” (2012, 808). This is not a surprising concession, given
Gendler’s earlier claim that “if alief drives behavior in belief-discordant cases, it
is likely that it drives behavior in belief-concordant cases as well” (2008a, 663).
Gendler understandably emphasizes vivid cases of belief-discordance—skywalkers
and fudge-avoiders and implicitly biased agents—because these are the cases in
which an agent’s aliefs are forced into the light of day. These cases are a device for
the presentation of alief, in other words.
But Gendler’s claim that alief is primarily responsible for the moment-to-moment
management of behavior entails that these states ought to be attuned to changes in
the environment and be implicated in (some forms of) intelligent behavior. While
“conservative” aliefs—like those implicated in the behaviors of skywalkers, fudge-
avoiders, and the like—are pervasive, they don’t dominate our moment-to-moment
lives. Mostly agents more or less cope successfully with the world around them
without constantly doing combat with gross conflicts between their automatic and
reflective dispositions. We move closer to see small objects and step back from large
ones in appropriate contexts; we extend our hands to shake when others offer us
theirs (in some cultures); we frown sympathetically if someone tells us a sad story
and smile encouragingly if the story is a happy one; and so on. Ordinary success-
ful coping in cases like these requires rapid and efficient responses to changes in
the environment—exactly those features of alief that one would expect. If aliefs
are responsible for the moment-to-moment management of behavior, then these
states must be sufficiently flexible in the face of changing circumstances and attuned
to subtle features of the environment to drive creative, reality-sensitive, intelligent
behavior.
The deeper concern is not about Gendler’s emphasis on particular kinds of cases,
that is, cases of norm- and belief-discordant automaticity. The deeper concern is
that the RAB structure of alief cannot make sense of putative intelligent aliefs. It
cannot do so because it lacks a mechanism for dynamic learning and internal feed-
back, in short, what I called “alleviation” in Chapter 2.
Another related concern about Gendler’s formulation of alief stems from what
I will call the “lumpiness” question. Gendler argues that aliefs are a unified men-
tal kind. The RAB components are operative, as she says, “all at once, in a single
alief ” (2008b, 559). But why make the strong claim that alief represents a distinct
kind of psychological state rather than a set of frequently co-occurring appearances
or beliefs, desires, and behaviors?42 Why say, for example, that the skywalker has
a singular occurrent alief—as Gendler does—with content like “Really high up,
long long way down. Not a safe place to be! Get off!” rather than something like
a co-occurring representation of being high up, a feeling of being in danger, and a
response to tremble and perhaps flee? What is the value, as Nagel (2012) asks, of
explaining action in terms of “alief-shaped lumps”?
Gendler’s answer is that aliefs are “lumpy” in a way that reflective states like
beliefs are not. Belief–desire–action trios are composites that can be combined
in any number of ways (in principle). One way to see this is to substitute a differ-
ent belief into a common co-occurrence of a belief, desire, and action (Gendler,
2012). For instance, around noon each day I may experience this belief–desire–
action trio: I believe that cheese is reasonably healthy for me; I desire to eat a
healthy lunch; I make a cheese sandwich. Should I overhear on the radio, how-
ever, just as I am reaching in the fridge, that the kind of cheese I have is defini-
tively linked to an elevated risk for cancer and will soon be banned by the FDA,
and I have credence in the source of the information, it is likely that I’ll avoid
the cheese and eat something else. If the acquisition of a new belief changes
my behavior on the spot like this, then I know that this particular composite
of a belief, desire, and action is, as Gendler puts it, psychologically contingent.
By contrast, no new belief, it seems, will immediately eliminate the skywalker’s
fear and trembling. The skywalker’s representations (“high up!”), feelings (“not
safe!”), and behavior (“get off!”) are too tightly bound, forming a fairly cohesive
lump, as it were.
This reply to the lumpiness question is partly successful. It offers a clear criterion
for marking the alief–belief distinction: the constituent RAB relata of alief auto-
matically co-activate, while ordinary beliefs, feelings, and behavior do not (they are
“psychologically contingent” or, as Gendler also puts it, “fully combinatoric”). In a
related sense, Gendler’s reply to the lumpiness question provides a test of sorts for
determining when this criterion is met. Provide a person with some evidence that
runs contrary to her putative alief and see what happens. If it interrupts the auto-
matic co-activation of the RAB relata, as in the case of the cheese sandwich, then this
was not an alief-guided action.
42
See Nagel (2012), Currie and Ichino (2012), and Doggett (2012).
Implicit Attitudes and the Architecture of the Mind 91
But several open questions remain. The first is how and why the presentation of
“evidence” affects the agent in such a way as to mark the alief–belief divide. On a
broad interpretation, evidence might just refer to any information relevant to the
agent. But on this interpretation, an evidence-insensitive state would seem inher-
ently uncreative and unresponsive to changing environments. It would by defini-
tion be incapable of playing a role in the cases of virtuous spontaneity discussed by
critics like Currie and Ichino (2012) and Nagel (2012). Moreover, it is clear that
the presentation of evidence that contradicts agents’ beliefs sometimes utterly fails
to dislodge their attitude. Something more specific has to be said about the condi-
tions under which evidence does and does not affect the agent’s occurrent mental
states and her related affective and behavioral reactions. Finally, Gendler’s reply to
the lumpiness question relies on the idea that the RAB relata of alief automatically
co-activate. But what properties does the process of co-activation have? Do all puta-
tive aliefs co-activate in the same way?
43
I have in mind here the situation in which a CV is not quite worthy of one evaluation but a bit
too good for another. Cases of ambiguity like this—when one says something like “Hmmm . . . not
this pile . . . but not this other pile . . . hmmm . . . I guess this pile”—are those in which gut feelings and
the like are most evident. These are the situations in which implicit attitudes are most clearly affecting
behavior. Ambiguous situations like this are also those in which measures of implicit attitudes like the
IAT are most effective for making behavioral predictions.
44
I follow Ginsborg’s formulation that agents “make a claim” to primitive normativity. I take
Ginsborg’s idea to be that there is a normative claim implicit in the agent’s behavior. The agent acts as
she does because doing so is, for her, in some sense, appropriate. This is not to say that she endorses
what she does or consciously takes her behavior to be appropriate. Rather, the idea is that her behavior
is in part motivated by a sense of appropriateness, which is akin, I suggest, to the felt sense of rightness
or wrongness embedded in the process of alleviation (see Chapter 2, §4).
45
In this example, it is not just that the child lacks verbal mastery of the concept green. Rather,
the child (by hypothesis) lacks the concept itself (some evidence for which may be that the child can-
not use the word “green” correctly). That is, the child may be able to discriminate between green and
not-green things without having the concept green. Note that this is a controversial claim. See Berger
(2012).
46
See the discussion in Chapter 4 of theories of skill learning. Interestingly, this sense of appropri-
ateness may be found in nonhuman animals, or so suggests Kristen Andrews (2015), who makes use
of Ginsborg’s concept of primitive normativity in discussing what she calls “naive normativity.” Naive
normativity offers agents a specifically social sense of “how we do things around here.” Andrews and
Ginsborg seem to differ on whether, or when, nonhuman animals make claims to primitive or naive
normativity. Ginsborg discusses the case of a parrot that has been trained to say “green” when shown
green things. She argues that the parrot does not make a claim to primitive normativity, because, while
it has had the necessary learning experiences and is suitably motivated to go on in the right way, it
does not, Ginsborg thinks, experience a sense of appropriateness when saying “green” at the right time.
Andrews discusses the case of goslings having imprinted on a human being rather than their biological
mother, and she thinks these animals are in fact likely to experience a sense of appropriateness when
following their adopted “mother” around (in addition to having the right experiences and motivation).
The goslings will appear to feel calm and comforted when the human “mother” is around and upset in
the “mother’s” absence. This empirical question about what parrots and goslings do or don’t experience
provides an interesting route to considering whether or how claims to primitive normativity are shared
by human beings and nonhuman animals.
47
Although it is important to note that these learning experiences can be very brief in some cases.
See, for instance, research on the “minimal group paradigm” (Otten and Moskowitz, 2000; Ashburn-
Nardo et al., 2001).
48
Although, of course, much more would have to be said in order to substantiate this claim. My aim is not to defend Ginsborg’s account of skill learning. Rather, my aim is to show why it is plausible and how it could then play a role in illuminating the peculiar nature of implicit attitudes, as states that neither merely cause behavior nor rationalize it in any full-blooded sense.
Ginsborg’s middle ground also provides a model for the sort of normative guidance
implicit attitudes offer agents via their spontaneous inclinations and dispositions.
As the kinds of cases I’ve discussed show, sometimes our implicit attitudes go
awry from the perspective of moral or rational normativity. Implicit bias illustrates
this, as do the other kinds of cases I discussed in Chapter 1 of impulses and intu-
itions leading to irrational and immoral ends. Other times, our implicit attitudes
are better, but in mundane ways that we barely notice, such as in the distance-
standing and museumgoer cases. And other times still, our implicit attitudes get it
right in spectacular ways, such that they seem to provide action guidance in ways
that our reflective beliefs and values don’t. The cases of spontaneous reactions and
decisions in athletic and artistic domains illustrate this. Here it seems as if agents’
claims to primitive normativity are not just defeasibly warranted. Rather, they seem
to be illustrative of a kind of action at its best. The world-class athlete’s spontane-
ous reactions show the rest of us how to go on in a particular context; the master
painter’s reactions reveal features of perception and emotion we wouldn’t otherwise
experience.
7. Conclusion
I’ve argued in this chapter that states with FTBA components are best understood
as implicit attitudes, which are distinct from reflexes, mere associations, beliefs, dis-
positions, and aliefs. Along the way, and in summary, I’ve described what makes
implicit attitudes unique. By no means have I given a complete account of implicit
attitudes, but I hope to have shown why they deserve to be considered a unified and
distinct kind of mental state. With these points in mind, I now turn to the relation-
ship between these states and agents themselves. In what sense are implicit attitudes
features of agents rather than features of the cognitive and affective components
out of which agents’ minds are built? Are implicit attitudes, and the spontaneous
actions for which they seem to provide guidance, really ours?
PART TWO
SELF
4
In “The View from in Here,” an episode of the This American Life podcast, reporter
Brian Reed (2013) plays recorded conversations between prisoners and prison
guards at the Joseph Harp Correctional Center in Lexington, Oklahoma. The prisoners and guards had both just watched a documentary film critical of the prison
industry in the United States. The conversations were meant to allow prisoners and
guards to reflect on the film. At the center of Reed’s story is a conversation between
Antoine Wells, an inmate serving a nine-year sentence, and Cecil Duly, a guard at
the prison. Their conversation quickly turned from the film itself to personal prob-
lems between Wells and Duly. Wells speaks at length about his perception of being
treated disrespectfully by Duly. He resents being ignored when asking for help,
being told to do meaningless tasks like picking daisies so that he’ll remember “who’s
in charge,” and generally being viewed as if he’s “nothing.” In response, Duly holds
his ground, offering a different interpretation of each of the events Wells describes
and claiming that he never shows disrespect to Wells. Duly explicitly defends the
notion that Wells, as a prisoner, has “no say over anything.” That’s just what being in
prison means. The conversation is heart-wrenching, in part because Wells and Duly
speak past one another so profoundly, but also because both Wells and Duly see that
Wells’ experiences will do him no good. They both recognize that Wells is likely to
leave prison bereft of resources and skills, angry and indignant toward the society
that locked him up, and anything but “rehabilitated.”1
The episode takes a surprising turn when Reed returns to the prison several
months later to see if anything has changed between Wells and Duly as a result of
their conversation. Duly, the guard, states unequivocally that nothing has changed.
He stands by his actions; he hasn’t thought about the conversation with Wells much
at all; he feels no different; and he continues to claim that he treats all of the pris-
oners fairly and equally. Wells, however, feels that quite a lot has changed since he
expressed his feelings to Duly. And Wells claims that it is Duly who has changed.
1
To his credit, Lieutenant Duly also laments that much of the contemporary prison system in the
United States is a for-profit industry, seemingly incentivized to fail to rehabilitate prisoners.
102 Self
Duly is less dismissive, he says. Duly takes the time to answer questions. He looks at
Wells more like a human being.
If we accept both of their testimonies at face value, it seems that Duly has begun
to treat Wells kindly on the basis of inclinations that run contrary to Duly’s explicit
beliefs. These inclinations seem to unfold outside the ambit of Duly’s intentions
and self-awareness.2 In this sense, Duly’s case resembles those discussed by Arpaly
(2004), in particular that of Huckleberry Finn. On Arpaly’s reading, Huck’s is a case
of “inverse akrasia,” in which an agent does the right thing in spite of his all-things-
considered best judgment. Huck’s dilemma is whether to turn in his friend Jim, an
escaped slave. On the one hand, Huck believes that an escaped slave amounts to a
stolen piece of property and that stealing is wrong. On the other, Huck is loyal to his
friend. He concludes from his (less than ideal) deliberation that he ought to turn
Jim in. But Huck finds himself unable to do it.3
While Huck is fictional, there is good reason to think that he is not, in the rel-
evant respects, unusual. Consider, for example, one of the landmark studies in
research on intergroup prejudice. Richard LaPiere published “Attitudes vs. Actions”
in 1934, a paper about the two years he spent visiting 251 hotels and restaurants
with a Chinese couple. LaPiere and the couple would enter establishments asking for accommodations, and were turned down only once. However,
when LaPiere followed up the visits with phone calls to the hotels and restaurants,
asking, “Will you accept members of the Chinese race in your establishment?” 92%
of the 128 responses were “no.” LaPiere’s study is usually discussed in terms of the
gap between attitudes and actions, as well as the importance of controlled labora-
tory studies.4 (For instance, it is not clear whether LaPiere spoke to the same per-
son at each establishment when he visited and when he phoned subsequently.) But
what is perhaps most striking about LaPiere’s study is that the hoteliers and restaura-
teurs seemed to act ethically when LaPiere visited, despite their explicit prejudiced
beliefs. That is, their explicit beliefs were prejudiced but their behavior (in one context, at least) was comparatively egalitarian. It seems that when these people were
put on the spot, with the Chinese couple standing in front of them, they reacted
in a comparatively more egalitarian way, despite their beliefs to the contrary.
2
As I say, this depends on our accepting Duly’s testimony—that he denies feeling any differently
toward Wells—at face value. It is possible that Duly is misleading Reed, the interviewer. It is also quite
possible that Duly is aware of his changes in feeling and behavior, and has intended these changes in an
explicit way, despite his avowals to the contrary. Perhaps Duly is conflicted and just doesn’t know what
to say. This interpretation also runs contrary to his testimony, but it doesn’t quite entail his misleading
the interviewer in the same way. The story of Wells and Duly is only an anecdote, meant for introducing
the topic of this chapter, and it is surely impossible to tell what is really driving the relevant changes.
See §4 for further discussion.
3
The case of Huckleberry Finn was discussed in this context earlier, most notably by Jonathan
Bennett (1974). See Chapter 6.
4
See, e.g., the discussion in Banaji and Greenwald (2013).
Caring, Implicit Attitudes, and the Self 103
These are all cases in which agents’ spontaneous behavior seems to be praise-
worthy, relative, at least, to their explicit beliefs. But are agents’ spontaneous reac-
tions really praiseworthy in these cases? Praise typically attaches to agents, and many
people associate agents with their reflective beliefs and judgments.
Similar questions arise with respect to the many vice cases in which agents’ spon-
taneous inclinations seem to be blameworthy. In these cases it is equally difficult to
assess the person who acts. Does the skywalker’s fear and trembling tell us anything
about her? Are judges who are harsher before lunch (Chapter 1) on the line for
the effects of their glucose levels on their judgment, particularly assuming that they
don’t know or believe that their hunger affects them? These questions are perhaps
clearest in the case of implicit bias. Philosophers who think of agential assessment
in terms of a person’s control over or consciousness of her attitudes have rightfully
wondered whether implicit biases are things that happen to people, rather than fea-
tures of people (as discussed later).
Across both virtue and vice cases, spontaneity has the potential to give rise to
actions that seem “unowned.” By this I mean actions that are lacking (some or all
of) the usual trappings of agency, such as explicit awareness of what one is doing or
one’s reasons for doing it; direct control over what one is doing; responsiveness to
reasons as such; or perceived consistency of one’s behavior with one’s principles or
values. But these are actions nevertheless, in the sense that they are not mere hap-
penings. While agents may be passive in an important sense when acting spontane-
ously, they are not thereby necessarily victims of forces acting upon them (from
either “outside” or “inside” their own bodies and minds).
The central claim of this chapter is that spontaneous actions can be, in central
cases, “attributable” to agents, by which I mean that they reflect upon the charac-
ter of those agents.5 Attributability licenses (in principle) what some theorists call
“aretaic” appraisals. These are evaluations of an action in light of an agent’s charac-
ter or morally significant traits. “Good,” “bad,” “honorable,” and “childish” are all
aretaic appraisals. I distinguish attributability—which licenses aretaic appraisals—
from “accountability,” which I take to denote the set of ways in which we hold one
another responsible for actions.6 Accountability may license punishment or reward,
for example. I develop an account of attributability for spontaneous action in this
chapter, in virtue of the place of implicit attitudes in our psychological economy.
5
Note that I use the terms “character” and “identity” interchangeably. Note also that while some
theorists focus on attributability for attitudes as such, my focus is on attributability for actions.
However, as will be clear, I think that judgments about attributability for actions are, and should be,
driven primarily by considerations of the nature of the relevant agent’s relevant attitudes. This stands in
contrast to questions about accountability, or holding responsible, as I discuss here and in more depth
in Chapter 5.
6
For related discussion on distinguishing attributability from accountability for implicit bias, see
Zheng (2016).
In the next chapter, I situate this account with respect to broader questions about
accountability and holding responsible.
1. Attributability
At its core, attributability is the idea that some actions render the person, and not
just the action itself, evaluable. Imagine that I inadvertently step on your toes in a
crowded subway car. It’s the New York City subway, which means the tracks are old,
and the cars shake and bump and turn unpredictably. I’m holding onto the overhead
rail, and when the car jostles me, I shift my feet to keep my balance, and in the pro-
cess step on your toes. Chances are that you will be upset that I’ve stepped on your
toes, because something bad has happened. And, perhaps momentarily, you will be
upset at me, not just unhappy in general, since I’m the cause of your pain. Had I not
been on the train this day, your toes wouldn’t be throbbing. Or perhaps you have
negative feelings about people on the subway in general, because you think they’re
clumsy or self-absorbed and now I’ve just confirmed your beliefs. None of these
responses, though, is really about me. The first is about me as nothing more than the
causal antecedent of your pain; you’d be equally angry if an inanimate object fell on
your foot. The second is about me as an anonymous member of a general class you
dislike (i.e., subway riders). In neither case would your reaction signal taking the
outcome as reflecting well or poorly on my personality or character. It is doubtful,
for example, that you would think that I’m a thoughtless or selfish person, if you
granted that my stepping on your toes was really an accident. A contrasting case
makes this clear too. Imagine that I stomped on your toes because I felt like you
weren’t listening to me in a conversation. In this case, you might understandably
feel as if you know something about what kind of person I am.7 You might think I’m
a violent and childish person. The charge of childishness is telling, since this kind of
behavior isn’t unusual in children, and it carries a different kind of moral assessment
in their case. A five-year-old who stomps on his teacher’s toes because she’s ignoring
him is behaving badly,8 of course, but it is far less clear what we should think about
what kind of a person the child is compared with the adult who stomps on your toes
on purpose.
Back to the subway for a second: it is also not hard to imagine cases in which
something significant about my character is open for evaluation, even if I’ve done
nothing more than inadvertently step on your toes. If I aggressively push my way
toward my favorite spot near the window on the subway and crunch your toes on
the way, then something about my character—something about who I am as an
agent in the world—gets expressed, even if I didn’t mean to step on your toes. In
7
You might know about what Peter Strawson (1962) called my “quality of will.”
this last case, I am open for evaluation, even if other putatively “exculpating condi-
tions” obtain.9 In addition to lacking an intention to step on your toes, I might also
not know that I’ve stepped on your toes, and I might even have tried hard to avoid
your toes while I raced to my favorite spot. Regardless, what I do seems to express
something important about me. I’m not just klutzy, which is a kind of “shallow” or
“grading” evaluation (Smart, 1961). Rather, you would be quite right to think “what
a jerk!” as I push by.10
Theorists have developed the notion of attributability to make sense of the
intuition that agents themselves are open to evaluation even in cases where
responsibility-exculpating conditions might obtain. Here’s one standard example:
Loaded Gun: Fifteen-year-old Luke’s father keeps a shotgun in the house
for hunting, and last fall started to teach Luke how to shoot with it. Luke
is proud of the gun and takes it out to show his friend. Since the hunting
season has been over for months, it doesn’t occur to Luke that the gun
might be loaded. In showing the gun to his friend, unfortunately he pulls
the trigger while the gun is aimed at his friend’s foot, blowing the friend’s
foot off.11
To some people at least, there is a sense in which Luke himself is on the line, despite
the fact that he didn’t intend to shoot his friend and didn’t know that the shotgun
was loaded. Luke seems—to me at least—callow or reckless. This sense is, of course,
strongly mitigated by the facts of the case. Luke would seem monstrous if he meant
to shoot his friend in the foot, and he would seem less reckless if he were five rather
than fifteen. But in Luke’s case, as it stands, the key facts don’t seem to exculpate all
the way down. It should have occurred to Luke that the gun might be loaded, and his
failure to think this reflects poorly on him. Some kind of aretaic appraisal of Luke
seems appropriate.
Of course, I’m trading in intuitions here, and others may feel differently about
Luke. There is a large body of research in experimental moral psychology testing the
9
I borrow the term “exculpating conditions” from Natalia Washington and Daniel Kelly (2016).
10
I say this loosely, though, without distinguishing between the thought “What a jerk!” and “That
guy is acting like a jerk!” The latter reaction leaves open the possibility that the way in which this action
reflects on my character might not settle the question of who I really am, at the end of the day. As I dis-
cuss in Chapter 5, in relation to the idea of the “deep self,” I am skeptical about summative judgments of
character like this. But I recognize that some assessments of character are deeper than others. Perhaps
I am having an extraordinarily bad day and am indeed acting jerky, but I don’t usually act this way. As
I discuss in §3, dispositional patterns of thought, feeling, and behavior are an important element of
characterological assessment.
11
This version of Loaded Gun is found in Holly Smith (2011). Smith’s version is an adaptation
from George Sher (2009). For related discussions, see Smith (2005, 2008, 2012), Watson (1996),
Hieronymi (2008), and Scanlon (1998).
conditions under which people do and don’t have these kinds of intuitions. I discuss
some of this research in the next chapter. For now, all I need is that there exists at
least one case in which an action seems rightly attributable to an agent even if the
action is nonconscious in some sense (e.g., “failure to notice” cases), nonvoluntary,
nontracing (i.e., the appropriateness of thinking evaluatively of the person doesn’t
trace back to some previous intentional or voluntary act or omission), or otherwise
divergent from an agent’s will.
The difficulty for any theory of attributability is determining that on the basis of
which actions are attributable to agents. What distinguishes those actions that are
attributable to agents from those that aren’t? Influential proposals have focused on
those actions that reflect an agent’s reflective endorsements (Frankfurt, 1988); on
actions that reflect an agent’s narrative identity or life story (Schechtman, 1996);
and on actions that stem from an agent’s rational judgments (Scanlon, 1998; Smith,
2005, 2008, 2012).12 However, a case like Loaded Gun seems to show how an action
can render an agent open to evaluative assessment even if that action conflicts with
the agent’s reflective endorsements, narrative sense of self, and rational judgments.
Similarly, something about the “moral personality” (Hieronymi, 2008) of agents like
Duly, Huck, and the skywalker seems to be on the line, even if these agents’ actions
don’t have much to do with their reflective endorsements, narrative sense of self,
and rational judgments, and as such aren’t “owned” in the full sense. I’ll say more
about these other approaches to attributability in Chapter 5, although nowhere will
I offer a full critical assessment of them. Rather, in this chapter I’ll lay out my own
view, and in the next I will situate it with respect to other prominent theories.
12
Properly speaking these are theories of “identification.” Identification names the process through
which some trait, experience, or action is rendered one’s own (Shoemaker, 2012, 117). Naming the
conditions of identification has proved difficult to say the least; it has been called a “holy grail” in the
philosophy of action (Shoemaker, 2012, 123). I focus on the conditions of attributability rather than
identification because of the line I want to draw between attributability and accountability. That is, I am
making a case for how certain “minimal” (unintentional, nontracing, etc.) actions reflect on agents’
character; I am not making a case for the conditions of full ownership of action or for what it means to
be responsible for an action.
13
Of course, one can provide a theory of caring itself, with no concern for questions about attrib-
utability. My claim is more narrowly focused on the relationship between caring and attributability.
me that the ice cubes in my drink are square rather than round, even if, reflectively,
I think this is a trivial thing to care about. Or I might care about whether I get to the
next level on my favorite phone app, all the while wishing that I spent less time star-
ing at a little screen. Both the ice cubes and the phone app might even matter to me
if I’m focally unaware of caring about them. All I might notice is that something feels
off about my drink or after I’ve put the phone down. It might become clear to me
later that it was the ice cubes or the app that were bothering me, or it might not ever
become clear to me that I care about these things. Conversely, sometimes we don’t
care about things, even when we think we should. Despite believing that I should
probably care about the free-will debate, usually it just makes me feel . . . meh.14
It is crucial to note that I am referring to what Agnieszka Jaworska (2007b)
calls ontological, rather than psychological, features of caring. In the psychologi-
cal sense, what one cares about are the things, people, places, ideas, and so on that
one perceives oneself as valuing. More specifically, the objects of one’s cares in the
psychological sense are the things that one perceives as one’s own. Cares in the psy-
chological sense track the agent’s own perspective. In the ontological sense, cares
are, by definition, just those attitudes (in the broad sense) that belong to an agent, in
contrast to the rest of the “sea of happenings” in her psychic life (Jaworska, 2007b,
531; Sripada, 2015).15 One can be wrong about one’s cares in the ontological sense,
and one can discover what one cares about too (sometimes to one’s own surprise).
For example, I might discover that I have, for a long time, cared about whether other
people think that I dress stylishly. All along I might have thought that I don’t care
about such a trivial thing. But on days that people complimented my clothes, or
even merely cast approving glances at me, I felt peppier and more confident, with-
out consciously recognizing it. Both the content of my care (looking stylish) and
the emotions connected to it (e.g., feeling confident) might have escaped my con-
scious awareness. (Recall the point I made in Chapter 2, that emotion is often suf-
ficiently low-level as to escape one’s own focal awareness.) The care-based view of
attributability is concerned with cares in the ontological sense. Of course, what one cares about in the ontological sense often coincides with
what one cares about in the psychological sense. But this is not a requirement. (In
14
Might believing that you should care about φ suffice for showing that you care about φ, per-
haps just a little? This is entirely possible. As I note later, my view is ecumenical about various routes
to attributability. Believing that φ is good might very well be sufficient for demonstrating that I care
about φ. Similarly, avowing that φ is good might also be sufficient for demonstrating that I care about
φ. Indeed, I suspect that beliefs and avowals about what one ought to care about often do reflect what
one cares about. I intend here only to point to the possibility of there being at least one case of a person
believing that she ought to care about φ but failing to care about φ.
15
The psychological sense of caring is perhaps more clearly labeled the “subjective” sense, but for
the sake of consistency I follow Jaworska’s (2007b) usage. Note also that the psychological–ontological
distinction is not between two kinds of cares, but rather between two senses in which one might refer
to cares.
the next chapter, I discuss in more detail the relationship between cares in the onto-
logical sense and the agent’s own perspective.)
All things considered, a care-based theory of attributability is unusually inclusive,
in the sense that, by its lights, a great many actions will end up reflecting on agents,
potentially including very minor actions, like the museumgoer’s shifting around to
get the best view of the painting, as well as actions that conflict with an agent’s reflec-
tive endorsements, narrative identities, and rational judgments.16 One virtue of this
inclusiveness, as some care theorists have put it, is that cares are conceptually well
suited to make sense of actions “from the margins,” for example, the actions of small
children and Alzheimer’s patients (Jaworska, 1999, 2007a; Shoemaker, 2015) or
agents with severe phobias or overwhelming emotions (Shoemaker, 2011). Small
children, for example, may not be able to make rational judgments about the value
of their actions, but they may nevertheless care about things. These sorts of actions
don’t easily fit into familiar philosophical ways of conceptualizing attributability.
The hard question is what it means to care about something. A persuasive answer
to this question is that when something matters to you, you feel certain ways about it.
According to the affective account of caring (e.g., Shoemaker, 2003, 2011; Jaworska,
1999, 2007a,b; Sripada, 2015), to care about something is to be disposed to feel cer-
tain complex emotions in conjunction with the object of one’s cares. When we care
about something, we feel “with” it. As David Shoemaker (2003, 94) puts it, in caring
about something, we are emotionally “tethered” to it. To care about your firstborn,
your dog, the local neighborhood park, or the history of urban beekeeping is to be
psychologically open to the fortunes of that person, animal, place, or activity.
Caring in this sense—of being emotionally tethered to something because
it matters to you—is inherently dispositional. For example, I can be angry about
the destruction of the Amazon rainforest—in the sense that its fate matters to
me—without experiencing the physiological correlates of anger at any particular
moment. To care about the Amazon involves the disposition to feel certain things
16. To clarify, the care-based theory of attributability that I endorse is unusually inclusive. Other care theorists' views tend to be more restrictive. For example, Arpaly and Timothy Schroeder (1999) argue that "well-integrated" beliefs and desires—those beliefs and desires that reflect upon what they call the "whole self"—must be both "deep" (i.e., deeply held and deeply rooted) and unopposed to other deep beliefs and desires. I address the issue of opposition in Chapter 5; I'll argue that we can in fact have conflicting cares. The point I am making here can be fleshed out in terms of depth. Actions that are attributable to agents need not stem from "deep" states, on my view. Theorists like Arpaly and Schroeder are focused only on morally relevant actions and attitudes, however, whereas I am focused on actions that reflect character in general. It is also crucial to keep the scope of my claim in mind. Attributability licenses aretaic attitudes—reactions stemming from an evaluation of a person's character—but it does not necessarily license actively blaming or punishing a person (in the case of bad actions). Keeping in mind that attributability licenses aretaic attitudes alone should help to render the inclusiveness of my claim less shocking. My view can be understood as a reflection of the way in which we perceive and evaluate each other's character constantly in daily life.
Caring, Implicit Attitudes, and the Self 109
at certain moments, like anger when you are reminded of the Amazon’s ongoing
destruction. (This isn’t to say that the disposition to feel anger alone is sufficient for
caring, as I discuss later.)
Caring in this sense is also a graded notion. Imagine that you and I both have pet
St. Bernards, Cupcake and Dumptruck. If you tell me that Cupcake has injured her
foot, I might feel bad for a moment, and perhaps I will give her an extra treat or rec-
ommend a good vet. Maybe I care about Cupcake, but not that much. Or perhaps
I care about you, and recognize that you care about Cupcake. If Dumptruck hurts his
foot in the same way, however, my reaction will be stronger, more substantive, and
longer-lasting. It seems that I care more about Dumptruck than I do about Cupcake.
But what, really, are cares, such that feelings can constitute them? I follow
Jaworska’s (2007b) Neo-Lockean approach, which builds upon work by Michael
Bratman (2000). The core of this approach is that mental states or attitudes that
belong to an agent—those that are "internal" to her—must support the cohesion
of the agent’s identity over time. Bratman famously argues that plans and policies
play this role in our psychological economy. But it is possible that more unreflec-
tive attitudes play this role too. As Jaworska shows, Bratman’s elaboration of how
plans and policies support identity and cohesion over time is applicable to more
unreflective states, like certain emotions. Bratman argues that internal states display
continuity over time and referential connectedness. Continuity distinguishes, for
example, a fleeting desire from a desire that plays a more substantive role in one’s
mind. The concept of continuity also helps to distinguish one and the same state
occurring at two points in time from two distinct states occurring at two points
in time. In order to be identity-bearing, my desire for ice cream today must be the
same desire as my desire for ice cream tomorrow. Referential connectedness is a
relatively more demanding condition. Connections are referential links between
mental states or between attitudes and actions. A referential link is a conceptual
pointer. The content of my desire for ice cream, for example, points to the action
I take, and the action I take is an execution of this particular desire. The content of
the state and the action are co-referring. This clarifies that it is not enough for inter-
nality that a state like a desire causes an action; the content of my desire—that it is a
desire for ice cream—must point, through a conceptual connection, to my decision
to go to the store.
Jaworska argues that some, but not all, emotions display continuity and connec-
tion. Grief, for example, meets these conditions. On her view, grief involves painful
thoughts, a tendency to imagine counterfactuals, disturbed sleep, and so on. These
patterns display continuity and, by pointing referentially to the person or thing for
which one grieves, display connections as well. “In this way,” Jaworska writes, “emo-
tions are constituted by conceptual connections, a kind of conceptual convergence,
linking disparate elements of a person’s psychology occurring at different points in
the history of her mental life” (2007b, 553). Other emotions, like a sudden pang of
fear, do not display continuity and connection, on her view.
On this rendering of the affective account of caring, the thing that you grieve for
is something you care about, while whatever made you spontaneously shrink in fear
is not. In turn this suggests that your grieving is something reflective of your charac-
ter, while your fearing is not.
I think it is right to say that feelings are tied to “mattering” in this way and that
affective states must display continuity and connection to count as mattering. But
I find the distinction between grief and fear—and, more important, the distinction
between putative kinds of emotions on which it is based—unconvincing.
17. Shoemaker and Jaworska appear to disagree about whether "marginal" agents, such as young children, can care about things, but I do not pursue this question. For skepticism about whether nonhuman animals lack the capacity to care, see Helm (1994).
18. My point is not that Duncan and Barrett's strong claim is necessarily correct, but rather that the data's attesting to the pervasiveness of affect in cognition makes it unlikely that there is a clear line between cognitive and noncognitive emotion.
19. I follow these authors in focusing on cognitively complex reflective capacities in a broad sense. Of course, deliberation, self-understanding, evaluative judgment, and so on can and sometimes do come apart. Certainly some emotions involve self-understanding but not deliberation, for example. Perhaps a good theory of secondary emotions would focus on some (or one) of these capacities but not others. A care theorist interested in maintaining the distinction between her approach and voluntarist or judgment-based approaches to attributionism would then need to show how the particular cognitively complex capacity associated with secondary emotions is functionally distinct from what voluntarist and judgment-based approaches posit as the source of internality.
Shoemaker (2003, 2011) goes to great lengths to distinguish his view from those that find the
source of internality in an agent’s reflective judgment (such as Gary Watson [1975]
and Smith [2005, 2008]). However, if secondary emotions alone can underwrite
cares, and secondary emotions are just those emotions that are partly constituted
by cognitively complex reflective capacities like evaluative judgment, then it looks
as if the affective account of cares collapses into one based on cognitively complex
reflective capacities.
Third, the precise relationship between feelings and judgments in so-called sec-
ondary emotions is vague. Jaworska says that secondary emotions “involve” self-
awareness, deliberation, and so on. Shoemaker says that secondary emotions are
“mediated” by the prefrontal cortex and require certain cognitive abilities, such
that agents can engage in processes of evaluation. It is hard to situate these claims
precisely within theories of emotion (see below for more on theories of emotion).
Are evaluative judgments, according to Jaworska and Shoemaker, causes of emo-
tions? Or do they simply co-occur with emotions? Are they, perhaps, components
of emotional states? These differences matter. For example, in the case of envy, it
is one thing to say that the object of one’s envy matters to a person because envy
involves co-occurring feelings and judgments, such as the belief that the person who
is envied is smarter than oneself. It is another thing to say that the object of one’s
envy matters to a person because envy is triggered by the judgment that the person
who is envied is smarter than oneself, and subsequently one experiences certain
feelings. In the latter case, in which the agent’s judgment is taken to be a causal pre-
cursor to her feelings, it seems clear that it is the judgment itself that fixes the agent’s
cares. The judgment is responsible for showing what matters to the person, in other
words. But, alternatively, if feelings and judgments are taken to co-occur in a state of
envy, it is less clear which of these co-occurring states is responsible for constituting
a care. This becomes particularly salient in cases where feelings and judgments seem
to diverge. It’s hard to tell whether and how to attribute a care to me if emotions are
constituted by co-occurring feelings and judgments in the case in which I feel envi-
ous despite believing that there is no reason for me to feel this way.
Instead of distinguishing between primary and secondary emotions, the affec-
tive account of cares should take emotions to represent features of the world that an
agent values, even if those representations do not constitute judgments about what
the agent finds valuable. Here I build upon Jesse Prinz’s (2004) theory of emotion.20
Prinz (2004) aims to reconcile “feeling” (or “somatic”) theories of emotion with
“cognitive” theories of emotions. Crudely, feeling theories argue that emotions are
20. See also Bennett Helm (2007, 2009), who develops an affective theory of caring that does not rely on the distinction between primary and secondary emotions. Helm argues that to experience an emotion is to be pleased or pained by the "import" of your situation. Something has import, according to Helm, when you care about it, and to care about something is for it to be worthy of holistic patterns of your attention and action.
feelings and that particular emotions are experiences of particular changes in the
body (James, 1884; Lange, 1885; Zajonc, 1984; Damasio, 1994). For example,
elation is the feeling of one’s heart racing, adrenaline surging, and so on. As Prinz
argues, feeling theories do a good job of explaining the intuitive difference between
emotions and reflective thought. Evidence supporting this intuition stems from the
fact that particular emotions can be induced by direct physiological means, like tak-
ing drugs. Similarly, particular emotions can be elicited by simple behaviors, such
as holding a pencil in your mouth, which makes you feel a little happier, by forcing
your lips into a smile (Strack et al., 1988).21 It is hard to give an explanation of this
result in terms of an agent’s beliefs or evaluative judgments; why should mechani-
cally forcing your lips into a smile make you think about things differently and thus
feel happier? Relatedly, feeling theories do a good job of explaining the way in which
our feelings and reflective thoughts can come into conflict, as in the case where
I think that I shouldn’t be jealous, but feel jealous nevertheless. Feeling theories
have trouble, however, making sense of the way in which some emotions are some-
times closely tied to our thoughts. Consider the difference between feeling fearful
of a snake, for example, and feeling fearful of taking a test (Prinz, 2004). While your
bodily changes may be equivalent in both cases—racing heart, sweating, and the
like—in the case of the test your fear is tied to your belief that the test is important,
that you value your results, and so on, while in the snake case your fear can unfold
regardless of whether you believe that snakes are dangerous or that fearing snakes is
rational or good. There are many cases like this, wherein our emotions seem closely
tied to our beliefs, judgments, values, self-knowledge, and so on. Parents prob-
ably wouldn’t feel anxious when their kids get sick if they didn’t wonder whether
what seems by all rights like a common cold might actually be a rare deadly virus.
Similarly, jealous lovers probably wouldn’t feel envy if they didn’t harbor self-doubt.
And so on. Traditional feeling theories of emotion—those that define emotions
strictly in terms of the experience of particular changes in the body—have trouble
explaining these common phenomena.
Cognitive theories of emotion (Solomon, 1976; Nussbaum, 2001) and related
“appraisal” theories (Arnold, 1960; Frijda, 1986; Lazarus, 1991) do better at
explaining the link between emotions and reflective thought. Traditional cognitive
theories identify emotions as particular kinds of thoughts. To be afraid of some-
thing is to believe or imagine that thing to be scary or worthy of fear, for example.
These theories founder on precisely what feeling theories explain: the ways in which
emotions and thoughts often seem to come apart and the ways in which emotions
can form and change independently of our evaluative judgments. Appraisal theo-
ries are a more plausible form of cognitive theory. Appraisal theories argue that
21. But see Wagenmakers et al.'s (2016) effort to replicate Strack et al. (1988). See the Appendix for discussion of replication in psychological science.
emotions are at least in part caused by thoughts, but they do not identify emotions
as such with beliefs, judgments, and the like. Rather, appraisal theories hold that
particular emotions are caused by evaluative judgments (or appraisals) about what
some entity in the environment means for one’s well-being. These appraisals are of
what Richard Lazarus (1991) calls core relational themes. Particular emotions are
identified with particular core relational themes. For example, anger is caused by an
appraisal that something or someone has demeaned me or someone I care about;
happiness is caused by an appraisal that something or someone has promoted my
reaching a goal; guilt is caused by the appraisal of my own action as transgressing a
moral imperative; and so on.
In one sense, appraisal theories are hybrids of feeling theories and cognitive the-
ories of emotions (Prinz, 2004, 17–19). This is because appraisals themselves are
said to cause agents to experience particular feelings. Appraisals are precursors to
feelings, in other words, and emotions themselves can then be thought of as com-
prising both judgments and feelings. This does not represent a happy reconciliation
of feeling theories and cognitive theories, however. If appraisals do not constitute
emotions, but are instead causal precursors to feelings, then there should be no
cases—or, at least, only predictable and relatively exceptional cases—in which agents
experience the feelings component of emotions without appraisals acting as causal
precursors. But the very cases that threaten traditional cognitive theories of emotion illustrate precisely this phenomenon. These are cases of direct physical induction
of emotions, for example, by taking drugs or holding a pencil in one’s mouth. These
acts cause emotions without appraisals of core relational themes. And these kinds
of cases are pervasive.22
Prinz (2004) suggests a new kind of reconciliation. He retains the term
“appraisal” but deploys it in an idiosyncratic way. On a view like Lazarus’s, apprais-
als are judgments (namely, judgments about how something bears on one’s interest
or well-being). On Prinz’s view, appraisals are not judgments. Rather, they are per-
ceptual representations of core relational themes. Prinz draws upon Fred Dretske’s
(1981, 1986) sense of mental representations as states that have the function of car-
rying information for a particular purpose. In the case of emotions, our mental rep-
resentations are "set up" to be "set off" by (real or imagined) stimuli that bear upon
core relational themes. For example, we have evolved to feel fear when something is
represented as dangerous; we have evolved to feel sadness when something is rep-
resented as valued and lost. But just as a boiler being kicked on by the thermostat
22. Appraisal theorists might add additional conditions having to do with the blocking, swamping, or interruption of appraisals. These conditions would be added in order to explain away cases of apparent dissociation between appraisals and emotions. Given the pervasiveness of such cases, however, the onus would be on appraisal theorists to generate these conditions in a non–ad hoc way. Moreover, it is unclear if such conditions would cover cases in which the putative appraisal is missing rather than in conflict with the resulting emotion.
need be reliably caused only by a change in the room’s temperature, fear or sadness
need be reliably caused only by a representation of the property of being dangerous
or being valued and lost, without the agent judging anything to be dangerous or
valued and lost. This is the difference between saying that an appraisal is a represen-
tation of a core relational theme and saying that an appraisal is a judgment about a
core relational theme. The final key piece of Prinz’s view is to show how representa-
tions of core relational themes reliably cause the emotions they are set up to set off.
They do so, Prinz argues, by tracking (or “registering”) changes in the body.23 Fear
represents danger by tracking a speeded heart rate, for example. This is how Prinz
reconciles feeling theories of emotion with cognitive theories. Emotions are more
than just feelings insofar as they are appraisals (in the sense of representations) of
core relational themes. But emotions are connected to core relational themes not
via rational judgments about what matters to a person, but instead by being set up
to reliably register changes in the body.24
My aim is not to defend Prinz’s theory as such (nor to explicate it in any rea-
sonable depth), although I take it to provide a plausible picture of how emotions
“attune” agents’ thoughts and actions around representations of value.25 What I take
from Prinz’s account is that emotions can represent features of the (real or imag-
ined) world that the agent values without the agent thereby forming rational judg-
ments about that thing. This is, of course, a “low-level” kind of valuing. In this sense,
agents can have many overlapping and possibly conflicting kinds of values (see dis-
cussion in Chapter 5).
Understood this way, emotions are poised to help indicate what matters to
us, in a broader way than other theorists of affective cares have suggested. (That
is, some emotions that theorists like Jaworska and Shoemaker think do not tell us
23. For a related account of emotion that similarly straddles feelings theories and cognitive theories, see Rebekka Hufendiek (2015). Hufendiek argues, however, that the intentional objects of emotions are affordances, not changes in the body.
24. The idea that the emotional component of implicit attitudes is "set up to be set off" by representations of particular stimuli may seem to sit uncomfortably with my claim that the objects of these representations matter to agents, in the attributability sense. That is, some are inclined to see a conflict between saying that a response is both "programmed" by evolution and attributable to the agent. I do not share this intuition, both because it follows from voluntaristic premises about attributability that I do not endorse and because I do not know how to draw a distinction between responses that are and are not "programmed" by evolution. This is, of course, a question about free will and compatibilism, which I leave to philosophers more skilled than myself in those dark arts.
25. According to Randolph Nesse and Phoebe Ellsworth, "Emotions are modes of functioning, shaped by natural selection, that coordinate physiological, cognitive, motivational, behavioral, and subjective responses in patterns that increase the ability to meet the adaptive challenges of situations that have recurred over evolutionary time" (2009, p. 129; emphasis in original). See also Cosmides and Tooby (2000) and Tooby and Cosmides (1990) for related claims. I take it as counting in favor of Prinz's theory that treating emotions as perceptual representations of core relational themes helps to show how emotions are adaptive in this sense.
anything about what an agent cares about will, on my view, indicate what an agent
cares about.) But my claim is not that all emotions indicate cares. The relevant
states must be continuous across time and referentially connected. I’ll now argue
that when emotional reactions are components of implicit attitudes, they can meet
these conditions.
26. As I have noted, cares are graded. You might care about the big painting only a little bit. It might also be right to say that what you care about are paintings like this one. The object of your care in this case is the type, not the token.
Implicit attitudes are individuated at the cluster level. Their relata co-activate rather than
merely frequently co-occur. This is why implicit attitudes are singular entities rather
than a collection of frequently co-occurring but fundamentally separate represen-
tations, feelings, actions, and changes over time. Given this, it seems right to say
that the FTBA relata of implicit attitudes are co-referring in the sense that they are
conceptually connected. It is this bonding between the relata that connects these
states to an agent’s cares. More precisely, it is the tight connection between particu-
lar combinations of dispositions to notice, feel, behave, and adjust future automatic
reactions to similar stimuli that displays referential connections of the sort that indi-
cate a genuine connection to what the agent cares about.27
These conditions are not met in contrast cases of isolated perceptual, affective, or
behavioral reactions. On its own, an agent’s tendency to notice particular features
of the ambient environment, for example, bears little or no relation to the agent’s
cares. The odd shape of a window might catch your eye while you walk to work, or
a momentary feeling of disgust might surface when you think about having been
sick last week. So what? Whatever fleeting mental states underlie these changes in
attention fail to display both continuity and connection. The states do not persist
through time, nor do they refer to any of the agent’s other mental states. But when
you notice that the window causes a coordinated response with FTBA components,
and you see and act and learn from your actions, perhaps stepping back to see it
aright, something about what you care about is now on display. Or when you recoil
from the feces-shaped fudge, not just feeling disgust but also reacting behaviorally
in such a way as to stop the fudge from causing you to feel gross, something about
what you care about is now on display.
Consider two cases in more detail. First, ex hypothesi, Cecil Duly displays
belief-discordant attitudes toward Antoine Wells. While Duly denies that he thinks
or feels differently about Wells after their conversation, he begins to treat Wells
27. Developing a point from John Fischer and Mark Ravizza (1998), Levy (2016) identifies three criteria for demonstrating a "patterned" sensitivity to reasons. Meeting these criteria, he argues, is sufficient for showing that an agent is directly morally responsible for an action, even if the agent lacks person-level control over that action. Levy's three criteria are that the agent's apparent sensitivity to reasons be continuously sensitive to alterations in the parameters of a particular reason; that the agent be sensitive to a broad range of kinds of reasons; and that the agent demonstrate systematic sensitivity across contexts. Levy then goes on to argue that the mechanisms that have implicit attitudes as their components fail to be patterned in this way; they fail to meet these three criteria. While I think these criteria represent a promising way to think about the relationship between implicit attitudes and the self, I would frame and analyze them differently. Being patterned in this way strikes me as a promising avenue for understanding actions that are attributable to agents. In any case, I would contest—as much of the discussion in this and the preceding chapter suggests—Levy's claim that implicit attitudes paradigmatically fail to be patterned in this way. Elsewhere, Levy (2016) writes that implicit attitudes are not "entirely alien to the self" and that "that suffices for some degree of attributability." One could interpret the argument of this chapter to be my way of analyzing the degree of attributability that implicit attitudes paradigmatically deserve.
more kindly and humanely nevertheless. It’s plausible to think that Duly’s action-
guiding attitudes are implicit. In interacting with Wells, Duly might now see,
without explicitly noticing, a particular pained look on Wells’s face (F: “human
suffering!”), feel disquieted by it (T: “bad to cause pain!”), move to comfort Wells
in some subtle way, perhaps by saying “please” and “thank you” (B: “offer reas-
surance to people in pain!”), and absorb the feedback for future instances when
this behavior alleviates his felt tension (A: “suffering reduced!”). Focusing on the
emotional component of Duly’s reaction in particular, in Prinz’s terms, Duly seems
to have formed an emotion-eliciting appraisal of Wells. The core relational theme
Duly represents is perhaps guilt; Wells’s perceived suffering conveys to Duly that
he has transgressed a moral value. This appraisal will be reliably caused by par-
ticular somatic changes in Duly’s autonomic nervous system associated with rep-
resentations of others as suffering or as in pain (De Coster et al., 2013). These
processes are value-signaling, even if Duly forms no evaluative judgments about
Wells or even if Duly’s evaluative judgments of Wells run in the opposite direction
of his implicit attitudes.
Of course, other interpretations of this anecdote are possible. Perhaps, despite
his testimony, Duly simply thought to himself that Wells deserves better treatment
and changed his behavior as a result. Nothing in the story rules this interpretation
out. But it seems unlikely. Duly’s testimony should be taken as prima facie evi-
dence for what he believes.28 People do not always report their beliefs truthfully
or accurately, but Duly’s explicit denial of feeling or acting differently toward Wells
is not irrelevant to determining what has changed in him. Moreover, while Duly
was forced to listen to Wells during the discussion of the film, and thus might have
simply been persuaded by Wells’s reasoning, in a way indicative of a belief-based
interpretation of Duly’s changed attitudes, it is not as if Wells’s arguments are new
to Duly. Duly has heard Wells complaining for a long time, he says. It is plausible to
think that something other than the force of Wells’s arguments had an impression
on Duly.29 Perhaps there was something in the look on Wells’s face that affected
Duly in this particular context. Or perhaps the presence of a third party—Reed—
acted as an “occasion setter” for a change in Duly’s implicit attitudes toward Wells.30
For example, Duly probably had never before been forced to interact with Wells in
28. See Chapter 3.
29. An experiment to test these questions might change the strength of Wells's arguments and look for effects on Duly (or, more precisely, would do so on subjects performing tasks operationalized to represent Wells's and Duly's positions). Or it might change crucial words in Wells's testimony and compare implicit and explicit changes in Duly's attitudes. See Chapter 3 for a discussion of analogous experiments. See also the discussion in Chapter 6 of research on changing unwanted phobic reactions (e.g., to spiders) without changing explicit attitudes.
30. Occasion setting is the modulation of a response to a given stimulus by the presence of an additional stimulus, usually presented as part of the context (Gawronski and Cesario, 2013). See the discussion in Chapter 7 of renewal effects on implicit learning in the animal cognition literature.
anything like an equal status position, and perhaps by virtue of being in this posi-
tion, Duly’s associations about Wells shifted.31
If this is right, then the question is whether the object of Duly’s implicit attitude
constitutes something that Duly cares about. It does to the extent that Duly’s atti-
tude appears to display both cross-temporal continuities and the right kind of ref-
erential connections. Duly’s interactions with Wells are sustained over time. Wells
himself experiences Duly’s actions as manifestations of one and the same attitude.
Indeed, it would be hard to understand Duly’s actions if they were interpreted as
manifesting distinct attitudes rather than as various tokens of one and the same atti-
tude. This makes sense, also, given that Duly’s relevant attitudes all point toward
the same intentional object (i.e., Wells). Not only does Duly’s attitude point toward
Wells, but the various components of this attitude also all seem to refer to Wells.
Duly perhaps notices a look on Wells's face, feels disquieted by the look on Wells's
face, acts in such a way as seems appropriate in virtue of his perception of and feel-
ings about Wells, and incorporates feedback from this sequence into his attitude
toward Wells in future interactions.
All of this suggests that Duly cares about Wells, if only in a subtle and conflicted
way. Indeed, Duly’s caring attitude is conflicted. He explicitly states the opposite of
what I’ve suggested his implicit attitudes represent. But this can be the case, given
that cares, in the sense I’ve discussed, are ontological, not psychological.
Second, consider an implicit bias case. Here it seems perhaps more difficult to
show that the objects of an agent’s implicit attitudes are things she cares about. In
“shooter bias” studies (see Chapter 1), for instance, participants try hard to shoot
accurately, in the sense of “shooting” all and only those people shown holding guns.
Participants in these experiments clearly care about shooting accurately, in other
words. But this fact does not foreclose the possibility that other cares are influenc-
ing participants’ judgments and behavior. Indeed, it seems as if participants’ shoot-
ing decisions reflect a care about black men as threatening (to the agent). Again, this
is a care that the agent need not recognize as her own.32 The emotional component
here is fear, which it is argued signals to the agent how she is faring with respect to
personal safety. This value-laden representation speaks to what matters to her. To
see this, it is important to note that the shooter bias test does not elicit generic or
undifferentiated fear, but rather fear that is specifically linked to the perception of
black faces. This linkage to fear is made clear in consideration of the interventions
that do and do not affect shooter bias. For example, participants who adopt the con-
ditional plan “Whenever I see a black face on the screen, I will think ‘accurate!’ ” do
no better than controls. However, participants who adopt the plan “Whenever I see
a black face on the screen, I will think 'safe!' " demonstrate significantly less shooting
bias (Stewart and Payne, 2008). Fear is one of Lazarus's core relational themes; it is
thought to signal a concrete danger. And in Prinz's terms, it is reliably indicated by
specific changes in the body (e.g., a faster heart rate).
31. Changes in social status have been shown to predict changes in implicit attitudes (e.g., Guinote et al., 2010). See Chapter 7.
32. In Chapter 5, I discuss the way in which inconsistent caring is not problematic in the way that inconsistent believing is.
Evidence for the cross-temporal continuity of participants’ implicit attitudes
in this case comes from the moderate test-retest reliability of measures of implicit
associations (Nosek et al., 2007).33 Moreover, Correll and colleagues (2007) find
little variation within subjects in multiple rounds of shooter bias tests. It is also fairly
intuitive, I think, that agents’ decisions across multiple points in time are manifesta-
tions of the same token attitude toward black men and weapons rather than mani-
festations of wholly separate attitudes.
The case for conceptual convergence is strong here too. The agent’s various
spontaneous reactions to white/black and armed/unarmed trials point referen-
tially to what matters to her, namely, the putative threat black men pose. The find-
ing of Correll and colleagues (2015) that participants in a first-person shooter
task require greater visual clarity before responding to counterstereotypic targets
compared with stereotypic targets (see Chapter 2) is conceptually connected to
pronounced fear and threat detection in these subjects (Stewart and Payne, 2008;
Ito and Urland, 2005) and to slower shooting decisions. Note that this conceptual
convergence helps to distinguish the cares of agents who do and do not demon-
strate shooter bias. Those who do not will not, I suggest, display the co-activating
perceptual, affective, and behavioral reactions indicative of caring about black men
as threatening.
Of course, other cases in which agents act on the basis of spontaneous inclina-
tions might yield different results. I have focused on particular cases in order to show
that implicit attitudes can count as cares. The upshot of this is that actions caused by
implicit attitudes can reflect on character. They can license aretaic appraisals. This
leaves open the possibility that, in other cases, what an agent cares about is fixed by
other things, such as her beliefs.
5. Conclusion
One virtue of understanding implicit attitudes as having the potential to count as
cares, in the way that I’ve suggested, is that doing so helps to demystify what it means
to say that a care is ontological rather than psychological. It’s hard to see how one can
discover what one cares about, or be wrong about what one cares about, on the basis
of an affective account of caring too closely tied to an agent’s self-understanding,
beliefs, and other reflective judgments. To the extent that these accounts tie care-
grounding feelings to self-awareness of those feelings or to evaluative judgments
about the object of one's feelings, cares in the ontological and psychological senses
seem to converge. But implicit attitudes are paradigmatic instances of cares that
I can discover I have, fail to report that I have, or even deny that I have. This is most
relevant to the vices of spontaneity, where, because of potential conflict with our
explicit beliefs, we are liable to think (or wish) that we don't care about the objects
of our implicit attitudes. But, of course, cases like those of Huck and Duly show that
this is relevant to the opposite kind of case too, in which our spontaneous reactions
are comparatively virtuous.
33. See the discussion of test-retest reliability in Chapter 3 and the Appendix.
This in turn gives rise to at least one avenue for future research. In what ways do
changes in implicit attitudes cause or constitute changes in personality or character?
Two kinds of cases might be compared: one in which a person’s implicit attitudes
are held fixed and her explicit attitudes (toward the same attitude object) change
and another in which a person’s implicit attitudes change and her explicit attitudes
(toward the same attitude object) are held fixed. How would observers judge
changes in personality or character in these cases? Depending, of course, on how
changes in attitudes were measured (e.g., what kinds of behavioral tasks were taken
as proxies for changes in attitudes), my intuition is that the former case would be
associated with larger changes in perceived personality than would the latter case,
but that in both cases observers would report that the person had changed.34
Relatedly, the arguments in this chapter represent, inter alia, an answer to—or an
alternative to—the question of whether implicit attitudes are personal or subpersonal
states. Ron Mallon, for example, argues that personal mental states occur
“when a situation activates propositional attitudes (e.g. beliefs) about one’s category
and situation,” while subpersonal mental states occur “when a situation activates
mere representations about one’s category and situation” (2016, p. 135, emphasis
in original). On these definitions, implicit attitudes are not personal mental states,
because they are not propositional attitudes (like beliefs; see Chapter 3).35 Nor
are they subpersonal mental states, because they are not mere representations of
categories and situations (Chapter 2).36 Neither are implicit attitudes what Mallon
34. Loosely related research shows that measures of explicit moral identity tend to make better predictions of moral behavior than do measures of implicit moral identity (Hertz and Krettenauer, 2016).
35. Levy (2017) offers an even more demanding definition of the related concept of "person-level control." On his view, adapted from Joshua Shepherd, "[p]erson-level control is deliberate and deliberative control; it is control exercised in the service of an explicit intention (to make it the case that such and such)" (2014, pp. 8–9, emphasis in original). On the basis of this definition, Levy argues that we generally lack person-level control over actions caused by implicit attitudes. I discuss Levy's argument—in particular the role of intentions to act in such and such a way in assessing questions about attributability and responsibility—in Chapter 5.
36. Implicit attitudes are not mere representations of categories and situations, because they include motor-affective and feedback-learning components (i.e., the T-B-A relata). Perhaps, however, Mallon could posit action-oriented or coupled representations as subpersonal states (see Chapter 2). Even so, Mallon suggests that however subpersonal states are defined, they are connected with one another in merely causal ways. Implicit attitudes, by contrast, are sensitive to an agent's goals and cares.
Reflection, Responsibility, and Fractured Selves
The preceding chapter established that paradigmatic spontaneous actions can be “ours”
in the sense that they can reflect upon us as agents. They may be attributable to us in
the sense that others’ reactions to these actions are sometimes justifiably aretaic. I have
defended this claim about spontaneous actions being ours on the basis of a care-based
theory of attributability. In short, caring attitudes are attributable to agents, and implicit
attitudes can count as cares.
But a number of questions remain. First, what exactly is the relationship between
cares and action, such that actions can “reflect upon” cares (§1)? Second, when an
action reflects upon what one cares about, is one thereby responsible for that action?
In other words, are we responsible for the spontaneous actions in which our implicit
attitudes are implicated (§2)? Third, how “deep” is my conception of the self? Do our
implicit attitudes reflect who we are, really truly deep down (§3)? Finally, my account of
attributable states is not focused on agents’ intentions to act or on their awareness of
their actions or their reasons for action. Nor does my account focus on agents’ rational
judgments. What role do these elements play (§4)?
1. Reflection
While I hope to have clarified that implicit attitudes can count as cares, I have not
yet clarified what it takes for an action to reflect upon one’s cares. As Levy (2011a)
puts it in a challenge to attributionist theories of moral responsibility, the difficulty
is in spelling out the conditions under which a care causes an action, and “in the
right sort of way.”1 The right sort of causality should be “direct and nonaccidental”
and must also be a form of mental causality, as others have stressed (Scanlon, 1998;
Smith, 2005, 2008; Sripada, 2015). These conditions are to rule out deviant causal
chains, but also to distinguish actions that reflect upon agents from actions as such.
For an action to say something about me requires more than its simply being an
action that I perform (Shoemaker, 2003; Levy, 2011a). Answering my ringing cell
phone is an action, for example, but it doesn't ipso facto express anything about me.
In the terms I have suggested, this is because answering the phone is not something
I particularly care about. My cares are not the effective cause of my action, in this
case. (I will return to this example later.)
Causal connection between cares and actions is a necessary condition for the
latter to reflect the former, but it doesn't appear to be sufficient. To illustrate this,
Chandra Sripada describes Jack, who volunteers at a soup kitchen every weekend.
Jack cares about helping people in need, but Jack also cares about Jill, who
also works at the soup kitchen and whom Jack wants to impress. Sripada stipulates
that the reason Jack volunteers is to impress Jill. The problem, then, is that "given
his means-ends beliefs, two of Jack's cares—his Jill-directed cares and his charity-directed
cares—causally promote his working at the soup kitchen. But working at
the soup kitchen, given the actual purposes for which he does so, expresses only his
Jill-directed cares" (Sripada, pers. com.). The problem, in other words, is that agents
have cares that are causally connected to their actions yet aren't reflected in those
actions.2
We can overcome this problem by considering Jack's actions in a broader context.3
Does he go to the soup kitchen on days when Jill isn't going to be there? Does
he do other things Jill does? Has he expressed charity-directed cares in other venues,
and before he met Jill? These kinds of questions not only illuminate which of an
agent's cares are likely to be effective (as in the Jack and Jill case), but also whether an
agent's cares are effective in any particular case. Consider again the case of answering
the cell phone. Imagine a couple in the midst of a fight. Frank says to work-obsessed
Larry, "I feel like you don't pay attention to me when I'm talking." Larry
starts to reply but is interrupted by his ringing phone, which he starts to answer
while saying, "Hang on, I might need to take this." "See!" says Frank, exasperated. In
this case, answering the phone does seem to reflect upon Larry's cares. His cares are
causally effective in this case, in a way in which they aren't in the normal answering-the-phone
case. We infer this causal effectiveness from the pattern toward which
Frank's exasperation points. The fact that this is an inference bears emphasizing.
Because we don't have direct empirical access to mental-causal relations, we must
infer them. Observations of patterns of behavior help to justify these inferences.
Patterns of actions that are most relevant to making inferences about an agent's
cares are multitrack. Jack's and Larry's actions appear to reflect their cares because
we infer counterfactually that their Jill-directed cares (for Jack) and work-directed
cares (for Larry) would manifest in past and future thought and action across a variety
of situations. Jack might also go to see horror movies if Jill did; Larry might
desire to work on the weekends; and so on. Patterns in this sense make future actions
predictable and past actions intelligible. They do so because their manifestations are
diverse yet semantically related. That is, similar cares show up in one's beliefs, imaginations,
hopes, patterns of perception and attention, and so on. Another way to put
all of this, of course, is that multitrack patterns indicate dispositions, and inferences
to dispositions help to justify attributions of actions to what a person cares about.
1. Most researchers writing on attributionism accept that a causal connection between an agent's identity-grounding states and her attitudes/actions is a necessary condition for reflection (e.g., Levy, 2011a; Scanlon, 1998; Sripada, 2015). Note also two points of clarification regarding Levy's (2011a) terminology. First, he identifies a set of propositional attitudes that play the same role in his discussion as what I have been calling cares. That is, he refers to an agent's identity-grounding states as "attitudes." Second, Levy distinguishes between an action "expressing," "reflecting," and "matching" an attitude (or, as I would put it, an action or attitude expressing, reflecting, and matching an agent's cares). For my purposes, the crucial one of these relations is expression. It is analogous to what I call "reflection."
2. How can two cares "causally promote" the same action? Shouldn't we focus on which singular mental state actually causes the action, in a proximal sense? This kind of "event causal" conception of cares—that is, the view that the event that is one's action must be caused by a singular state—runs into the problem of long causal chains (Lewis, 1973; Sripada, 2015). Cares are not proximal causes of action. For example, Jack's cares about Jill don't directly cause him to volunteer at the soup kitchen. Rather, these cares cause him to feel a particular way when he learns about the soup kitchen, then to check his schedule, then call the volunteer coordinator, then walk out the door, and so on. "Because the connections between one's cares and one's actions are so indirect and heavily mediated," Sripada writes, "there will inevitably be many cares that figure into the causal etiology of an action, where the action does not express most of these cares" (pers. com.). Thanks to Jeremy Pober for pushing me to clarify this point.
3. Sripada (2015) proposes a different solution, which he calls the Motivational Support Account.
Levy articulates the basis for this point in arguing that people are responsible for
patterns of omissions:
Patterns of lapses are good evidence about agents’ attitudes for reasons to
do with the nature of probability. From the fact that there is, say, a 50%
chance per hour of my recalling that it is my friend’s birthday if it is true
that I care for him or her and if internal and external conditions are suitable
for my recalling the information (if I am not tired, stressed, or distracted;
my environment is such that I am likely to encounter cues that prompt me
to think of my friend and of his or her birthday, and so on), and the fact that
I failed to recall her birthday over, say, a 6-hour stretch, we can conclude
that one of the following is the case: either I failed during that stretch to
care for him or her or my environment was not conductive to my thinking
of him or her or I was tired, stressed, distracted, or what have you, or I was
unlucky. But when the stretch of time is much longer, the probability that
I failed to encounter relevant cues is much lower; if it is reasonable to think
that during that stretch there were extended periods of time in which I was
in a fit state to recall the information, then I would have had to have been
much unluckier to have failed to recall the information if I genuinely cared
for my friend (Levy 2011[b]). The longer the period of time, and the more
126 Self
conducive the internal and external environment, the lower the probabil-
ity that my failure to recall is a product of my bad luck rather than of my
failure to care sufficiently. This is part of the reason why ordinary people
care whether an action is out of character for an agent: character, as mani-
fested in patterns of response over time, is good evidence of the agent’s
evaluative commitments in a way that a single lapse cannot be. (2011a,
252–253)
4. See the Appendix for discussion of test-retest reliability and boosting it using relevant context cues.
5. See Chapter 2, footnote 9, for discussion of Eberhardt et al.'s (2004) finding of how black–criminal associations create attentional biases and shape word and face detection. Laurie Rudman and Richard Ashmore (2007) used an IAT measuring associations between white and black faces and negative terms specifically (although not exclusively) related to violence. The terms were violent, threaten, dangerous, hostile, unemployed, shiftless, and lazy. (Compare these with the terms used on a standard attitude IAT: poison, hell, cancer, slime, devil, death, and filth.) Rudman and Ashmore found that black–violent associations on this IAT predicted reports of engaging in socially excluding behavior (specifically, how often participants reported having made ethnically offensive comments and jokes, avoided or excluded others from social gatherings because of their ethnicity, and displayed nonverbal hostility, such as giving someone "the finger," because of their ethnicity). The black–violent IAT outpredicted the attitude IAT specifically with respect to discriminatory behavior. Finally, Nicole Donders and colleagues (2008) show that black–violent associations—but neither danger-irrelevant stereotypes nor generic prejudice—predict the extent to which black faces, versus white faces, capture the attention of white subjects.
implicit attitudes.6 This is precisely the reason controlled experiments are impor-
tant. They have the ability to create and control the conditions under which multi-
track patterns of behavior emerge.
These points seem to be building to the claim that people should be held respon-
sible for their implicit attitudes. And, more broadly, that people are responsible for
any spontaneous action that fits the characterization I’ve given. But responsibility is
not a monolithic concept.
6. Although the truth of each of these three claims—that patterns of biased behavior are hard to notice in daily life; that social norms put pressure on people to behave in unprejudiced ways; and that people are fairly good at regulating their implicit attitudes—is highly dependent on many contextual factors. People in different epistemic positions, due to education and social privilege, will likely be variably susceptible to the "blind spot bias" (i.e., the tendency to recognize biases more easily in others than in oneself). Social norms favoring egalitarianism are highly geographically variable. And people's untutored skill in regulating their implicit attitudes is content-dependent. This can be seen in average correlations between implicit and explicit attitudes. Generally implicit and explicit attitudes converge for attitude-objects like brand preferences, for which there are not strong social incentives to regulate one's implicit attitudes, and diverge for attitude-objects like racial preferences, for which there are strong incentives to regulate one's implicit attitudes (although more so in some places than others). Thanks to Lacey Davidson for pushing me to clarify this.
thought that it would be strange to hold a person responsible for some minor
spontaneous action—in the sense of holding the person to some obligation that
she ought to discharge—that drives the worry about my account of attributability
being too inclusive. First I’ll explain the important difference between being and
holding responsible, as I understand it, and what this difference means for the rela-
tionship between the self and spontaneity (§2.1). My basic claim will be that agents
are often responsible for their spontaneous actions in the attributability sense but
that this does not license judgments about how we ought to hold one another
responsible.7 I’ll then use this distinction to contextualize my claims with recent
research in empirical moral psychology (§2.2). To be clear, through all of this, my
claims have to do with responsibility for actions. But in many cases (though not all,
as I argue later), whether and how one is responsible for a given action—or even
for a mere inclination—has to do with the psychological nature of the attitudes
implicated in it.
7. Because my primary interest in this and the preceding chapter is the relationship between the self and spontaneity (which I take the concept of attributability to address), I will leave aside the question of when, or under what conditions, we ought to hold one another responsible for actions driven by implicit attitudes.
8. See John Doris (2015) for a sustained argument that there is no one monotonic way of setting the conditions of responsible agency.
9. Shoemaker does not understand answerability and accountability as two forms of holding responsible. For Shoemaker, there are three kinds of moral responsibility, each equiprimordial, so to speak.
that it is plausible that agents are responsible for paradigmatic spontaneous actions
in the attributability sense, because being responsible in this sense does not neces-
sarily mean that one is obliged to answer for one’s actions or be held to account for
them in some other way.
Answerability describes the condition of being reasonably expected to defend
one’s actions with justifying (and not just explanatory) reasons. These must be the
reasons that the agent understood to justify her action (Shoemaker, 2011, 610–
611). Holding Lynne responsible in the answerability sense means conceptualizing
the question “Why did you φ?” as appropriate in this context. When we demand an
answer to this kind of question, we intrinsically tie the action to the agent’s judg-
ment; we tie what Lynne did to the presumed considerations that she judged to
justify what she did. Demanding an answer to this question is also a way of exact-
ing a demand on Lynne. Being answerable in principle means being open to the
demand that one offer up one’s reasons for action. In this sense, being answerable
is a way of being held responsible, in the sense of being held to particular interper-
sonal obligations.10
Being “accountable” is another way of being held responsible for some action. It
is different from answerability in the sense that being accountable does not depend
solely or even primarily on psychological facts about agents. Whether it is appro-
priate to demand that Lynne answer for her actions depends, most centrally, on
whether her action stemmed from her evaluative judgments or was connected to
them in the right way (see §4 for further discussion). In contrast, there are times
when it seems appropriate to hold someone responsible notwithstanding the status
of that person’s evaluative judgments or reasons for action. These are cases in which
we incur obligations in virtue of social and institutional relationships. Watson (1996,
235), for example, defines accountability in terms of a three-part relationship of one
person or group holding another person or group to certain expectations, demands,
or requirements. For example, the ways in which I might hold my daughter account-
able have to do with the parenting norms I accept, just as what I expect of a baseball
player has to do with the social and institutional norms governing baseball. In a sim-
ilar vein, Shoemaker (2011) defines accountability in terms of actions that involve
relationship-defining bonds. Understood this way, practices of holding accountable
can be assessed in particular ways, for example, whether they are justified in light of
the promises we have made within our relationships.
Jaworska’s claim here is ambiguous between young children being unable to distinguish between
11
true and false beliefs and young children failing to possess the concept of a false belief as such. Very
young children—as young as two or three —demonstrate in their own behavior the ability to recog-
nize a belief as mistaken. It is trickier to determine when children come to possess the concept of a
false belief as such. One proxy may be a child’s ability to fluently distinguish between appearance and
reality. John Flavell and colleagues (1985) report that a fragile understanding of the appearance–reality
distinction emerges somewhere around ages four to six. Thanks to Susanna Siegel for pushing me to
clarify this point.
R ef l ec tion , R e spons ib ilit y, and Frac t ured S elve s 131
keep her job.12 In a more intimately interpersonal context, you might hold a friend
accountable for returning your phone calls in virtue of the obligations of friendship.
12. But the facts about her psychology might make her answerable for her decision to take the job at the zoo in the first place.
13. For an overview, see Doris (2015).
14. Cameron and colleagues (2010) report that the four items on the Moral Responsibility Scale had an "acceptable" internal consistency (Cronbach's α = .65). This means that participants' answers to the four items on the scale were somewhat, but not strongly, correlated with each other. While this suggests that participants' conceptions of moral responsibility as such, punishment, blame, and accountability are clearly related to one another, it also suggests that these may be distinct concepts. Another possibility is that participants' answers are internally consistent, to the degree that they are, because participants treat the four questions as basically equivalent when taking the survey. When enacting related judgments in the wild, these same participants might make more discrepant judgments. It would be interesting to see the results of a similar study in which the questions were presented separately between subjects.
15. See also Washington and Kelly (2016).
targets’ actual foresight of their own behavior, rather than participants’ perceptions
about what the targets ought to foresee, did not mediate their moral responsibility
judgments. This suggests that participants’ views about holding responsible—the
obligations they ascribe to others—explain their answers on the moral responsi-
bility scale. This suggests in turn that we do not know how participants in these
studies would make specifically aretaic judgments about actions affected by implicit
attitudes. A study on folk attitudes toward responsibility for implicit bias could care-
fully distinguish between participants’ willingness to evaluate others’ character, to
demand others’ reasons for action, and to hold others accountable in various ways.
These same outcome measures could be applied to other cases of spontaneous and
nondeliberative actions as well, such as subtle expressions of interpersonal fluency
and even athletic improvisation.
16. See, e.g., Levy (2011).
Levy also articulates the view that attributionism is concerned with the fun-
damental self in this sense. In writing about responsibility for implicit bias, he
argues that attributionists are “committed to excusing agents’ responsibility for
actions caused by a single morally significant attitude, or cluster of such attitudes
insufficient by themselves to constitute the agent’s evaluative stance, when the
relevant attitudes are unrepresentative of the agent. This is clearest in cases where
the attitude is inconsistent with the evaluative stance” (2011, 256). In other
words, attributionists must excuse agents’ responsibility for actions caused by
implicit biases because these attitudes, while morally significant, don’t constitute
the agent’s fundamental stance. He then goes on to describe implicit biases in
these terms, stressing their arationality (or “judgment-insensitivity”), concluding
that attributionists “ought to excuse [agents] of responsibility for actions caused
by these attitudes.” Levy takes this as a problem for attributionism. But for my
purposes, the important point is that Levy assumes a fundamentally unified con-
ception of an agent’s evaluative stance. This is what I take him to mean when he
says that an attitude or cluster of attitudes “constitutes” the agent’s “global” evalu-
ative stance.18
One challenge for this conception of the self is that conflicts occurring within
one's self are difficult to understand, by its lights. There is no problem for this view
with the idea of conflict between an agent’s superficial attitudes, of course. But the
notion that agents have a fundamental evaluative stance—a unified deep self—
means that one of their attitudes (or set of attitudes) must represent them, at the end
of the day. Conflicts within the self—for example, both favorable and unfavorable
deep evaluations of φ—would make answering the question “But what do you truly
think or feel about φ, at the end of the day?” seemingly impossible.
If this understanding of the self is correct, then it will be hard to see how sponta-
neous action will ever reflect on the agent. If there can be no conflicts within one’s
fundamental self, then in cases of conflict between spontaneous inclination (of the
kind I've described, and not just mere reflexes or mere associations) and reflective
judgment, it is hard to see how attributability would ever "go with" one's spontaneous
inclinations.
17. Emphasis added. Smith credits the term "moral personality" to Pamela Hieronymi (2008), but uses it in a different way.
18. In concluding their presentation of what they call the "Whole Self" view of moral responsibility, Arpaly and Schroeder write, "We agents are not our judgments, we are not our ideals for ourselves, we are not our values, we are not our univocal wishes; we are complex, divided, imperfectly rational creatures, and when we act, it is as complex, divided, imperfectly rational creatures that we do so. We are fragmented, perhaps, but we are not some one fragment. We are whole" (1999, pp. 184–185). I agree with this eloquent description of what it is to be an agent. The question I am here considering is whether the fact that we are whole agents means that attributability judgments necessarily target each other's "whole" self.
But must we think of the self (in the attributability sense) as fundamental and
unified? Why assume that there is some singular way that one regards φ at the end
of the day? It strikes me as coherent to think of the self as tolerating internal hetero-
geneity and conflict. Sripada (2015), for example, writes of a “mosaic” conception
of the deep self. The mosaic conception accepts conflicts internal to the deep self.
Recall that Sripada’s conception of the self is, like mine, constituted by one’s cares.
Unlike a set of beliefs, Sripada argues, it is not irrational to hold a set of inconsistent
cares. "To believe X, believe that Y is incompatible with X, and believe Y is
irrational,” he writes, but “to care for X, believe that Y is incompatible with X, and
care for Y is not irrational.” In other words, cares need not be internally harmoni-
ous, in contrast with (an ideally rational set of) beliefs or reflective judgments,
which reflect what the agent takes to be true. Thus while a belief-based conception
of the deep self would not be able to tolerate internal conflict between evaluative
attitudes, a care-based conception can, because it is not irrational to have an incon-
sistent set of cares.
On my view, a theory of attributability need not try to pinpoint a person’s
bottom-line evaluative regard for φ, at the end of the day, so to speak. Instead, it can
aim to clarify the conditions under which an action reflects upon a person’s substan-
tive evaluative regard for φ, even if that regard conflicts with other deep attitudes
of the agent. Everything hangs on what “substantive” means. I endorse a capacious
conception of “substantive.” Character is reflected in a great many more actions than
most theories suggest. In spirit my claim is close to Shoemaker’s when he offers
aretaic praise for his demented grandfather:
19. For research suggesting that the folk hold a related pluralist conception of the self, see Tierney et al. (2014).
same as the conditions under which an agent ought to be held responsible for an
action. Moreover, I’ve argued that our cares can be in conflict with one another. This
leads to a pluralistic or mosaic conception of the self.
But surely, a critic might suggest, there must be something to the idea that an
agent’s intentions to act, and her awareness of those intentions or of other relevant
features of her action, play a core role in the economy of the self. When we consider
one another in an aretaic sense, we very often focus on each other’s presumed inten-
tions and conscious states at the time of action. Intentions and self-awareness seem
central, that is, not just to the ways in which we hold one another responsible (e.g.,
in juridical practices), but also to the ways in which we assess whether an action
tells us anything about who one is. What role do these considerations play in my
account? I consider them in turn: intentions first, then self-awareness. Then, I con-
sider an influential alternative: if neither intentions to act nor awareness of one’s
reasons for action are necessary features of attributable actions, what about the
relatively weaker notion that for an action to be attributable to an agent, the agent’s
relevant attitudes must at least be in principle susceptible to being influenced by the
agent’s rational judgment? Mustn’t attributable actions stem from at least in prin-
ciple judgment-sensitive states?
4.1 Intentions
Judging whether or not someone acted on the basis of an intention is indeed very
often central to the kind of aretaic appraisals we offer that person.20 The subway
rider who steps on your toes on purpose is mean-spirited and cruel, whereas the one
who accidentally crunches your foot while rushing onto the train is more like moderately
selfish. As it pertains to my account of spontaneous action, the question is not
whether an agent’s intention to φ often matters, but rather whether it must matter,
in the sense of playing a necessary role in the way attributability judgments ought to
be made. But what about long-term intentions? Consider again Obama’s grin. One
might concede that this micro-action was not reflective of an occurrent intention
to reassure the chief justice, but insist instead that Obama’s automatic dispositions
to smile and nod were reflective of a long-term, standing intention, perhaps to be
a social person. In this case, one might say that Obama’s spontaneous act of inter-
personal fluency was attributable to him, but on the basis of its reflecting upon his
long-term intentions. But emphasizing long-term intentions in this kind of context
is problematic, for at least three reasons.
First, it is not obvious how to pick out the relevant intention with which one’s
gestures are to be consistent without making it either impracticably specific or
20. Here I mean “intentional” in the ordinary sense, of doing A in order to bring about B (e.g., smiling
4.2 Awareness
Aretaic appraisals of spontaneous action don’t necessarily trace back to an agent’s
apparent long-term intentions to act in some way. But what about an agent’s self-
awareness? In some sense it may seem strange to say that character judgments
of others are appropriate in cases where the other person is unaware of what
she is doing. But my account of attributability does not include conscious self-
awareness among the necessary conditions of an action’s reflection on an agent.
Should it?
It is important to separate different senses in which one might be self-aware in
the context of acting. One possibility is that one is aware, specifically, of what one is
doing (e.g., that I’m staring angrily at you right now). Another possibility is that one
is aware of the reasons (qua causes) for which one is doing what one is doing (e.g.,
that I’m staring angrily at you because you insulted me yesterday). A third possibility
is that one is aware of the effects of one’s actions (e.g., that my staring will cause you
to feel uncomfortable).21 Each of these senses of self-awareness bears upon attribut-
ability. When one is self-aware in all three senses, the relevant action surely reflects
upon who one is in the attributability sense. But is awareness in any of these senses
necessary?
In advancing what he calls the “Consciousness Thesis,” Levy (2012, 2014) argues
that a necessary condition for moral responsibility for actions is that agents be
aware of the features of those actions that make them good or bad. They must, in
other words, have a conscious belief that what they are doing is right or wrong. For
example, in order for Luke to be responsible for accidentally shooting his friend in
Loaded Gun (Chapter 4), it must be the case that Luke was aware, at the time of the
shooting, that showing the gun to his friend was a bad thing to do, given the possibil-
ity of its being loaded. If Luke was completely unaware of the facts that would make
playing with the gun a stupid idea, then Luke shouldn’t be held morally responsible
for his action. Levy’s argument for this derives from his view of research on the
modularity of the mind. In short, the argument is that minds like ours are modular,
that is, they are made up of independently operating systems that do not share infor-
mation (Fodor, 1983). Modularity presents an explanatory problem: How is it that
behavior so often seems to integrate information from these ostensibly dissociated
systems? For example, if our emotions are the products of one mental module, and
our beliefs are the products of another, then why doesn’t all or most of our behavior
look like an incomprehensible jumble of belief-driven and emotion-driven outputs
that are at odds with one another? Everyone recognizes that this is sometimes the
case (i.e., sometimes our beliefs and our feelings conflict). But the question is how
the outputs of mental modules ever integrate. Levy’s answer is that consciousness
performs the crucial work of integrating streams of information from different men-
tal modules. Consciousness creates a “workspace,” in other words, into which dif-
ferent kinds of information are put. Some of this information has to do with action
guidance, some with perception, some with evaluative judgment, and so on. This
workspace of consciousness is what allows a person to put together information
about an action that she is performing with information about that action being
21. I adapt these three possibilities from Gawronski and colleagues’ (2006) discussion of the differ-
right or wrong. In Luke’s case, then, what it means to say that he is aware of the
features of his action that make the action good or bad is that his mind has integrated
information about what he’s doing—playing with a gun—with information about
that being a bad or stupid thing to do—because it might be loaded.
One of Levy’s sources of evidence for this view is experiments showing that self-
awareness tends to integrate otherwise disparate behaviors. For example, in cognitive
dissonance experiments (e.g., Festinger, 1956), agents become convinced of particular
reasons for which they φed, which happen to be false. But after attributing these rea-
sons to themselves, agents go on to act in accord with those self-attributed reasons.
Levy (2012) interprets this to show that becoming aware of reasons for which one is
acting (even if those reasons have been confabulated) has an integrating effect on one’s
behavior. Levy ties this integrating effect of self-awareness to the conditions of respon-
sible agency. Put simply, his view is that we can be thought of as acting for reasons only
when consciousness is playing this integrating role in our minds. And we are morally
responsible for what we do only when we are acting for reasons.
Levy doesn’t distinguish between being responsible and holding respon-
sible in the same way as I do, so it is not quite right to say that Levy’s claim is
one about the necessary conditions of attributability (as I’ve defined it). But
the Consciousness Thesis is meant as a claim about the minimal conditions of
agency. Since all attributable actions are actions as such, any necessary condition
of agency as such should apply mutatis mutandis to the minimal conditions of
attributability. So could some version of the Consciousness Thesis work to show
that actions are attributable to agents just in case they consciously believe that
they are doing a good or bad thing?
One worry about the Consciousness Thesis is that it relies upon the global
workspace theory of consciousness that Levy endorses, and it’s not clear that this
theory is correct. Some have argued that no such global workspace exists, at least in the terms
required by Levy’s account (e.g., Carruthers, 2009; King and Carruthers, 2012).
Even if the global workspace theory is true, however, I think there are reasons to
resist the claim that consciousness in Levy’s sense is a basic condition of attribut-
ability (or of responsible agency in the attributability sense).
Consider star athletes.22 In an important sense, they don’t seem to be aware of
the reasons their actions are unique and awe-inspiring. As I noted in Chapter 1, Hall
of Fame NFL running back Walter Payton said, “People ask me about this move or
that move, but I don’t know why I did something. I just did it.”23 Kimberly Kim, the
youngest person ever to win the US Women’s Amateur Golf Tournament, said, “I
don’t know how I did it. I just hit the ball and it went good.”24

22. See Brownstein (2014) and Brownstein and Michaelson (2016) for more in-depth discussion of these examples. See also Montero (2010) for critical discussion of cases that do not fit the description of athletic expertise that I give.

23. Quoted in Beilock (2010, 224).

Larry Bird, the great
Boston Celtic, purportedly said, “[A lot of the] things I do on the court are just
reactions to situations . . . A lot of times, I’ve passed the basketball and not realized
I’ve passed it until a moment or so later.”25 Perhaps the best illustration arises in an
interview with NFL quarterback Philip Rivers, who fails pretty miserably to explain
his decisions for where to throw:
Rivers wiggles the ball in his right hand, fingers across the laces as if ready
to throw. He purses his lips, because this isn’t easy to articulate. “You
always want to pick a target,” he says. “Like the chin [of the receiver]. But
on some routes I’m throwing at the back of the helmet. A lot of it is just
a natural feel.” Rivers strides forward with his left leg, brings the ball up
to his right ear and then pauses in midthrow. “There are times,” he says,
“when I’m seeing how I’m going to throw it as I’m moving my arm. There’s
a lot happening at the time. Exactly where you’re going to put it is still
being determined.” Even as it’s leaving the fingertips? More head-shaking
and silence. Finally: “I don’t know,” says Rivers. “Like I said, there’s a lot
going on.” (Layden, 2010)
What’s perplexing is that these are examples of highly skilled action. A form of
action at its best, you might even say.26 This suggests that skilled action lies within
the domain of attributability. Certainly we praise athletes for their skill and treat
their abilities as reflecting on them. And yet these athletes report having very little
awareness, during performance, of why or how they act. They disavow being aware
of the reasons that make their moves jaw-dropping. Thus there seems to be tension
between these athletes’ reports and the claim of the Consciousness Thesis (i.e., that
they must be aware of their reasons for action, and in particular their right-making
reasons for action, if they are to deserve the praise we heap on them). Star athletes
present a prima facie counterexample to the Consciousness Thesis.
One way to dissolve this tension would be to say that star athletes’ articulacy
is irrelevant. What matters is the content of their awareness during performance,
regardless of whether these agents can accurately report it. I agree that the content of
awareness is what matters for evaluating the proposal that attributability requires
consciousness of the right-making reasons for one’s action. But articulacy is not
24. Quoted in Beilock (2010, 231), who follows the quote by saying, “[Athletes like Kim] can’t tell you what they did because they don’t know themselves and end up thanking God or their mothers instead. Because these athletes operate at their best when they are not thinking about every step of performance, they find it difficult to get back inside their own heads to reflect on what they just did.”

25. Quoted in Levine (1988).

26. See Dreyfus (2002b, 2005, 2007a,b) for discussion of the idea that unreflective skilled action is action “at its best.”
irrelevant to this question. What agents report offers a defeasible indication of their
experience. Coaches’ invocations to “clear your mind” help to corroborate these
reports. For example, cricketer Ken Barrington said, “When you’re playing well you
don’t think about anything and run-making comes naturally. When you’re out of
form you’re conscious of needing to do things right, so you have to think first and act
second. To make runs under those conditions is mighty difficult.”27 Barrington’s is
not an unusual thought when it comes to giving instructions for peak performance
in sports. Branch Rickey—the Major League Baseball manager who helped break
the color barrier by signing Jackie Robinson—purportedly said that “an empty head
means a full bat.”28 Sports psychologists have suggested as much too, attributing
failures to perform well to overthinking, or “paralysis by analysis.”
Surely some of these statements are metaphorical. Barrington’s mind is certainly
active while he bats. He is presumably having experiences and focusing on some-
thing. But research on skill also appears to corroborate what star athletes report and
coaches teach. One stream of research focuses on the role of attention in skilled
action. Do experts attend—or could they attend—to what they are doing during
performance? According to “explicit monitoring theory” (Beilock et al., 2002), the
answer is no, because expert performance is harmed by attentional monitoring of
the step-by-step components of one’s action (e.g., in golf, by attending to one’s back-
swing). Experts can’t do what they do, in other words, if they focus on what exactly
they’re doing. Dual-task experiments similarly suggest that, as one becomes more
skilled in a domain, one’s tasks become less demanding of attention (Poldrack et al.,
2005).29
Critics of explicit monitoring theory have pointed out that expert performance
might not suffer from other forms of self-directed attention, however. For exam-
ple, David Papineau (2015) argues that experts avoid choking by focusing on their
“basic actions” (i.e., what they can decide to do without having to decide to do any-
thing else) rather than on the components of their actions. But focusing on one’s
basic actions—for example, that one is trying to score a touchdown—isn’t the same
as focusing on the right-making features of one’s action. “Scoring a touchdown” isn’t
what makes Rivers’s actions good or praiseworthy; any hack playing quarterback is
also trying to score a touchdown (just as many people aim to be sociable but few
display interpersonal fluency like President Obama). Awareness of these coarse-
grained reasons isn’t tantamount to awareness of the right-making features of one’s
27. Quoted in Sutton (2007, 767).

28. Reported in “Manuel’s Lineups Varied and Successful,” http://www.philly.com/philly/sports/phillies/20110929_Manuels_lineups_varied_and_successful.html.

29. Dual-task experiments involve performing two tasks at once, one of which requires skill. People who are skilled in that domain perform better at both tasks because, the theory suggests, the skill-demanding task requires less attention for them. Russell Poldrack and colleagues (2005) support this interpretation with neural imaging data.
action. Nor is it tantamount to awareness of the reasons that explain how these
agents perform in such exceptionally skilled ways, which is what would be required
to reconcile skilled agency with the consciousness requirement.
Research on improvisational action in other domains of expertise is suggestive
too. In a review of Levy (2014), Arpaly (2015) points out that improvisational jazz
musicians simply don’t have enough time to attend to and deliberate about the
goodness or badness of their decisions.30 Brain research seems to tell a related story.
Neuroimaging studies of improvisational jazz musicians show substantial downreg-
ulation of the areas of the brain associated with judgment and evaluation when the
musicians are improvising (and not when they are performing scored music; Limb
and Braun, 2008). One might assume that awareness of the right- and wrong-making
features of one’s actions is tied to evaluation and judgment.
Another way to dissolve the tension between these exemplars of skill and the
Consciousness Thesis might be to focus on what athletes do when they practice.
Surely in practice they focus on their relevant reasons for action. Kim might adjust
her grip and focus explicitly on this new technique. Indeed, she’ll have to do so
until the new mechanics become automatic. So perhaps the role of right-making
reasons is indexed to past practice. Perhaps star athletes’ reasons for action have
been integrated in Levy’s sense in the past, when they consciously practiced what
to do. But this won’t work either. As Arpaly puts it, if it were the case that past prac-
tice entirely explains performance, then all one would need to do to become an
expert performer is “work till they drop and then somehow shake their minds like a
kaleidoscope” (2015, 2). But clearly this isn’t the case, as those who have practiced
till they drop and still failed to perform can attest. So we are still lacking an under-
standing of what makes star performance distinct. It’s not the integration of reasons
for action of which one was aware when practicing. (Relatedly, sometimes people
exhibit expertise without ever having consciously or intentionally practiced at all.
Arpaly discusses social fluency—like Obama’s grin—in similar terms.)
Levy’s Consciousness Thesis is rich and intriguing, and represents, to my mind,
the most persuasive claim on the market for a consciousness requirement for moral
responsibility (and, inter alia, attributability).31 Nevertheless, barring a more per-
suasive alternative, I proceed in thinking that actions can reflect on agents even if
those agents aren’t focally conscious of what they’re doing.
31. Levy (2016) discusses cases of skilled action like these and argues that in them agents display a kind of subpersonal control sufficient for holding them responsible. This is because, he argues, the agent displays a “patterned” sensitivity to reasons. See Chapter 4, footnote 27, for an elaboration on this claim. Levy applies this argument about patterned sensitivity to reasons to implicit attitudes, pursuing the possibility that even if agents lack conscious awareness of the moral quality of their action of the kind that renders them morally responsible for that action, perhaps they can still be held responsible because their attitudes display the right kind of patterned subpersonal control. Levy ultimately rejects both possibilities.
32. Smith develops her view in terms of responsibility for attitudes, not actions per se, as discussed later.
Moreover, RRV is connected to a claim about what it means for an attitude
to reflect upon an agent. It is incumbent upon Smith to clarify this, as one
might wonder what distinguishes the focus on rational judgments in RRV from
the voluntarist’s way of tying moral responsibility to an agent’s deliberative or
conscious choices.33 This is important because RRV is meant as an alternative
to voluntarism about moral responsibility. Smith’s tactic is to offer an expansive
definition of those states that reflect our rational assessments of what we have
reason to do, a definition sufficiently expansive to include deliberative and con-
scious choices but also desires, emotions, and mere patterns of awareness. For
example, Smith writes, “Our patterns of awareness—e.g., what we notice and
neglect, and what occurs to us—can also be said to reflect our judgments about
what things are important or significant, so these responses, too, will count as
things for which we are responsible on the rational relations view” (2008, 370).
Sripada (pers. com.) labels Smith’s expansive definition of states that reflect an
agent’s rational judgments “in principle sensitivity” (IPS). The claim of IPS is
that mere patterns of awareness and the like reflect agents’ rational judgments
because the psychological processes implicated in those patterns of awareness
are, in principle, open to revision in light of the agent’s reason-guided faculties.
According to Smith:
As I formulate it:
33. I am indebted to Holly Smith’s (2011, 118, n 9) articulation of this worry.
IPS clarifies how one might think that agents who act spontaneously might still
fall within the purview of RRV. On Smith’s view, it does not undermine attributabil-
ity to show that some attitude is occurrently unresponsive to the agent’s reasoned
judgments. Rather, attributability is undermined only if an attitude is in principle
causally isolated from the agent’s rational reflections. What is crucial to see is how
IPS saves RRV (and their correlative postulates for action) in cases of spontane-
ous action. It isn’t enough for Kim to say, “I don’t know how I did it,” on this view;
Kim might still be answerable for her putting nonetheless. Analogously, an avowed
egalitarian who behaves in some discriminatory way cannot be exculpated from
responsibility simply by saying that her discriminatory behavior conflicts with what
she judges she ought to do. In other words, the implicitly biased avowed egalitarian
might be tempted to say, “My evaluative judgment is that discrimination is wrong;
therefore, it is wrong in principle to ask me to defend what I have done, because
what I have done does not reflect my evaluative judgments.” IPS(Action) cuts off
this line of defense.
There is something deeply intuitive about this approach. There does indeed
seem to be a connection between those actions that I can rightly be asked to defend,
explain, or answer for in some way and those being the actions that reflect on me. But
if something like this is right—that attributability is answerability—then something
must be wrong with the care-based account of attributability I offered in Chapter 4.
In the case of virtuous spontaneity, it makes no sense to ask President-Elect Obama
to answer for his reassuring grin or Kimberly Kim to answer for the perfection of
her putts. Precisely what agents like these say is that they have no answer to the
question “Why did you do that?”34 So, too, in cases of the vices of spontaneity. As
Levy puts it, in the case of implicit bias “it makes no sense at all to ask me to justify
my belief that p when in fact I believe that not-p; similarly, it makes no sense to ask
me to justify my implicit racism when I have spent my life attempting to eradicate
it” (2011, 256).
Despite its intuitive appeal, I do not think attributability should be understood
as answerability in Smith’s sense. My first reason for saying this stems from cases in
which it seems right to attribute an attitude or action to an agent even if that attitude
or action fails to reflect her rational judgment. These are the same cases I discussed
earlier (§2.1), proposed by Shoemaker (2011) and Jaworska (1999, 2007b), having
34. See Brownstein (2014).
I think Smith’s conclusion is right but her justification is wrong. Implicit biases like
these do reflect on agents (as I argued in the preceding chapter). But this is not
necessarily because these biases are judgment-sensitive. Why think that Smith’s
vignette describes someone who takes a person’s race as a reason not to trust or
hire her?
Part of the problem, I think, is that Smith’s claim that attitudes that are attribut-
able to agents must be in principle causally susceptible to revision by an agent’s pro-
cesses of rational reflection (i.e., IPS) is both too vague and too strong. It is vague
because it is in principle hard to tell what counts as a process of rational revision.
Indeed, it is hard to see how anything at all can be ruled out as being in principle
susceptible to rational revision. Suppose at some point in the future psychologists
develop techniques that allow agents to overcome the gag reflex, or even to stop
the beating of their own heart, using nothing but certain “mindfulness” exercises.
Would this indicate that these processes, too, are in principle susceptible to rational
revision?
IPS is too strong because it is sometimes the case that agents can change their
attitudes and behavior in such a way that seems to reflect upon them as agents, with-
out those changes counting as responsiveness to rational pressure per se. Sometimes
we do this by binding ourselves by force, like Ulysses’ tactic for resisting the Sirens.
But this action is attributable to Ulysses simply because the change it wrought in his
behavior traces back to his “pre-commitment” (Elster, 2000). That is, Ulysses made
a deliberative choice to instruct his crew to bind him to the mast when they were to
pass the island of the Sirens. This case wouldn’t speak against Smith’s view. But there
are many cases that do. I’ll call these cases of self-regulation by indirect means.35
I discuss these in more depth in Chapter 7. Briefly though, for example, there are a
slew of “evaluative conditioning” techniques that enable agents to shift the valence
of their stored mental associations through repeated practice. A person who has
learned to associate “Bob” with “selfishness” and “anger” can, through training,
learn to associate Bob with “charity” and “placidness,” for example. People can also
shift their “statistical map” of social stereotypes by repeated exposure to counterste-
reotypes. One striking field study found, for example, that women undergraduates’
implicit associations between leadership and gender were significantly altered in
proportion to the number of women science professors they had (i.e., women in
counterstereotypic leadership roles in the sciences; Dasgupta and Asgari, 2004). At
the start of the year, the students implicitly associated women with concepts related
to nurturance more than with concepts related to leadership. (This is a common
finding.) After one year, the women undergraduates who took more classes with
women scientists more strongly associated women with leadership concepts on
measures of their implicit attitudes.
In these examples, there is little reason to think that agents’ implicit attitudes
shift in response to rational pressure. That is, there is little reason to think that these
successful cases of implicit attitude change are successful because the relevant atti-
tudes are judgment-sensitive, in Smith’s sense.36 Indeed, in the case of Dasgupta and
Shaki Asgari’s study, while women undergraduates’ implicit attitudes changed when
they took more classes from women scientists, their explicit attitudes didn’t change.
Some of the women undergraduates continued to explicitly endorse the view that
women possess more nurturance than leadership qualities, even after they ceased to
display this association on indirect measures.
5. Conclusion
In this chapter, I’ve situated my account of the relationship between implicit atti-
tudes and the self with respect to familiar ways of thinking about attributability
and responsibility. I’ve described how I understand what it means for an action
to “reflect on” the self; I’ve distinguished between being responsible and various
forms of holding oneself or others responsible for actions; I’ve offered a picture of
the deep self as fractured rather than necessarily unified; and I’ve explained why
I think spontaneous actions can reflect on agents even when agents have no relevant
intentions to act as they do, when they are unaware of what they are doing and why,
and even when their attitudes are isolated from their rational judgments. I’ve tried
to make sense of these claims in terms of both freestanding philosophical theories
and research on relevant folk attitudes. Of course, arguably the most salient concerns
about spontaneous and unplanned actions that arise in folk conversation are
ethical concerns. How can we improve our implicit attitudes? Under what conditions
can they become more morally credible? These are the questions I address in
the next two chapters.

35. I borrow the term “indirect self-regulation” from Jules Holroyd (2012). See also Washington and Kelly (2016) on “ecological control,” a term they borrow from Clark (2007).

36. See the discussion in Chapter 3 on implicit attitudes and rational processing, such as inference.
PA RT T H R E E
ETHICS
6
As D’Cruz puts it, Belinda has better “instincts” for when to act on a whim, while Alfred
seems overly cautious and even stultified. He acts like “a Prufrock.” By thinking so care-
fully about what to do, Alfred undermines Belinda’s suggestion, which at its core was to
do something without thinking too carefully about it.
D’Cruz’s vignette must be understood against the backdrop of a familiar and
appealing idea. The idea is that deliberation is necessary and central to ethical
action. By ethical action, I mean action that promotes goodness in the agent’s life,
1. I will not delve into theories of what constitutes goodness in an agent’s life. All I mean by “goodness” is what appears to me to be good. For example, it appears to me that implicit attitudes that promote egalitarian social practices are good, while implicit attitudes that promote discriminatory social practices are bad. Bennett takes a similar approach in his discussion of the relationship between “sympathy” and “bad morality” (in cases like those of Huckleberry Finn). Bennett writes, “All that I can mean by a ‘bad morality’ is a morality whose principles I deeply disapprove of. When I call a morality bad, I cannot prove that mine is better; but when I here call any morality bad, I think you will agree with me that it is bad; and that is all I need” (1974, 123–124).
2. Thus the spirit in which my argument is made is similar to that of Doris (2015), who writes, “I insist that reflection is not necessary for, and may at times impede, the exercise of agency, while self-ignorance need not impede, and may at times facilitate, the exercise of agency.”
Deliberation and Spontaneity
1. Nondeliberative Action
My claim that deliberation is neither necessary nor always recommended for ethi-
cal action would be a nonstarter if deliberation were foundational for action. If we
had to deliberate in order to act for reasons, in other words, then we would have to
deliberate in order to act for ethical reasons. This idea—that practical deliberation
is necessary for action (properly so called)—is familiar in the philosophy of action.
For example, David Chan writes, “To make a reason for doing something the agent’s
reason for doing it, the reason must enter into a process of practical reasoning”
(1995, 140). Similarly, Peter Hacker writes, “Only a language-using creature can
reason and deliberate, weigh the conflicting claims of the facts it knows in the light
of its desires, goals and values, and come to a decision to make a choice in the light
of reasons” (2007, 239).
But I think deliberation is not foundational for action in this sense. Railton
(2009, 2014, 2015) and Arpaly and Schroeder (2012) have convincingly argued
for this view. They begin with the observation that agents often do things they have
reasons to do without having to recognize or consider those reasons in delibera-
tion. Railton argues that driving a car skillfully, producing just the right witty reply
in conversation, adjusting one’s posture to suit the mood of a group, shoplifting
competently, playing improvisational music, and shooting a basketball in the flow
of a game are all nondeliberative reason-responsive actions. Arpaly and Schroeder
discuss examples like seeing at a glance that a reaching device can be used to grasp
a banana, multiplying numbers in one’s head, passing the salt when someone says,
“Pass the salt,” and knowing how and when to deliberate itself. These examples are
similar to those discussed throughout this book. Railton and Arpaly and Schroeder
discuss them in order to address a regress problem common to many theories of
action. For Railton, the problem is with theories that attempt to identify a special
act—like choosing or endorsing one’s reasons for action—that distinguishes either
action done for reasons or autonomous action from mere behavior. For Arpaly and
Schroeder, the problem is with theories that rationalize action exclusively in terms
of deliberation.3 The problem for both Railton and Arpaly and Schroeder is that an
action cannot be rationalized or shown to be done for reasons by another second-
ary act of choosing or endorsing or deliberating unless that secondary choosing,
endorsing, or deliberating itself is autonomous or done for reasons. For if the sec-
ondary choosing, endorsing, or deliberating is not itself autonomous or done for
reasons, it will not be capable of showing the resulting action to be autonomous
or done for reasons. But whatever makes the secondary choosing, endorsing, or
deliberating autonomous or done for reasons itself must be autonomous or done for
reasons . . . and so on.
This means that no act of past, present, or future deliberation can rationalize
deliberation itself. Arpaly and Schroeder (2012, 218–219) elaborate on this, argu-
ing against what they call Previous Deliberation, Present Deliberation, and Possible
Deliberation. Previous Deliberation holds that deliberation is responsive to reasons
if it stands in an appropriate relation to a previous act of deliberation, such as ante-
cedently resolving to act in accord with a certain principle. The trouble with this way
of rationalizing deliberative action is that it succumbs to the regress problem. Each
3. Arpaly and Schroeder are more explicit than Railton about whose theories they have in mind. They identify Barry (2007), Chan (1995), Hacker (2007), Korsgaard (1997, 2009), and Tiberius (2002) as holders of the putatively mistaken view that deliberation is required for responding to reasons. They identify David Velleman's (2000) model of practical reason as one that may avoid this problem.
act for reasons that are not themselves full-fledged acts of thinking and acting for
reasons. They are especially concerned to show that deliberation itself (a “mental
action”) is reason-responsive on the basis of fundamental reason-responsive nonde-
liberative processes. However, they extend these arguments to cases beyond men-
tal action, arguing that, for example, passing the salt when someone says, “Pass the
salt,” is a nondeliberative reason-responsive action that need not stand in a rational-
izing relationship to other previous, present, or possible acts of deliberation. The
central idea is that the act of passing the salt is rationalized, not by any deliberative
act, but by a nondeliberative process that is nevertheless responsive to reasons. Such
nondeliberative reason-responsive processes are regress-stoppers.
The upshot of these arguments is that deliberation is not foundational for action.
As Arpaly and Schroeder put it, “Deliberation is not the foundation of our abil-
ity to think and act for reasons but a tactic we have for enhancing our preexisting,
and foundational, abilities to think and act for reasons: our ND [nondeliberative]
reason-responding” (2012, 234). If this is correct, then there should be cases in
which agents’ spontaneous nondeliberative reactions get things right—that is, seize
upon the agent’s best reasons for action—in the absence of, or even in spite of, the
conclusions of their deliberative practical reasoning.
the road when a person of a different race appears or feel profound disbelief when
that person says something intelligible. Huckleberry, from the beginning, appears
to be the mirror image of this sort of person: he is a deliberative racist and viscerally
more of an egalitarian” (2004, 76).
To interpret Huck this way requires two things. First, Huck’s explicit attitudes
conflict with his behavior. Indeed, what seems to elude Huck, as Arpaly argues, is
the belief that letting Jim go is the right thing to do. Much to the contrary, Huck
explicitly believes that, because he fails to turn Jim in, he’s a wicked boy and is going
to hell.5 Second, to interpret Huck this way requires that his action be a response to
some kind of character-sustaining motivation. Huck’s action-guiding attitude must
be his in such a way that it reflects well upon him. While of course Huck is fictional
and many interpretations of Twain’s text are possible, there is reason to think that
Huck’s motivation is praiseworthy in this sense.
Contrast this with Jonathan Bennett’s (1974) interpretation. He uses the Huck
Finn case to consider the relationship between what he calls “sympathy” and
morality. By sympathy, Bennett seems to mean what most people today would call
empathy, an emotional concern for others in virtue of their feelings and experi-
ences. Huck’s sympathy for Jim, on Bennett’s reading, is “just a feeling,” a “natural”
reaction to one’s friend, which Bennett says is irrational, given that all of Huck’s
reasons suggest that he ought to turn Jim in (1974, 126, 126–127). Bennett con-
trasts actions based on sympathy to actions based on reasons (whether good or
bad). While I agree with Bennett’s broad view that Huck’s conscience gets it right
despite his deliberation, I think Arpaly (2004) is right in pointing out that Bennett’s
understanding of Huck’s conscience is too crude. For Bennett, Huck acts on the
basis of some “purely atavistic mechanism,” perhaps akin, Arpaly suggests, to what
Kant calls “mere inclination” (76). But Twain’s description of Huck suggests that he
undergoes something more like a “perceptual shift” in his view of Jim. Arpaly writes:
5. Arpaly writes that Huck acts for good moral reasons "even though he does not know or believe that these are the right reasons. The belief that what he does is moral need not even appear in Huckleberry's unconscious. (Contra Hursthouse 1999, my point is not simply that Huckleberry Finn does not have the belief that his action is moral on his mind while he acts, but that he does not have the belief that what he does is right anywhere in his head—this moral insight is exactly what eludes him.)" (2004, 77).
to see that there is no particular reason to think of one of them as the infe-
rior to the other. While Huckleberry never reflects on these facts, they do
prompt him to act toward Jim, more and more, in the same way he would
have acted toward any other friend. That Huckleberry begins to perceive
Jim as a fellow human being becomes clear when Huckleberry finds him-
self, to his surprise, apologizing to Jim—an action unthinkable in a society
that treats black men as something less than human. (2004, 76–77)
If this is right, then it is appropriate to say that Huck acted ethically, in a nonac-
cidental way, despite what he believed he ought to do. Huck’s feelings for Jim are
not brute instincts, as Bennett suggests, but are instead attitudes that reflect upon
Huck’s experience and perception, his behavior, and how he incorporates feedback
from this behavior over time. In short, it isn’t a stretch to credit Huck’s egalitarian
act to his implicit attitudes toward Jim. Huck comes to care about Jim’s personhood,
it seems, and this is ethically praiseworthy, in the sense I argued in Chapters 4 and
5. His implicit attitudes don't reflect upon how rational he is or upon how consistently his conscience might be deployed in other contexts. They don't necessarily
reflect on who Huck is in a deep sense, overall. But they do reflect upon him.
This suggests that in a case like Huck’s, deliberation is not necessary for genu-
inely ethical action (i.e., action that isn’t just accidentally good or is good in some
way that doesn’t reflect on the person). However, it remains open that this may
be tantamount to scant praise. It seems clear that one’s esteem for Huck would
rise had he both acted rightly and recognized the rightness of his action in delib-
eration. In other words, we can praise Huck despite his being conflicted, while
also recognizing that it would have been better had he not been conflicted. Huck
isn’t good at deliberating. He fails to take into consideration a number of crucial
premises, for example, that Jim is a person and that people are not property. Had
Huck been a better deliberator, we might imagine him going forward in life more
likely to act ethically, without having to overcome deep rending personal conflict.
Similarly, in the Duly and LaPiere cases, there is no reason to think that the agents
couldn’t have acted deliberatively, or wouldn’t have acted better had their delibera-
tive judgments and nondeliberative inclinations aligned. While one might think
that Duly’s increasingly kinder treatment of Wells reflects well upon him—despite
what Duly appears to believe—one might still hold that Duly would be better off
both treating Wells kindly and explicitly judging that Wells ought to be treated
kindly. Similarly, LaPiere’s hoteliers/restaurateurs would have been better off, ethi-
cally speaking, both behaving in a nondiscriminatory way and avowing to act in a
nondiscriminatory way.
Cases like these leave it open that deliberation, while nonfoundational for
action, and perhaps not necessary in all cases for ethical action, is still always rec-
ommended. Deliberating (well) is always the best option, in other words. But other
cases challenge this idea too.
is because, presumably, Moritz’s euphoria was tied to the spontaneity of his deci-
sion. The more that Moritz engages in deliberative practical reasoning about what
to do, the less he has reason to continue to Zittau. Of course, Moritz doesn’t have
reasons to create a policy of skipping his stop regularly, just as Alfred and Belinda
don’t have reasons to eat ice cream for dinner every night. Indeed, doing so would
be another way of undermining the spontaneity of these decisions. But in these par-
ticular cases, agents have positive reasons to act nondeliberatively.
There is no reason Huck, Duly, or LaPiere’s hoteliers/restaurateurs wouldn’t
have been better off, ethically, had they been better deliberators and acted on the
basis of their practical deliberation. But this isn’t the case with Alfred and Belinda,
or in any case involving DVRs. Alfred’s problem is not that he should be a better
deliberator.8 His problem—the thing preventing him from acting spontaneously in
a way that he himself would value—is his very need to deliberate. In other words,
just in virtue of deliberatively considering whether to make a spontaneous choice
to eat ice cream for dinner, Alfred no longer has reasons to spontaneously choose to
eat ice cream for dinner.
DVR cases show that deliberation can have costs. And sometimes these costs tip
the scales in favor of not deliberating. The fact that deliberation can have costs in this
sense suggests that, in some contexts at least, deliberation can undermine ethical
action. Moreover, the contexts in which deliberation can undermine ethical action
are not rare. The relevant context is not limited to cases of acting on a whim, as
Belinda and Moritz do.
In some cases, it simply isn’t feasible to deliberate, and thus one’s goals are instru-
mentally supported by acting nondeliberatively.9 One reason for this is that real-time interaction does not offer agents the time required to deliberate about what to say
or do, and trying to deliberate in such circumstances is counterproductive. This is
evident when you think of a witty comeback to an insult, but only once it’s too late.
Unfortunately, most of us don’t have Oscar Wilde’s spontaneous wit. (Fortunately,
most of us also don’t have the same doggedness that motivated George Costanza
to fly to Ohio to deliver a good comeback.) To be witty requires rapid and fluent
assessments of the right thing to say in the moment. In addition to time pressure,
social action inevitably relies upon nondeliberative reactions because explicit think-
ing exhausts us. We are simply not efficient enough users of cognitive resources to
reflectively check our spontaneous actions and reactions all the time.
Beyond the feasibility problem of acting deliberatively in cases like these, there
are many cases in which agents’ nondeliberative reactions and dispositions are par-
ticularly well suited to the situation. The research on expertise in athletics discussed
in Chapter 5 provides one set of examples. On the tennis court, it is not just difficult
9. For this "feasibility argument," I am indebted to Gendler, "Moral Psychology for People with Brains."
to think deliberatively about which shots to play while playing. The best players
are those who seem to have the best spontaneous inclinations; the flexibility and
situation-specific responsiveness of their impulses is a positive phenomenon.10 That
is, like Belinda, their spontaneous reactions are getting it right, and there is no rea-
son to think that they, like Huck, would be better off were these reactions supported
by deliberative reasoning. Athletes who reflect on the reasons for their actions while
playing sometimes end up like Alfred, destroying those very reasons. (Of course,
in a wider slice of time, athletes do analyze their play, when they practice. I’ll argue
later that this doesn’t undermine the claim that deliberation is costly in sports, nor
does it undermine the broader claim that examples like these show that deliberation
can undermine ethical action.)
The same can be said of particular elements of social skill. The minor social
actions and reactions explained by our implicit attitudes often promote positive
and prosocial outcomes when they are spontaneously and fluently executed. Hagop
Sarkissian describes some of the ways in which these seemingly minor gestures can
have major ethical payoffs:
For example, verbal tone can sometimes outstrip verbal content in affect-
ing how others interpret verbal expressions (Argyle et al. 1971); a slightly
negative tone of voice can significantly shift how others judge the friendli-
ness of one’s statements, even when the content of those statements are
judged as polite (Laplante & Ambady 2003). In game-theoretic situations
with real financial stakes, smiling can positively affect levels of trust among
strangers, leading to increased cooperation (Scharlemann et al. 2001).
Other subtle cues, such as winks and handshakes, can enable individuals
to trust one another and coordinate their efforts to maximize payoffs while
pursuing riskier strategies (Manzini et al. 2009). (2010b, 10)11
Here, too, nondeliberative action is a positive phenomenon. It is not just that since
deliberation is not feasible in real-time action, we must make do with the second-best
alternative of relying upon our nondeliberative faculties. Rather, in cases of social
fluency, like athletic expertise, agents’ nondeliberative inclinations generate praise-
worthy outcomes, outcomes that could be undermined by deliberation. A smile that
reads as calculated or intentional—a so-called Pan Am smile—is less likely than a genuine, or Duchenne, smile to positively affect levels of trust among strangers.
Here it seems that it is the very spontaneity of the Duchenne smile—precisely that
it is not a product of explicit thinking—that we value. Supporting this, research sug-
gests that people value ethically positive actions more highly when those actions
10. But see Montero (2010) for discussion of particular sports and arts in which this is not the case.
11. While I endorse giving others kind smiles and meaningful handshakes, I think one should probably avoid winking at strangers.
are performed spontaneously rather than deliberatively (Critcher et al., 2013).
(However, perhaps as in the case of athletic expertise, the role of deliberation in
social fluency is indexed to past practice, as discussed later.)
Well-known thought experiments in philosophy can illustrate this idea too.
Bernard Williams’s suggestion that it is better to save one’s drowning spouse unhesi-
tatingly than to do so accompanied by the deliberative consideration that in situ-
ations like this saving one’s spouse is morally required illustrates the same point.
Williams’s point isn’t that deliberating interferes with right action by distracting or
delaying you. Rather, it’s that the deliberative consideration adds “one thought too
many” (1981, 18).12 Your spouse desires that you rescue him or her without your
needing to consult your moral values.13
Spontaneity has distinctive value in all of these cases, from conversational wit to
athletic expertise to social fluency to ethical actions that are necessarily nondelib-
erative. Deliberating in real time, when one is in the flow of decision-making, threat-
ens to undermine one’s practical success in all of these cases. Conversational wit
is undermined by deliberation in the moment, as is athletic improvisation, social
fluency, and so on. To the extent that people have reasons to be witty, or to excel
in high-speed sports, or to promote trust and cooperation through verbal tone and
smiling, they will undermine these very reasons by deliberating upon them.
Of course, in many cases, we contemplate and consider what to do in situations
like these ahead of time. Arguably the sharpest wits—stand-up comics—spend a
lifetime contemplating those nuances of social life that ultimately make up their
professional material. Likewise, expert athletes train for thousands of hours, and
much of their training involves listening to and considering the advice of their
coaches. So it is not as if deliberation plays no role in these cases. Rather, it seems to
play a crucial role in the agents’ past, in shaping and honing their implicit attitudes.
This seems to suggest that deliberation is indeed recommended in promoting
and improving virtuous implicit attitudes. It is just that the role it plays is indexed
to the agent’s past. It is important to note that this is not inconsistent with Railton’s
and Arpaly and Schroeder’s critique of Previous Deliberation (§1). Their point is
that past instances of deliberation cannot solve the regress problem of rationalizing
12. Bernard Williams makes this point in arguing against consequentialist forms of ethics, but the idea can be broadened.
13. Or consider the key scene in the film Force Majeure. What appears to be an avalanche is rushing toward a family. The father keeps his cool . . . until he panics and flees the oncoming rush, grabbing his cell phone on the way but abandoning his family. The avalanche turns out to be a mirage, but this takes nothing away from what was his quite reasonable terror. Of course, what was not reasonable was his abandonment of his wife and small children. Surely his problem wasn't a failure to deliberate clearly about what to do, but rather that he acted like a coward in a situation demanding spontaneity. The rest of the film grapples with his inability to admit what he did, but also with what his cowardly spontaneous reaction says about him as a person. All of this makes sense, I think, only within the framework of thinking that his nondeliberative reaction itself should have been better.
action. To reiterate, this is because one cannot claim that a present nondeliberative
action is an action as such just because the agent contemplated her reasons for it
at some point in the past, because that act of past deliberation itself stands in need
of rationalizing. The point of this argument is to show that agents can—indeed,
must—act for reasons nondeliberatively. But this claim about the nature of action
does not provide normative guidance about what agents ought to do, that is, when
they should or shouldn’t deliberate. As such, it is consistent with the argument
against Previous Deliberation that agents ought to cultivate their implicit attitudes
by reflecting on them in various ways ahead of time, in the ostensible manner of
comics and athletes and so on.
We should hesitate to accept the idea that deliberation is always recommended, even if it is indexed to the past, however. One reason for this is that there are some cases
in which agents act excellently and nondeliberatively without having had any sort
of training or preparation. Heroes, for example, often “just react” to the situation,
and do so without having deliberated about what to do at any time in the present
or the past. Wesley Autrey (Chapter 1)—the “Subway Hero” who saved the life of
an epileptic man who had fallen onto some subway tracks, by holding him down
while a train passed just inches overhead—didn’t credit his decision to any delibera-
tive act or explicit forethought, but rather to his experience working construction
in confined spaces (such that he could snap-judge that he would have enough room
for the train to pass overhead).14 Cases of social fluency often have this form too. As
I argued about Obama’s grin (Chapters 2 and 5), it is unlikely that any intentional
efforts in the past to practice reassuring gestures like this explain the fluency of his
act. Such efforts are usually woefully underspecific. They fail to distinguish those
with and without the relevant fluency (just as I said, in Chapter 4, that expert ath-
letes’ thousands of hours of intentional practice don’t distinguish them from inex-
pert athletes who have practiced for thousands of hours too).
There is also reason to question the role past deliberation plays in shaping one’s
implicit attitudes even in those cases in which agents do train for and practice what
they do. Expert athletes learn from their coaches and plausibly deliberate about
what their coaches teach them, but it’s not clear that what they learn is a product
of what they deliberate about or conceptualize. For example, in many ball sports,
like baseball, tennis, and cricket, athletes are taught to play by “watching the ball” or
“keeping their eye on the ball.” This instruction serves several purposes. Focusing
on the ball can help players to pick up nuances in the angle and spin of the incoming
serve or pitch; it can help players keep their head still, which is particularly impor-
tant in sports like tennis, where one has to meet the ball with one’s racquet at a
precise angle while one’s body is in full motion; and it can help players avoid dis-
tractions. One thing that attempting to “watch the ball” does not do, however, is
14. See http://en.wikipedia.org/wiki/Wesley_Autrey.
cause players to actually visually track the ball from the point of release (in baseball
or cricket), or opponent contact (in tennis), to the point of contact with one’s own
bat or racquet. In fact, it is well established that ball players at any level of skill make
anticipatory saccades to shift their gaze ahead of the ball one or more times during
the course of its flight toward them (Bahill and LaRitz, 1984; McLeod, 1987; Land
and McLeod, 2000). These saccades—the shifting of the eye gaze in front of the
ball—occur despite the fact that most players (at least explicitly) believe that they
are visually tracking the ball the whole time.15
Moreover, it seems clear that a batter who does have an accurate understand-
ing of what she does when she watches the ball, and attempts to put this under-
standing into action as a result of practical deliberation, won’t necessarily be any
better off than the ordinary batter who falsely understands what she does when she
intends to watch the ball. The batter with the accurate belief might even be worse
off. The point isn’t that batters can’t articulate their understanding of what they do
while playing. The point is that the best available interpretation of batters’ skill pos-
its well-honed implicit attitudes to explain their ability, not true beliefs about what
to do (as I argued in Chapter 3). These implicit attitudes generate dispositions to
respond to myriad circumstances in spontaneous and skillful ways. Good coaches
are ingenious in identifying ways to inculcate these attitudes. Of course, this tute-
lage involves explicit instruction. But as in the case of watching the ball, explicit
instruction may achieve its goal not via the force of the meaning of the instruction
itself. In tennis, the coach might say, “Think about placing your serve in the outer-
most square foot of the service box,” but effectively what this might do is cause the
player to unconsciously rotate her shoulders outwardly while serving.
While these are very specific examples, broader theories of skill learning tend to
emphasize observation rather than instruction. Consider research on how children
learn to manipulate objects given their goals. Showing them various toys, Daphna
Buchsbaum and colleagues (2010) told four-year-olds, “This is my new toy. I know
15. In fact, David Mann et al. (2007) show that skilled cricketers suffer little to no loss in their batting abilities even when they are wearing misprescribed contact lenses that render the batters' eyesight on the border of legal blindness (although one must note the small sample size used in this study). Research suggests that visual information in these contexts is processed in the largely unconscious dorsal visual stream (see Chapter 2, footnote 13). For an extensive analysis of the watching-the-ball example, see Brownstein and Michaelson (2016) and Papineau (2015). Recall (from Chapter 4) that Papineau argues that athletes in these contexts must hold in mind an intention to perform what he calls a "basic action." Basic actions are, on his view, actions that you know how to do and can decide to do without deciding anything else. For example, a batter might hold the intention in mind to "hit defensively." The batter need not—and ought not—focus on how to hit defensively. Focusing in this way on the components of a basic action can cause what Papineau calls the "yips." But failing to keep the right intention in mind causes what Papineau calls "choking." Distinguishing between choking and the yips in this way is useful, but the concern I just discussed remains so long as there is the possibility of the agent's intentions to perform basic actions being confabulated.
it plays music, but I haven’t played with it yet, so I don’t know how to make it go.
I thought we could try some things to see if we can figure out what makes it play
music.” The experimenters then showed the children various sequences of actions,
like shaking the toy, knocking it, and rolling it. Following a pattern, sometimes
the toy played music and sometimes it didn’t. Sometimes this pattern included
all the actions the experimenter performed, but sometimes it didn’t. When given
their turn to make the toy play music, the children didn’t simply imitate what the
experimenter did. Rather, they made decisions about which parts of the sequence
to imitate. These decisions reflected statistical encoding of the effective patterns
for making the toy work. However, in a second experiment, the experimenter told
another group of children, “See this toy? This is my toy, and it plays music. I’m going
to show you how it works. I’ll show you some things that make it play music and
some things that don’t make it play music, so you can see how it works.” In this peda-
gogical (vs. comparatively nonpedagogical) context, the children “overimitated” the
experimenter. They did just what the experimenter did, including needless actions,
thus ignoring statistical information. As a result, they were less effective learners. As
Alison Gopnik (2016), one of the coauthors of the study, put it, summarizing this
and other research, “It’s not just that young children don’t need to be taught in order
to learn. In fact, studies show that explicit instruction . . . can be limiting.”
As with my argument about deliberation writ large, my point here is not that
explicit instruction is always counterproductive to skill learning. Rather, my argu-
ment is that it is harder than it seems to explain away cases of nondeliberative skill
by indexing those skills to past acts of deliberative learning.
thinking—seem essential for avoiding them. Indeed, arguably the most widely
practiced and seemingly effective techniques for habit change and overcom-
ing unwanted impulses focus on critical self-reflection. I have in mind versions of
cognitive-behavioral therapy (CBT), which are standardly understood in terms
of encouraging agents to consciously and explicitly revise the ways in which they
make sense of their own habitual thoughts, feelings, and actions. CBT strategies
teach agents how to replace their unwanted or maladaptive cognitive, affective, and
behavioral patterns with patterns that match their reflectively endorsed goals. This
process of recognition and replacement is thoroughly deliberative, or so it is argued.
Indeed, it has been compared by its practitioners to an inwardly focused process of
Socratic questioning (Neenan and Dryden, 2005; Beck, 2011).
Nevertheless, I now want to point out important ways in which deliberation can
distract from and even undermine the fight against unwanted implicit attitudes.
One way in which deliberation can be distracting in combating unwanted
implicit attitudes is by simply wasting time. For example, cognitive therapies are
very common for treating phobias and anxiety disorders, in particular in conjunc-
tion with various exposure therapies. But reviews of the literature show that they
provide little, if any, added benefit to exposure therapies alone (Wolitzky-Taylor
et al., 2008; Hood and Antony, 2012). Moreover, consider one of the most striking
demonstrations of effective treatment for spider phobia (Soeter and Kindt, 2015).
First, spider-fearful participants were instructed to approach an open-caged taran-
tula. They were led to believe that they were going to have to touch the tarantula
and, just before touching it, were asked to rate their level of fear. The point of this
was to reactivate the participants’ fearful memories of spiders. After reporting their
After they reported their feelings, the cage was closed (i.e., they didn’t have to touch the spider). A treatment group was then given 40 mg of the beta blocker propranolol. (One control
group was given a placebo; another was given the propranolol but did not have the
encounter with the tarantula.) Beta blockers are known to disrupt the reconsolida-
tion of fear memories while leaving expectancy learning unaffected. After taking
the beta blocker, the participants’ approach behavior toward spiders was measured
four times, from four days post-treatment to one year post-treatment. Treatment
“transformed avoidance behavior into approach behavior in a virtual binary fash-
ion,” the authors write, and the effect lasted the full year after treatment (Soeter and
Kindt, 2015, 880). All participants in the treatment group touched a tarantula four
days after treatment, while participants in the control groups were barely willing
to touch the tarantula’s cage. Moreover, and most relevant to the effectiveness of
deliberatively considering one’s unwanted thoughts and feelings, participants in the
treatment condition reported an unchanged degree of fear of spiders after treatment,
despite these dramatic changes in their behavior. Eventually, three months later,
their self-reported fear “caught up” with the changes in their behavior. The authors
conclude, “Our findings are in sharp contrast with the current pharmacological and
cognitive behavioral treatments for anxiety and related disorders. The β-adrenergic
Deliberation and Spontaneity 169
blocker was only effective when the drug was administered upon memory reactiva-
tion, and a modification in cognitive representations was not necessary to observe a
change in fear behavior” (2015, 880).16
In a more general context, it is hard to tell when deliberative self-criticism and
reflection are doing the work of changing one’s attitudes and behavior and when
they merely seem to. This is often the case with cognitive therapies. Despite its practitioners’ understanding of CBT’s mechanisms for bringing about attitude and
behavioral change, there are reasons to think that CBT is effective for quite dif-
ferent reasons. One point of concern about theorists’ understanding of how CBT
works has to do with the accuracy of introspection. The theory of CBT assumes that
people can accurately introspect the maladaptive thoughts that drive their behav-
ior. This identification is putatively accomplished through a process of introspective
Socratic questioning (i.e., Socratic questioning of oneself, guided by the CBT practi-
tioner). But, of course, people are not always (or often) good at identifying through
introspection the thoughts that explain and cause their behavior. This is because
most of our thoughts are not encoded in long-term, episodic memory. Instead, as
Garson Leder (2016) points out in a critique of the dominant interpretation of how
CBT works, usually people rely upon theory-guided reconstructions when trying
to understand their own past experiences and thoughts (e.g., Nisbett and Wilson,
1977). Dominant theories of CBT require that people identify the actual past
thoughts that explain their current feelings and behavior, not just past thoughts that
could coherently make sense of their current feelings and behavior.17 But the reality of how memory works—as evidenced by the large confabulation literature, for example (as discussed later)—suggests that this is simply not feasible most of
the time. Alternatively, it is possible that CBT works through associative processes.
Patients might simply be counterconditioning their associations between particular
thoughts and particular feelings and behavior by rehearsing desirable thoughts in
conjunction with desirable outcomes.
A similar pattern may be at work in some cases of interventions for implicit bias.
Some research suggests that persuasive arguments effectively shift implicit atti-
tudes.18 If this is so, then it seems to follow that deliberating about the contents of
16
Another reason this experiment is striking is that propranolol has also been used effectively to
diminish anti-black bias on the black–white race IAT (Terbeck et al., 2012). This is suggestive of pos-
sibilities for future research on interventions to combat implicit bias, whether involving beta blockers
or other fear-reconditioning approaches. Such interventions may represent a “physical stance” of the
kind I describe in Chapter 7. They may also supplement the role of rote practice in implicit attitude
change, also described in Chapter 7.
17
Alternatively, patients are encouraged to “catch” and record snippets of their inner monologue in
particular and problematic situations. It is similarly unclear if doing this—accurately recording parts of
one’s inner monologue—is actually what’s responsible for the salutary effects of CBT.
18
See Richeson and Nussbaum (2004), Gregg et al. (2006), and Briñol et al. (2009). See also
Chapter 3.
170 Ethics
one’s implicit attitudes is likely to be an effective way of changing them. The connec-
tion here is that deliberation is the usual means by which we evaluate and appreciate
the strength of an argument. In deliberation, we try to infer sound conclusions from
premises and so on. So evidence showing that persuasive arguments change implicit
biases would seem to be good evidence in turn that deliberation is effective in com-
bating unwanted implicit biases.
Pablo Briñol and colleagues (2009) compared the effects on an IAT of read-
ing strong versus weak arguments for the hiring of black professors at universities.
One group of research participants learned that hiring more black professors has
been shown to increase the quality of teaching on campus—a strong argument—
while the other group was told that hiring more black professors was trendy—a
weak argument. Only participants in the strong argument condition had more posi-
tive implicit attitudes toward blacks after intervention. Mandelbaum (2015b) cites
this study as evidence supporting his propositional account of implicit attitudes
(Chapter 3). The reason he does so is that, on his interpretation, the implicit atti-
tudes of the participants in the strong-argument condition processed information
in an inferentially sound way. If this is right, it suggests that people ought to deliber-
ate about reasons to be unbiased, with the expectation that doing so will ameliorate
their implicit biases.
But as in the case of CBT, it is unclear exactly why argument strength affects
implicit attitudes. As Levy (2014) points out, to show that implicit attitudes are
sensitive to inferential processing, it is not enough to measure implicit attitudes
toward φ, demonstrate that agents make a relevant set of inferences about φ, and
then again measure agents’ implicit attitudes toward φ. It could be, for instance,
that the persuasive arguments make the discrepancy between agents’ implicit and
explicit attitudes salient, that agents find this discrepancy aversive, and that their
implicit attitudes shift in response to these negative feelings (Levy, 2014, 12).19 In
this case, implicit attitudes wouldn’t be changing on the basis of the logical force of
good arguments. Rather, they would be changing because the agent feels bad. In
fact, Briñol and colleagues have explored the mechanisms underlying the effects of
persuasive arguments on implicit attitude change. They suggest that what drives the
effect is simply the net valence of positive versus negative thoughts that the study
participants have. It’s not the logical force of argument, in other words, but rather
that positive arguments have positive associations. They write, “The strong mes-
sage led to many favorable thoughts . . . the generation of each positive (negative)
thought provides people with the opportunity to rehearse a favorable (unfavorable)
evaluation of blacks, and it is the rehearsal of the evaluation allowed by the thoughts
19
There may be reasons to doubt this interpretation, though. As Madva points out (pers. comm.), the common finding in the literature is that making people conscious of their implicit biases—or making those attitudes salient—tends to amplify rather than diminish bias on indirect measures.
(not the thoughts directly) that are responsible for the effects on the implicit measure”
(2009, 295, emphasis added).
This is consistent with what I suggested about the possible mechanisms driving
the effects of CBT, namely, that agents are counterconditioning their associations
between particular thoughts and particular feelings and behavior. If this is what’s
happening in Briñol and colleagues’ study, then the right conclusion might not be
that agents ought to deliberate about persuasive reasons to be unbiased. It might
be more effective to simply make people feel bad about being biased.20 (I’m not
arguing that we should make people feel bad about their implicit biases. In the next
chapter I canvass various nondeliberative tactics for improving the virtuousness of
implicit attitudes.)
Recent developments in multinomial modeling of performance on indirect mea-
sures of attitudes can help to uncover the particular processes underlying implicit
attitude change. The Quadruple Process Model (Quad Model; Conrey et al., 2005),
for instance, is a statistical technique for distinguishing the uncontrollable activa-
tion of implicit associations from effortful control over activated associations. If per-
suasive arguments really do effectuate changes within implicit attitudes themselves,
rather than by amplifying aversive feelings (or by some other means), the Quad
Model would (in principle) demonstrate this by showing changes in the activation
of associations themselves rather than heightened effortful control. Unfortunately,
this particular analysis has not been done.21 The Quad Model has been used, how-
ever, to analyze the processes underlying other implicit attitude change interven-
tions. Thus far, it appears that interventions that change the activation of implicit
associations themselves—in the context of implicit racial biases—include retrain-
ing approach and avoid tendencies toward members of stigmatized racial groups
(Kawakami et al., 2007b, 2008), reconditioning the valence of race-related stim-
uli (Olson and Fazio, 2006), and increasing individuals’ exposure to images, film
clips, or even mental imagery depicting members of stigmatized groups acting in
stereotype-discordant ways (Blair et al., 2001; Dasgupta and Greenwald, 2001;
Weisbuch et al., 2009). What’s notable about this list of interventions—all of which
I discuss in more depth in the next chapter—is its striking dissimilarity to reasoned
persuasion. These techniques seem to operate via retraining and countercondition-
ing, not deliberation.
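To illustrate how a multinomial model of this kind can separate processes, here is a simplified sketch, my own illustration rather than Conrey and colleagues’ actual model code, of the Quad Model’s processing tree. Given parameters for association activation (AC), detection (D), overcoming bias (OB), and guessing (G), it yields predicted accuracy on trials where the activated association is compatible versus incompatible with the correct response; fitting such parameters to observed error patterns is what lets the model distinguish changed activation from heightened effortful control:

```python
# Simplified processing tree in the style of the Quad Model (Conrey et al., 2005).
# All parameters are probabilities in [0, 1] (values below are made up):
#   ac: an implicit association is activated
#   d:  the correct response can be detected
#   ob: an activated association is overcome when it conflicts with detection
#   g:  a guess happens to favor the correct response

def p_correct(ac: float, d: float, ob: float, g: float, compatible: bool) -> float:
    """Predicted probability of a correct response on one trial type."""
    if compatible:
        # An activated association points at the correct response anyway.
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # Incompatible: the association must be overcome for detection to win out.
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g

# With strong activation and weak control, compatible trials stay accurate
# while incompatible trials suffer: the signature pattern on an IAT.
acc_compat = p_correct(ac=0.6, d=0.8, ob=0.3, g=0.5, compatible=True)
acc_incompat = p_correct(ac=0.6, d=0.8, ob=0.3, g=0.5, compatible=False)
print(round(acc_compat, 3), round(acc_incompat, 3))  # 0.96 0.504
```

On this sketch, an intervention that raises accuracy by lowering AC (weakened activation) and one that raises it by boosting OB (more effortful control) produce distinguishable patterns across trial types, which is the sense in which the Quad Model can tell retraining apart from control.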
In other cases, conscious and seemingly deliberative efforts to be unbiased can
actually make implicit biases worse. The problem here—as previously discussed—
is that deliberation requires effortful attention, and self-directed effortful attention
is cognitively depleting. The effect of cognitive depletion is then sometimes more
biased behavior (in addition to other unwanted behaviors, such as outbursts of
20
For detailed discussion of Briñol and colleagues’ research and its bearing on questions about the
nature of implicit attitudes, see Madva (2016b).
21
So far as I know; see Madva (2017).
anger).22 Evan Apfelbaum and colleagues (2008) show that agents’ tendencies to
avoid talking about race in interracial interactions predict negative nonverbal behav-
iors. People who are “strategically color-blind,” in other words, tend to demonstrate
more negative race-based micro-behaviors. Moreover, Apfelbaum and colleagues
find that this relationship is mediated by a decreased capacity to exert self-control.
This finding is consistent with a cognitive depletion explanation of this phenom-
enon. One’s resources for controlling one’s biases through effortful and conscious
control become temporarily spent, thus leading to more biased behavior.23
When we move beyond a narrow focus on implicit bias, we see not only that
deliberation sometimes has problematic consequences, but also that deliberation
itself can be subject to biases and shortcomings. The order in which we contem-
plate questions or premises, the vivacity with which a moral dilemma is posed, the
framing of a policy as an act or an omission—all of these dramatically affect the
judgments we make when deliberating. Perhaps nowhere are the vulnerabilities
of deliberation clearest, though, than in the confabulation literature. Nisbett and
Wilson’s (1977) classic paper, “Telling More than We Can Know: Verbal Reports
on Mental Processes,” instigated a surge of studies demonstrating the frequent
gulf between the contents of our mental deliberations and the actual reasons for
our actions. Nisbett and Wilson’s findings pertained to the “position effect” on the
evaluation of consumer goods. When asked to rate the quality of four nightgowns
or four pairs of pantyhose, which were laid out in a line, participants routinely pre-
ferred the rightmost item. This was a dramatic effect; in the case of pantyhose, par-
ticipants preferred the pair on the right to the other pairs by a factor of 4 to 1. The
trick was that all the nightgowns and pantyhose were identical. Moreover, none of
the participants reported that they preferred the items on the right because they
were on the right. Instead, they confabulated reasons for their choices. Even more
dramatic effects have since been found. The research of Petter Johansson and col-
leagues (2005) on choice blindness demonstrated that people offer introspectively
derived reasons for choices that they didn’t even make. This was shown using a
double-card ploy, in which participants are asked to state which of two faces shown
in photographs is more attractive and then to give reasons for their preference. The
ploy is that, unbeknownst to participants, the cards are switched after they state
their preference but before they explain their choice. Johansson and colleagues
22
For cognitive depletion and anger, see Stucke and Baumeister (2006) and Gal and Liu (2011).
For cognitive depletion and implicit bias, see Richeson and Shelton (2003) and Govorun and Payne
(2006), in addition to Apfelbaum et al. (2008), discussed here. A recent meta-analysis of the ego deple-
tion literature (Hagger et al., 2016) may complicate these claims, but I am unaware of any failures to
replicate these specific studies. For detailed discussion of this meta-analysis and of effect sizes and
replication efforts in the ego depletion literature, see http://soccco.uni-koeln.de/cscm-2016-debate.
html. See also my brief discussion in the Appendix.
23
Although other factors are surely in play when agents attempt to “not see” race. A difficulty taking
the perspective of others is a likely factor, for example. Thanks to Lacey Davidson for this suggestion.
found that, 74% of the time, participants failed to detect that they were giving reasons for preferring the face that they did not in fact prefer. This includes trials in
which participants were given as much time as they wanted to deliberate about their
choice as well as trials in which the face pairs were highly dissimilar.24
Studies like these make it hard to argue straightforwardly that thinking more
carefully and self-critically about our unwanted implicit attitudes is always the opti-
mal solution for improving them. If there is often a gulf between the reasons for
which we act and the reasons upon which we deliberate, as studies like these sug-
gest, then deliberation upon our reasons for action is likely to change our behavior
or attitudes in only a limited way.
Still further evidence for this is found in studies that examine the judgments
and behavior of putative exemplars of deliberation. Consider, for example, research
on what Dan Kahan and colleagues (2017) call “motivated numeracy.” Numeracy
is a scale used to measure a person’s tendency to engage in quantitative reason-
ing and to do so reflectively and systematically in order to make valid inferences.
Kahan and colleagues told one set of participants about a study examining the
effects of a new cream developed for treating skin rashes. The participants were
given basic data from four conditions of the fictitious study (in a 2 × 2 contin-
gency table): how many patients were given the cream and improved; how many
were given the cream and got worse; how many were not given the cream and
improved; and how many were not given the cream and got worse. The partici-
pants were then asked whether the study supported the conclusion that people
who used the cream were more likely to get better or to get worse than those who
didn’t use the cream. This isn’t an easy question, as it involves comparing the ratios
of those who experienced different outcomes. Unsurprisingly, 59% of participants
(across conditions) gave the wrong answer. Also unsurprising, and confirming
extensive previous research, was that the participants’ numeracy scores predicted
their success: 75% of those in the 90th percentile and above gave the right answer.
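The ratio comparison that makes this question hard can be made concrete. Below is a minimal sketch in Python, using hypothetical cell counts of my own (the study’s actual counts are not reproduced here), showing why the intuitive raw-count comparison misleads while the ratio comparison does not:

```python
# Covariance detection from a 2x2 contingency table (hypothetical counts).
# Rows: given the cream vs. not; columns: improved vs. got worse.

def improvement_rate(improved: int, worse: int) -> float:
    """Proportion of a group that improved."""
    return improved / (improved + worse)

# Hypothetical data in the style of the skin-cream condition:
cream_improved, cream_worse = 223, 75        # patients given the cream
no_cream_improved, no_cream_worse = 107, 21  # patients not given the cream

cream_rate = improvement_rate(cream_improved, cream_worse)           # ~0.75
no_cream_rate = improvement_rate(no_cream_improved, no_cream_worse)  # ~0.84

# The tempting (wrong) strategy compares raw counts: 223 > 107 suggests the
# cream helps. The correct strategy compares rates: a smaller share of the
# cream group improved, so these data support the opposite conclusion.
print(cream_improved > no_cream_improved)  # True  (misleading raw comparison)
print(cream_rate > no_cream_rate)          # False (correct ratio comparison)
```

With counts like these, the larger group given the cream generates a larger absolute number of improvers even though its improvement rate is lower, which is exactly the trap that low-numeracy participants fall into.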
In a second condition, Kahan and colleagues presented the participants with the
same data, except that instead of presenting them as the results of a medical study
on skin cream, they presented the data as the results of a study on the effects of
gun-control laws on crime rates. The same 2 × 2 table was presented, this time
showing how many cities had enacted gun-control measures and crime increased;
how many cities had enacted gun-control measures and crime decreased; how
many cities had not enacted gun-control measures and crime increased; and how
many cities had not enacted gun-control measures and crime decreased. And
24
I note the general gendered creepiness of the research prompts used in these studies. One won-
ders why Nisbett and Wilson chose nightgowns and pantyhose, rather than, say, belts and aftershave.
Was it a presumption that women are more likely to confabulate than men? One also wonders why
Johansson and colleagues had participants rate the attractiveness of photographs of women only. Are
men not fitting subjects for evaluation as objects of attraction?
the participants were then asked to indicate whether “cities that enacted a ban on
carrying concealed handguns were more likely to have a decrease in crime . . . or an
increase in crime than cities without bans” (Kahan et al., 2017, 63). Here, too, Kahan
and colleagues confirmed mountains of previous research, except that in this con-
dition the unsurprising finding had to do with motivated reasoning. Participants’
political attitudes affected their evaluations of the implications of the data. In the
gun-control condition, but not in the skin-cream condition, liberals increasingly
identified the right answer when the right answer (given the fictitious data) was that
gun-control measures decrease crime, and conservatives increasingly identified the
right answer when the right answer was that gun-control measures increase crime.25
What’s striking about this study is a third finding, which builds upon these two
unsurprising effects (i.e., that higher numeracy predicts participants’ quantitative
reasoning and that people engage in motivated reasoning when they are evaluat-
ing something that holds political, social, or moral meaning for them). The third
finding was that higher numeracy predicted participants’ performance in the gun-control condition only when the correct response was congenial to their political
outlooks. Higher numeracy didn’t protect participants from motivated reasoning,
in other words. In fact—and this is what’s really striking—motivated reasoning
increased dramatically in more numerate participants. Kahan and colleagues report:
This suggests that people who are better at quantitative reasoning are more likely to
engage in motivated reasoning than people who are worse at quantitative reason-
ing. Kahan and colleagues suggest that this is because more numerate individuals
have a cognitive ability that they use opportunistically to sustain identity-protective
beliefs. They are better at reasoning toward false conclusions, in other words, in
order to confirm their antecedent beliefs.26
25
Within each condition, half of the participants were given data that supported the inference that the skin cream/gun-measure improves/increases the rash/crime, and the other half were given data that supported the inference that the skin cream/gun-measure worsens/decreases the rash/crime.
26
But it is notable that Kahan and colleagues do not take this finding to suggest that these partici-
pants are acting irrationally. They write, “We submit that a form of information processing cannot reli-
ably be identified as ‘irrational’, ‘subrational’, ‘boundedly rational’, or the like independent of what an
individual’s aims are in making use of information (Baron, 2008, p. 61). It is perfectly rational, from an
individual welfare perspective, for individuals to engage with decision-relevant science in a manner that
promotes culturally or politically congenial beliefs. What any individual member of the public thinks
about the reality of climate change, the hazards of nuclear waste disposal, or the efficacy of gun control
is too inconsequential to influence the risk that that person or anyone he or she cares about faces.
Nevertheless, given what positions on these issues signify about a person’s defining commitments,
forming a belief at odds with the one that predominates on it within important affinity groups of which
such a person is a member could expose him or her to an array of highly unpleasant consequences
(Kahan, 2012). Forms of information processing that reliably promote the stakes that individuals have
in conveying their commitment to identity defining groups can thus be viewed as manifesting what
Anderson (1993) and others (Lessig, 1995; Akerlof and Kranton, 2000; Cohen, 2003; Hillman, 2010;
Stanovich, 2013) have described as expressive rationality. If ideologically motivated reasoning is expres-
sively rational, then we should expect those individuals who display the highest reasoning capacities to
be the ones most powerfully impelled to engage in it (Kahan et al., 2012). This study now joins the rank
of a growing list of others that fit this expectation and that thus supports the interpretation that ideo-
logically motivated reasoning is not a form of bounded rationality, but instead a sign of how it becomes
rational for otherwise intelligent people to use their critical faculties when they find themselves in the
unenviable situation of having to choose between crediting the best available evidence or simply being
who they are.” (2017, 77, emphasis in original).
27
I say that this finding is consistent with Schwitzgebel and Cushman’s research on the presump-
tion that professional philosophers rate relatively high in cognitive sophistication. I do not know
whether this is actually true. At the least, I suspect that it is true of common beliefs about professional
philosophers.
5. Conclusion
These arguments are not at all tantamount to the radical view that deliberation has
no role, or even an unimportant role, in making our spontaneous reactions more
virtuous. Rather, they show that deliberation is neither necessary nor always rec-
ommended for ethical action. This leaves open the possibility that some forms of
deliberation are better than others.28 But it forecloses the possibility that the way to
ensure that our implicit attitudes are ethical is always to scrutinize them, Alfred-like.
This we often ought to do, but it isn’t always necessary and it can be costly. The ques-
tion remains, then: What ought we to do in these situations? How can we effectively
cultivate better “instincts” for acting both spontaneously and ethically, Belinda-like?
28
See Elizabeth Anderson (2005) for a compelling comment on the value of creating more effective “deliberative contexts” for combating the shortcomings of some moral heuristics.
7
The Habit Stance
In the preceding chapter, I argued that deliberation is neither necessary nor always
recommended for ethical action. One way to gloss this claim is to think in terms of
Dennett’s distinction between “stances” for predicting behavior. Dennett (1989)
popularized the terms “physical stance,” “design stance,” and “intentional stance”.
These terms refer to ways of making predictions about what entities in the world
will do. The physical stance is the basic orientation of the natural sciences. If you
want to predict what a rock will do when I drop it from my hand, you’ll use the
physical stance (or some lay approximation of it). That is, you’ll use what you know
about how gravity affects objects in order to make a prediction about what the rock
will do (namely, fall toward the earth). You won’t worry about what the rock has
been designed to do, unless you’re an extreme creationist or believe in an archaic
teleological version of physics. Nor will you worry about the rock’s wanting or
hoping or desiring to fall to the earth, since most people don’t think rocks have
intentional states like these. If you want to predict what a toaster will do when you
press the start button, however, you probably won’t focus on the underlying physics
of heat and electricity, nor again will you focus on what you imagine the toaster’s
hopes or fears to be. Rather, because the toaster is an artifact, you’ll focus on what it
was designed to do. You’ll do this because it’s effective and reliable to predict what
the toaster will do on the basis of your understanding of what it was designed to do.
Your understanding of its design might be wrong, but taking the design stance is
still probably your best bet. Finally, if you want to make a prediction about what a
person will do—say, what Selma will do if she hates her job—you are best off focus-
ing on familiar features of her mind. She is likely to show up late, complain about
her boss, maybe quit, and so on. Jerry Fodor illustrates the appeal of the inten-
tional stance: “If you want to know where my physical body will be next Thursday,
mechanics—our best science of middle-sized objects after all, and reputed to be
pretty good in its field—is no use to you at all. Far the best way to find out (usually, in
practice, the only way to find out) is: ask me!” (1987, 6, emphasis in original).
Each stance can be taken toward one and the same entity, with variable degrees
of success. It won’t do much good to take the intentional stance toward rocks and
toasters, but a neuroscientist might make very successful predictions about human
behavior by taking the physical stance. (This doesn’t take a neuroscientist, of course.
Taking the physical stance, I can predict that if I feed you a lot of caffeine, you will
act energetically.) Similarly, evolutionary psychologists might make predictions
about human emotions by taking the design stance. For example, we have evolved
to feel disgusted by germs and pathogens that can harm us (Kelly, 2013). This leads
to a design-stance prediction that you will feel disgusted if I offer you a handful of
rotting food.
In arguing that deliberation is neither necessary nor always recommended for
ethical action, one could say that I am arguing against taking the intentional stance
alone toward oneself. Roughly, by this I mean that, in endeavoring to improve the
virtuousness of our implicit attitudes, we should not treat them as rational states to
be reasoned with. Admittedly, this is a bit of a stretch of Dennett’s concept. Taking
the intentional stance doesn’t mean focusing on an agent’s deliberations per se or on
making predictions about the effects of various self-regulation strategies. Dennett’s
stances are tools for making predictions about what entities will do, by treating those
entities as if their behavior were driven just by physical forces, functional design, or
intentional states.1 What is adaptable from Dennett, though, is the broad distinction
between predictions that focus on an entity’s explicit mental states—those states
capable of figuring into deliberation and practical reasoning—and predictions that
don’t. This adaptation is useful when we are considering what kinds of interven-
tions for improving our implicit attitudes are effective and for conceptualizing the
attitude we take toward ourselves when aspiring to self-improvement.2
At the same time, the ethics of implicit attitudes is a matter neither of adopting
the physical stance alone toward oneself nor of adopting the design stance alone
toward oneself. Implicit attitudes are intentional states with cognitive and affective
components. The actions they underwrite reflect upon us; they are examples of
what we care about. These qualities of implicit attitudes recommend against regu-
lating them by treating them as if they were governed by the laws of physics alone.
They also recommend against treating them as if they were designed to perform
certain fixed functions.
1
The “as if ” is important for allowing Dennett room to endorse physicalism. The point of these
stances is not to distinguish entities that are and aren’t made of, or governed by, physical forces. The
point is rather about making the most accurate and efficient predictions of behavior.
2
I am indebted to Louise Antony (2016) for the idea of characterizing various interventions for
changing implicit attitudes in terms of Dennett’s stances. Note that the relevant distinction here is not
between stances that maintain an engaged, first-person perspective and stances that take up a disen-
gaged, third-person perspective toward one’s implicit attitudes. In the way that I will use the concept of
“stances,” all stances will involve taking certain attitudes toward one’s implicit attitudes. These are ways
of treating our implicit attitudes when we try to change them. So even taking the intentional stance
toward one’s implicit attitudes involves adopting a third-person perspective on them.
The Habit Stance 179
While improving our implicit attitudes is not reducible to taking the intentional,
physical, or design stance, it involves elements of each. Cultivating more virtu-
ous implicit attitudes is in part intentional, physical, and design-focused, in other
words. In recognition of William James’s point that “our virtues are habits as much
as our vices” (1899, 64) (and in keeping with the tradition that any decent work of
philosophical psychology must quote James), I will call the most promising hybrid
of these stances the habit stance.3 The habit stance is in part intentional, because
improving our implicit attitudes ineliminably involves goal-setting and adopting
particular motives. It is in part physical, because improving our implicit attitudes
necessarily involves rote repetition and practice, activities that operate on the basis
of brutely causal processes. And it is in part design-focused, because improving our implicit
attitudes requires both appreciating what they are designed by evolution to do
and appreciating how to design our environments and situations to elicit the right
kinds of spontaneous responses.4 In this context, adopting the habit stance means
cultivating better implicit attitudes—and hence better spontaneous decisions and
actions—by treating them as if they were habits.
In this chapter, I describe what it means to take the habit stance in endeavoring
to improve the virtuousness of our implicit attitudes. I do so by appealing to the
most empirically well-supported and promising interventions for implicit attitude
change. One virtue of this approach is that it skirts a worry about the “creepiness”
of implicit attitude change interventions. The creepiness worry is that these inter-
ventions will be widely rejected because they seem akin to a kind of brainwashing.
Some interventions involve rote repetition of associations, and some are even pre-
sented subliminally. On their own, these techniques seem to be ways of taking the
physical stance alone toward attitude change, which does have a Clockwork Orange
ring to it. Understanding these techniques in the broader context of the habit stance,
however, should alleviate the creepiness worry. Treating our implicit attitudes as if
they were habits means treating them as part of who we are, albeit a part distinct
from our ordinary, conscious beliefs and values.
Of course, research on implicit social cognition is relatively new, which means
that research on implicit attitude change is even newer. While it is clear that implicit
attitudes are malleable, there is much to learn about the most effective techniques
for changing them. First I’ll discuss three general approaches that increasingly
appear to be well supported in both lab-based and field studies (§1). Each repre-
sents a different element of habit change, that is, a way of taking the habit stance
toward oneself. Respectively, I’ll outline rote practice (§1.1), pre-commitment (§1.2),
3 Thanks to Alex Madva for suggesting this concept.
4 Understood this way, the idea of the habit stance marks a step beyond what others have rightly
noted about states like implicit attitudes, that they are reducible to neither “reasons” nor “causes”
(Gendler, 2012; Dreyfus, 2007a). This is true, and our efforts to improve them should similarly not be
reduced to targeting either of these alone.
180 Ethics
and context regulation (§1.3). These are not the only effective strategies for the self-
regulation of implicit attitudes, nor will they necessarily prove to be the most effec-
tive. Future research will tell. But what they illustrate is the value and importance
of the habit stance, and in particular how this stance is composed of elements of
the others (i.e., of the intentional, physical, and design stances). Then I’ll consider
objections to taking the habit stance in this way (§2). Some objections are primarily
empirical; they are about the breadth and durability of implicit attitude change
interventions. Another is not empirical. It is about the nature of praise, in particular
whether the reshaping of one’s attitudes and behavior in the ways I describe counts
as a genuine form of ethics.
1. Techniques
Research on self-regulation is immensely broad, denoting more a set of assumptions about techniques for self-improvement than a particular target set of
unwanted behaviors. Two assumptions are particularly salient in the field. One is
that approaches to ethical self-cultivation can and should be subjected to empirical
scrutiny. I embrace this assumption in what follows. The second is that the relation-
ship between self-regulation and self-control is extremely tight. I reject this assump-
tion. In doing so, I cast a wider net in considering the most effective techniques for
improving our implicit attitudes, wider, that is, than a focus on self-control strate-
gies alone would allow.
“Self-regulation” and “self-control” are not settled terms in the research literature.
But roughly, self-regulation refers to the general process of managing one’s thoughts,
feelings, and behavior in light of goals and standards (e.g., Carver and Scheier
1982; Fujita 2011). Self-control refers more narrowly to impulse inhibition (e.g.,
Mischel et al., 1988) or, as Angela Duckworth puts it, to “the voluntary regulation
of behavioral, emotional, and attentional impulses in the presence of momentarily
gratifying temptations or diversions.”5 Many researchers see self-regulation and self-
control, understood in roughly these terms, as very tightly linked. In their widely
cited meta-analysis, for example, Denise T. D. de Ridder and colleagues say that,
despite important differences between competing theories, “researchers agree that
self-control focuses on the efforts people exert to stimulate desirable responses and
5 Duckworth’s definition differs from Mischel’s in the sense that she does not emphasize the inhibition of impulses in the service of resisting temptation. One can exhibit self-control, on her definition, by avoiding situations in which one will feel unwanted impulses. See https://sites.sas.upenn.edu/duckworth/pages/research. Note also that others challenge these definitions of self-regulation
and self-control. See, e.g., Jack Block (2002), who advances a roughly Aristotelian conception of self-
control, as a capacity that admits of deficiencies as well as excesses. For an elaboration of these issues,
as well as of the examples I present in the text, see Brownstein (ms).
running late, the adaptive response to things not working out may be to
start walking home (explore a more promising strategy). When the youth
is struggling with schoolwork the automatic response could be either to
just move on, or to persist, depending partly on whether they think of this
as the same situation as someone being late picking them up from school
(for example if they think “this is yet another situation where I can’t rely on
others”). (2015, 10–11)
There are also counterexamples that challenge the idea that self-regulation and
self-control are always tightly linked. Self-regulation is sometimes attained without
suppressing occurrent impulses or resisting temptations, such as when one success-
fully manages competing thoughts and feelings while playing sports (Fujita, 2011).
Shooting a free throw in basketball, for instance, can involve suppressing impulsive
feelings of fear about missing the shot, but it need not. And it is likely that the best
shooters don’t suppress these feelings of fear; rather, they simply don’t feel them.
Sometimes one achieves self-regulation by acting on the basis of one’s impulses.
Alfred and Belinda, from Chapter 6, offer a good example of this. Both, we can
presume, share the long-term intention to be healthy and sensible, and both also
enjoy spontaneously eating ice cream for dinner from time to time. Only Belinda,
however, has good instincts for when and where to spontaneously eat ice cream for
dinner. In other words, only Belinda has a good feel for when to act on the basis
of her impulses and temptations. She seems better self-regulated than Alfred, who
cannot let himself indulge without carefully scrutinizing his decision (in such a way,
as I discussed earlier, that he ruins the idea of spontaneously eating ice cream for
dinner). Alfred, then, seems to exert more self-control than Belinda, but
Belinda seems better self-regulated than Alfred.
As I have said, avoiding the assumption that self-regulation necessarily involves
effortful self-control enables me to cast a wider net in considering the most effec-
tive techniques for improving our implicit attitudes. I begin with the simplest: rote
practice.
a coach, who assigns practice activities with explicit goals and effective practice
activities with immediate feedback and opportunities for repetition” (2016, 352).
Brooke Macnamara and colleagues, in their meta-analysis, employ a much broader
understanding of practice, including activities like weight training and watching
film. Neither of these approaches speaks to the role of rote practice in skill develop-
ment. Surely such repetitive action is insufficient for expertise. But it is consistent
with the evidence that it is nearly always necessary.7
This is not a new discovery in other areas of life. Confucius’s Analects place great
emphasis on the rote practice of ritual activities (Li). Practice is central to living in
accord with what Confucius calls “the Way” (Dao). For example, in Analects 8.2,
“The Master said, ‘If you are respectful but lack ritual you will become exasperating;
if you are careful but lack ritual you will become timid; if you are courageous but
lack ritual you will become unruly; if you are upright but lack ritual you will become
inflexible’ ” (Confucius, 2003, p. 78). Performance of ritual covers a variety of activi-
ties in the Analects. On the Confucian picture, these rituals and rites serve to form
moral character and habits, which in turn help to create social harmony (Li, 2006).
Sarkissian (2010a) has combined these insights with recent research in social
psychology and neuroscience in arguing that prolonged ritual practice facilitates
spontaneous yet ethical behavior through the creation of somatic neural markers.
In short, these are affective tags that mark particular behavioral routines as positive
or negative (Damasio, 1994). Sarkissian writes:
The accrual of these markers over time fine-tunes and accelerates the
decision-making process; at the limit, the correct course of action would
come to mind immediately, with compelling emotional valence . . . famil-
iarity with a broad range of emotions, facilitated through exposure to litera-
ture, art, and social rituals, will allow one to perceive values in a wide range
of scenarios, thus improving the likelihood of responding appropriately in
any particular situation. (2010a, 7)
Somatic marker theory has been combined with theories of feedback-error learn-
ing, of the sort I discussed in Chapter 2 (Holroyd et al., 2003). This nicely connects
my account of the functional architecture of implicit attitudes with established and
plausible theories of habit and self-regulation.
The most head-on evidence attesting to the importance of rote practice for implicit
attitude change, however, comes from research on changing implicit biases. Kerry
Kawakami and colleagues have demonstrated that biased approach and avoidance
tendencies can be changed through bare-bones rote practice. In their research, this
involves training participants to associate counterstereotypes with social groups.
For example, in the training condition in Kawakami et al. (2007a), participants were
shown photographs of men and women, under which both gender-stereotypic (e.g.,
“sensitive” for women) and gender-counterstereotypic (e.g., “strong” for women)
information was written. Participants were asked to select the trait that was not cul-
turally associated with the social group represented in the picture. Participants prac-
ticed fairly extensively, repeating this counterstereotype identification a total of 480
times. The effect of practicing counterstereotypical thinking in this way was a sig-
nificant change in participants’ behavior. In the evaluation phase of the experiment,
participants made hypothetical hiring decisions about male and female candidates,
as well as rated the candidates in terms of stereotypical and counterstereotypical
traits. Kawakami and colleagues have varied the social groups and traits used as
prompts in other studies, as well as the behavioral outcome measures. For example,
using this procedure, Kawakami and colleagues (2000) asked participants to negate
black-athletic stereotypes. They were able to completely reverse biased participants’
implicit attitudes. Moreover, variations in the training technique have also proved
effective. Kawakami et al. (2007b) used a procedure in which participants pulled a
joystick toward themselves when they were presented with black faces and pushed
the joystick away from themselves when presented with white faces. The notion
guiding this technique originates in research on embodied cognition. Gestures
representing physical avoidance—pushing something away from one’s body—can
signify mental avoidance—effectively saying no to that thing. Likewise, gestures
representing physical approach—pulling something toward oneself—can signify
acceptance of that thing (Chen and Bargh, 1999).
Rote practice is at the heart of this paradigm, throughout its many variations. It
is crucial that participants in these experiments simply practice associating stigma-
tized social groups with counterstereotypes repeatedly. One way to think of this,
recommended by Madva (2017), is as the “contact hypothesis in a bottle.” In short,
the contact hypothesis—originally proposed by Gordon Allport (1954) and exten-
sively researched since—proposes that prejudiced attitudes can be changed by inter-
group contact. Put plainly, spending time and interacting with people from social
groups other than one’s own promotes egalitarian attitudes and mutual understand-
ing. Evidence for the effectiveness of contact in promoting these ends is substantive,
though only under particular conditions. Intergroup contact promotes egalitarian
explicit attitudes when both parties are of equal status and when they are engaged
in meaningful activities, for example (Pettigrew and Tropp, 2006). Evidence for the
effectiveness of intergroup contact in improving implicit attitudes is also substan-
tive, but also under particular conditions (Dasgupta and Rivera, 2008; Dasgupta,
2013). For example, Natalie Shook and Fazio (2008) found in a field study that
random assignment to a black roommate led white college students to have more
positive implicit attitudes toward black people.
Counterstereotype training mimics at least part of the mechanism that
drives the effects of intergroup contact: repeated experiences that strengthen
8 But see Kawakami et al. (2007b) for a description of a subliminal counterstereotype training technique, which does not require participants to choose to change their stereotypical associations.
Kawakami and colleagues find that participants sometimes resent counterstereotype training, and
thus they endeavored to design a less obtrusive intervention. It is noteworthy, though, that while participants who resented counterstereotype training briefly showed ironic effects of the training (i.e., more stereotypic responding on subsequent tasks), they subsequently demonstrated less biased responding over time. It seems as if, once their resentment abated, they demonstrated the more lasting effects of the intervention (i.e., less stereotypic responding).
in Confucian ethics is aimed at shaping the way that one perceives and acts upon
values. This is clearly irreducible to a physical stance ethics. And finally, while rote
practice certainly takes advantage of the ways in which we have evolved—in par-
ticular the way in which our minds are akin to categorization machines that update
in the face of changing experiences (Gendler, 2011)—rote practice is irreducible to
a design stance intervention too. The point of practicing a particular behavior is not
to fulfill some function we have acquired through evolution. Rather, it is to cultivate
better spontaneous responses to situations. Doing this is akin to cultivating better
habits, which involves intentional, physical, and design elements, but is reducible to
none of these.
1.2 Pre-Commitment
In Chapter 5, I briefly mentioned Ulysses’ strategy of instructing his crew to bind
him to the mast of his ship so that he could resist the Sirens’ song. I called this a
form of pre-commitment. Ulysses wanted to resist the song; he knew he wouldn’t
be able to resist it; so he made a plan that denied himself a choice when the moment
came. In one sense, this strategy clearly reflected Ulysses’ beliefs, values, and intentions. If you wanted to predict that he would use it, you would want to adopt the intentional stance. In another sense, though, Ulysses took the physical stance
toward himself. He saw that he would be unable to resist the song on the basis of
his willpower. Instead, he relied on the physical force of the ropes and the mast to
keep him in place. If you wanted to predict whether Ulysses would in fact resist the
Sirens’ song, you would want to test the strength of the rope rather than the strength
of his resolve.
Ulysses’ technique was crude, literally relying on binding himself by force. More
subtle versions of self-regulation via pre-commitment have since been devised.
Principal among them—and perhaps most intriguing for psychological reasons—
are techniques that regulate one’s attention through pre-commitment.
Consider implementation intentions. These are “if–then” plans that appear to
be remarkably effective for redirecting attention in service of achieving a wide
range of goals. Implementation intentions specify a goal-directed response that
an individual plans to perform when she encounters an anticipated cue. In the
usual experimental scenario, participants who hold a goal, “I want to X!” (e.g., “I
want to eat healthy!”) are asked to supplement their goal with a plan of the form,
“and if I encounter opportunity Y, then I will perform goal-directed response
Z!” (e.g., “and if I get take-out tonight, then I will order something with vegeta-
bles!”). Forming an implementation intention appears to improve self-regulation
in (for example) dieting, exercising, recycling, restraining impulses, maintain-
ing focus (e.g., in sports), avoiding binge drinking, combating implicit biases,
and performing well on memory, arithmetic, and Stroop tasks (Gollwitzer and
Sheeran, 2006).
It is worth digging a little deeper into the pathways and processes that allow
if–then planning to work, given how remarkably effective and remarkably simple
this form of self-regulation appears to be.9 Two questions are salient in particular.
First, why think that implementation intentions are a form of self-regulation via
pre-commitment, in particular involving the redirecting of attention? Second, why
think that self-regulation of this kind reflects the habit stance?
Peter Gollwitzer and Paschal Sheeran (2006) argue that adopting an imple-
mentation intention helps a person to become “perceptually ready” to encoun-
ter a situation that is critical to that person’s goals. Becoming perceptually ready
to encounter a critical cue is a way of regulating one’s attention. Perceptual readiness has two principal features: it involves increasing the accessibility of cues relevant to a particular goal and automatizing one’s behavioral response to those cues.
For example, performance on a shooter bias test—where one’s goal is to “shoot”
all and only those individuals shown holding weapons in ambiguous pictures—is
improved when one adopts the implementation intention “and if I see a gun, then
I will shoot!” (Mendoza et al., 2010). Here the relevant cue—“gun”—is made more
accessible—and the intended response—to shoot when one sees a gun—is made
more automatic. Gollwitzer and colleagues explain this cue–response link in asso-
ciative terms: “An implementation intention produces automaticity immediately
through the willful act of creating an association between the critical situation and
the goal-directed response” (2008, 326). It is precisely this combination of willfully
acting automatically that makes implementation intentions fascinating.
I’ll say more about the automatization of behavioral response in a moment. It
seems that being “perceptually ready” is tied in particular to the relative accessi-
bility of goal-relevant cues when one has adopted an implementation intention.
Researchers have demonstrated what they mean by cue accessibility using a num-
ber of tasks. Compared with controls, subjects who adopt if–then plans have better
recall of specified cues (Achtziger et al., 2007); have faster reactions to cues (Aarts
et al., 1999); and are better at identifying and responding to cued words (Parks-
Stamm et al., 2007). It appears that the critical cues specified and hence made more
accessible by adoption of implementation intentions can be either external (e.g., a
word, a place, or a time) or internal (e.g., a feeling of anxiety, disgust, or fear).
All of this suggests that adopting implementation intentions enhances the
salience of critical cues, whether they are internal or external to the subject. Cues,
in this context, become Features for agents. In some cases, salience is operation-
ally defined. In these cases, for a cue to be accessible just means for the subject to
respond to it quickly and/or efficiently, regardless of whether the subject sees the
cue consciously or perceives it in any particular way. It is possible, though, that the
9 See Gollwitzer and Sheeran (2006) for a review of average effect sizes of implementation intention interventions.
perceptual content of the cue changes for the subject, at the level of her percep-
tual experience. For example, if I am in a state—due to having adopted an imple-
mentation intention—where I am more likely to notice words that begin with the
letter “M,” I might literally see things differently from what I would have seen had
I merely adopted the goal of looking for words that begin with the letter “M.”10 In
both cases, implementation intentions appear to be a form of self-regulation via the
pre-planned redirecting of one’s attention.
Why think that this is a way of taking the habit stance? Without pre-commitment to the plan—which appears to represent an intentional stance ethics—implementation intentions can’t get off the ground. Indeed, implementation intentions are tools to improve goal striving. Just as Ulysses consciously and intentionally
chose to bind himself to the mast, if–then planners must consciously and intention-
ally choose to act in way Z when they encounter cue Y. But the analogy cuts both
ways. Ulysses had to follow through with literally binding himself. In an important
sense, he had to deny himself agency in the crucial moment. If–then planning seems
to rely upon a related, but more subtle version of denying oneself choices and deci-
sions in the crucial moments of action. This is the automatization of behavioral
response. The idea is that if–then planning makes it more likely that one just will
follow the plan, without evaluating, doubting, or (re)considering it. More generally,
this is the idea of pre-commitment. It treats one’s future self as a thing to be man-
aged in the present, so that one has fewer choices later.11 One could imagine Alfred
benefiting from adopting a plan like “If I see ice cream bars at the grocery store, then
I’ll buy them!”12 Perhaps doing so would help him to avoid destroying his reasons
to be spontaneous by deliberating about those reasons.
We can find further reason to think that attention-regulation via pre-commitment involves, but is not reducible to, the intentional stance by considering related
self-regulation strategies that don’t work. Adding justificatory force to one’s implementation intentions appears to diminish their effectiveness. Frank Wieber and colleagues (2009) show that the behavioral response of subjects who were instructed
to adopt an if–then plan that included a “why” component—for example, “If
I am sitting in front of the TV, then I will eat fruit because I want to stay healthy!”
(emphasis added)—returned to the level obtained in the goal-intention-only con-
dition. Being reminded of one’s reasons for striving for a goal seems to undermine
the effectiveness of adopting an implementation intention, in other words. Wieber
10 See Chapter 2 for discussion of different versions of this possibility, some of which involve cognitive penetration of perception and some of which don’t.
11 See Bratman (1987) on how intentions can serve to foreclose deliberation.
12 As I discuss in §2, the dominant focus in research on self-regulation is on combating impulsivity.
But there are those of us like Alfred who are not too myopic, but are “hyperopic” instead. We (I admit)
struggle to indulge even when we have good reason to do so. For research on hyperopic decision-
making, see Kivetz and Keinan (2006), Keinan and Kivetz (2011), and Brownstein (ms).
and colleagues found this using both an analytic reasoning study and a two-week
weight-loss study. It would be interesting to see if “if–then–why planning” has simi-
larly detrimental effects in other scenarios.
While implementation intentions clearly aren’t reducible to a physical stance
intervention, there is evidence that their effectiveness is tied to surprisingly low-
level physical processes. Implementation intentions appear to have effects on the
very beginning stages of perceptual processing. This emerges from research on the
role of emotion in if–then planning. Inge Schweiger Gallo and colleagues (2009)
have shown that implementation intentions can be used to control emotional reac-
tivity to spiders in phobic subjects. Schweiger Gallo and colleagues’ paper is the
only one I know of that offers a neurophysiological measure of emotion regulation
using implementation intentions. I mention this because they find that implemen-
tation intentions diminish negative emotional reactions to spiders very early in the
perceptual process, within 100 msec of the presentation of the cue. Compared with
controls and subjects in the goal-only condition, subjects who adopted if–then
plans that specified an “ignore” reaction to spiders showed lower activity in the P1
component of visual cortex. This suggests that subjects are not merely controlling
their response to negative feelings; they are downregulating their fear itself, using
what James Gross (2002) calls an “antecedent-based” self-regulation strategy (ante-
cedent to more controlled processes).13
Adopting an implementation intention is an example of regulating one’s future
behavior by making it the case that one pays attention to one thing rather than
another or by changing how one sees things. While this involves both intentional
and physical stance elements, as I’ve argued, one could also see this kind of self-
regulation strategy as reflective of the design stance. Pre-commitment involves
“hacking” one’s future behavior. It takes advantage of the sort of creatures that we
are, in particular that we can forecast how we will and won’t behave in the future;
13 This example is unlike many better-known cases in which implementation intentions have
powerful effects on behavior, cases in which the self-regulation of emotion does not appear to drive
the action. Cases in which if–then plans promote the identification of words that start with the letter
“M,” for example, or performance on Stroop tasks or IATs, are not cases in which negative emotions
are explicitly restrained. In these cases, there is no obvious emotional content driving the subject’s
response to the cue. But as I discuss in Chapter 2, there is the possibility that low-level affect plays a
pervasive role in visual processing. It is possible, then, that cues specified by implementation inten-
tions come to be relatively more imperatival for agents. In cases where one’s goal is to block emotional
reactions, as in spider phobia, the imperatival quality of goal-relevant cues may be diminished. In other
cases, when one’s goal is to react to “M” words, for example, the imperatival quality of the goal-relevant
cue may be increased. It is worth noting that leading implementation intention researchers Thomas
Webb and Paschal Sheeran (2008) make room for something like what I’m suggesting. They note that
it is an open question whether “forming an if–then plan could influence how much positive affect is
attached to specified cues and responses, or to the cue–response link, which in-turn could enhance the
motivational impact of these variables” (2008, 389).
that we are “set up to be set off ” (see Chapter 4) by particular kinds of stimuli (e.g.,
sugar, threats, disgusting things); and that how likely we are to achieve our goals is
perhaps surprisingly susceptible to minor variations in the form of our plans (e.g.,
implementation intention researchers insist that including the exclamation point at
the end of if–then plans matters—that is, “if Y, then Z!” rather than “if Y, then Z”).14
As before, in the case of rote practice, the idea of the habit stance captures the
nature of these kinds of interventions better than any one of the intentional, physi-
cal, or design stances alone. Gollwitzer has described implementation intentions as
“instant habits,” and although the “instant” part is a bit tongue in cheek, it does not
seem far off. Habits are, ideally, expressions of our goals and values, but not just this.
They are useful precisely because they leave our goals and values behind at the cru-
cial moment, just as Ulysses left his autonomy behind by tying himself to the mast.
One cannot see why they work in this way without adopting the physical and design
stances too, without understanding, in the case of implementation intentions, how
early perceptual processing works and how we have evolved to respond to particular
cues in particular ways. For these reasons, self-regulating our implicit attitudes via
pre-commitment is best understood as an ethical technique born of treating our-
selves as if we were habit-driven creatures.
14 T. Webb (pers. com.).
15 See also Haley and Fessler (2005) for similar results on a lab-based task.
many sodas as they want), the policy encourages more moderate consumption by
removing the cue triggering overconsumption (i.e., the large cup). Richard Thaler
and Cass Sunstein (2008) describe policies like these as efforts to engineer the
“choice architecture” of social life. Such policies, as well as default plans for things
like retirement saving and organ donation, nudge people toward better behavior
through particular arrangements of the ambient environment.
Nudge policies are politically controversial, since they raise questions about
autonomy and perfectionism, but they are psychologically fascinating.16 The partic-
ular variables that moderate their effectiveness—or combinations of variables—are
revealing. For example, David Neal and colleagues (2011) have shown that people
who habitually eat popcorn at the movies will eat stale unappetizing popcorn even
if they’re not hungry, but only when the right context cues obtain. People eat less
out of habit if they are not in the right place—for example, in a meeting room rather
than at a cinema—or if they cannot eat in their habitual way—for example, if they
are forced to eat with their nondominant hand. Identifying the context cues rel-
evant to different domains of self-regulation is crucial.
Ideas like these, of manipulating subtle elements of the context, have been
used to promote the self-regulation of implicit attitudes. Gawronski and Cesario
(2013) argue that implicit attitudes are subject to “renewal effects,” which is a term
used in animal learning literature to describe the recurrence of an original behav-
ioral response after the learning of a new response. As Gawronski and Cesario
explain, renewal effects usually occur in contexts other than the one in which the
new response was learned. For example, a rat with a conditioned fear response to
a sound may have learned to associate the sound with an electric shock in context
A (e.g., its cage). Imagine then that the fear response is counterconditioned in con-
text B (e.g., a different cage). An “ABA renewal” effect occurs if, after the rat is placed
back in A (its original cage), the fear response returns. Gawronski and Cesario argue
that implicit attitudes are subject to renewal effects like these.
For example, a person might learn biased associations while hanging out with her
friends (context A), effectively learn counterstereotypical associations while tak-
ing part in a psychology experiment (context B), then exhibit behaviors consistent
with biased associations when back with her friends. Gawronski and Cesario dis-
cuss several studies (in particular, Rydell and Gawronski, 2009; Gawronski et al.,
2010) that demonstrate in controlled laboratory settings renewal effects like these
in implicit attitudes. The basic method of these studies involves an impression for-
mation task in which participants are first presented with valenced information
about a target individual who is pictured against a background of a particular color.
This represents context A. Then, the same target individual is presented with oppo-
sitely valenced information against a background of a different color, representing
16. See Waldron (2014); Kumar (2016a).
The Habit Stance 193
context B. Participants’ evaluations of the target are then assessed using an affec-
tive priming task in which the target is presented against the color of context A. In
this ABA pattern, participants’ evaluations reflect what they learned in context
A. Gawronski and Cesario report similar renewal effects (i.e., evaluations consistent with the valence of the information that was presented first) in the patterns
AAB (where the original information and new information are presented against
the same background color A, and evaluations are measured against a novel back-
ground B) and ABC (where original information, new information, and evaluation
all take place against different backgrounds).
While the literature Gawronski and Cesario discuss emphasizes the return of
undesirable, stereotype-consistent attitudes in ABA, AAB, and ABC patterns, they
also discuss patterns of context change in which participants’ ultimate evaluations
of targets reflect the counterconditioning information they learned in the second
block of training. These are the ABB and AAA patterns. The practical upshot of
this, Gawronski and Cesario suggest, is that one ought to learn counterstereotyp-
ing information in the same context in which one aims to be unbiased (ABB and
AAA renewal).17 And what counts as the “same” context can be empirically speci-
fied. Renewal effects appear to be more responsive to the perceptual similarity
of contexts than they are to conceptual identity or equivalence. So it is better, for
example, to learn counterstereotyping information in contexts that look like one’s
familiar environs than it is to learn it in contexts that one recognizes to be conceptually equivalent. It may matter less that a de-biasing intervention aimed at classroom interactions is administered in another “classroom,” for example, than that it
is administered in another room that is painted the same color as one’s usual class-
room. Finally, Gawronski and Cesario suggest that if it is not possible to learn coun-
terstereotyping interventions in the same or similar context in which one aims to be
unbiased, one ought to learn counterstereotyping interventions across a variety of
contexts. This is because both ABA and ABC renewal effects are weaker when counterattitudinal information is presented across a variety of contexts rather than just one. The
reason for this, they speculate, is that fewer contextual cues are incorporated into
the agent’s representation of the counterattitudinal information when the B context
is varied. A greater variety of contexts signals to the agent that the counterattitudinal
information generalizes to novel contexts.
17. A common critique of implicit bias interventions is that they are too focused on changing
individuals’ minds, rather than on changing institutions, political arrangements, unjust economics,
etc. (cf. Huebner, 2009; Haslanger, 2015). Sometimes this critique presents social activism and self-
reformation as mutually exclusive, when in fact they are often compatible. The idea of learning coun-
terstereotypical information in the same context in which one hopes to act in unbiased ways represents
a small, but very practical way of combining “individual” and “institutional” efforts for change. The
relationship between individuals and institutions is reciprocal. Changing the context affects what’s in
our minds; and what’s in our minds shapes our willingness to change the context.
194 Ethics
These studies suggest that minor features of agents’ context—like the back-
ground color against which an impression of a person is formed—can influ-
ence the activation of implicit attitudes, even after those attitudes have been
“unlearned.” These are striking results, but they are consistent with a broad array
of findings about the influence of context and situation on the activation and
expression in behavior of implicit attitudes. Context is, of course, a broad notion.
Imagine taking an IAT. The background color against which the images of the
target subjects are presented is part of the context. Perhaps before having taken
the IAT, the experimenter asks the subject to imagine herself as a manager at a
large company deciding whom to hire. Having imagined oneself in this powerful
social role will have effects on one’s implicit evaluations (as discussed later), and
these effects can be thought of as part of one’s context too. Similarly, perhaps one
had a fight with one’s friend before entering the lab and began the task feeling an
acute sense of disrespect, or was hungry and jittery from having drunk too much
caffeine, and so on.
As I use the term, “context” can refer to any stimulus that moderates the way
an agent evaluates or responds behaviorally to a separate conditioned stimulus.
Anything that acts, in other words, as what theorists of animal learning call an
“occasion setter” can count as an element of context.18 A standard example of an
occasion setter is, again, an animal’s cage. A rat may demonstrate a conditioned
fear response to a sound when in its cage, but not when in a novel environment.
Similarly, a person may feel or express biased attitudes toward members of a
social group only when in a particular physical setting, when playing a certain
social role, when feeling a certain way, or the like. In Chapter 3, I discussed physi-
cal, conceptual, and motivational context cues that shape the way implicit atti-
tudes unfold. These involved the relative lightness or darkness of the ambient
environment (Schaller et al., 2003); varying the category membership of tar-
gets on tests of implicit attitudes (Mitchell et al., 2003; Barden et al., 2004);
and inducing salient emotions (Dasgupta et al., 2009). Each of these kinds of
findings presents a habit stance opportunity for the cultivation of better implicit
attitudes.
As I mentioned previously, research suggests that one should learn counterste-
reotypical information in a context that is physically similar to the one in which one
aims to be unbiased. Changing the physical context in this way is a way of treating
our implicit attitudes as if they were habits. It stems from our long-term reflective
goals; it operates on us as physical entities, whose behaviors are strongly affected by
seemingly irrelevant features of our situation; and it is design-focused, akin to the
“urban planning” of our minds.
18. I am indebted to Gawronski and Cesario (2013) for the idea that context acts as an occasion setter. On occasion setting and animal learning, see Schmajuk and Holland (1998).
Interventions that focus on conceptual elements of context are similar. For exam-
ple, research suggests that perceiving oneself as high-status contributes to more
biased implicit attitudes. Ana Guinote and colleagues (2010) led participants in a
high-power condition to believe that their opinions would have an impact on the
decisions of their school’s “Executive Committee,” and these participants showed
more racial bias on both the IAT and Affect Misattribution Procedure (AMP) than
those in a low-power condition, who were led to believe that their opinions would
not affect the committee’s decisions. These data should not be surprising. It is not
hard to imagine a person who treats her colleagues fairly regardless of race, for exam-
ple, but (unwittingly) grades her students unfairly on the basis of race. Perhaps being
in the superordinate position of professor activates this person’s prejudices while
being in an equal-status position with her colleagues does not. To combat these
tendencies, one could devise simple strategies to diminish the salience of superordi-
nate status. Perhaps the chair of the department could occupy a regular office rather
than the fancy corner suite. Or perhaps she could adopt a relevant implementation
intention, such as “When I feel powerful, I will think, ‘Be fair!’ ” These also look to
be habit stance interventions, irreducible to Dennett’s more familiar stances, but
instead a hybrid of all three.
So, too, with motivational elements of context. In addition to salient emotions
like disgust, “internal” versus “external” forms of motivation selectively affect
implicit attitudes. People who aim to be unbiased because it’s personally impor-
tant to them demonstrate lower levels of implicit bias than people who want to
appear unbiased because they are concerned to act in accord with social norms
(Plant and Devine, 1998; Devine et al., 2002; for a similar measure see Glaser
and Knowles, 2008). In the broader context of self-regulation, Marina Milyavskaya
and colleagues (2015) find that people with intrinsic (or “want-to”) goals—such
as the desire to eat healthy food in order to live a long life—have lower levels of
spontaneous attraction to goal-disruptive stimuli (such as images of unhealthy
food) than do people with external (or “have-to”) goals—such as the desire to
eat healthy food in order to appease one’s spouse. In this research, spontaneous attraction is measured with both the IAT and the AMP. Milyavskaya and colleagues
found that intrinsic, want-to motivation is responsible for the effect. External or
have-to motivation is unrelated to one’s automatic reactions to disruptive stimuli.
This is important because it is suggestive of the kinds of interventions for implicit
attitude change that are likely to be effective. It seems that we should try to adopt
internal kinds of motivation. While this is far from surprising to common sense,
it speaks to the nature of the habit stance. Internal forms of motivation seem hard
to simply adopt in the way that one can simply choose to adopt a goal. Instead,
internal motivation must itself be cultivated in the way of habit creation, much
as knowing how and when to deliberate is itself in part a nondeliberative skill
(Chapter 6).
2. Objections
One worry about context-regulation strategies in particular is that they seem to be
very fine-grained. It seems as if there are a million different situational cues that
would need to be arranged in such-and-such a way as to shape and activate the
implicit attitudes we want. Moreover, it’s possible that different combinations of
different situations will lead to unpredictable outcomes. I think the only thing to do
in the face of these worries is await further data. Much more research is needed in
order to home in on the most powerful contextual motivators, effective combina-
tions of situational cues, and so on. This point applies in general to the other habit
stance interventions I’ve recommended, all of which are provisionally supported by
the extant data. I don’t think this implies waiting to employ these strategies, though,
since the extant data are the best data we’ve got. Rather, it simply means keeping our
confidence appropriately modest.
There are broader worries about the habit stance techniques I’ve discussed, how-
ever. Some of these are also empirical (§2.1). Are these techniques broadly applica-
ble, such that what works in the lab works across a range of real-life situations? And
are the effects of these techniques durable? Will they last, or will agents’ unwanted
implicit attitudes return once they are bombarded with the images, temptations,
and solicitations of their familiar world? Another worry is not empirical (§2.2). It is
that these self-regulation techniques do not produce genuinely ethical, praisewor-
thy action. Perhaps they are instead merely tricks for acting in ways that agents with
good ethical values already want to act. Are they perhaps a kind of cheating, akin to
taking performance-enhancing drugs for ethical action?
19. See Madva (2017) for discussion.
socially stigmatized groups in the world, and the goal of any substantive ethics of
implicit attitudes should be to create broad dispositional ethical competencies. The
idea here is that these techniques seem to have too narrow a scope.
Much of this criticism can be adjudicated only empirically. How long-lasting the
effects of the kinds of attitude-change techniques I’ve discussed can be is an open
question. There is some reason for optimism. Reinout Wiers and colleagues (2011;
replicated by Eberl et al., 2012) significantly diminished heavy drinkers’ approach
bias for alcohol, as measured by the IAT, one year after lab-based avoidance train-
ing using a joystick-pulling task akin to those described earlier in Kawakami and
colleagues’ (2007b) research (§1.1). Dasgupta and Asgari (2004) found that the
implicit gender biases of undergraduate students at an all-women’s college were
reduced over the course of one year and that this reduction in implicit bias was
mediated by the number of courses taken from female (i.e., counterstereotypical)
professors (for a related study, see Beaman et al., 2009). Moreover, adopting an
implementation intention has been shown to affect indirect measures of attitudes
three months past intervention (Webb et al., 2010). And, finally, Patricia Devine
and colleagues (2012) demonstrated dramatically reduced IAT scores in subjects
in a twelve-week longitudinal study using a combination of the aforementioned
interventions.20
Despite these promising results, however, none of these studies have investigated
the durability of interventions as an isolated independent variable in an extended
longitudinal study.21 Doing so is the next step in testing the durability of interven-
tions like these. A helpful distinction to take into account for future research may be
between those interventions designed to change the associations underlying one’s
implicit attitudes and those interventions designed to heighten agents’ control over
the activation of those associations.22 Originally, many prejudice-reduction interven-
tions were association-based (i.e., aimed at changing the underlying associations).
20. However, see Forscher et al. (2016) for a comparatively sobering meta-analysis of the stability of
change in implicit attitudes after interventions. See the Appendix for brief discussion.
21. The exception is Webb et al. (2010). See also Calvin Lai et al. (2016), who found that nine rapid
interventions that produced immediate changes in IAT scores had no significant lasting effects on IAT
scores several hours to several days later. This finding suggests that a magic bullet—i.e., a rapid and
simple intervention—for long-term prejudice reduction is unlikely to exist. But see later in the text for
discussion about more complex multipronged approaches to implicit attitude change, which I believe
are more likely to create lasting change, particularly if they involve repeated practice of interventions
within daily life settings.
22. I draw this distinction from Lai et al. (2013), although I note that the distinction is in some
respects blurry. Identifying changes in underlying association is tricky because agents’ control over
activated associations influences indirect measures like the IAT. Process-dissociation techniques like
the Quad Model (Conrey et al., 2005) distinguish the influence of activated associations and control
over them, but these techniques are based on statistical assumptions that are relative rather than abso-
lute. Nevertheless, the distinction is useful for roughly categorizing kinds of interventions based on
their presumed mechanisms for affecting behavior.
23. See the discussion of the Quad Model (Conrey et al., 2005) in Chapter 6.
24. Thanks to Louise Antony for this example.
are still to come. Moreover, there is already some evidence that the effects of some
habit stance techniques may be broader than one might initially expect. For exam-
ple, Gawronski and colleagues (2008) found that counterstereotype race training
(specifically, training participants to affirm counterstereotypes—e.g., “intelligent”
and “wealthy”—about black people) reduced their implicit racial prejudice. That
is, learning to affirm specific counterstereotypes led participants to implicitly like
black people more. Liking is far broader than stereotyping. When we like or dislike
a group, we act in many specific ways consistent with this attitude. What this sug-
gests is that targeting specific implicit attitudes in interventions may affect the suite
of attitudes to which they are related.
We can find additional evidence for the potential broadness of these kinds of
techniques by looking again at the difference between association- and control-based interventions. If a person is generally less likely to automatically stereotype
people on the basis of race, this should have more widespread effects on behavior
across a range of contexts, even when the person lacks the motivation or oppor-
tunity to control it. As such, it is possible that association-based interventions are
likely to have greater effects on a wider range of behaviors than control-based inter-
ventions (Kawakami et al., 2000). On the other hand, control-based interventions
may have larger effects on “niche-specific” behavior.
Ultimately, durable and broadly effective strategies for changing and regulat-
ing implicit attitudes will likely take a multipronged approach. This is especially
likely as circumstances change or as one moves into new environments, where the
demands of the situation require recalibrating one’s associations. When slower
court surfaces were introduced in professional tennis, for example, players presum-
ably had to exert top-down control at first in order to begin the process of learn-
ing new habits. But this was also presumably accompanied by association-based
retraining, such as the rote repetition of backcourt shot sequences (rather than
net-attacking sequences). Indeed, Devine and colleagues (2012) label their multi-
pronged approach to long-term implicit attitude change a “prejudice habit break-
ing intervention.” The next step is to test the extent to which specific interventions
work and then to test how they might be combined. Combining various interven-
tions might be mutually reinforcing in a way that makes their combination stronger
than the sum of the parts, perhaps by “getting the ball rolling” on self-regulation.
Relatedly, future research may (and should) consider whether different forms of
presentation of interventions affect participants’ use of these interventions outside
the laboratory. Does it matter, for instance, if an intervention is described to partici-
pants as a form of control, change, self-control, self-improvement, or the like? Will
people be less motivated to use self-regulatory tools that are described as “chang-
ing attitudes” because of privacy concerns or fears about mental manipulation, or
simply because of perceptions of the difficulty of changing their attitudes? Will
interventions that appeal to one’s sense of autonomy be more successful? These
open questions point to the possibility of fruitful discoveries to come, some of
which will hopefully alleviate current reasonable worries about habit stance tech-
niques for implicit attitude change.
2.2 Cheating
A different kind of broadly “Kantian” worry focuses on the nature of “moral credit”
and virtue.25 There are (at least) two different (albeit related) ways to understand this
worry. I have discussed versions of both worries earlier, in particular in Chapters 4
and 5. The discussion here focuses on how these worries apply to my account of the
habit stance.
One version of the worry is that actions based on “mere” inclinations can be
disastrous because they aren’t guided by moral sensibilities. As a result, an approach
to the self-regulation of implicit attitudes focused on habit change, without a spe-
cific focus on improving moral sensibilities, would be limited and perhaps even
counterproductive. Barbara Herman (1981, 364–365) explicates the Kantian con-
cern about “sympathy” unguided by morality in this way. She imagines a person
full of sympathy for others who sees someone struggling, late at night, with a heavy
package at the back door of the Museum of Fine Arts. If such a person acts only on
the basis of an immediate inclination to help others out, then she might end up abet-
ting an art thief. Herman’s point is that even “good” inclinations can lead to terrible
outcomes if they’re not guided by a genuinely moral sense. (This way of putting
the worry might be too consequentialist for some Kantians.) She puts the point
strongly: “We need not pursue the example to see its point: the class of actions that
follow from the inclination to help others is not a subset of the class of right or duti-
ful actions” (1981, 365).
The second way to understand this worry focuses on what makes having a moral
sense praiseworthy. Some argue that mere inclinations cannot, by definition, be
morally praiseworthy, because they are unguided by moral reasons. The key distinc-
tion is sometimes drawn in terms of the difference between acting “from virtue”
and acting “in accord with virtue.” A person who acts from virtue acts for the right
(moral) reasons. Contemporary theorists draw related distinctions. For example,
Levy (2014) argues that acting from virtue requires one to be aware of the mor-
ally relevant features of one’s action (see Chapter 5). Acting from virtue, in other
words, can be thought of as acting for moral reasons that one recognizes to be moral
reasons.
Both versions of what I’m calling the broadly “Kantian” worry seem to apply to
the idea of the habit stance. Perhaps even with the most ethical implicit attitudes, a
person will be no better off than Herman’s sympathetic person. After all, even the
25. I place “Kantian” in scare quotes in order to emphasize that the following ideas may or may not
reflect the actual ideas of Immanuel Kant.
most ethical implicit attitudes will give rise to spontaneous reactions, and these
reactions seem tied to the contexts in which they were learned. Their domain of
skillful and appropriate deployment might be worryingly narrow, then. Moreover
one might worry that adopting the habit stance does not require one to act from
virtue. There is little reason to think that rote practice, pre-commitment, and con-
text regulation put one in a position of acting for moral reasons that one recognizes
to be moral. Nor do these strategies seem poised to boost one’s awareness of any
particular moral reasons for action.
Regarding the first worry, I think things are considerably more complex than
Herman’s example ostensibly illustrating the pitfalls of misguided inclination
suggests. Her example of the sympathetic person who stupidly abets an art thief
is a caricature. In reality, most people’s sympathetic inclinations are not blunt,
one-size-fits-all responses to the world. Rather, most people’s “mere” inclinations
themselves can home in on key cues that determine whether this is a situation
requiring giving the person in need a hand or giving the police a call. Indeed, if
anything, it strikes me that the greatest difficulty is not usually in deciding, on the
basis of moral sense, whether to help or hinder what another person is trying to
do, although, of course, situations like this sometimes arise. What’s far more dif-
ficult is simply taking action when it’s morally required. Adopting the habit stance
in order to cultivate better spontaneous responses is directly suited to solving this
problem.
Of course, this does not mean that our implicit attitudes always get it right (as
I’ve been at pains to show throughout this book). As Railton puts it:
Statistical learning and empathy are not magic—like perception, they can
afford only prima facie, defeasible epistemic justification. And—again,
like perception itself—they are subject to capacity limitations and can
be only as informative as the native sensitivities, experiential history, and
acquired categories or concepts they can bring to bear. If these are impov-
erished, unrepresentative, or biased, so will be our statistical and empathic
responses. Discrepancy-reduction learning is good at reducing the effects
of initial biases through experience (“washing out priors”), but not if the
experiences themselves are biased in the same ways—as, arguably, is often
the case in social prejudice. It therefore is of the first importance to episte-
mology and morality that we are beings who can critically scrutinize our
intuitive responses. (2014, 846)
But, like me, Railton defends a view of inclination according to which our spon-
taneous responses are not “dumb” one-size-fits-all reflexes. His view is elabo-
rated in terms of the epistemic worth of intuition and focuses on identifying the
circumstances in which our unreasoned intuitions are and are not likely to be
reliable.
One way to understand the aim of this chapter is in terms of identifying some key con-
ditions under which our implicit attitudes can be expected to be reliable in this sense.
A related response to the first version of the worry draws inspiration from the
recent debate over “situationism” and virtue ethics. In short, virtue ethicists argue that
morality stems from the cultivation of virtuous and broad-based character traits (e.g.,
Hursthouse, 1999). But critics have argued that virtuous and broad-based character
traits do not exist (Harman, 1999; Doris, 2002). The reason for this, the critics argue,
is that human behavior is pervasively susceptible to the effects of surprising features of
the situations in which we find ourselves. The situationist critics argue that exposure
to seemingly trivial things—like whether one happens to luckily find a dime (Isen and
Levin, 1972) or whether one happens to be in a hurry (Darley and Batson, 1973)—
is much better at predicting a person’s behavior than are her putative character traits.
Supposedly mean people will act nicely if they find a lucky dime, and supposedly nice
people will act meanly if they’re in a rush. In other words, people don’t typically have
virtuous and broad-based character traits. Rather, their behavior is largely a reflection
of the situation in which they find themselves. Of course, defenders of virtue ethics
have responded in a number of ways (e.g., Kamtekar, 2004; Annas, 2008).
The similarity between the situationist debate and the first “Kantian” worry
about habit stance techniques for implicit attitude change is this: in both cases, crit-
ics worry that there is no “there” there. In the case of situationism, critics argue that
the broad-based, multitrack character traits at the heart of virtue ethics don’t really
exist. If these don’t exist, then human behavior seems to be like grass blowing in the
wind, lacking the core thing that virtue ethicists claim guides right action. The
worry about mere inclination is analogous. Without online guidance of behavior
by some overarching, reflective, moral sense, we will similarly be left rudderless,
again like grass blowing in the wind. This is what we are to take away from Herman’s
example. The sympathetic person has good motives but is essentially rudderless,
bluntly doling out empathy everywhere, even where it’s inappropriate.
In the case of situationism, the fact is, though, that character traits do exist. And
this is instructive about implicit attitudes. There is a well-established science of
character. It identifies five central traits, which appear to be culturally universal and
relatively stable across the life span (John and Srivastava, 1999): openness, conscientiousness, extroversion, agreeableness, and neuroticism.26 The core of the debate
about situationism and virtue ethics is not, therefore, whether broad-based traits
26. For a review of the current science of character and personality, see Fleeson et al. (2015).
exist. Rather, the core of the debate is (or should be) about whether the broad-
based traits that do exist belong on any list of virtues (Prinz, 2009). The Big Five are
not virtues per se. They are dispositions to take risks, be outgoing, feel anxious, and
so on. Crucially, they can cut both ways when it comes to ethics. Anxiety can lead
one to be extremely cowardly or extremely considerate. Even “conscientiousness,”
which sounds virtuous, is really just a disposition to be self-disciplined, organized,
and tidy. And certainly some of history’s most vicious people have been conscien-
tious in this sense. That said, the Big Five are not just irrelevant to virtue. Virtues
may be built out of specific combinations of character traits, deployed under the
right circumstances. Research may find matrices of traits that line up closely with
particular virtues. Similarly, what some call a moral sense might be constructed out
of the small shifts in attitudes engendered by habit stance techniques. For example,
practicing approach-oriented behavior toward people stereotyped as violent, using
Kawakami-style training, is not, just as such, virtuous. Herman might imagine a
white person approaching a black man who clearly wants to be left alone; that is,
an approach intuition unguided by moral sense. But practicing approach-oriented
behavior in the right way, over time, such that the learning mechanisms embed-
ded in one’s implicit attitudes can incorporate feedback stemming from successes
and failures, is not likely to generate this result. Rather, it is more likely to result in
a spontaneous social sensibility for when and how to approach just those people
whom it is appropriate to approach.
Regarding the second version of the worry—focused on the importance of act-
ing from virtue—it is important to remember that we typically have very little grasp
of the competencies embedded in our implicit attitudes. So the claim that our implicit attitudes are praiseworthy only when we are aware of the morally relevant features of our actions sets an extremely high bar. It is also worth recalling the
claim of Chapter 4, that implicit attitudes are reflective of agents’ character. The
ethical quality of these attitudes, then, reflects on us too. Perhaps this can go some
way toward assuaging the worry that we don’t act for explicit moral reasons when
we reshape and act upon our implicit attitudes.
More broadly, I think it is misguided to worry that acting well is the kind of thing
that is susceptible to cheating, at least in the context of considering moral credit
and the habit stance. It strikes me that regulating one’s implicit attitudes is more like
striving for happiness than it is like pursuing full-blown morality in this sense. And
happiness, it seems to me, is less susceptible to intuitions about cheating than is
morality. As Arpaly (2015) put it in a discussion of some people’s hesitance to take
psychoactive drugs to regulate mood disorders like depression, out of a sense that
doing so is tantamount to cheating:
citizenship: some people have it because they were born with it, some per-
haps get it through marriage, some have to work for it. It’s not cheating
to find a non-torturous way to get an advantage that other people have
from birth.
3. Conclusion
I’ve argued for what I call a habit stance approach to the ethical improvement of our
implicit attitudes. I have proposed this approach in contrast to the idea, discussed in
the preceding chapter, that deliberation is necessary and always recommended for
cultivating better implicit attitudes. The habit stance approach largely falls out from
the current state of empirical research on what sorts of techniques are effective for
implicit attitude change. It also should seem consistent with the account of implicit
attitudes I have given, as sui generis states, different in kind from our more familiar
beliefs and values, yet reflective nevertheless upon some meaningful part of who we
are as agents. It is by treating our implicit attitudes as if they were habits—packages
of thought, feeling, and behavior—that we can make them better.
8
Conclusion
The central contention of this book has been that understanding the two faces of
spontaneity—its virtues and vices—requires understanding what I have called the
implicit mind. And to do this I’ve considered three sets of issues.
In Chapters 2 and 3, I focused on the nature of the mental states that make up
the implicit mind. I argued that in many moments of our lives we are not moved
to act by brute forces, yet neither do we move ourselves to act by reasoning on the
basis of our beliefs and desires. In banal cases of orienting ourselves around objects
and other people, in inspiring cases of acting spontaneously and creatively, and in
maddening and morally worrying cases where biases and prejudice affect what we
do, we enact complex attitudes involving rapid and interwoven thoughts, feelings,
motor impulses, and feedback learning. I’ve called these complex states implicit
attitudes and have distinguished them from reflexes, mere associations, beliefs, and
dispositions, as well as from their closest theoretical cousins, aliefs.
In Chapters 4 and 5, I considered the relationship between implicit attitudes
and the self. These states give rise to actions that seem neither unowned nor fully
owned. Museumgoers and Huck and police officers affected by shooter bias reveal
meaningful things about themselves in their spontaneous actions, I’ve argued.
What is revealed may be relatively minor. How the museumgoer makes her way
around the large painting tells us comparatively little about her, in contrast to what
her opinion as to whether the painting is a masterpiece or a fraud might tell us. Nor
do these spontaneous reactions license holding her responsible for her action, in
the sense that she has met or failed to meet some obligation or that she is, even in
principle, capable of answering for her action. But when even this minor action is
patterned over time, involves feelings that reflect upon what she cares about, and
displays the right kinds of connections to other elements of her mind, it tells us
something meaningful about her. It gives us information about what kind of a char-
acter she has, perhaps that in this sort of context she is patient or impatient, an opti-
mizer or satisficer. These aren’t deep truths about her in the sense that they give the
final verdict on who she is. Character is a mosaic concept. And the ways in which
we are spontaneous are part of this mosaic, even when our spontaneity evades or
conflicts with our reasons or our awareness of our reasons for action.
Finally, in Chapters 6 and 7, I examined how ethics and spontaneity interact. How
can we move beyond Vitruvius’s unhelpful advice to “trust your instincts . . . unless
your instincts are terrible”? I’ve argued that becoming more virtuous in our spontane-
ity involves planning and deliberation, but that thinking carefully about how and when
to be spontaneous is not always necessary, and sometimes it can be distracting and even
costly. Retraining our implicit attitudes, sometimes in ways that flatly push against our
self-conception as deliberative agents, is possible and promising. While there are long-
standing traditions in ethics focused on this, the relevant science of self-regulation is
new. So far, I’ve argued, it tells us that the ethics of spontaneity requires a blend of foci,
including what I’ve called intentional, physical, and design elements, and none of these
alone. The full package amounts to cultivating better implicit attitudes as if they were
habits.
All of this is pluralist and incomplete, as I foreshadowed in the Introduction. It is
pluralist about the kinds of mental states that lead us to act and that can lead us to act
well; about how and what kinds of actions might reflect upon me as an agent; about
how to cultivate better implicit attitudes in order to act more virtuously and spontaneously.
Incompleteness intrudes at each of these junctures. I have not given necessary
and sufficient conditions for a state to be an implicit attitude. Rather, I have sketched
the paradigmatic features of implicit attitudes—their FTBA components—and argued
for their uniqueness in cognitive taxonomy. I have not said exactly when agents ought
to be praised or blamed for their spontaneous inclinations. Rather, I have laid the
groundwork for the idea that these inclinations shed light on our character, even if they
fall short of being the sorts of actions for which we must answer or be held accountable.
And I have not incorporated the ineliminable importance of practical deliberation into
my recommendations for implicit attitude change, even though a deeply important ele-
ment of such change is believing it to be necessary and worthwhile. What I have done,
I think, is pushed back against the historical emphasis (in the Western philosophical
tradition, at least) on the overriding virtues of deliberation. And I have outlined data-
supported tactics for cultivating spontaneous inclinations that match what I assume are
common moral aims.
While alternative interpretations of each of the cases I’ve discussed are available,
the shared features of these cases illuminate a pervasive feature of our lives: actions
the psychology of which is neither reflexive nor reasoned; actions that reflect
upon us as agents but not upon what we know or who we take ourselves to be; and
actions the ethical cultivation of which demands not just planning and deliberating
but also, centrally, pre-committing to plans, attending to our contexts, and, as Bruce
Gemmell—the coach of arguably the best swimmer of all time, Katie Ledecky—
said, just doing the damn work.1
1. Dave Sheinin writes, "Gemmell reacts with a combination of bemusement and annoyance when people try to divine the 'secret' to Ledecky's success, as if it's some sort of mathematical formula that can be solved by anyone with a Speedo and a pair of goggles. A few weeks ago, a website called SwimmingScience.net promoted a Twitter story called '40 Must Do Katie Ledecky Training Secrets.' Gemmell couldn't let that go by unchallenged. 'Tip #41,' he tweeted in reply. 'Just do the damn work.' " https://www.washingtonpost.com/sports/olympics/how-katie-ledecky-became-better-at-swimming-than-anyone-is-at-anything/2016/06/23/01933534-2f31-11e6-9b37-42985f6a265c_story.html.
2. See, e.g., Buckner (2017).
Many open questions remain, not only at these junctures and in tightening the characteristics of the family resemblance I've depicted, but also in connecting this account of the implicit mind to ongoing research that bears upon, and sometimes rivals, the claims I have made along the way. My arguments for the sui generis structure of implicit attitudes draw upon affective theories of perception, for example, but it is still unclear whether all forms of perception are affective in the way some theorists suggest. Similarly, new theories of practical inference with relatively minimalist demands—compared with the more classical accounts of inference from which I draw—may show that implicit attitudes are capable of figuring into certain kinds of rationalizing explanation.2 This bears upon perhaps the largest open question facing Chapters 2 and 3: Under what conditions do implicit attitudes form and change? I have given arguments based on the extant data, but the entire stream of research is relatively young. Future studies will hopefully clarify, for example, whether implicit attitudes change when agents are presented with persuasive arguments, and if they do, why they do.
Future theorizing about implicit attitudes and responsibility will likely move beyond the focus I've had on individuals' minds, to consider various possible forms of collective responsibility for unintended harms and how these interface with concepts like attributability and accountability. One avenue may take up a different target: not individuals but collectives themselves. Are corporations, neighborhoods, or even nations responsible for the spontaneous actions of their members in a non-additive way (i.e., in a way that is more than the sum of the individuals' own responsibility)? In some contexts this will be a question for legal philosophy. Can implicit bias, for example, count as evidence of disparate impact in the absence of discriminatory intent? In other contexts, this may be a question for evolutionary psychology. As some have theorized about the functional role of disgust and other moral emotions, have implicit attitudes evolved to enforce moral norms? If so, does this mean that these attitudes are in any way morally informative or that groups are more or less responsible for them? In other contexts still, this may be a question about the explanatory power and assumptions embedded in various kinds of social science. Critics of research on implicit bias have focused on the ostensible limits of psychological explanations of discrimination. For instance, while acknowledging that there is space for attention to implicit bias in social critique, Sally Haslanger argues that it is "only a small space," because "the focus on individuals (and their attitudes) occludes the injustices that pervade the structural and cultural context and the ways
that the context both constrains and enables our action. It reinforces fictional con-
ceptions of autonomy and self-determination that prevents us from taking responsi-
bility for our social milieu (social meanings, social relations, and social structures)”
(2015, 10). I think my account of implicit attitudes provides means for a less fic-
tional account of autonomy than those Haslanger might have in mind, but vindicat-
ing this would take much work.
An intriguing set of questions facing the practical ethics of spontaneity focuses
on the concept of self-trust. Whether and when we trust others has moral stakes, as
moral psychologists like Patricia Hawley (2012) and Meena Krishnamurthy (2015)
have shown. Do these stakes apply equivalently in the case of self-trust? What are
the general conditions under which we ought to trust ourselves, as some theorists
suggest is the hallmark of expertise? Self-trust of the sort I imagine here is in tension
with what I take to be one of the imperatives of practical deliberation, which is that
it represents a check against the vices of our spontaneous inclinations. Being delib-
erative is, in some ways, an expression of humility, of recognizing that we are all sub-
ject to bias, impulsivity, and so on. But are there forms of deliberation that perhaps
skirt this tension? I have emphasized the nondeliberative elements of coaching in
skill learning, but perhaps other features are more conducive to a reflective or quasi-
intellectualist orientation. Historians of philosophy may find some elements in an
Aristotelian conception of motivation, for example, which some say distinguishes
skill from virtue; perhaps fresher approaches may be found in non-Western ethical
traditions.
I ended the Introduction by suggesting that I would be satisfied by convincing
the reader that a fruitful way to understand both the virtues and vices of spontaneity
is by understanding the implicit mind, even if some of the features of the implicit
mind as I describe it are vague, confused, or flat-out wrong. Another way to put
this is that in identifying the phenomenon of interest and offering an account of it,
I incline more toward lumping than splitting. Crudely—as a lumper like me would
say—lumping involves answering questions by placing phenomena into categories
that illuminate broad issues. Lumpers focus more on similarities than differences.
Splitters answer questions by identifying key differences between phenomena, with
the aim of refining definitions and categories. Lumping and splitting are both neces-
sary for making intellectual progress, I believe. If this work inspires splitters more
skilled than me to refine the concept of the implicit mind, in the service of better
understanding what we do when we act spontaneously, I’ll be deeply satisfied.
Appendix
MEASURES, METHODS, AND PSYCHOLOGICAL SCIENCE
1. In this appendix, I do not explicitly discuss all of the areas of research presented in this book (e.g., I do not discuss methodological issues facing research on expertise and skill acquisition). While I mean the broad discussion of replicability and methodological principles to cover these areas, I focus on psychometric issues pertaining to research on implicit social cognition.
the correct way to sort the name “Michelle” would be left (Figure A.1) and right
(Figure A.2), and the correct way to sort the word “Business” would be left (Figure
A.3) and right (Figure A.4).
One computes an IAT score by comparing speed and error rates on the
“blocks” (or sets of trials) in which the pairing of concepts is consistent with
common stereotypes (Figures A.1 and A.3) to the speed and error rates on the
blocks in which the pairing of the concepts is inconsistent with common ste-
reotypes (Figures A.2 and A.4).2 Most people are faster and make fewer errors
on stereotype-consistent trials than on stereotype-inconsistent trials. While
this “gender–career” IAT pairs concepts (male and career), other IATs, such
as the black–white race evaluation IAT, pair a concept with an evaluation (e.g.,
black and “bad”).3 Other IATs test implicit attitudes toward body size, age,
sexual orientation, and so on (including relatively more banal targets, like brand
preferences). More than 16 million unique participants had taken an IAT as of
2014 (B. A. Nosek, pers. comm.). One review (Nosek et al., 2007), which examined
the results of over 700,000 subjects' scores on the black–white race evalua-
tion IAT, found that more than 70% of white participants more easily associated
black faces with negative words (e.g., war, bad) and white faces with positive
words (e.g., peace, good) than the inverse (i.e., black faces with positive words
and white faces with negative words). The researchers consider this an implicit
preference for white faces over black faces. In this study, black participants on
average showed a very slight preference for black faces over white faces. This
is roughly consistent with other studies showing that approximately 40% of
black participants demonstrate an implicit in-group preference for black faces
over white faces, 20% show no preference, and 40% demonstrate an implicit
out-group preference for white faces over black faces (Nosek et al., 2002, 2007;
Ashburn-Nardo et al., 2003; Dasgupta, 2004; Xu et al., 2014).
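The scoring logic described above can be sketched in a few lines of code. This is only a rough illustration with made-up latencies, not the published D-score algorithm, which also incorporates error penalties, latency trimming, and practice blocks:

```python
from statistics import mean, stdev

def d_score(congruent_rts, incongruent_rts):
    """Simplified IAT D score: the mean latency difference between
    stereotype-inconsistent and stereotype-consistent blocks, divided
    by the pooled standard deviation of all latencies."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# A respondent who sorts faster on stereotype-consistent pairings
# (e.g., male + career) receives a positive score:
congruent = [650, 700, 680, 720, 690]    # response times in ms (hypothetical)
incongruent = [850, 900, 870, 910, 880]
print(round(d_score(congruent, incongruent), 2))
```

On these toy numbers the function returns a large positive D, reflecting faster responding on the stereotype-consistent block; a respondent equally fast on both blocks would score near zero.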
Although the IAT remains the most popular indirect measure of attitudes, it is
far from the only one.4 A number of variations of priming tasks are widely used
2. This is how an IAT "D score" is calculated. There are other ways of scoring the IAT too.
3. See the discussion in Chapter 2 of the view that these two types of measures assess two separate
[Figures A.1–A.4: sample IAT sorting screens. Each displays a stimulus to be categorized ("Michelle" in Figures A.1 and A.2, "Business" in Figures A.3 and A.4) beneath the paired category labels Male/Female and Family/Career; the pairing of the gender and attribute categories is switched between Figures A.1/A.3 and Figures A.2/A.4.]
(e.g., evaluative priming (Fazio et al., 1995), semantic priming (Banaji and Hardin,
1996), and the Affect Misattribution Procedure (AMP; Payne et al., 2005)). A “sec-
ond generation” of categorization-based measures has also been developed in order
to improve psychometric validity. For example, the Go/No-go Association Task
(GNAT; Nosek and Banaji, 2001) presents subjects with one target object rather
than two in order to determine whether preferences or aversions are primarily
responsible for scores on the standard IAT (e.g., on a measure of racial attitudes,
whether one has an implicit preference for whites or an implicit aversion to blacks;
Brewer, 1999).
I have relied mostly on the IAT, but also, to a lesser degree, on the AMP, not only
because these are among the most commonly used measures of implicit attitudes,
but also because reviews show them to be the most reliable indirect measures of
attitudes (Bar-Anan and Nosek, 2014; Payne and Lundberg, 2014).5 In assessing
seven indirect measures, Bar-Anan and Nosek (2014) used the following evaluation
criteria:
measures is also relative rather than absolute. Even in some direct measures, such as personality inven-
tories, subjects may not be completely aware of what is being studied. It is important to note, finally,
that in indirect tests subjects may be aware of what is being measured. One ought not to conflate the
idea of assessing a construct with a measure that does not presuppose introspective availability with the
idea of the assessed construct being introspectively unavailable (Payne and Gawronski, 2010).
5. Bar-Anan and Nosek (2014) are less sanguine about the reliability of the AMP once outliers with strong explicit attitudes toward targets are removed from the sample. Payne and Kristjen Lundberg (2014) present several plausible replies to this critique, however.
2. Psychometrics
There are a number of challenges facing IAT research. I focus on those that I believe
to be the most significant and that have arisen in the philosophical literature. There
is, I believe, significant room for improvement in the design of the IAT. One of my
hopes is that the account of implicit attitudes I have presented will instigate such
improvements. As I discuss later (§2.3), IAT measures can be improved by target-
ing more specific attitudes in more specific contexts.
incorporated the very elements into the measure that my account of implicit attitudes posits as components of them (e.g., context- and content-specific variables).6
It is important to note that Gschwendner and colleagues also found an effect of the chronic accessibility of the relevant concept in the experimental condition, but not in the control condition (i.e., participants for whom the concept Turkish is chronically accessible had greater test-retest stability when the image of the mosque rather than the image of the garden was embedded in the IAT). This, too, shows how test scores can be made more durable by improving features of the measure. And the very features being improved are those that reflect person-specific variables—such as the chronic accessibility of the concept Turkish—that I've emphasized.
The broad point here is not simply to defend the status quo. The work of
Gschwendner and colleagues (2008) is suggestive of how IAT research can be
improved. Targeting person-specific and context-specific features of attitudes can
help to improve psychometric properties, for example, test-retest reliability. This
improvement may come at the expense of domain generality. That is, the increased
ability to predict behavior given contextual constraints may entail that the measure
won’t necessarily be predictive absent those constraints. I take this to represent a
trade-off facing experimental psychology generally, not a decrease in explanatory
power as such. Measures that apply across a variety of contexts are less likely to have
strong correlation coefficients, while measures that apply to more specific circum-
stances are more likely to be predictive, but only under those conditions.7
6. Another way to put this is that the embedded images reduce noise in the measure by holding fixed the meaning or value of the situation for participants. Thanks to Jonathan Phillips for this idea.
7. This is a version of the principles of correspondence (Ajzen and Fishbein, 1970, 1977) and compatibility (Ajzen, 1998; Ajzen and Fishbein, 2005). The core idea of these principles is that the predictive power of attitudes increases when attitudes and behavior are measured at the same level of specificity. For example, attitudes about birth control pills are more predictive of the use of birth control pills than are attitudes about birth control (Davidson and Jaccard, 1979).
important. Once they are understood, I do not believe that Oswald and colleagues’
normative claim—that the IAT is a poor tool that provides no insight into discrimi-
natory behavior—is warranted.
First, predicting behavior is difficult. Research on implicit social cognition
partly arose out of the recognition that self-report measures of attitudes don’t pre-
dict behavior very well. This is particularly the case when the attitudes in question
are about socially sensitive topics, such as race. Greenwald and colleagues (2009)
report an average attitude–behavior correlation of .118 for explicit measures of
black–white racial attitudes.8 It is important to establish this baseline for compari-
son. It may be a mark of progress for attitude research even if indirect measures of
attitudes have relatively small correlations with behavior.
More important, the fact that even self-report measures of attitudes don’t predict
behavior very well is not cause to abandon such measures. Rather, since the 1970s,
researchers have recognized that the key question is not whether self-reported atti-
tudes predict behavior, but rather when self-reported attitudes predict behavior.
One important lesson from this research is that attitudes better predict behavior
when there is correspondence between the attitude object and the behavior in ques-
tion (Ajzen and Fishbein, 1977). For example, while generic attitudes toward the
environment do not predict recycling behavior very well, specific attitudes toward
recycling do (Oskamp et al., 1991).9 A wealth of theoretical models of attitude–
behavior relations elaborate on issues like this and make principled predictions
about when attitudes do and do not predict behavior. According to dual-process
models, the predictive relations of self-reported and indirectly measured attitudes
to behavior should depend on (a) the type of behavior; (b) the conditions under
which the behavior is performed; and (c) the characteristics of the person who is
performing the behavior.10
From the perspective of these theories, meta-analyses that ignore theoretically
derived moderators are likely to find poor predictive relations between attitudes
(whether self-reported or indirectly measured) and behavior. Oswald et al. (2013)
make exactly this mistake. They included any study in which a race IAT and a behav-
ioral outcome measure were used, but they did not differentiate between behavioral
outcomes that should or should not be predicted on theoretical grounds.
For example, recall from Chapter 2 Amodio and Devine’s (2006) finding that
the standard race IAT predicted how much white participants liked a black student,
but it did not predict how white participants expected a black student to perform
on a sports trivia task. They also found that a stereotyping IAT, which measured
8. Other reviews have estimated relatively higher attitude–behavior correlations for explicit prejudice measures. Kraus (1995) finds a correlation of .24 and Talaska et al. (2008) find a correlation of .26.
9. See also footnote 7.
10. See, e.g., the "Reflective–Impulsive Model" (RIM; Strack and Deutsch, 2004) and "Motivation and Opportunity as Determinants" (MODE; Fazio, 1990).
associations between black and white faces and words associated with athleticism
and intelligence, predicted how white participants would expect a black student to
perform on a sports trivia task, but failed to predict white students’ likability rat-
ings of a black student. Amodio and Devine predicted these results on the basis of a
theoretical model distinguishing between “implicit stereotypes” and “implicit prej-
udice.” If Amodio and Devine had reported the average correlation between their
IATs and behavior, they would have found the same weak relationship reported by
Oswald et al. (2013). However, this average correlation would conceal the insight
that predictive relations should be high only for theoretically “matching” types of
behavior.
Because they included any study that arguably measured some form of dis-
crimination, regardless of whether there was a theoretical basis for a correlation,
Oswald and colleagues included all of Amodio and Devine’s (2006) IAT findings
in their analysis. That is, they included the failure of the evaluative race IAT to pre-
dict how participants described the black essay writer, as well as how participants
predicted another black student would perform on an SAT task compared with a
sports trivia task. Likewise, Oswald and colleagues included in their analysis the
failure of Amodio and Devine’s stereotyping IAT to predict seating-distance deci-
sions and likability ratings of the black essay writer. In contrast, Greenwald et al.
(2009) did not include these findings, given Amodio and Devine’s theoretical basis
for expecting them. Amodio and Devine predicted what the two different kinds of
IAT wouldn’t predict, in other words. Oswald and colleagues include this as a failure
of the IAT to predict a discriminatory outcome. In contrast, Greenwald and col-
leagues count this as a theoretically motivated refinement of the kinds of behavior
IATs do and don’t predict.11
11. Oswald and colleagues (2015) contest Greenwald et al.'s (2014) analysis. They argue that their approach to inclusion criteria is more standard than Greenwald et al.'s (2009) approach. At issue is whether IAT researchers' theoretically motivated predictions of results should be taken into account.
For example, Oswald et al. (2015) contrast the findings of Rudman and Ashmore (2007) with those
of Amodio and Devine (2006). They write, “[Rudman and Ashmore (2007)] predicted that an IAT
measuring an attitude would predict stereotyping, and reported a statistically significant correlation
in support of this prediction. In contrast, Amodio and Devine (2006) predicted that a racial attitude
IAT would not predict stereotyping, and reported a set of nonsignificant correlations to support their
prediction. Had we followed an inclusion rule based on the ‘author-provided rationale’ for each of these
studies, then all of the statistically significant effects would have been coded as supporting the IAT—
even though some yielded inconsistent or opposing results—and all of the statistically nonsignificant
results would have been excluded. This was how the Greenwald et al. (2009) meta-analysis handled
inconsistencies” (Oswald et al., 2015, 563). This is, however, not a fully convincing reply. Consider the
studies. Rudman and Ashmore (2007) predicted, and confirmed, that the standard racial attitude IAT
would predict self-reported verbal forms of discrimination and scores on an explicit measure of racial
attitudes (the Modern Racism Scale; McConahay, 1986), whereas an IAT measure of black–violent
associations would better predict self-reported behavioral forms of social exclusion (see Chapter 5,
footnote 5). This is plausible given the content of the specific stereotypes in question. Black–violent
associations are coherently related to threat perception and social exclusion. Amodio and Devine
(2006) predicted, and confirmed, that the standard racial attitudes IAT would be uncorrelated with
the use of stereotypes about intelligence and athletic ability, but that a specific “mental–physical”
IAT would be. This, too, is plausible, given the match between the specific stereotypes captured by
the measure and the behavioral outcomes being assessed (e.g., picking a teammate for a sports trivia
task). More broadly, Rudman and Ashmore’s (2007) and Amodio and Devine’s (2006) use of different
implicit stereotyping measures makes Oswald et al.’s (2015) direct comparison of them problematic.
In other words, Oswald and colleagues’ claim that these are equivalent measures of implicit stereotypes
with inconsistent findings is too blunt. Note again, however, that relations between putative implicit
stereotypes and implicit prejudice are more complex, in my view, than these analyses suggest. Madva
and Brownstein (2016) propose an “evaluative stereotyping” IAT that might more successfully predict
behavioral manifestations of stereotypes about race, intelligence, and physicality.
Oswald and colleagues (2015) double down on their claims by noting that Oswald et al. (2013)
also included failures of black–white racial attitude IATs to predict discrimination toward white people.
They draw upon the true claim that in-group favoritism is a significant feature of implicit bias, argu-
ably as important as out-group derogation (or even more so according to some; e.g., Greenwald and
Pettigrew, 2014). But this point does not justify Oswald et al.’s (2013) inclusion policy given typical
outcome measures, such as shooting and hiring decisions, and nonverbal insults. It is hardly a failure
of IATs that they do not predict discriminatory behavior originating from white subjects toward white
targets in these domains.
12. See Madva and Brownstein (2016) for detailed discussion and critique of the two-type view of implicit cognition.
discrimination over and above explicit measures of attitudes and stereotypes, which
were uncorrelated or were very weakly correlated with the IATs. The predictive
power of the obesity IAT was particularly striking, because explicit measures of
anti-obesity bias did not predict hiring discrimination at all—even though a full
58% of participants openly admitted a preference for hiring normal-weight over
obese individuals.13 This study is significant not just because the indirect measure
outperformed the self-report measure, but arguably because it did so by homing in
on specific affectively laden stereotypes (lazy and incompetent) that are relevant
to a particular behavioral response (hiring) in a particular context (hiring Arabs or
people perceived as obese). This is in keeping with the points I made in § 2.3 about
improving the psychometric properties of indirect measures by targeting the co-
activation of specific combinations of FTBA relata.
The virtue of Oswald and colleagues’ approach is that it avoids cherry-picking
findings that support a specific conclusion. It also confirms that attitudes are rel-
atively poor predictors if person-, context-, and behavior-specific variables are
ignored. But the main point is that this is to be expected. In fact, it would be bizarre
to think otherwise, that is, to think that a generic measure of racial attitudes like
the IAT would predict a wide range of race-related behavior irrespective of whether
such a relation can be expected on theoretical grounds.
When the variables specified by theoretical models of implicit attitudes are con-
sidered, it is clear that indirect measures of attitudes are scientifically worthwhile
instruments. For example, Cameron, Brown-Iannuzzi, and Payne (2012) analyzed
167 studies that used sequential priming measures of implicit attitudes. They
found a small average correlation between sequential priming tasks and behavior
(r = .28). Yet correlations were substantially higher under theoretically expected
conditions and lower under conditions where no relation would be expected.
Cameron and colleagues identified their moderators from the fundaments of three
influential dual-process models of social cognition.14 While these models differ
in important ways, they converge in predicting that indirect measures will cor-
respond more strongly with behavior when agents have low motivation or little
opportunity to engage in deliberation or when implicit associations and delibera-
tively considered propositions are consistent with each other. Moreover, in con-
trast to Greenwald et al. (2009), Cameron and colleagues did not simply take the
stated expectations of the authors of the included studies for granted in coding
moderators. Rather, the dual-process moderators were derived a priori from the
theoretical literature.
13. For additional research demonstrating how implicit and explicit attitudes explain unique aspects of behavior, see Dempsey and Mitchell (2010), Dovidio et al. (2002), Fazio et al. (1995), Galdi et al. (2008), and Green et al. (2007).
14. Specifically, from MODE (Fazio, 1990), APE ("Associative–Propositional Evaluation"; Gawronski and Bodenhausen, 2006), and MCM ("Meta-Cognitive Model"; Petty, Briñol, and DeMarree, 2007).
Appendix 223
Finally, in considering the claim that the IAT provides little insight into who will
discriminate against whom, it is crucial to consider how tests like the IAT might
be used to predict behavior. IAT proponents tend to recommend against using the
IAT as a diagnostic tool of individual attitudes. At present, the measure is far from
powerful enough, on its own, to classify people as likely to engage in discrimination.
Indeed, I have been at pains to emphasize the complexity of implicit attitudes. But
this does not necessarily mean that the IAT is a useless tool. Greenwald and col-
leagues (2014) identify two conditions under which a tool that measures statisti-
cally small effects can track behavioral patterns with large social significance. One is
when the effects apply to many people, and the other is when the effects are repeat-
edly applied to the same person. Following Samuel Messick (1995), Greenwald and
colleagues refer to this as the “consequential validity” of a measure. They provide
the following example to show how small effects that apply to many people can be
significant for predicting discrimination:
As a hypothetical example, assume that a race IAT measure has been administered to the officers in a large city police department, and that this IAT measure is found to correlate with a measure of issuing citations more frequently to Black than to White drivers or pedestrians (profiling). To estimate the magnitude of variation in profiling explained by that correlation, it is necessary to have an estimate of variability in police profiling behavior. The estimate of variability used in this analysis came from a published study of profiling in New York City (Office of the Attorney General, 1999), which reported that, across 76 precincts, police stopped an average of 38.2% (SD = 38.4%) more of each precinct's Black population than of its White population. Using [Oswald and colleagues' (2013)] r = .148 value as the IAT–profiling correlation generates the expectation that, if all police officers were at 1 SD below the IAT mean, the city-wide Black–White difference in stops would be reduced by 9,976 per year (5.7% of total number of stops) relative to the situation if all police officers were at 1 SD above the mean. Use of [Greenwald and colleagues' (2009)] larger estimate of r = .236 increases this estimate to 15,908 (9.1% of city-wide total stops). (Greenwald et al., 2015)15
15. See the critical discussion of this example in Oswald et al. (2015), where it is argued that inferences about police officers cannot be drawn given that the distribution of IAT scores for police officers is unknown. This strikes me as unmoving, given both that Greenwald and colleagues present the example explicitly as hypothetical and that there is little reason to think that police officers would demonstrate less anti-black bias on the IAT compared with the average IAT population pool. Moreover, Greenwald and colleagues' general point about small effect sizes having significant consequences has been made elsewhere, irrespective of the details of this particular example. Robert Rosenthal, for example (1991; Rosenthal and Rubin, 1982), shows that an r of .32 for a cancer treatment, compared with a placebo, which accounts for only 10% of variance, translates into a survival rate of 66% in the treatment group compared with 34% in the placebo group.
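The conversion Rosenthal uses here is the binomial effect size display, which maps a correlation r onto equivalent group "success" rates of 50% ± r/2. A minimal sketch (the function name is mine):

```python
def besd(r):
    """Binomial effect size display (Rosenthal & Rubin, 1982):
    convert a correlation r into equivalent success rates for the
    treatment and control groups, centered on 50%."""
    return 0.5 + r / 2, 0.5 - r / 2

treatment, placebo = besd(0.32)
# 0.66 vs. 0.34: the 66%/34% survival split quoted above, even though
# r = .32 accounts for only about 10% of variance (r**2 = .1024)
```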
This analysis could be applied to self-report measures too, of course. But the effect
sizes for socially sensitive topics are even smaller with these. This suggests that a
measure with a correlational effect size of .236 (or even .148) can contribute to
our understanding of patterns of discriminatory behavior. The same lesson applies when discriminatory impact accumulates over time by repeatedly affecting the same person (e.g., in hiring, testing, healthcare experiences, and law enforcement).
With repetition, even a tiny impact increases the chances of significantly undesir-
able outcomes. Greenwald and colleagues draw an analogy to a large clinical trial of
the effect of aspirin on the prevention of heart attacks:
The trial was terminated early because data analysis had revealed an unex-
pected effect for which the correlational effect size was the sub-small value
of r = .035. This was “a significant (P < 0.00001) reduction [from 2.16% to
1.27%] in the risk of total myocardial infarction [heart attack] among those
in the aspirin group” (Steering Committee of the Physicians’ Health Study
Research Group, 1989). Applying the study’s estimated risk reduction of 44%
to the 2010 U.S. Census estimate of about 46 million male U.S. residents 50
or older, regular small doses of aspirin should prevent approximately 420,000
heart attacks during a 5-year period. (2015, p. 558)
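The arithmetic behind the quoted figure can be reconstructed from the numbers in the passage itself (the five-year horizon and the rounding conventions are assumptions of this sketch, which is why it lands near, rather than exactly on, the published ~420,000):

```python
# Figures quoted in the passage (Steering Committee, 1989; 2010 Census)
placebo_rate = 0.0216    # heart attack risk without aspirin over the trial period
aspirin_rate = 0.0127    # heart attack risk with aspirin
population = 46_000_000  # U.S. male residents 50 or older

relative_reduction = 1 - aspirin_rate / placebo_rate    # ~0.41, quoted as ~44%
prevented = population * (placebo_rate - aspirin_rate)  # ~409,000 heart attacks

print(round(prevented))  # in the neighborhood of the quoted ~420,000
```

The correlational effect size for this difference is only r = .035, which is precisely the point: a trivially small correlation can correspond to hundreds of thousands of averted outcomes at population scale.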
The effect of taking aspirin on the likelihood of having a heart attack for any particular person is tiny, but the effect was significant enough to terminate the trial early so that the finding could be put to use in public policy.16
The comparatively larger effect sizes associated with measures of implicit attitudes
similarly justify taking immediate action.
16. For further discussion, see Valian (1998, 2005).
of these activities. This isn't a critique of Lai and colleagues' analysis. Their interventions were quick and simple by design, in order to make them easy to administer and to facilitate comparisons between interventions. Their review helps to show
what probably won’t work and moreover indicates that creating lasting implicit atti-
tude change likely demands relatively robust interventions.
It is also important to note that implicit attitude research is itself relatively new and
that the effort to find interventions with lasting effects is even newer. For example,
Patrick Forscher and colleagues (2016) reviewed 426 studies involving procedures for changing implicit bias, but only 22 of the studies in their sample collected longitudinal data. It may be unreasonable to expect that feasible implicit attitude change interventions with durable effects would be discovered in the roughly fifteen years researchers have focused on the question. Some patience is appropriate.
As with my earlier discussion of the psychometric properties of indirect measures, my hope is that my account of implicit attitudes can inspire new directions in research on
effective interventions for implicit bias. I applaud Devine and colleagues’ (2012) habit-
based approach, but perhaps it didn’t go far enough. It did not include, for example,
the kind of rote practice of egalitarian responding found in Kawakami’s research (see
Chapter 7). I suspect that more effective interventions will have to be more robust,
incorporating repeated practice over time like this. This is akin to what athletes do
when they practice. Just as a core feature of skill acquisition is simply doing the work—
that is, putting in the effort over and over—acquiring more egalitarian habits may be
less the result of divining some hitherto unknown formula than, at least in large part,
just putting in the time and (repeated) effort.17
17. See Chapter 8, footnote 1.
research on priming, such as in Bargh et al., 1996).18 These events led researchers to mount large-scale, well-organized efforts to replicate major psychological findings.
The findings—for example, of the Open Science Collaboration’s (OSC) (2015)
“Estimating the Reproducibility of Psychological Science”—were not encouraging.
The OSC attempted to replicate one hundred studies from three top journals but
found that the magnitude of the mean effect size of replications was half that of the
original effects. Moreover, while 97% of the original studies had significant results,
only 36% of the replications did.
These and other findings are deeply worrying. They raise serious concerns about
the accuracy of extant research. They also point to problems for future research that
may be very difficult to solve. The solutions to problems embedded in significance
testing, causal inference, and so on are not yet clear.19 One thing that is important to
note is that these are not problems for psychology alone. Replication efforts of landmark studies in cell biology, for example, were successful only 11% and 25% of the time in two separate reports (Begley and Ellis, 2012; Prinz et al., 2011). Indeed, what
is noteworthy about psychology is probably not that it faces a replication crisis,
but rather how it has started to respond to this crisis, with rigorous internal debate
about methodology, reforming publication practices, and the like. The founding
of the Society for the Improvement of Psychological Science, for example, is very
promising.20 Michael Inzlicht writes, ominously:
I lost faith in psychology. Once I saw how our standard operating procedures
could allow impossible effects to seem possible and real, once I understood
how honest researchers with the highest of integrity could inadvertently and
unconsciously bias their data to reach erroneous conclusions at disturbing
rates, once I appreciated how ignoring and burying null results can warp
entire fields of inquiry, I finally grasped how broken the prevailing paradigm
in psychology (if not most of science) had become. Once I saw all these
things, I could no longer un-see them.
But, he continues:
I feel optimistic about psychology’s future. There are just so many good things
afoot right now, I can’t help but feel things are looking up: Replications are
increasingly seen as standard practice, samples are getting larger and larger,
18. …what-has-happened-down-here-is-the-winds-have-changed/.
19. An excellent encapsulation of these and other concerns is found in Sanjay Srivastava's (2016) fake, but also deadly serious "Everything Is Fucked: The Syllabus."
20. See the society's mission statement at https://mfr.osf.io/render?url=https://osf.io/jm27w/?action=download%26direct%26mode=render.
Of course, while these efforts are likely to improve future psychological science,
they don’t indicate how much confidence we ought to have in extant studies. My
view is that one can be cautiously confident in presenting psychological research in
a context like this by adhering to a few relatively basic principles (none of which are
the least bit novel to researchers in psychology or to philosophers working on these
topics). As with most principles, though, things are complicated. And cautious con-
fidence by no means guarantees the right outcomes.
For example, studies with a large number of participants are, ceteris paribus, more
valuable than studies with a small number of participants. And so I have tried to
avoid presenting the results of underpowered studies. Indeed, this is a core strength
of IAT research, much of which is done with online samples. As a result, many of the
studies I discuss (when focused on research on implicit attitudes) have tremendous
power to detect significant effects, given sample sizes in the thousands and even
sometimes in the hundreds of thousands. As a methodological principle, however,
there are complications stemming from the demand that all psychological research
require large samples. One complication is that large samples are sometimes diffi-
cult or impossible to obtain. This may be due to research methodology (e.g., neural
imaging research is often prohibitively expensive to do with large samples) or to
research topics (e.g., some populations, such as patients with rare diseases, are difficult to find). While there have been Herculean efforts to locate difficult-to-find populations (e.g., Kristina Olson's research in the Transyouth Project),22 in some
cases the problem may be insurmountable. A related complication stems from other
kinds of costs one pays in order to obtain large samples. There are downsides, for
example, to studies using online participants. People in these participant pools are
self-selecting and may not be representative of the population at large.23 Their test-
ing environment is also not well controlled. The tension between the importance
of obtaining large samples and the costs associated with online samples is one that
methodologists will continue to work through.
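The point about power can be made concrete with a standard back-of-the-envelope calculation using the Fisher z approximation (the two-tailed α = .05 and 80% power conventions here are my assumptions, not figures from the text):

```python
from math import log, ceil

def n_for_correlation(r, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size needed to detect a correlation of r
    with two-tailed alpha = .05 and 80% power (Fisher z method)."""
    fisher_z = 0.5 * log((1 + r) / (1 - r))
    return ceil(((z_alpha + z_beta) / fisher_z) ** 2) + 3

print(n_for_correlation(0.148))  # a few hundred participants
print(n_for_correlation(0.035))  # several thousand for a "sub-small" effect
```

Samples in the thousands or hundreds of thousands, as in much IAT research, comfortably exceed these thresholds; small-sample neuroimaging studies and rare-population studies often cannot.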
Another seemingly simple, but ultimately not uncomplicated, principle is focusing on sets of studies rather than on any one individual study. It is generally bad
21. See http://michaelinzlicht.com/getting-better/2016/8/22/the-unschooling-of-a-faithful-mind.
22. See http://depts.washington.edu/transyp/.
23. Of course, this may be a problem facing psychology as a whole. See Henrich et al. (2010). For additional worries about attrition rates in online samples, see Zhou and Fishbach (2016).
24. See http://soccco.uni-koeln.de/cscm-2016-debate.html.
Thus while relying on meta-analyses will not by any stretch solve the problems fac-
ing psychological science and while meta-analyses can have their own problems,
they continue to have a central role to play in adjudicating the credence one ought
to have in a body of research. They can play this role best when researchers evaluate
the design and structure of meta-analyses themselves (as they are now doing).
A third principle to which I have tried to hew closely—and one that is particularly germane in the case of implicit attitude research—is to identify reasons to expect features of the context to affect the outcomes of studies. This follows from my account
of implicit attitudes as states with FTBA components. It figures into my recommen-
dations for creating IATs with greater predictive validity. And it also speaks to broad
issues in the replication crisis. It is clear from decades of research that social attitudes
and behavior are persistently affected by the conceptual, physical, and motivational
elements of agents’ contexts (see Chapters 3 and 7 and, for a review, see Gawronski
and Cesario, 2013). In one sense, this suggests that we should expect many efforts
to replicate psychological experiments to fail, given the myriad cultural, historical,
and contextual features that could influence the outcome. Of course, replication
under specified circumstances is a core tenet of science in general. In psychological
science, the ideal is to identify outcome-affecting variables that are psychologically
meaningful for principled reasons.
For example, consider the finding that the ambient light of the room in which one
sits affects the activation of implicit racial biases (Schaller et al., 2003). A principled
explanation of the specific moderating contextual factors is available here, I believe.
It stems from the connection between an agent’s cares and the selective activation
of her implicit attitudes (see Chapters 3 and 4). Schaller and colleagues’ findings
obtained only for agents with chronic beliefs in a dangerous world. The dangerous-
ness of the world is something the agent cares about (as chronic beliefs are cer-
tainly sufficient to fix cares). This care is connected to the activation of the agent’s
implicit biases, given that ambient darkness is known to amplify threat perception
(e.g., Grillon et al., 1997) and that black faces are typically seen by white people
as more threatening than white faces (e.g., Hugenberg and Bodenhausen, 2003).
Likewise, predictions could be made based on other factors relevant to agents' cares.
One might expect that the possibilities for action available to an agent, based on
her physical surroundings, also affect the activation of threat-related implicit biases.
Indeed, precisely this has been found.
Consider a second example, regarding related (and much criticized) research on
priming. Cesario and Jonas (2014) address the replicability of priming research in
their “Resource Computation Model” by identifying conditions under which primes
will and won’t affect behavior. They model social behaviors following priming as a
result of computational processing of information that determines a set of behav-
ioral possibilities for agents. They describe three sources of such information: (1)
social resources (who is around me?); (2) bodily resources (what is the state of
my physiology and bodily position?); and (3) structural resources (where am I and
what objects are around me?). I described one of Cesario and Jonas’s experiments
in Chapter 2. They documented the effects of priming white participants with the
concept “young black male,” contingent upon the participants’ physical surround-
ings and how threatening they perceived black males to be. The participants were
seated either in an open field, which would allow a flight response to a perceived
threat, or in a closed booth, which would restrict a flight response. Their outcome
measure was the activation of flight versus fight words. Cesario and Jonas found that
fight-related words were activated for participants who were seated in the booth and
who associated black men with danger, whereas flight-related words were activated
for participants who were seated in the field and who associated black men with
danger.25
The line demarcating principled predictions about outcome-affecting variables
from ad hoc reference to “hidden moderators” is not crystal clear, however. I take
the articulation of this line to be an important job for philosophers of science. It
is clear enough that one should not attempt to explain away a failure to replicate a
given study by searching after the fact for differences between the first and second
experimental contexts.26 But I also note that this sort of post hoc rationalization can
point the way to fruitful future research, which could help to confirm or discon-
firm the proposal. What starts as post hoc rationalization for a failed replication can
become an important theoretical refinement of previous theory, given additional
research. (Or not! So the research needs to be done!)
4. Conclusion
The aforementioned ambiguity over what constitutes a principled explanation of an
effect points toward a broader set of issues regarding replication, the psychological
25
My point is not that all priming research can be saved through this approach. I do not know if
there are principled explanations of moderators of many forms of priming.
26
For a prescient, helpful, and related critique of post hoc hypothesizing about data, see Kerr
(1998).
sciences, and philosophy. There is much work to do at this juncture for philosophers
of science. Aside from questions about replication, there are persistently difficult
tensions to resolve in research methodology and interpretation. The comparative
value of field studies versus experiments is particularly fraught, I believe. Each offers
benefits, costs, and seemingly inevitable trade-offs.27 With respect to replication,
there are trade-offs between conceptual and direct replications, and moreover it
isn’t even entirely clear what distinguishes a successful from an unsuccessful repli-
cation. Even more broadly, it is not clear how a successful replication indicates what
is and isn’t true. The OSC (2015) writes, for example, “After this intensive effort to
reproduce a sample of published psychological findings, how many of the effects
have we established are true? Zero. And how many of the effects have we estab-
lished are false? Zero. Is this a limitation of the project design? No.” The OSC rec-
ommends a roughly Bayesian framework, according to which all studies—originals
and replications alike—simply generate cumulative evidence for shifting one’s prior
credence in an idea:28
The original studies examined here offered tentative evidence; the replica-
tions we conducted offered additional, confirmatory evidence. In some cases,
the replications increase confidence in the reliability of the original results;
in other cases, the replications suggest that more investigation is needed to
establish the validity of the original findings. Scientific progress is a cumula-
tive process of uncertainty reduction that can only succeed if science itself
remains the greatest skeptic of its explanatory claims.
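The OSC's roughly Bayesian picture can be illustrated with a toy credence update (all of the numbers below are illustrative assumptions, not estimates from the literature):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior credence in a hypothesis given new evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Suppose a successful high-powered replication is four times likelier
# if the effect is real than if it is not.
after_success = update(0.5, 0.8, 0.2)  # credence rises to 0.8
after_failure = update(0.5, 0.2, 0.8)  # credence falls to 0.2
```

On this picture neither the original study nor the replication "establishes" anything as true or false; each simply moves the posterior.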
27. See, e.g., Berkowitz and Donnerstein (1982) and Falk and Heckman (2009).
28. See Goodman et al. (2016) on Bayesian approaches to understanding reproducibility.
29. See, e.g., Mayo (2011, 2012).
30. For another statement of cautious optimism, see Simine Vazire's (2016) blog post "It's the End of the World as We Know It . . . and I Feel Fine."
REFERENCES
Aarts, H., Dijksterhuis, A., & Midden, C. 1999. To plan or not to plan? Goal achievement or inter-
rupting the performance of mundane behaviors. European Journal of Social Psychology, 29,
971–979.
Achtziger, A., Bayer, U., & Gollwitzer, P. 2007. Implementation intentions: Increasing the accessibil-
ity of cues for action. Unpublished paper, University of Konstanz.
Agassi, A. 2010. Open: An autobiography. New York: Vintage.
Agerström, J., & Rooth, D. O. 2011. The role of automatic obesity stereotypes in real hiring discrimi-
nation. Journal of Applied Psychology, 96(4), 790–805.
Ajzen, I. 1988. Attitudes, personality, and behavior. Homewood, IL: Dorsey Press.
Ajzen, I., & Fishbein, M. 1970. The prediction of behavior from attitudinal and normative variables.
Journal of Experimental Social Psychology, 6, 466–487. doi:10.1016/0022-1031(70)90057-0.
Ajzen, I., & Fishbein, M. 1977. Attitude–behavior relations: A theoretical analysis and review of
empirical research. Psychological Bulletin, 84, 888–918. doi:10.1037/0033-2909.84.5.888.
Ajzen, I., & Fishbein, M. 2005. The influence of attitudes on behavior. In D. Albarracín, B. T.
Johnson, & M. P. Zanna (Eds.), The handbook of attitudes, 173–221. Mahwah, NJ: Erlbaum.
Akerlof, G. A., & Kranton, R. E. 2000. Economics and identity. Quarterly Journal of Economics, 115,
715–753.
Alcoff, L. M. 2010. Epistemic identities. Episteme, 7(2), 128–137.
Allport, G. 1954. The nature of prejudice. Cambridge, MA: Addison-Wesley.
Amodio, D. M., & Devine, P. G. 2006. Stereotyping and evaluation in implicit race bias: Evidence
for independent constructs and unique effects on behavior. Journal of Personality and Social
Psychology, 91(4), 652.
Amodio, D. M., & Devine, P. G. 2009. On the interpersonal functions of implicit stereotyping
and evaluative race bias: Insights from social neuroscience. In R. E. Petty, R. H. Fazio, & P.
Briñol (Eds.), Attitudes: Insights from the new wave of implicit measures, 193–226. Hillsdale,
NJ: Erlbaum.
Amodio, D. M., & Devine, P. G. 2010. Regulating behavior in the social world: Control in the con-
text of intergroup bias. In R. R. Hassin, K. N. Ochsner, & Y. Trope (Eds.), Self control in society,
mind, and brain, 49–75. New York: Oxford University Press.
Amodio, D. M., Devine, P. G., & Harmon-Jones, E. 2008. Individual differences in the regulation
of intergroup bias: The role of conflict monitoring and neural signals for control. Journal of
Personality and Social Psychology, 94, 60–74.
Amodio, D. M., & Hamilton, H. K. 2012. Intergroup anxiety effects on implicit racial evaluation and
stereotyping. Emotion, 12, 1273–1280.
Amodio, D. M., & Ratner, K. 2011. A memory systems model of implicit social cognition. Current
Directions in Psychological Science, 20(3), 143–148.
Bargh, J. A., Gollwitzer, P. M., Lee-Chai, A., Barndollar, K., & Trotschel, R. 2001. The automated
will: Unconscious activation and pursuit of behavioral goals. Journal of Personality and Social
Psychology, 81, 1004–1027.
Baron, J. 2008. Thinking and deciding. New York: Cambridge University Press.
Barrett, L. F., & Bar, M. 2009. See it with feeling: Affective predictions during object perception.
Philosophical Transactions of the Royal Society, 364, 1325–1334.
Barrett, L. F., Oschner, K. N., & Gross, J. J. 2007. On the automaticity of emotion. In J. A. Bargh
(Ed.), Social psychology and the unconscious: The automaticity of higher mental processes, 173–
218. Philadelphia: Psychology Press.
Barry, M. 2007. Realism, rational action, and the Humean theory of motivation. Ethical Theory and Moral Practice, 10, 231–242.
Bartsch, K., & Wright, J. C. 2005. Toward an intuitionist account of moral development. Behavioral
and Brain Sciences, 28(4), 546–547.
Bateson, M., Nettle, D., & Roberts, G. 2006. Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2(3), 412–414.
Baumeister, R., & Alquist, J. 2009. Is there a downside to good self-control? Self and Identity, 8(2–3),
115–130.
Baumeister, R., Bratslavsky, E., Muraven, M., & Tice, D. 1998. Ego depletion: Is the active self a
limited resource? Journal of Personality and Social Psychology, 74(5), 1252.
Bealer, G. 1993. The incoherence of empiricism. In S. Wagner & R. Warner (Eds.), Naturalism: A
critical appraisal, 99–183. Notre Dame, IN: University of Notre Dame Press.
Beaman, L., Chattopadhyay, R., Duflo, E., Pande, R., & Topalova, P. 2009. Powerful women: Does
exposure reduce bias? Quarterly Journal of Economics, 124(4), 1497–1540.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. 1997. Deciding advantageously before knowing
the advantageous strategy. Science, 275, 1293–1295.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. 2005. The Iowa Gambling Task and the
Somatic Marker Hypothesis: Some questions and answers. Trends in Cognitive Science, 9(4),
159–162.
Beck, J. S. 2011. Cognitive behavioral therapy, second edition: Basics and beyond. New York:
Guilford Press.
Begley, C., & Ellis, L. 2012. Drug development: Raise standards for preclinical cancer research.
Nature, 483, 531–533. doi: 10.1038/483531a.
Beilock, S. 2010. Choke: What the secrets of the brain reveal about getting it right when you have to.
New York: Free Press.
Beilock, S. L., Wierenga, S. A., & Carr, T. H. 2002. Expertise, attention, and memory in sensorimotor skill execution: Impact of novel task constraints on dual-task performance and episodic memory. The Quarterly Journal of Experimental Psychology: Section A, 55(4), 1211–1240.
Bennett, J. 1974. The conscience of Huckleberry Finn. Philosophy, 49, 123–134.
Berger, J. 2012. Do we conceptualize every color we consciously discriminate? Consciousness and
Cognition, 21(2), 632–635.
Berkowitz, L., & Donnerstein, E. 1982. External validity is more than skin deep: Some answers
to criticisms of laboratory experiments. American Psychologist, 37(3), 245–257. doi: http://
dx.doi.org/10.1037/0003-066X.37.3.245.
Berridge, K., & Winkielman, P. 2003. What is an unconscious emotion? The case for unconscious
“liking.” Cognition and Emotion, 17(2), 181–211.
Berry, J., Abernethy, B., & Côté, J. 2008. The contribution of structured activity and deliberate
play to the development of expert perceptual and decision-making skill. Journal of Sport and
Exercise Psychology, 30(6), 685–708.
Bhalla, M., & Proffitt, D. R. 1999. Visual–motor recalibration in geographical slant perception.
Journal of Experimental Psychology: Human Perception and Performance, 25(4), 1076.
Blair, I. 2002. The malleability of automatic stereotypes and prejudice. Personality and Social
Psychology Review, 6, 242–261.
Blair, I., Ma, J., & Lenton, A. 2001. Imagining stereotypes away: The moderation of implicit stereo-
types through mental imagery. Journal of Personality and Social Psychology, 81, 828.
Block, J. 2002. Personality as an affect-processing system: Toward an integrative theory. Hillsdale,
NJ: Erlbaum.
Bourdieu, P. 1990. In other words: Essays towards a reflexive sociology. Stanford, CA: Stanford
University Press.
Bratman, M. 1987. Intentions, plans, and practical reason. Stanford, CA: Center for the Study of
Language and Information.
Bratman, M. 2000. Reflection, planning, and temporally extended agency. Philosophical Review,
109(1), 38.
Brewer, M. B. 1999. The psychology of prejudice: Ingroup love and outgroup hate? Journal of Social
Issues, 55(3), 429–444.
Briñol, P., Petty, R., & McCaslin, M. 2009. Changing attitudes on implicit versus explicit mea-
sures: What is the difference? In R. Petty, R. Fazio, and P. Briñol (Eds.), Attitudes: Insights from
the new implicit measures, 285–326. New York: Psychology Press.
Brody, G. H., Yu, T., Chen, E., Miller, G. E., Kogan, S. M., & Beach, S. R. H. 2013. Is resilience only
skin deep? Rural African Americans’ socioeconomic status—related risk and competence
in preadolescence and psychological adjustment and allostatic load at age 19. Psychological
Science, 24(7), 1285–1293.
Brownstein, M. 2014. Rationalizing flow: Agency in skilled unreflective action. Philosophical Studies,
168(2), 545–568. doi: 10.1007/s11098-013-0143-5.
Brownstein, M. 2015, February. Implicit bias. Stanford encyclopedia of philosophy, Spring 2015 ed.
(E. Zalta, Ed.). http://plato.stanford.edu/entries/implicit-bias/.
Brownstein, M. 2016. Implicit bias, context, and character. In M. Brownstein & J. Saul (Eds.),
Implicit bias and philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics, 215–234.
Oxford: Oxford University Press.
Brownstein, M. 2017. Implicit attitudes, social learning, and moral credibility. In J. Kiverstein (Ed.),
The Routledge handbook on philosophy of the social mind, 298–319. New York: Routledge.
Brownstein, M. Self-control and overcontrol: Conceptual, ethical, and ideological issues in positive
psychology. Unpublished paper.
Brownstein, M., & Madva, A. 2012a. Ethical automaticity. Philosophy of the Social Sciences,
42(1), 67–97.
Brownstein, M., & Madva, A. 2012b. The normativity of automaticity. Mind and Language, 27(4),
410–434.
Brownstein, M. & Michaelson, E. 2016. Doing without believing: Intellectualism, knowing-how,
and belief attribution. Synthese, 193(9), 2815–2836.
Brownstein, M., & Saul, J. (Eds.). 2016. Implicit bias and philosophy: Vol. 2, Moral responsibility, struc-
tural injustice, and ethics. Oxford: Oxford University Press.
Bryan, W. L., & Harter, N. 1899. Studies on the telegraphic language: The acquisition of a hierarchy
of habits. Psychological Review, 6(4), 345–375.
Buchsbaum, D., Gopnik A., & Griffiths, T. L. 2010. Children’s imitation of action sequences is
influenced by statistical evidence and inferred causal structure. Proceedings of the 32nd Annual
Conference of the Cognitive Science Society.
Buckner, C. 2011. Two approaches to the distinction between cognition and “mere association.”
International Journal of Comparative Psychology, 24(4), 314–348.
Buckner, C. 2017. Rational inference: The lowest bounds. Philosophy and Phenomenological Research. doi: 10.1111/phpr.12455.
Buon, M., Jacob, P., Loissel, E., & Dupoux, E. 2013. A non-mentalistic cause-based heu-
ristic in human social evaluations. Cognition, 126(2), 149– 155. doi: 10.1016/
j.cognition.2012.09.006.
Byrne, A. 2009. Experience and content. Philosophical Quarterly, 59(236), 429–451. doi: 10.1111/
j.1467-9213.2009.614.x.
Cameron, C. D., Brown-Iannuzzi, J. L., & Payne, B. K. 2012. Sequential priming measures of
implicit social cognition: A meta-analysis of associations with behavior and explicit attitudes.
Personality and Social Psychology Review, 16, 330–350.
Cameron, C., Payne, B., & Knobe, J. 2010. Do theories of implicit race bias change moral judg-
ments? Social Justice Research, 23, 272–289.
Carey, S. 2009. The origin of concepts. Oxford: Oxford University Press.
Carlsson, R., & Björklund, F. 2010. Implicit stereotype content: Mixed stereotypes can be measured
with the implicit association test. Social Psychology, 41(4), 213–222.
Carruthers, P. 2009. How we know our own minds: The relationship between mindreading and
metacognition. Behavioral and Brain Sciences, 32, 121–138.
Carruthers, P. 2013. On knowing your own beliefs: A representationalist account. In N. Nottelmann
(Ed.), New essays on belief: Constitution, content and structure, 145–165. Basingstoke: Palgrave
MacMillan.
Carver, C. S., & Harmon-Jones, E. 2009. Anger is an approach-related affect: Evidence and implica-
tions. Psychological Bulletin, 135, 183–204.
Carver, C. S., & Scheier, M. F. 1982. Control theory: A useful conceptual framework for personality—
social, clinical, and health psychology. Psychological Bulletin, 92, 111–135.
Cath, Y. 2011. Knowing how without knowing that. In J. Bengson & M. Moffett (Eds.), Knowing-
how: Essays on knowledge, mind, and action, 113–135. New York: Oxford University Press.
Cesario, J., & Jonas, K. 2014. Replicability and models of priming: What a resource computation
framework can tell us about expectations of replicability. Social Cognition, 32, 124–136.
Cesario, J., Plaks, J. E., & Higgins, E. T. 2006. Automatic social behavior as motivated preparation to
interact. Journal of Personality and Social Psychology, 90, 893–910.
Chan, D. 1995. Non-intentional actions. American Philosophical Quarterly, 32, 139–151.
Chemero, A. 2009. Radical embodied cognitive science. Cambridge, MA: MIT Press.
Chen, M., & Bargh, J. A. 1999. Consequences of automatic evaluation: Immediate behavioral pre-
dispositions to approach or avoid the stimulus. Personality and Social Psychology Bulletin, 25,
215–224.
Clark, A. 1997. Being there. Cambridge, MA: MIT Press.
Clark, A. 1999. An embodied cognitive science? Trends in Cognitive Science, 3(9), 345–351.
Clark, A. 2007. Soft selves and ecological control. In D. Spurrett, D. Ross, H. Kincaid, & L.
Stephens (Eds.), Distributed cognition and the will. Cambridge, MA: MIT Press.
Clark, A. 2015. Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford
University Press.
Cohen, G. L. 2003. Party over policy: The dominating impact of group influence on political beliefs.
Journal of Personality and Social Psychology, 85, 808–822.
Cohen, J. 1990. Things I have learned so far. American Psychologist, 45(12), 1304–1312.
Colombetti, G. 2007. Enactive appraisal. Phenomenology and the Cognitive Sciences, 6, 527–546.
Confucius. 2003. Analects. (E. Slingerland, Trans.). Indianapolis: Hackett.
Conrey, F., Sherman, J., Gawronski, B., Hugenberg, K., & Groom, C. 2005. Separating multiple pro-
cesses in implicit social cognition: The Quad-Model of implicit task performance. Journal of
Personality and Social Psychology, 89, 469–487.
Corbetta, M., & Shulman, G. L. 2002. Control of goal-directed and stimulus-driven attention in
the brain. Nature Reviews: Neuroscience, 3, 201–215. doi: 10.1038/nrn755.
Correll, J., Park, B., Judd, C., & Wittenbrink, B. 2002. The police officer’s dilemma: Using race to
disambiguate potentially threatening individuals. Journal of Personality and Social Psychology,
83, 1314–1329.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. 2007. The influence of stereotypes on decisions
to shoot. European Journal of Social Psychology, 37(6), 1102–1117.
Correll, J., Wittenbrink, B., Crawford, M. T., & Sadler, M. S. 2015. Stereotypic vision: How ste-
reotypes disambiguate visual stimuli. Journal of Personality and Social Psychology, 108(2),
219–233.
Cosmides, L., & Tooby, J. 2000. Evolutionary psychology and the emotions. In M. Lewis & J. M.
Haviland-Jones (Eds.), Handbook of emotions, 2d ed., 91–115. New York: Guilford Press.
Critcher, C., Inbar, Y., & Pizarro, D. 2013. How quick decisions illuminate moral character. Social
Psychological and Personality Science, 4(3), 308–315.
Crockett, M. 2013. Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Csikszentmihalyi, M. 1990. Flow: The psychology of optimal experience. New York: Harper & Row.
Currie, G., & Ichino, A. 2012. Aliefs don’t exist, though some of their relatives do. Analysis, 72(4),
788–798.
Cushman, F. 2013. Action, outcome, and value: A dual-system framework for morality. Personality
and Social Psychology Review, 17(3), 273–292.
Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. 2012. Simulating murder: The aversion to harm-
ful action. Emotion, 12(1), 2.
Damasio, A. 1994. Descartes’ error: Emotion, reason, and the human brain. New York: Putnam’s.
Danziger, S., Levav, J., & Avnaim-Pesso, L. 2011. Extraneous factors in judicial decisions. Proceedings
of the National Academy of Sciences, 108(17), 6889–6892.
Dardenne, B., Dumont, M., & Bollier, T. 2007. Insidious dangers of benevolent sexism: Consequences
for women’s performance. Journal of Personality and Social Psychology, 93(5), 764.
Darley, J., & Batson, C. 1973. From Jerusalem to Jericho: A study of situational and dispositional
variables in helping behavior. Journal of Personality and Social Psychology, 27, 100–108.
Dasgupta, N. 2004. Implicit ingroup favoritism, outgroup favoritism, and their behavioral manifes-
tations. Social Justice Research, 17(2), 143–168.
Dasgupta, N. 2013. Implicit attitudes and beliefs adapt to situations: A decade of research on the
malleability of implicit prejudice, stereotypes, and the self-concept. Advances in Experimental
Social Psychology, 47, 233–279.
Dasgupta, N., & Asgari, S. 2004. Seeing is believing: Exposure to counterstereotypic women leaders
and its effect on automatic gender stereotyping. Journal of Experimental Social Psychology, 40,
642–658.
Dasgupta, N., DeSteno, D., Williams, L. A., & Hunsinger, M. 2009. Fanning the flames of preju-
dice: The influence of specific incidental emotions on implicit prejudice. Emotion, 9(4), 585.
Dasgupta, N., & Greenwald, A. 2001. On the malleability of automatic attitudes: Combating auto-
matic prejudice with images of admired and disliked individuals. Journal of Personality and
Social Psychology, 81, 800–814.
Dasgupta, N., & Rivera, L. 2008. When social context matters: The influence of long-term con-
tact and short-term exposure to admired group members on implicit attitudes and behavioral
intentions. Social Cognition, 26, 112–123.
Davidson, A. R., & Jaccard, J. J. 1979. Variables that moderate the attitude–behavior rela-
tion: Results of a longitudinal survey. Journal of Personality and Social Psychology, 37,
1364–1376. doi: 10.1037/0022-3514.37.8.1364.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. 2011. Model-based influences on
humans’ choices and striatal prediction errors. Neuron, 69, 1204–1215.
D’Cruz, J. 2013. Volatile reasons. Australasian Journal of Philosophy, 91(1), 31–40.
De Becker, G. 1998. The gift of fear. New York: Dell.
De Coster, L., Verschuere, B., Goubert, L., Tsakiris, M., & Brass, M. 2013. I suffer more from your
pain when you act like me: Being imitated enhances affective responses to seeing someone
else in pain. Cognitive, Affective, & Behavioral Neuroscience, 13(3), 519–532.
De Houwer, J. 2014. A propositional model of implicit evaluation. Social Psychology and Personality
Compass, 8(7), 342–353.
De Houwer, J., Teige-Mocigemba, S., Spruyt, A., & Moors, A. 2009. Implicit measures: A normative
analysis and review. Psychological Bulletin, 135(3), 347–368. doi: 10.1037/a0014211.
Dempsey, M. A., & Mitchell, A. A. 2010. The influence of implicit attitudes on consumer
choice when confronted with conflicting product attribute information. Journal of Consumer
Research, 37(4), 614–625.
Egan, A. 2008. Seeing and believing: Perception, belief formation and the divided mind. Philosophical
Studies, 140(1), 47–63.
Egan, A. 2011. Comments on Gendler’s “The Epistemic Costs of Implicit Bias.” Philosophical Studies,
156, 65–79.
Elster, J. 2000. Ulysses unbound: Studies in rationality, precommitment, and constraints. Cambridge:
Cambridge University Press.
Epstein, D. 2013. The sports gene. New York: Penguin.
Ericsson, K. A. 2016. Summing up hours of any type of practice versus identifying optimal practice
activities: Commentary on Macnamara, Moreau, & Hambrick. Perspectives on Psychological
Science, 11(3), 351–354.
Ericsson, K. A., Krampe, R., & Tesch-Römer, C. 1993. The role of deliberate practice in the acquisi-
tion of expert performance. Psychological Review, 100(3), 363–406.
Evans, G. 1982. The varieties of reference. Oxford: Oxford University Press.
Evans, J. S. B., & Frankish, K. E. 2009. In two minds: Dual processes and beyond. Oxford: Oxford
University Press.
Evans, J. S. B., & Stanovich, K. E. 2013. Dual-process theories of higher cognition: Advancing the
debate. Perspectives on Psychological Science, 8(3), 223–241. doi: 10.1177/1745691612460685.
Falk, A., & Heckman, J. J. 2009. Lab experiments are a major source of knowledge in the social sci-
ences. Science, 326(5952), 535–538. doi: 10.1126/science.1168244.
Faraci, D., & Shoemaker, D. 2014. Huck vs. Jojo. In T. Lombrozo & J. Knobe (Eds.), Oxford studies
in experimental philosophy, Vol. 1, 7–26. Oxford: Oxford University Press.
Faucher, L. 2016. Revisionism and moral responsibility. In M. Brownstein & J. Saul (Eds.), Implicit
bias and philosophy: Vol. 2, Moral Responsibility, Structural Injustice, and Ethics, 115–146.
Oxford: Oxford University Press.
Fazio, R. H. 1990. Multiple processes by which attitudes guide behavior: The MODE model as an
integrative framework. Advances in Experimental Social Psychology, 23, 75–109.
Fazio, R. H., Jackson, J. R., Dunton, B. C., & Williams, C. J. 1995. Variability in automatic activation
as an unobtrusive measure of racial attitudes: A bona fide pipeline? Journal of Personality and
Social Psychology, 69(6), 1013–1027.
Ferriss, T. 2015, 27 April. Tim Ferriss on accelerated learning, peak performance and living the
good life. Podcast. http://thepsychologypodcast.com/tim-ferriss-on-accelerated-learning-
peak-performance-and-living-the-good-life/.
Festinger, L. 1957. A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Firestone, C. 2013. How “paternalistic” is spatial perception? Why wearing a heavy backpack
doesn’t—and couldn’t—make hills look steeper. Perspectives on Psychological Science, 8(4),
455–473.
Firestone, C., & Scholl, B. J. 2014. “Top-down” effects where none should be found: The El Greco
fallacy in perception research. Psychological Science, 25(1), 38–46.
Firestone, C., & Scholl, B. J. 2015. Cognition does not affect perception: Evaluating the evidence for
“top-down” effects. Behavioral and Brain Sciences, 1, 1–77.
Fischer, J. M., & Ravizza, M. 1998. Responsibility and control: A theory of moral responsibility.
Cambridge: Cambridge University Press.
Flannigan, N., Miles, L. K., Quadflieg, S., & Macrae, C. N. 2013. Seeing the unex-
pected: Counterstereotypes are implicitly bad. Social Cognition, 31(6), 712–720.
Flavell, J. H., Miller, P. H., & Miller, S. A. 1985. Cognitive development. Englewood Cliffs,
NJ: Prentice-Hall.
Fleeson, W. R., Furr, M., Jayawickreme, E., Helzer, E. R., Hartley, A. G., & Meindl, P. 2015.
Personality science and the foundations of character. In C. R. Miller, M. Furr, A. Knobel, &
W. Fleeson (Eds.), Character: New directions from philosophy, psychology, and theology, 41–74.
New York: Oxford University Press.
Fodor, J. A. 1983. The modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
Gino, F., Schweitzer, M. E., Mead, N. L., & Ariely, D. 2011. Unable to resist temptation: How self-
control depletion promotes unethical behavior. Organizational Behavior and Human Decision
Processes, 115(2), 191–203.
Ginsborg, H. 2011. Primitive normativity and skepticism about rules. Journal of Philosophy, 108(5),
227–254.
Gladwell, M. 2007. Blink. New York: Back Bay Books.
Gladwell, M. 2011. Outliers. New York: Back Bay Books.
Gladwell, M. 2013, 21 August. Complexity and the ten thousand hour rule. New Yorker. http://
www.newyorker.com/news/sporting-scene/complexity-and-the-ten-thousand-hour-rule/.
Glaser, J. C. 1999. The relation between stereotyping and prejudice: Measures of newly formed automatic
associations. Doctoral dissertation, Harvard University.
Glaser, J., & Knowles, E. 2008. Implicit motivation to control prejudice. Journal of Experimental
Social Psychology, 44, 164–172.
Glasgow, J. 2016. Alienation and responsibility. In M. Brownstein & J. Saul (Eds.), Implicit bias and
philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics, 37–61. Oxford: Oxford
University Press.
Goff, P. A., Jackson, M. C., Di Leone, B. A. L., Culotta, C. M., & DiTomasso, N. A. 2014. The essence
of innocence: Consequences of dehumanizing Black children. Journal of Personality and Social
Psychology, 106(4), 526.
Gollwitzer, P., Parks-Stamm, E., Jaudas, A., & Sheeran, P. 2008. Flexible tenacity in goal pur-
suit. In J. Shah & W. Gardner (Eds.), Handbook of motivation science, 325–341. New York:
Guilford Press.
Gollwitzer, P. M., & Sheeran, P. 2006. Implementation intentions and goal achievement: A meta-
analysis of effects and processes. In M. P. Zanna (Ed.), Advances in experimental social psychol-
ogy, 69–119. New York: Academic Press.
Goodman, S., Fanelli, D., & Ioannidis, J. P. A. 2016. What does research reproducibility mean?
Science Translational Medicine, 8(341), 341ps12.
Gopnik, A. 2016, 31 July. What babies know about physics and foreign languages. New York Times.
http://www.nytimes.com/2016/07/31/opinion/sunday/what-babies-know-about-physics-
and-foreign-languages.html/.
Govorun, O., & Payne, B. K. 2006. Ego-depletion and prejudice: Separating automatic and con-
trolled components. Social Cognition, 24(2), 111–136.
Green, A. R., Carney, D. R., Pallin, D. J., Ngo, L. H., Raymond, K. L., Iezzoni, L. I., & Banaji, M. R.
2007. Implicit bias among physicians and its prediction of thrombolysis decisions for Black
and White patients. Journal of General Internal Medicine, 22, 1231–1238.
Greene, J. D. 2013. Moral tribes: Emotion, reason, and the gap between us and them. New York:
Penguin.
Greene, J., & Haidt, J. 2002. How (and where) does moral judgment work? Trends in Cognitive
Sciences, 6(12), 517–523.
Greenwald, A. G., Banaji, M. R., Rudman, L. A., Farnham, S. D., Nosek, B. A., & Mellott, D. S. 2002.
A unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept. Psychological
Review, 109(1), 3.
Greenwald, A. G., Banaji, M. R., & Nosek, B. A. 2014. Statistically small effects of the implicit asso-
ciation test can have societally large effects. Journal of Personality and Social Psychology, 108,
553–561.
Greenwald, A., McGhee, D., & Schwartz, J. 1998. Measuring individual differences in implicit
cognition: The implicit association test. Journal of Personality and Social Psychology, 74,
1464–1480.
Greenwald, A. G., Nosek, B. A., & Banaji, M. R. 2003. Understanding and using the implicit associa-
tion test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85,
197–216.
Greenwald, A. G., & Pettigrew, T. F. 2014. With malice toward none and charity for some: Ingroup
favoritism enables discrimination. American Psychologist, 69, 669–684.
Greenwald, A., Poehlman, T., Uhlmann, E., & Banaji, M. 2009. Understanding and using the implicit
association test: III. Meta-analysis of predictive validity. Journal of Personality and Social
Psychology, 97(1), 17–41.
Gregg, A., Seibt, B., & Banaji, M. 2006. Easier done than undone: Asymmetry in the malleability of
implicit preferences. Journal of Personality and Social Psychology, 90, 1–20.
Grillon, C., Pellowski, M., Merikangas, K. R., & Davis, M. 1997. Darkness facilitates acoustic startle
reflex in humans. Biological Psychiatry, 42, 453–460.
Gross, J. 2002. Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology,
39, 281–291.
Gschwendner, T., Hofmann, W., & Schmitt, M. 2008. Convergent and predictive validity of implicit
and explicit anxiety measures as a function of specificity similarity and content similarity.
European Journal of Psychological Assessment, 24(4), 254–262.
Guinote, A., Guillermo, B. W., & Martellotta, C. 2010. Social power increases implicit prejudice.
Journal of Experimental Social Psychology, 46, 299–307.
Hacker, P. 2007. Human nature: The categorical framework. Oxford: Blackwell.
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., Brand,
R., et al. 2016. A multilab preregistered replication of the ego-depletion effect. Perspectives on
Psychological Science, 11(4), 546–573.
Haley, K., & Fessler, D. M. T. 2005. Nobody’s watching? Subtle cues affect generosity in an anony-
mous economic game. Evolution and Human Behavior, 26, 245–256.
Harman, G. 1999. Moral philosophy meets social psychology: Virtue ethics and the fundamental
attribution error. Proceedings of the Aristotelian Society, 99, 315–331.
Haslanger, S. 2015. Social structure, narrative, and explanation. Canadian Journal of Philosophy,
45(1), 1–15. doi: 10.1080/00455091.2015.1019176.
Hasson, U., & Glucksberg, S. 2006. Does negation entail affirmation? The case of negated meta-
phors. Journal of Pragmatics, 38, 1015–1032.
Hawley, K. 2012. Trust, distrust, and commitment. Noûs, 48(1), 1–20.
Hayes, J. R. 1985. Three problems in teaching general skills. Thinking and Learning Skills, 2, 391–406.
Heller, S. B., Shah, A. K., Guryan, J., Ludwig, J., Mullainathan, S., & Pollack, H. A. 2015. Thinking,
fast and slow? Some field experiments to reduce crime and dropout in Chicago (No.
w21178). National Bureau of Economic Research.
Helm, B. W. 1994. The significance of emotions. American Philosophical Quarterly, 31(4), 319–331.
Helm, B. W. 2007. Emotional reason: Deliberation, motivation, and the nature of value. Cambridge:
Cambridge University Press.
Helm, B. W. 2009. Emotions as evaluative feelings. Emotion Review, 1(3), 248–255.
Henrich, J., Heine, S. J., & Norenzayan, A. 2010. The weirdest people in the world? Behavioral and
Brain Sciences, 33(2–3), 61–83.
Herman, B. 1981. On the value of acting from the motive of duty. Philosophical Review, 90(3),
359–382.
Hermans, D., De Houwer, J., & Eelen, P. 2001. A time course analysis of the affective priming effect.
Cognition and Emotion, 15, 143–165.
Hertz, S. G., & Krettenauer, T. 2016. Does moral identity effectively predict moral behavior? A meta-
analysis. Review of General Psychology, 20(2), 129–140.
Hieronymi, P. 2008. Responsibility for believing. Synthese, 161, 357–373.
Hillman, A. L. 2010. Expressive behavior in economics and politics. European Journal of Political
Economy, 26, 403–418.
Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & Schmitt, M. 2005. A meta-analysis on the
correlation between the implicit association test and explicit self-report measures. Personality
and Social Psychology Bulletin, 31(10), 1369–1385.
Holroyd, C. B., Nieuwenhuis, S., Yeung, N., & Cohen, J. D. 2003. Errors in reward prediction are
reflected in the event-related brain potential. Neuroreport, 14(18), 2481–2484.
Holroyd, J. 2012. Responsibility for implicit bias. Journal of Social Philosophy, Special
issue: Philosophical Methodology and Implicit Bias, 43(3), 274–306.
Holroyd, J. & Sweetman, J. 2016. The heterogeneity of implicit biases. In M. Brownstein & J.
Saul (Eds.), Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology, 80–103.
Oxford: Oxford University Press.
Hood, H. K., & Antony, M. M. 2012. Evidence-based assessment and treatment of specific phobias
in adults. In T. E. Davis et al. (Eds.), Intensive one-session treatment of specific phobias, 19–42.
Berlin: Springer Science and Business Media.
Hu, X., Gawronski, B., & Balas, R. 2017. Propositional versus dual-process accounts of evalua-
tive conditioning: I. The effects of co-occurrence and relational information on implicit and
explicit evaluations. Personality and Social Psychology Bulletin, 43(1), 17–32.
Huang, J. Y., & Bargh, J. A. 2014. The selfish goal: Autonomously operating motivational structures
as the proximate cause of human judgment and behavior. Behavioral and Brain Sciences, 37(2),
121–135.
Huddleston, A. 2012. Naughty beliefs. Philosophical Studies, 160(2), 209–222.
Huebner, B. 2009. Trouble with stereotypes for Spinozan minds. Philosophy of the Social Sciences,
39, 63–92.
Huebner, B. 2015. Do emotions play a constitutive role in moral cognition? Topoi, 34(2), 1–14.
Huebner, B. 2016. Implicit bias, reinforcement learning, and scaffolded moral cognition. In M.
Brownstein & J. Saul (Eds.), Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology,
47–79. Oxford: Oxford University Press.
Hufendiek, R. 2015. Embodied emotions: A naturalist approach to a normative phenomenon.
New York: Routledge.
Hugenberg, K., & Bodenhausen, G. V. 2003. Facing prejudice: Implicit prejudice and the perception
of facial threat. Psychological Science, 14(6), 640–643.
Hume, D. 1738/1975. A treatise of human nature. (L. A. Selby-Bigge, Ed.; 2d ed. revised by P. H.
Nidditch). Oxford: Clarendon Press.
Hunter, D. 2011. Alienated belief. Dialectica, 65(2), 221–240.
Hursthouse, R. 1999. On virtue ethics. Oxford: Oxford University Press.
Inbar, Y., Pizarro, D., Iyer, R., & Haidt, J. 2012. Disgust sensitivity, political conservatism, and voting.
Social Psychological and Personality Science, 3(5), 537–544.
Inzlicht, M. 2016, 22 August. The unschooling of a faithful mind. http://michaelinzlicht.com/
getting-better/2016/8/22/the-unschooling-of-a-faithful-mind/.
Inzlicht, M., Gervais, W., & Berkman, E. 2015. Bias-correction techniques alone cannot determine
whether ego depletion is different from zero: Commentary on Carter, Kofler, Forster, &
McCullough. Social Science Research Network. doi: http://dx.doi.org/10.2139/ssrn.2659409.
Isen, A., & Levin, P. 1972. Effect of feeling good on helping: Cookies and kindness. Journal of
Personality and Social Psychology, 21(3), 384–388.
Ito, T. A., & Urland, G. R. 2005. The influence of processing objectives on the perception of faces: An
ERP study of race and gender perception. Cognitive, Affective, and Behavioral Neuroscience,
5, 21–36.
James, W. 1884. What is an emotion? Mind, 9, 188–205.
James, W. 1890. The principles of psychology. New York: Henry Holt and Co.
James, W. 1899. The laws of habit. In Talks to teachers on psychology—and to students on some of life’s
ideals, 64–78. New York: Metropolitan Books/Henry Holt and Co. doi: http://dx.doi.org/
10.1037/10814-008.
Jarvis, W. B. G., & Petty, R. E. 1996. The need to evaluate. Journal of Personality and Social Psychology,
70(1), 172–194.
Jaworska, A. 1999. Respecting the margins of agency: Alzheimer’s patients and the capacity to value.
Philosophy & Public Affairs, 28(2), 105–138.
Jaworska, A. 2007a. Caring and internality. Philosophy and Phenomenological Research, 74(3),
529–568.
Jaworska, A. 2007b. Caring and full moral standing. Ethics, 117(3), 460–497.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. 2005. Failure to detect mismatches between inten-
tion and outcome in a simple decision task. Science, 310, 116–119.
John, O. P., & Srivastava, S. 1999. The Big Five trait taxonomy: History, measurement, and theoreti-
cal perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and
research, 102–138. New York: Guilford.
Johnson, M. K., Kim, J. K., & Risse, G. 1985. Do alcoholic Korsakoff’s syndrome patients acquire
affective reactions? Journal of Experimental Psychology: Learning, Memory, and Cognition,
11(1), 22–36.
Jordan, C. H., Whitfield, M., & Zeigler-Hill, V. 2007. Intuition and the correspondence
between implicit and explicit self-esteem. Journal of Personality and Social Psychology, 93,
1067–1079.
Kahan, D. 2012. Why we are poles apart on climate change. Nature, 488, 255.
Kahan, D. M. 2013. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision
Making, 8, 407–424.
Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. 2017. Motivated numeracy and enlightened
self-government. Behavioural Public Policy, 1(1), 54–86.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. 2012. The
polarizing impact of science literacy and numeracy on perceived climate change risks. Nature
Climate Change, 2, 732–735.
Kahneman, D. 2003. A perspective on judgment and choice: Mapping bounded rationality.
American Psychologist, 58(9), 697.
Kahneman, D. 2011. Thinking, fast and slow. New York: Macmillan.
Kamtekar, R. 2004. Situationism and virtue ethics: On the content of our character. Ethics, 114(3),
458–491.
Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. 2000. Just say no (to stereotyp-
ing): Effects of training in the negation of stereotypic associations on stereotype activation.
Journal of Personality and Social Psychology, 78, 871–888.
Kawakami, K., Dovidio, J. F., & van Kamp, S. 2007a. The impact of counterstereotypic training and
related correction processes on the application of stereotypes. Group Processes and Intergroup
Relations, 10(2), 139–156.
Kawakami, K., Phills, C., Steele, J., & Dovidio, J. 2007b. (Close) distance makes the heart grow
fonder: Improving implicit racial attitudes and interracial interactions through approach
behaviors. Journal of Personality and Social Psychology, 92(6), 957–971.
Kawakami, K., Steele, J. R., Cifa, C., Phills, C. E., & Dovidio, J. F. 2008. Approaching math increases
math = me, math = pleasant. Journal of Experimental Social Psychology, 44, 818–825.
Keinan, A., & Kivetz, R. 2011. Productivity orientation and the consumption of collectable experi-
ences. Journal of Consumer Research, 37(6), 935–950.
Kelly, D. 2013. Yuck! The nature and moral significance of disgust. Cambridge, MA: MIT Press.
Kelly, S. 2005. Seeing things in Merleau-Ponty. In T. Carman (Ed.), The Cambridge companion to
Merleau-Ponty, 74–110. Cambridge: Cambridge University Press.
Kelly, S. 2010. The normative nature of perceptual experience. In B. Nanay (Ed.), Perceiving the
world, 146–159. Oxford: Oxford University Press.
Kerr, N. L. 1998. HARKing: Hypothesizing after the results are known. Personality and Social
Psychology Review, 2(3), 196–217.
Kidd, C., Palmeri, H., & Aslin, R. N. 2013. Rational snacking: Young children’s decision-making
on the marshmallow task is moderated by beliefs about environmental reliability. Cognition,
126(1), 109–114.
King, M., & Carruthers, P. 2012. Moral responsibility and consciousness. Journal of Moral Philosophy,
9(2), 200–228.
Kishida, K., Saez, I., Lohrenz, T., Witcher, M., Laxton, A., Tatter, S., White, J., et al. 2015. Subsecond
dopamine fluctuations in human striatum encode superposed error signals about actual
and counterfactual reward. Proceedings of the National Academy of Sciences. doi: 10.1073/
pnas.1513619112.
Kivetz, R., & Keinan, A. 2006. Repenting hyperopia: An analysis of self-control regrets. Journal of
Consumer Research, 33, 273–282.
Klaassen, P., Rietveld, E., & Topal, J. 2010. Inviting complementary perspectives on situated norma-
tivity in everyday life. Phenomenology and the Cognitive Sciences, 9, 53–73.
Knobe, J. 2003. Intentional action and side effects in ordinary language. Analysis, 63(279), 190–194.
Knowles, M., Lucas, G., Baumeister, R., & Gardner, W. 2015. Choking under social pressure: Social
monitoring among the lonely. Personality and Social Psychology Bulletin, 41(6), 805–821.
Koch, C., & Crick, F. 2001. The zombie within. Nature, 411(6840), 893.
Kolling, N., Behrens, T. E. J., Mars, R. B., & Rushworth, M. F. S. 2012. Neural mechanisms of forag-
ing. Science, 336, 95–98.
Korsgaard, C. 1997. The normativity of instrumental reason. In G. Cullity & B. Gaut (Eds.), Ethics
and practical reason, 215–254. Oxford: Oxford University Press.
Korsgaard, C. 2009. The activity of reason. Proceedings and Addresses of the American Philosophical
Association, 83, 27–47.
Kraus, S. J. 1995. Attitudes and the prediction of behavior: A meta-analysis of the empirical literature.
Personality and Social Psychology Bulletin, 21, 58–75. doi: 10.1177/0146167295211007.
Kriegel, U. 2012. Moral motivation, moral phenomenology, and the alief/belief distinction.
Australasian Journal of Philosophy, 90(3), 469–486.
Krishnamurthy, M. 2015. (White) tyranny and the democratic value of distrust. Monist, 98(4),
391–406.
Kubota, J., & Ito, T. 2014. The role of expression and race in weapons identification. Emotion, 14(6),
1115–1124.
Kumar, V. 2016a. Nudges and bumps. Georgetown Journal of Law and Public Policy, 14, 861–876.
Kumar, V. 2016b. Moral vindications. Cognition, 176, 124–134.
Lai, C. K., Hoffman, K. M., & Nosek, B. A. 2013. Reducing implicit prejudice. Social and Personality
Psychology Compass, 7, 315–330.
Lai, C. K., Skinner, A. L., Cooley, E., Murrar, S., Brauer, M., Devos, T., Calanchini, J., et al. 2016.
Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of
Experimental Psychology: General, 145, 1001–1016.
Land, M. F., & McLeod, P. 2000. From eye movements to actions: How batsmen hit the ball. Nature
Neuroscience, 3(12), 1340–1345.
Lange, C. G. 1885. Om sindsbevægelser: Et psyko-fysiologisk studie. Copenhagen: Jacob
Lunds. Reprinted in The emotions (C. G. Lange & W. James, Eds.; I. A. Haupt, Trans.).
Baltimore: Williams & Wilkins, 1922.
LaPiere, R. T. 1934. Attitudes vs. actions. Social Forces, 13(2), 230–237.
Laplante, D., & Ambady, N. 2003. On how things are said: Voice tone, voice intensity, verbal
content, and perceptions of politeness. Journal of Language and Social Psychology, 22(4),
434–441.
Layden, T. 2010, 8 November. The art of the pass. Sports Illustrated. http://www.si.com/vault/
2010/11/08/106003792/the-art-of-the-pass#.
Lazarus, R. S. 1991. Emotion and adaptation. New York: Oxford University Press.
Lebrecht, S., & Tarr, M. 2012. Can neural signals for visual preference predict real-world choices?
BioScience, 62(11), 937–938.
Leder, G. 2016. Know thyself? Questioning the theoretical foundations of cognitive behav-
ioral therapy. Review of Philosophy and Psychology. doi: 10.1007/s13164-016-0308-1.
Lessig, L. 1995. The regulation of social meaning. University of Chicago Law Review, 62(3),
943–1045.
Levine, L. 1988. Bird: The making of an American sports legend. New York: McGraw-Hill.
Levinson, J. D., Smith, R. J., & Young, D. M. 2014. Devaluing death: An empirical study of implicit
racial bias on jury-eligible citizens in six death penalty states. New York University Law Review,
89, 513–581.
Levy, N. 2011a. Expressing who we are: Moral responsibility and awareness of our reasons for
action. Analytic Philosophy, 52(4), 243–261.
Levy, N. 2011b. Hard luck: How luck undermines free will and moral responsibility. Oxford: Oxford
University Press.
Levy, N. 2012. Consciousness, implicit attitudes, and moral responsibility. Noûs, 48, 21–40.
Levy, N. 2014. Neither fish nor fowl: Implicit attitudes as patchy endorsements. Noûs, 49(4), 800–
823. doi: 10.1111/nous.12074.
Levy, N. 2017. Implicit bias and moral responsibility: Probing the data. Philosophy and
Phenomenological Research. doi: 10.1111/phpr.12352.
Lewicki, P., Czyzewska, M., & Hoffman, H. 1987. Unconscious acquisition of complex procedural
knowledge. Journal of Experimental Psychology, 13, 523–530.
Lewis, D. 1973. Causation. Journal of Philosophy, 70(17), 556–567.
Li, C. 2006. The Confucian ideal of harmony. Philosophy East and West, 56(4), 583–603.
Limb, C., & Braun, A. 2008. Neural substrates of spontaneous musical performance: An fMRI study
of jazz improvisation. PLoS ONE, 3(2), e1679.
Lin, D., & Lee, R. (Producers), & Lord, P., & Miller, C. (Directors). 2014. The Lego movie (Motion
picture). Warner Brothers Pictures.
Livingston, R. W., & Drwecki, B. B. 2007. Why are some individuals not racially biased?
Susceptibility to affective conditioning predicts nonprejudice toward blacks. Psychological
Science, 18(9), 816–823.
Loersch, C., & Payne, B. K. 2016. Demystifying priming. Current Opinion in Psychology, 12, 32–36.
doi:10.1016/j.copsyc.2016.04.020.
Machery, E. 2016. De-Freuding implicit attitudes. In M. Brownstein & J. Saul (Eds.), Implicit
bias and philosophy, Vol. 1: Metaphysics and epistemology, 104–129. Oxford: Oxford
University Press.
Macnamara, B. N., Moreau, D., & Hambrick, D.Z. 2016. The relationship between deliberate prac-
tice and performance in sports: A meta-analysis. Perspectives on Psychological Science, 11(3),
333–350.
Macpherson, F. 2012. Cognitive penetration of colour experience: Rethinking the issue in light of
an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.
Macpherson, F. 2016. The relationship between cognitive penetration and predictive cod-
ing. Consciousness and Cognition. http://www.sciencedirect.com/science/article/pii/
S1053810016300496/.
Madva, A. 2012. The hidden mechanisms of prejudice: Implicit bias and interpersonal fluency. Doctoral
dissertation, Columbia University.
Madva, A. 2016a. Virtue, social knowledge, and implicit bias. In M. Brownstein & J. Saul (Eds.),
Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology, 191–215. Oxford: Oxford
University Press.
Madva, A. 2016b. Why implicit attitudes are (probably) not beliefs. Synthese, 193(8), 2659–2684.
Madva, A. 2017. Biased against de-biasing: On the role of (institutionally sponsored) self-trans-
formation in the struggle against prejudice. Ergo, 4(6). doi: http://dx.doi.org/10.3998/
ergo.12405314.0004.006.
Madva, A., & Brownstein, M. 2016. Stereotypes, prejudice, and the taxonomy of the implicit social
mind. Noûs. doi: 10.1111/nous.12182.
Mallon, R. 2016. Stereotype threat and persons. In M. Brownstein & J. Saul (Eds.), Implicit bias and
philosophy, Vol. 1: Metaphysics and epistemology, 130–156. Oxford: Oxford University Press.
Mandelbaum, E. 2011. The architecture of belief: An essay on the unbearable automaticity of believing.
Doctoral dissertation, University of North Carolina.
Mandelbaum, E. 2013. Against alief. Philosophical Studies, 165, 197–211.
Mandelbaum, E. 2014. Thinking is believing. Inquiry, 57(1), 55–96.
Mandelbaum, E. 2015a. Associationist theories of thought. In E. Zalta (Ed.), Stanford encyclopedia
of philosophy. http://plato.stanford.edu/entries/associationist-thought/.
Mandelbaum, E. 2015b. Attitude, association, and inference: On the propositional structure of
implicit bias. Noûs, 50(3), 629–658. doi: 10.1111/nous.12089.
Mann, D. T. Y., Williams, A. M., Ward, P., & Janelle, C. M. 2007. Perceptual-cognitive expertise in
sport: A meta-analysis. Journal of Sport & Exercise Psychology, 29, 457–478.
Manzini, P., Sadrieh, A., & Vriend, N. 2009. On smiles, winks and handshakes as coordination
devices. Economic Journal, 119(537), 826–854.
Martin, J., & Cushman, F. 2016. The adaptive logic of moral luck. In J. Sytsma & W. Buckwalter
(Eds.), The Blackwell companion to experimental philosophy, 190–202. Hoboken, NJ: Wiley.
Martin, L. L., & Tesser, A. 2009. Five markers of motivated behavior. In G. B. Moskowitz & H. Grant
(Eds.), The psychology of goals, 257–276. New York: Guilford Press.
Mayo, D. G. 2011. Statistical science and philosophy of science: Where do/should they meet in
2011 (and beyond)? Rationality, Markets and Morals (RMM), Special Topic: Statistical Science
and Philosophy of Science, 2, 79–102.
Mayo, D. G. 2012. Statistical science meets philosophy of science, Part 2: Shallow versus deep
explorations. Rationality, Markets, and Morals (RMM), Special Topic: Statistical Science and
Philosophy of Science, 3, 71–107.
McConahay, J. B. 1986. Modern racism, ambivalence, and the modern racism scale. In J. F.
Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination, and racism, 91–125. San Diego,
CA: Academic Press.
McLeod, P. 1987. Visual reaction time and high-speed ball games. Perception, 16(1), 49–59.
Meehl, P. E. 1990. Why summaries of research on psychological theories are often uninterpretable.
Psychological Reports, 66, 195–244.
Mekawi, Y., & Bresin, K. 2015. Is the evidence from racial bias shooting task studies a smoking gun?
Results from a meta-analysis. Journal of Experimental Social Psychology, 61, 120–130.
Mendoza, S. A., Gollwitzer, P. M., & Amodio, D. M. 2010. Reducing the expression of implicit
stereotypes: Reflexive control through implementation intentions. Personality and Social
Psychology Bulletin, 36(4), 512–523.
Merleau-Ponty, M. 1962/2002. The phenomenology of perception (C. Smith, Trans.).
New York: Routledge.
Messick, S. 1995. Validity of psychological assessment: Validation of inferences from persons’
responses and performances as scientific inquiry into score meaning. American Psychologist,
50, 741–749.
Miller, G. A. 1956. The magical number seven, plus or minus two: Some limits on our capacity for
processing information. Psychological Review, 63, 81–97.
Miller, G. E., Yu, T., Chen, E., & Brody, G. H. 2015. Self-control forecasts better psychosocial out-
comes but faster epigenetic aging in low-SES youth. Proceedings of the National Academy of
Sciences, 112(33), 10325–10330. doi: 10.1073/pnas.1505063112.
Millikan, R. 1995. Pushmi-pullyu representations. Philosophical Perspectives, 9, 185–200.
Millikan, R. 2006. Styles of rationality. In S. Hurley & M. Nudds (Eds.), Rational animals?, 117–126.
Oxford: Oxford University Press.
Milner, A. D., & Goodale, M. A. 1995. The visual brain in action. Oxford: Oxford University Press.
Milner, A. D., & Goodale, M. A. 2008. Two visual systems re-viewed. Neuropsychologia, 46, 774–785.
Milyavskaya, M., Inzlicht, M., Hope, N., & Koestner, R. 2015. Saying “no” to temptation: Want-
to motivation improves self-regulation by reducing temptation rather than by increasing
self-control. Journal of Personality and Social Psychology, 109(4), 677–693. doi.org/10.1037/
pspp0000045.
Mischel, W., Shoda, Y., & Peake, P. 1988. The nature of adolescent competencies predicted by pre-
school delay of gratification. Journal of Personality and Social Psychology, 54(4), 687.
Mitchell, C. J., De Houwer, J., & Lovibond, P. F. 2009. The propositional nature of human associa-
tive learning. Behavioral and Brain Sciences, 32(2), 183–198.
Mitchell, J. P., Nosek, B. A., & Banaji, M. R. 2003. Contextual variations in implicit evaluation.
Journal of Experimental Psychology: General, 132, 455–469.
Monin, B., & Miller, D. T. 2001. Moral credentials and the expression of prejudice. Journal of
Personality and Social Psychology, 81(1), 33.
Monteith, M. 1993. Self-regulation of prejudiced responses: Implications for progress in prejudice-
reduction efforts. Journal of Personality and Social Psychology, 65(3), 469–485.
Monteith, M., Ashburn-Nardo, L., Voils, C., & Czopp, A. 2002. Putting the brakes on prejudice: On
the development and operation of cues for control. Journal of Personality and Social Psychology,
83(5), 1029–1050.
Montero, B. 2010. Does bodily awareness interfere with highly skilled movement? Inquiry, 53(2),
105–122.
Moran, T., & Bar-Anan, Y. 2013. The effect of object–valence relations on automatic evaluation.
Cognition and Emotion, 27(4), 743–752.
Moskowitz, G. B. 2002. Preconscious effects of temporary goals on attention. Journal of Experimental
Social Psychology, 38(4), 397–404.
Moskowitz, G. B., & Balcetis, E. 2014. The conscious roots of selfless, unconscious goals. Behavioral
and Brain Sciences, 37(2), 151.
Moskowitz, G. B., & Li, P. 2011. Egalitarian goals trigger stereotype inhibition: A proactive form of
stereotype control. Journal of Experimental Social Psychology, 47(1), 103–116.
Moskowitz, G., Gollwitzer, P., Wasel, W., & Schaal, B. 1999. Preconscious control of stereotype
activation through chronic egalitarian goals. Journal of Personality and Social Psychology, 77,
167–184.
Moskowitz, G. B., Li, P., Ignarri, C., & Stone, J. 2011. Compensatory cognition associated with
egalitarian goals. Journal of Experimental Social Psychology, 47(2), 365–370.
Muller, H., & Bashour, B. 2011. Why alief is not a legitimate psychological category. Journal of
Philosophical Research, 36, 371–389.
Muraven, M., Collins, R. L., & Neinhaus, K. 2002. Self-control and alcohol restraint: An initial
application of the self-control strength model. Psychology of Addictive Behaviors, 16(2), 113.
Nagel, J. 2012. Gendler on alief. Analysis, 72(4), 774–788.
Nanay, B. 2011. Do we see apples as edible? Pacific Philosophical Quarterly, 92, 305–322.
Nanay, B. 2013. Between perception and action. Oxford: Oxford University Press.
Neal, D. T., Wood, W., Wu, M., & Kurlander, D. 2011. The pull of the past: When do habits persist
despite conflict with motives? Personality and Social Psychology Bulletin, 1–10. doi: 10.1177/
0146167211419863.
Neenan, M., & Dryden, W. 2005. Cognitive therapy. Los Angeles: Sage.
Nesse, R., & Ellsworth, P. 2009. Evolution, emotion, and emotional disorders. American Psychologist,
64(2), 129–139.
Newen, A. 2016. Defending the liberal-content view of perceptual experience: Direct social percep-
tion of emotions and person impressions. Synthese. doi:10.1007/s11229-016-1030-3.
Newman, G., Bloom, P., & Knobe, J. 2014. Value judgments and the true self. Personality and Social
Psychology Bulletin, 40(2), 203–216.
Newman, G., De Freitas, J., & Knobe, J. 2015. Beliefs about the true self explain asymmetries based
on moral judgment. Cognitive Science, 39(1), 96–125.
Nieuwenstein, M. R., Wierenga, T., Morey, R. D., Wicherts, J. M., Blom, T. N., Wagenmakers, E. J.,
& van Rijn, H. 2015. On making the right choice: A meta-analysis and large-scale replication
attempt of the unconscious thought advantage. Judgment and Decision Making, 10, 1–17.
Nisbett, R. E., & Wilson, T. D. 1977. Telling more than we can know: Verbal reports on mental
processes. Psychological Review, 84(3), 231.
Noë, A. 2004. Action in perception. Cambridge, MA: MIT Press.
Nosek, B. A., & Banaji, M. R. 2001. The go/no-go association task. Social Cognition, 19(6), 161–176.
Nosek, B. A., Banaji, M. R., & Greenwald, A. G. 2002. Harvesting intergroup implicit attitudes and
beliefs from a demonstration website. Group Dynamics, 6, 101–115.
Nosek, B., Greenwald, A., & Banaji, M. 2007. The implicit association test at age 7: A methodological
and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior,
265–292. Philadelphia: Psychology Press.
Nussbaum, M. 2001. Upheavals of thought: The intelligence of emotion. Cambridge: Cambridge
University Press.
Office of the Attorney General. 1999. The New York City Police Department’s “stop & frisk” prac-
tices: A report to the people of the State of New York. Retrieved from www.oag.state.ny.us/
bureaus/civil_rights/pdfs/stp_frsk.pdf.
Olson, M., & Fazio, R. 2006. Reducing automatically activated racial prejudice through implicit
evaluative conditioning. Personality and Social Psychology Bulletin, 32, 421–433.
Rooth, D. O. 2010. Automatic associations and discrimination in hiring: Real world evidence.
Labour Economics, 17(3), 523–534.
Rosenthal, R. 1991. Meta-analytic procedures for social research (Rev. ed.). Newbury Park,
CA: Sage.
Rosenthal, R., & Rubin, D. B. 1982. A simple, general purpose display of magnitude of experimental
effect. Journal of Educational Psychology, 74, 166–169.
Rowbottom, D. 2007. “In-between believing” and degrees of belief. Teorema, 26, 131–137.
Rudman, L. A., & Ashmore, R.D. 2007. Discrimination and the implicit association test. Group
Processes & Intergroup Relations, 10, 359–372.
Rydell, R. J., & Gawronski, B. 2009. I like you, I like you not: Understanding the formation of
context-dependent automatic attitudes. Cognition and Emotion, 23, 1118–1152.
Ryle, G. 1949. The concept of mind. London: Hutchinson.
Salzman, C. D., & Fusi, S. 2010. Emotion, cognition, and mental state representation in amygdala
and prefrontal cortex. Annual Review of Neuroscience, 33, 173.
Sarkissian, H. 2010a. Confucius and the effortless life of virtue. History of Philosophy Quarterly,
27(1), 1–16.
Sarkissian, H. 2010b. Minor tweaks, major payoffs: The problems and promise of situationalism in
moral philosophy. Philosopher’s Imprint, 10(9), 1–15.
Sartre, J. P. 1956/1993. Being and nothingness. New York: Washington Square Press.
Scanlon, T. 1998. What we owe to each other. Cambridge, MA: Harvard University Press.
Schaller, M., Park, J. J., & Mueller, A. 2003. Fear of the dark: Interactive effects of beliefs about
danger and ambient darkness on ethnic stereotypes. Personality and Social Psychology Bulletin,
29, 637–649.
Schapiro, T. 2009. The nature of inclination. Ethics, 119(2), 229–256.
Scharlemann, J., Eckel, C., Kacelnik, A., & Wilson, R. 2001. The value of a smile: Game theory with
a human face. Journal of Economic Psychology, 22, 617–640.
Schechtman, M. 1996. The constitution of selves. Ithaca, NY: Cornell University Press.
Schmajuk, N. A., & Holland, P. C. 1998. Occasion setting: Associative learning and cognition in ani-
mals. Washington, DC: American Psychological Association.
Schneider, W., & Shiffrin, R. 1977. Controlled and automatic human information processing:
I. Detection, search, and attention. Psychological Review, 84, 1–66.
Schwarz, N. 2002. Situated cognition and the wisdom of feelings: Cognitive tuning. In L. F. Barrett
& P. Salovey (Eds.), The wisdom of feeling, 144–166. New York: Guilford Press.
Schwarz, N., & Clore, G. L. 2003. Mood as information: 20 years later. Psychological Inquiry, 14,
296–303.
Schweiger Gallo, I., Keil, A., McCulloch, K., Rockstroh, B., & Gollwitzer, P. 2009. Strategic automa-
tion of emotion regulation. Journal of Personality and Social Psychology, 96(1), 11–31.
Schwitzgebel, E. 2002. A phenomenal, dispositional account of belief. Noûs, 36, 249–275.
Schwitzgebel, E. 2006/2010. Belief. In E. Zalta (Ed.), Stanford encyclopedia of philosophy. http://
plato.stanford.edu/entries/belief/.
Schwitzgebel, E. 2010. Acting contrary to our professed beliefs, or the gulf between occurrent judg-
ment and dispositional belief. Pacific Philosophical Quarterly, 91, 531–553.
Schwitzgebel, E. 2013. A dispositional approach to attitudes: Thinking outside of the belief box.
In N. Nottelmann (Ed.), New essays on belief: Constitution, content and structure, 75–99.
Basingstoke: Palgrave MacMillan.
Schwitzgebel, E., & Cushman, F. 2012. Expertise in moral reasoning? Order effects on moral judg-
ment in professional philosophers and non‐philosophers. Mind & Language, 27(2), 135–153.
Schwitzgebel, E., & Rust, J. 2014. The moral behavior of ethics professors: Relationships among self-
reported behavior, expressed normative attitude, and directly observed behavior. Philosophical
Psychology, 27(3), 293–327.
Seligman, M., Railton, P., Baumeister, R., & Sripada, C. 2013. Navigating into the future or driven by
the past. Perspectives on Psychological Science, 8(2), 119–141.
Velleman, J. D. 2000. On the aim of belief. In J. D. Velleman (Ed.), The possibility of practical reason,
244–282. Oxford: Oxford University Press.
Velleman, J. D. 2015. Morality here and there. Whitehead Lecture at Harvard University.
Vohs, K. D., & Baumeister, R. (Eds.). 2011. Handbook of self-regulation: Research, theory, and applica-
tions. New York: Guilford Press.
Vohs, K. D., & Heatherton, T. F. 2000. Self-regulatory failure: A resource-depletion approach.
Psychological Science, 11(3), 249–254.
Wagenmakers, E. J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Albohn, D. N.,
et al. 2016. Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on
Psychological Science, 11(6), 917–928.
Waldron, J. 2014, 9 October. It’s all for your own good [review of Nudge]. New York Review of Books. http://
www.nybooks.com/articles/archives/2014/oct/09/cass-sunstein-its-all-your-own-good/.
Wallace, D. F. 1999. Brief interviews with hideous men. New York: Back Bay Books.
Wallace, D. F. 2006. How Tracy Austin broke my heart. In Consider the lobster. New York: Little, Brown.
Wallace, D. F. 2011. The pale king. New York: Back Bay Books.
Wansink, B., Painter, J. E., & North, J. 2005. Bottomless bowls: Why visual cues of portion size may
influence intake. Obesity Research, 13(1), 93–100.
Washington, N., & Kelly, D. 2016. Who’s responsible for this? Implicit bias and the knowledge con-
dition. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy: Vol. 2, Moral responsibil-
ity, structural injustice, and ethics, 11–36. Oxford: Oxford University Press.
Watson, G. 1975. Free agency. Journal of Philosophy, 72(8), 205–220.
Watson, G. 1996. Two faces of responsibility. Philosophical Topics, 24(2), 227–248.
Webb, T., & Sheeran, P. 2008. Mechanisms of implementation intention effects: The role of goal
intentions, self-efficacy, and accessibility of plan components. British Journal of Social
Psychology, 47, 373–379.
Webb, T., Sheeran, P., & Pepper, A. 2010. Gaining control over responses to implicit attitude
tests: Implementation intentions engender fast responses on attitude-incongruent trials.
British Journal of Social Psychology, 51(1), 13–32.
Wegner, D. 1984. Innuendo and damage to reputation. Advances in Consumer Research, 11, 694–696.
Wegner, D., Schneider, D., Carter, S., & White, T. 1987. Paradoxical effects of thought suppression.
Journal of Personality and Social Psychology, 53(1), 5–13.
Weisbuch, M., Pauker, K., & Ambady, N. 2009. The subtle transmission of race bias via televised
nonverbal behavior. Science, 326(5960), 1711–1714.
Wennekers, A. M. 2013. Embodiment of prejudice: The role of the environment and bodily states.
Doctoral dissertation, Radboud University.
West, R. F., Meserve, R., & Stanovich, K. 2012. Cognitive sophistication does not attenuate the bias
blind spot. Journal of Personality and Social Psychology, 103(3), 506–519.
Wheatley, T., & Haidt, J. 2005. Hypnotic disgust makes moral judgments more severe. Psychological
Science, 16(10), 780–784.
Whitehead, A. N. 1911. An introduction to mathematics. New York: Holt.
Wicklund, R. A., & Gollwitzer, P. M. 1982. Symbolic self-completion. New York: Routledge.
Wieber, F., Gollwitzer, P., Gawrilow, C., Odenthal, G., & Oettingen, G. 2009. Matching principles in
action control. Unpublished manuscript, University of Konstanz.
Wiers, R. W., Eberl, C., Rinck, M., Becker, E. S., & Lindenmeyer, J. 2011. Retraining automatic
action tendencies changes alcoholic patients’ approach bias for alcohol and improves treat-
ment outcome. Psychological Science, 22(4), 490–497.
Williams, B. 1981. Moral luck: Philosophical papers, 1973–1980. Cambridge: Cambridge
University Press.
Wilson, T. D., Lindsey, S., & Schooler, T. Y. 2000. A model of dual attitudes. Psychological Review,
107, 101–126.
Winkielman, P., Berridge, K. C., & Wilbarger, J. L. 2005. Unconscious affective reactions to masked
happy versus angry faces influence consumption behavior and judgments of value. Personality
and Social Psychology Bulletin, 31(1), 121–135.
Witt, J. K., & Proffitt, D. R. 2008. Action-specific influences on distance perception: A role for
motor simulation. Journal of Experimental Psychology: Human Perception and Performance, 34,
1479–1492.
Witt, J. K., Proffitt, D. R., & Epstein, W. 2004. Perceiving distance: A role of effort and intent.
Perception, 33, 570–590.
Wittenbrink, B., Judd, C. M., & Park, B. 2001. Spontaneous prejudice in context: Variability in auto-
matically activated attitudes. Journal of Personality and Social Psychology, 81(5), 815.
Wittgenstein, L. 1966. Lectures and conversations on aesthetics, psychology and religious belief.
Oxford: Blackwell.
Wolitzky-Taylor, K. B., Horowitz, J. D., Powers, M. B., & Telch, M. J. 2008. Psychological approaches
in the treatment of specific phobias: A meta-analysis. Clinical Psychology Review, 28,
1021–1037.
Woodward, J., & Allman, J. 2007. Moral intuition: Its neural substrates and normative significance.
Journal of Physiology–Paris, 101(4), 179–202.
Wright, C. 2014. On epistemic entitlement. II: Welfare state epistemology. In E. Zardini & D. Dodd
(Eds.), Scepticism and perceptual justification, 213–247. New York: Oxford University Press.
Wu, W. 2014a. Against division: Consciousness, information, and the visual streams. Mind &
Language, 29(4), 383–406.
Wu, W. 2014b. Attention. New York: Routledge.
Wu, W. 2015. Experts and deviants: The story of agentive control. Philosophy and Phenomenological
Research, 93(1), 101–126.
Wylie, C. D. 2015. “The artist’s piece is already in the stone”: Constructing creativity in paleontology
laboratories. Social Studies of Science, 45(1), 31–55.
Xu, K., Nosek, B., & Greenwald, A. G. 2014. Psychology data from the race implicit association test
on the project implicit demo website. Journal of Open Psychology Data, 2(1), e3. doi: http://
doi.org/10.5334/jopd.ac.
Yancy, G. 2008. Black bodies, white gazes: The continuing significance of race. Lanham, MD: Rowman
& Littlefield.
Yarrow, K., Brown, P., & Krakauer, J. 2009. Inside the brain of an elite athlete: The neural processes
that support high achievement in sports. Nature Reviews: Neuroscience, 10, 585–596.
Zajonc, R. B. 1984. On the primacy of affect. American Psychologist, 39, 117–123.
Zeigarnik, B. 1927/1934. On finished and unfinished tasks. In W. D. Ellis & K. Koffka (Eds.), A
source book of Gestalt psychology, 300–314. Gouldsboro, ME: Gestalt Journal Press.
Zheng, R. 2016. Attributability, accountability and implicit attitudes. In M. Brownstein & J. Saul
(Eds.), Implicit bias and philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics,
62–89. Oxford: Oxford University Press.
Zhou, H., & Fishbach, A. 2016. The pitfall of experimenting on the web: How unattended selective
attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social
Psychology, 111(4), 493–504.
Zimmerman, A. 2007. The nature of belief. Journal of Consciousness Studies, 14(11), 61–82.