Moral Psychology
Advances in Experimental Philosophy
Series Editor:
James R. Beebe, Associate Professor of Philosophy, University at Buffalo, USA
Editorial Board:
Joshua Knobe, Yale University, USA
Edouard Machery, University of Pittsburgh, USA
Thomas Nadelhoffer, College of Charleston, USA
Eddy Nahmias, Neuroscience Institute at Georgia State University, USA
Jennifer Nagel, University of Toronto, Canada
Joshua Alexander, Siena College, USA
Edited by
Hagop Sarkissian and Jennifer Cole Wright
www.bloomsbury.com
Hagop Sarkissian and Jennifer Cole Wright have asserted their right under the Copyright,
Designs and Patents Act, 1988, to be identified as the Editors of this work.
Notes on Contributors
Editors
Contributors
and intuitive, emotional reactions. Yoel has applied his research to study how
intuition affects our choices, how our moral beliefs determine our actions and
judgment of others, and how the emotion of disgust can predict our moral and
political attitudes.
Jesse Graham received his PhD (Psychology) from the University of Virginia
in 2010, a Master's (Religious Studies) from Harvard University in 2002, and a
Bachelor's (Psychology) from the University of Chicago in 1998. He is currently
assistant professor of psychology at the University of Southern California, where
he hovers menacingly over the Values, Ideology, and Morality Lab. His research
interests are in moral judgment, ideology, and implicit social cognition.
Kevin Uttich received his PhD in psychology from the University of California,
Berkeley, in 2012 after receiving a BA in psychology from the University of
Chicago. Dr Uttich's research examines issues at the intersection of social
cognition and moral psychology with a particular focus on how people
understand prescriptive norms. His work has received the poster prize from
the Society for Philosophy and Psychology and has been recognized by the
publication Science as an Editor's Choice.
The second question is: In what ways are the answers to the first set of questions
philosophically interesting? Do they inform philosophical theory, ensuring
that our theoretical conceptions of morality properly line up with the empirical
data? Do they help us adjudicate between competing philosophical views? Do
they raise problems for long-standing philosophical commitments? Of course,
this second question presupposes that what we learn about actual moral
functioning is meaningful to philosophical theorizing, something that is not
universally accepted but, as evidenced by increasing interdisciplinary activity,
is nonetheless maintained by a substantial group of researchers. Examples of
the fruitfulness of this mindset can be found not only in this volume, but also in
other interdisciplinary volumes on moral psychology published in recent years,
such as those edited by Walter Sinnott-Armstrong (2007), Darcia Narvaez and
Daniel K. Lapsley (2009), Thomas Nadelhoffer et al. (2010), and John Doris and
the Moral Psychology Research Group (2012). And while philosophers have
much to gain through their collaborations with psychologists and their use
of empirical data to inform their projects, psychologists' empirical endeavors
likewise can only benefit from an increased awareness of and appreciation for
the theoretical models arising out of philosophical work on moral normativity,
moral epistemology, and metaethics. The rich history of philosophical reflection
on these questions can serve as grounds to generate new hypotheses for testing
and new avenues for research.
Given this interdisciplinary interaction, we see moral psychology as a sort
of role model for the more recent developments in what people are calling
experimental philosophy, which is, broadly speaking, the use of empirical
and experimental methods to investigate philosophical questions. The early
(turn of this century) focus on using empirical methods to largely challenge
or undermine philosophical theories (e.g., the situationist critique of virtue
ethics) and philosophical methodology (e.g., the role of intuitions in theory
formation) has been labeled the "negative program." However, philosophers
(and others) have increasingly begun to shift the focus of experimental
philosophy to other, more constructive endeavors (see Sytsma and Livengood,
in press, for a discussion of what they call the developing "positive,"
"naturalist," and "pragmatist" programs). These endeavors embrace the
exploration of people's intuitions, judgments, and cognitive processes more
broadly in order to clarify what they are, when they count as philosophical
Experimental Moral Psychology: An Introduction
evidence, and also more generally what they reveal about human cognition,
language, and behavior. It is here that we think moral psychology has much to
offer as a role model.
Moral persons
both do what's best for me and help other people; I have to choose. (Indeed,
much ink has been spilled in the history of moral philosophy trying to defend
the view that it actually is in our self-interest to be moral, which might be seen
as an attempt to accommodate these competing tendencies.) Thus, it is not
surprising that we tend to take people's expressions of community values (or
warmth) as indicators of their moral goodness, while taking any expressions
of agentic values (or competence) as indicators of their self-interested
motives.
Frimer and Oakes argue, however, that these two value systems do not
necessarily have to be in conflict. Indeed, for some individuals, namely
recognized moral exemplars, they are not. Frimer and Oakes's research
provides evidence that the chasm between people's agentic and community
values decreases along the developmental trajectory until, at the limit, it may
disappear entirely. Morally good people have learned to synthesize their agentic
and community values such that they are able to pursue both simultaneously;
promoting their own well-being becomes effectively linked with the well-being
of others. And as their intellectual and social skills become oriented toward and
focused on the same ends, they also become more effective moral agents, a
finding consonant with Valdesolo's point that warmth and competence are
both important for moral goodness.
In ancient Greece, Aristotle described the virtuous person as someone
who elicits others' admiration and respect, and a notable feature of classical
Confucian virtue ethics is its emphasis on the magnetic qualities of capable
moral leaders: their ability to gain the assent and loyalty of others in an
effortless fashion. It is natural to assume that virtuous conduct is something
that is universally recognized and esteemed, especially since there never seems
to be enough of it going around. However, according to Gabriela Pavarini and
Simone Schnall, individuals exemplifying moral virtue are sometimes denied
feelings of approbation and approval. Indeed, they can instead be subjected to
ridicule and censure, or otherwise disparaged.
Pavarini and Schnall highlight the paradoxical nature of people's reactions to
displays of genuine virtue by others. As they point out, morally good behavior
rarely occurs in a social vacuum, and being the witness to another's morally
good deeds has implications beyond the immediate act itself. On the one hand,
witnessing displays of virtue can lead to a sense of elevation, and a desire to
Advances in Experimental Moral Psychology
praise, reward, and cooperate or associate with those who act virtuously.
Morally good people elicit elevation, respect, and gratitude. Through their
actions they can raise the standards of an entire group, spurring individuals
to greater levels of prosociality, and reminding them of the resources that
lie untapped within them that could be marshaled toward improving their
communities and making the world a better place. Moral exemplars can renew
in others a belief that they can shape the world around them in positive ways.
Yet, on the other hand, such elevation of moral standards can also be
perceived as a threat to one's integrity and sense of self-worth, and to one's
standing among one's peers; after all, one might be seen by others as falling
short of expectations when someone else has just shown that morally excellent
behavior is within reach. When one is threatened in this way, the reaction
may be to denigrate the moral exemplar, or to explain away her good deeds as
products of self-interest, situational demands, or other extraneous factors. In
short, moral goodness can engender envy, hostility, and suspicion just as easily
as it can inspire awe, gratitude, and respect. At its extreme, the motivation
to avoid comparison with the moral exemplar and/or save face can lead to
hatred and a desire to belittle, demean, and even destroy the source of this
reputational threat.
Pavarini and Schnall observe (as Valdesolo does) that qualities of warmth
and kindness make us feel safe, protected, uplifted, and grateful, while qualities
of competence or resourcefulness make us feel challenged, inadequate, and
threatened. We feel united with those who display warmth (and perceive
more warmth in those with whom we are united) and feel in competition
with those who display competence. Given Frimer and Oakes's suggestion
that moral exemplars come to exemplify traits related to both warmth and
competence, it would make sense that people could react either positively or
negatively to their example depending upon their relationship to the exemplar
and which of her traits (her warmth or competence) is most salient to them at
the time.
However we respond to and evaluate morally good people, we commonly
think of the virtues they display as residing within them, constituting part of
their identities. Indeed, philosophers have long emphasized that laudable traits
of character are important features of moral persons. Mark Alfano's chapter
raises a skeptical worry about whether there is such a thing as a good person
separate from the social and asocial environments in which people display
good behavior. Rather than argue (as many have) that moral agents' goodness
is keyed to their possession of stable internal character traits that manifest
in displays of virtuous behavior across diverse contexts, Alfano argues that
we need to rethink the nature of character itself. Specifically, we need to stop
thinking about it as a disposition internal to the moral agent and start thinking
of it as a trifold, relational property, composed of the interactions between
(a) a person's internal states/capacities, (b) the social environment in which
the person is embedded, and (c) a variety of asocial features of the physical
environment (e.g., noise levels, smells, lighting) that impact the person's
thoughts, feelings, and behavior in specific ways.
The upside to this view is that what constitutes a morally good person is not
just the internal states/capacities that she possesses, but also the sort of social
and asocial environment she finds herself in and/or has taken part in creating.
This means that when it comes to moral development, as much (if not more) of
the burden falls on the world around developing moral agents as it does on the
agents themselves, for the environment figures into the very structure of moral
character they possess. Individuals themselves are not bearers of virtue. People
with the right sorts of internal states/capacities represent just one leg in the
trifold relation; if they are stuck within a morally corrupt social environment,
and/or an asocial environment filled with hardship, danger, or distractions,
their virtue will be incomplete. This highlights an interesting link between
Frimer and Oakes's and Alfano's contributions, namely, that as people's values
and motives become more integrated and synchronized, so do their social/asocial environments.
This may be because the synchronization of their values results in the active
selection and creation of social and asocial environments that promote and
protect those values (environments that positively reflect the rewards and
benefits of their chosen lifestyle), allowing for increased dispositional stability
and expression. (Of course, exemplars must often construct and integrate such
environments where they do not previously exist.)
Finally, if something along these lines is correct (viz., that virtue consists
in features extrinsic to individuals to some considerable extent), then it
suggests that certain kinds of environments, namely those in which you
are socially rewarded for thinking hard about morality but not necessarily for
behaving morally, are not going to be enough to generate virtue. And this
Moral groundings
Thus far, the contributions we've canvassed have largely focused on the nature
of moral persons and moral exemplars: how they are constituted, how they
are motivated, and how others evaluate/respond to them. But we haven't said
much about the nature of morality itself. Here, several of the contributions
help to enrich our understanding of the psychological mechanisms that may
constitute part of moral life, as well as the standing that we take morality
to have.
A widely experienced facet of morality is its importance and weightiness
relative to other evaluative practices and domains. We might disagree with
others across a wide range of domains, including matters of convention and
aesthetics. However, disagreement about, say, standards of physical beauty
seldom seems as pressing or compelling to us as disagreements about basic
moral issues such as racial equality, reproductive rights, or distributive justice.
When it comes to these latter topics, we tend to think that we are arguing over
issues of central importance to human life, issues that must ultimately admit
of correct answers even in the face of entrenched disagreement. Similarly,
when we condemn certain acts as right or wrong, virtuous or depraved, it
seems as though these judgments have a certainty, an objectivity, that eludes
judgments concerning matters of convention or taste. Why is this so?
Evolutionary accounts have been featured in both the Valdesolo and
Pavarini and Schnall contributions above. For Valdesolo, the promise of
immediate gains and the need to properly understand others' dispositions and
intentions toward us may have fostered a favorable disposition toward those
who are kind and warm as opposed to persistent and focused. For Pavarini
and Schnall, our divergent reactions to moral exemplars can be understood as
serving very different evolutionary demands: to cooperate and build
cohesive communities on the one hand, and to maintain one's reputation and
status within the group on the other. Other contributions also link current
moral psychological tendencies to evolved capacities.
For example, Yoel Inbar and David Pizarro note the widespread role that
disgust reactions play in our moral lives, where moral transgressions evoke not
only moral condemnation but also visceral feelings of repulsion. In particular,
they focus on research suggesting both that disgust can arise as a consequence
of making certain types of moral appraisals (e.g., when confronted with taboo
behavior or unfair treatment) and that disgust can amplify moral judgments
when it is elicited in the formation of a moral evaluation. Yet why should this
be so? Why should disgust be recruited in moral judgment at all? Inbar and
Pizarro argue that disgust is part of an evolved general motivational system
whose function is to distance us from potential threats, namely disease-bearing
substances and individuals. Disgust was also co-opted by higher-order
systems to serve as a warning mechanism against socially and morally
prohibited behaviors, as well as any potential contaminant or contagion.
Sexual acts, food taboos, physical abnormalities or deformities, individuals
from strange or foreign cultures: each of these triggers our core behavioral
immune system, which serves to create distance between the individual and
these potential sources of physical and moral threat. One upshot of this is that
moral transgressions often repulse us in ways that other sorts of transgressions
don't, eliciting from us a feeling of undeniable wrongness. But if Inbar and
Pizarro are correct, then it leads us inexorably to a question: Should we trust
our moral judgments when they involve disgust reactions? Or should we
instead recognize such responses as likely to be biased or erroneous, pushed
around by a mechanism that was shaped by forces not designed to reliably
track morally relevant considerations?
Daniel Kelly, in his contribution, argues for the latter claim. Disgust cannot,
according to Kelly, be treated as a reliable indicator of moral transgressions; it
is overly sensitive to cues related to its older and more primitive function of
protecting the individual against poisons and contaminants. What is more, it
is a system for which false positives are much more advantageous than false
negatives, so we are prone to find things disgusting even when nothing actually
disgust-worthy is present. And given the particularly phobic response that
disgust generates (the experience of nausea and/or the intense desire to
remove oneself from the presence of the triggering stimulus), Kelly worries
that disgust has the potential to cause more harm (particularly in the form of
outgroup denigration and rejection of physical/cultural abnormalities) than
good, even when it is on track.
The general form of this debunking story is familiar: psychological
mechanisms that are involved in moral judgment are sensitive to irrelevant
considerations and should therefore be viewed with suspicion. However,
while Kelly acknowledges that many of the psychological mechanisms that are
recruited in moral life will have an evolutionary past that may render them
suspect, he argues that each particular mechanism needs to be examined on
its own; there can be no straightforward debunking of the entirety of moral
psychology from the basic fact that many of the psychological mechanisms
underwriting it were shaped by forces and pressures whose chief function was
not to track moral truth.
As noted, feelings of disgust can strengthen one's moral judgments, rendering
them more severe or certain in character. However, this tendency is not limited
to judgments that have obvious connections with disgust. Put another way,
we need not experience disgust in particular to feel as though certain moral
transgressions are obviously (perhaps even objectively) wrong. Whether we
are reflecting on general moral principles, more specific moral rules, or even
judgments about particular cases, it is a familiar feature of moral cognition
to feel as though it is imbued with objectivity, that is, with a commitment to
moral questions having right and wrong answers independent of any given
person's or society's beliefs or practices.
In her contribution, Linda Skitka points out that our moral attitudes play an
important role in generating this phenomenology. Moral attitudes are stronger,
more enduring, and more predictive of a person's behavior than other attitudes
or preferences they might hold. Moral attitudes are distinguished by the fact
that they are highly resistanteven imperviousto other desires or concerns,
and have the force of imperatives for those who hold them. They have authority
independent of others' opinions or social conventions, and have particularly
strong ties to emotions. People experience moral attitudes, convictions, or
mandates as tracking objective features of the world that apply to all universally
rather than subjective facts about themselves. Such convictions are inherently
motivating and accompanied by strong affect.
Indeed, Skitka's contribution creates an interesting wrinkle in our thinking
about the morally good person. It is natural to think that morally good people
have strong moral convictions, that they are willing to stand on principle
and fight for what they believe is morally right. Yet, according to Skitka,
this represents a potential dark side to our moral psychology. Strong moral
convictions come with a price. Specifically, stronger moral convictions are often
accompanied by intolerance of different moral beliefs, values, and practices;
the stronger the conviction, the greater the intolerance. Indeed, strong moral
convictions (more so than any other strong attitudes) predict people's lack of
tolerance for different cultures, their unwillingness to interact with and help
people with moral beliefs different from their own, their propensity to engage
in vigilante justice against perceived wrongdoing, and their imperviousness
to clear social disagreement with their views. In sum, Skitka's contribution is a
cautionary tale for moral psychologists; while we tend to focus on the positive
aspects of moral development, there are pitfalls as well, including intolerance
to differences and insensitivity to the rich complexity of moral life. We must
keep in mind that morally good people walk a fine line between integrity and
conviction on the one hand and intolerance or dogmatism on the other.
Most of us have such strong moral experiences, supported by moral
attitudes that seem particularly compelling, unshakeable, and rooted in some
set of objective moral facts about the world around us. Nevertheless, we may
wonder whether there are such things as moral facts and, if so, whether they are
actually objective in ways suggested by our moral attitudes, that is, whether
they are independent from any person's or any group's beliefs, values, or ways
of life. Metaethicists have long sought to answer this question. Do moral
judgments refer to objective moral facts or do they merely express subjective
moral attitudes? Do participants in moral disputes argue over claims that can
aspire to truth, or are they merely trading opinions with no objective basis? In
pursuit of these questions, metaethicists have sought to capture the essence
of morality as reflected in ordinary practice: how we as moral creatures
experience morality in our daily lives. Recent experimental work has helped
to reveal the mechanisms that may underlie our ordinary commitments to
objectivism about morality, and how they might be related to other aspects of
our psychological lives.
James Beebe points out in his chapter that while people tend to attribute
more objectivity to moral issues than other social or personal issues, their
beliefs concerning the objectivity of morality do not stand alone. Rather, they
are affected by a number of rather surprising factors. For example, people seem
to be sensitive to the perceived existence (or absence) of consensus concerning
moral issues in judging whether there is an objective truth underlying them.
Moreover, Beebe discovered that people tend to treat issues as more objective
when they consider them concretely, for example, as being contested by
Measuring morality
Among the challenges faced by Rust and Schwitzgebel is the question of what behaviors
to measure, that is, how to operationalize moral goodness (or, in this case, its
behavioral expression). They chose a wide variety of measures, everything from
the extent to which people display courtesy and engage in free-riding behavior
at conferences to peer evaluations and self-report measures of behavior. This
range of different kinds of measurements is useful because it allows for a sort
of triangulation on the subject of interest, in this case, the degree to which
ethics scholars' moral behavior differs (or not) from that of other academics. But
their study raises the very important question of how best to operationalize
and measure moral cognition and behavior. Alfano's model encourages
researchers to look beyond the person herself to the entire context of moral
behavior (including social/asocial environments) when operationalizing and
measuring virtue, while Valdesolo's encourages researchers to expand the
kinds of virtues (caring/other-oriented vs. competence/self-oriented) they
include in an assessment of people's moral psychology. Finally, Quintelier
et al. raise yet another important methodological consideration, arguing
that those researching the metaethical commitments of ordinary folk need to
be careful to specify (among other things) what type of relativism they are
investigating, or whose perspective is being taken into account when assessing
the objectivity of moral claims. This serves as just one important example
of how collaboration between philosophers and psychologists would aid in
the development of methodological approaches that are both scientifically
rigorous and appropriately sensitive to important philosophical distinctions.
Conclusion
The papers in this volume all speak to the vibrancy of research in moral
psychology by philosophers and psychologists alike. Enduring questions
concerning the nature of moral persons, the motivations to become moral,
how to measure morality, and even the status and grounding of morality itself
are each the focus of considerable research activity. This activity is driven both
by theoretical commitments and by a sensitivity to empirical data that might
shed light on the subject. We've highlighted the ways in which the research
included here informs (and in some cases problematizes) our understanding
of morality.
And while the contributions to this volume fall fairly evenly across the
disciplines of philosophy and psychology, we hope it will be apparent that, at
some level, these disciplinary categories cease to be of interest on their own.
The questions at the heart of this research program have long histories in both
disciplines, and the methodological boundaries between them have begun to blur. Philosophers
now use experimental methods, and experimental psychologists draw from
(and contribute to) philosophical theorizing. The field is expanding, and we
are delighted to mark some of its direction and vigor with this volume.
References
Doris, J., and the Moral Psychology Research Group (2010). The Moral Psychology Handbook. Oxford: Oxford University Press.
Nadelhoffer, T., Nahmias, E., and Nichols, S. (2010). Moral Psychology: Historical and
Contemporary Readings. Malden, MA: Wiley-Blackwell.
Narvaez, D., and Lapsley, D. K. (2009). Personality, Identity and Character:
Explorations in Moral Psychology. New York: Cambridge University Press.
Sinnott-Armstrong, W. (2007a). Moral Psychology: The Evolution of Morality:
Adaptations and Innateness (Vol. 1). Cambridge, MA: Bradford Book.
Sinnott-Armstrong, W. (2007b). Moral Psychology: The Cognitive Science of Morality: Intuition and
Diversity (Vol. 2). Cambridge, MA: Bradford Book.
Sinnott-Armstrong, W. (2007c). Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders,
and Development (Vol. 3). Cambridge, MA: Bradford Book.
Sytsma, J., and Livengood, J. (in press). The New Experimental Philosophy:
An Introduction and Guide. Peterborough, Ontario, Canada: Broadview Press.
Part One
Moral Persons
1
the actor; in other words, behavior that is more likely to bring about behavior
that profits the self.
The valence of the emotional responses to targets categorized along these
dimensions supports this view. The stereotype content model posits specific
sets of emotional responses triggered by the various combinations of these
two dimensions and, in thinking about their relevance to moral judgments, it
is instructive to examine the content of these emotions. The perception of both
warmth and competence in targets elicits primarily admiration from others
(Cuddy et al. 2008). These individuals are evaluated as having goals that are
compatible with those of the perceiver, and they have the skills requisite to
help perceivers achieve those goals. In other words, individuals high in both
warmth and competence are our most socially valued interaction partners. The
perception of warmth without competence elicits pity, competence without
warmth triggers envy, and the absence of both triggers contempt and disgust
(Cuddy et al. 2008).
The organization of these emotional responses with regard to self- versus
other-benefiting actions also squares nicely with recent research showing
the paradoxical nature of perceivers' responses to moral behavior on the
part of individuals perceived to have traits that seem similar to warmth or
competence (Pavarini and Schnall 2014). Admiration is elicited in response
to the moral behavior of warm (i.e., low competition) others, while the same
behavior by those considered to be low in warmth (i.e., high competition)
elicits envy.
Indeed, the fact that perceivers discount the moral relevance of competence
traits relative to warmth traits could be a simple function of an ingroup bias.
We have negative emotional responses toward outgroup competent others,
because they might not be favorably oriented toward us. The bias against the
importance of general competence in judgments of moral character, compared
to general warmth, seems to be a reflection of a self-interested motivation to
maximize the likelihood of resource acquisition. Evidence in line with this
interpretation shows that perceivers value competence more positively in
close others (a close friend) compared to less close others (distant peers).
Though warmth judgments still carry more weight in predicting positivity
toward others compared to competence, competence only becomes relevant
to character when it is perceived to have consequences for the self (Abele and
The idea that groups flourish when individuals are motivated to pursue
their own interests is not new. Adam Smith argued in the Wealth of Nations
for the pursuit of immediate self-interest as the key to flourishing societies. His
theorizing on the power of free markets suggests that it is precisely the drive for
self-interest through which societies advance. This idea was captured in Smith's
metaphor of the invisible hand: collective well-being is best achieved by groups
of individuals who pursue their own advancement without concern for others.
The engine of this process is specialization. Focusing individuals' efforts on
skills/domains in which they have a comparative advantage ultimately benefits
a community by maximizing the collective capabilities of group members,
allowing for a potentially wider and richer distribution of resources as well as a
competitive advantage relative to other groups. Consequently, cultivating traits
that foster such an end may ultimately benefit the community by enhancing
the collective competence of a population.
This argument assumes that collective value is, at least in part, created by
self-focused motivational states associated with competence-based traits.
Indeed, I largely agree with Smith's sentiment that "by pursuing his own
interest he frequently promotes that of the society more effectually than when
he really intends to promote it" (Smith 1776/1937).
That said, societal flourishing cannot be achieved through these kinds of
motivations alone. Specialization only pays off when a collective defined by
the free-flowing exchange of resources has been established. In other words,
societies flourish when composed of individuals who (a) maximize their
individual potential in terms of skills/abilities and (b) are willing to exchange
those resources with others. What drives this willingness? Smith initially offers
a strong answer: it is not from the benevolence of the butcher, brewer, or baker
that we should expect our dinner, but from a regard for their self-interest. The
argument put forward in this chapter, however, suggests an alternative. It is
not solely through regard to self-interest that people in groups should expect
the beneficence of others; it is also through the benevolence of the butcher,
brewer, and baker. Societies composed of the warm and the competent should
ultimately thrive, and structural and legal constraints should reflect the
collective value inherent in both traits.
A critical insight provided by sociobiology has been the adaptive value of
other-interested drives: those that cultivate perceptions of warmth in others.
28 Advances in Experimental Moral Psychology
Conclusion
This chapter serves as a call for increased attention to the processes
underlying the evaluation of others' competence in moral judgments of them
and, consequently, renewed attention to the role of such traits in theories
of morality more generally. Research on moral cognition has focused almost
exclusively on traits related to warmth (kindness, altruism, trustworthiness)
and has paid relatively little attention to how we assess others' capacities to
achieve their goals. These self-focused traits (discipline, focus,
industriousness) have long been considered relevant to moral character by virtue
ethicists, and their absence from psychological theories of person perception is,
at the very least, worthy of more direct empirical attention.
Note
References
Abele, A. E., and Wojciszke, B. (2007). Agency and communion from the perspective
of self versus others. Journal of Personality and Social Psychology, 93(5), 751–63.
Aquino, K., and Reed, A. (2002). The self-importance of moral identity. Journal
of Personality and Social Psychology, 83(6), 1423–40.
Aristotle (4th Century, B.C.E./1998). The Nicomachean Ethics. Oxford: Oxford
University Press.
Cuddy, A. J., Fiske, S. T., and Glick, P. (2008). Warmth and competence as universal
dimensions of social perception: The stereotype content model and the BIAS map.
Advances in Experimental Social Psychology, 40, 61–149.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and
intentional analyses in moral judgment. Cognition, 108(2), 353–80.
Cushman, F., Dreber, A., Wang, Y., and Costa, J. (2009). Accidental outcomes guide
punishment in a trembling hand game. PLoS ONE, 4(8), e6699.
Dent, N. J. H. (1975). Virtues and actions. The Philosophical Quarterly, 25(101),
318–35.
Fiske, S. T., Cuddy, A. J., and Glick, P. (2007). Universal dimensions of social
cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
Frimer, J. A., Walker, L. J., Dunlop, W. L., Lee, B. H., and Riches, A. (2011). The
integration of agency and communion in moral personality: Evidence of
enlightened self-interest. Journal of Personality and Social Psychology, 101(1),
149–63.
Frimer, J. A., Walker, L. J., Riches, A., Lee, B., and Dunlop, W. L. (2012). Hierarchical
integration of agency and communion: A study of influential moral figures.
Journal of Personality, 80(4), 1117–45.
Gray, H. M., Gray, K., and Wegner, D. M. (2007). Dimensions of mind perception.
Science, 315(5812), 619.
Gray, K., Young, L., and Waytz, A. (2012). Mind perception is the essence of morality.
Psychological Inquiry, 23(2), 101–24.
Grube, G. M. A., and Reeve, C. D. C. (1992). Plato: Republic. Indianapolis, IN: Hackett.
Pavarini, G., and Schnall, S. (2014). Is the glass of kindness half full or half empty?
In J. Wright and H. Sarkissian (eds), Advances in Experimental Moral Psychology:
Affect, Character, and Commitments. Continuum Press.
Phalet, K., and Poppe, E. (1997). Competence and morality dimensions of national
and ethnic stereotypes: A study in six eastern-European countries. European
Journal of Social Psychology, 27(6), 703–23.
Pizarro, D. A., and Tannenbaum, D. (2011). Bringing character back: How the
motivation to evaluate character influences judgment of moral blame. In M.
Mikulincer and P. Shaver (eds), The Social Psychology of Morality: Exploring the
Causes of Good and Evil. Washington, DC: APA Press.
Rosenberg, S., Nelson, C., and Vivekananthan, P. S. (1968). A multidimensional
approach to the structure of personality impressions. Journal of Personality and
Social Psychology, 9(4), 283–94.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical
advances and empirical tests in 20 countries. Advances in Experimental Social
Psychology, 25, 1–65.
The Character in Competence 35
Sherman, N. (1989). The Fabric of Character: Aristotles Theory of Virtue (Vol. 6).
Oxford: Clarendon Press.
Smith, A. (1776/1937). The Wealth of Nations. New York: Modern Library, p. 740.
Todorov, A., Pakrashi, M., and Oosterhof, N. N. (2009). Evaluating faces on
trustworthiness after minimal time exposure. Social Cognition, 27(6), 813–33.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of
Biology, 46(1), 35–57.
Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–52.
Valdesolo, P., and DeSteno, D. (2011). Synchrony and the social tuning of
compassion. Emotion, 11(2), 262–6.
Willis, J., and Todorov, A. (2006). First impressions: Making up your mind after a
100-ms exposure to a face. Psychological Science, 17(7), 592–8.
Wojciszke, B., and Abele, A. E. (2008). The primacy of communion over agency and
its reversals in evaluations. European Journal of Social Psychology, 38(7), 1139–47.
2

Spoken Words Reveal Selfish Motives
While running for the US presidency in 2012, Mitt Romney made numerous
promises. To various audiences, he accumulated at least 15 major pledges of
what he would accomplish on his first day in office, should he be elected. These
included approving an oil pipeline, repealing Obamacare, sanctioning China
for unfair trading, submitting five bills to Congress, increasing oil drilling, and
meeting with Democratic leaders. By any realistic account, this collection of
pledges was unfeasible for a single day.1 Romney's ambitious avowals raise
the question: Is such unrealistic over-promising out of the ordinary? Perhaps
Romney's promises are revealing of the situational pressures that politicians
face when trying to appeal to voters. More broadly, perhaps most people,
politicians and the populace alike, regularly feign a desirable exterior to garner
social approval.
Then again, perhaps Romney's campaign vows are also revealing of
something specific about Romney's personality. When characterizing Romney's
policies after the primary elections, his advisor commented, "Everything
changes. It's almost like an Etch-A-Sketch. You kind of shake it up and restart
all over again" (Cohen 2012). In other words, Romney's team did not see his
pledges as necessitating congruent actions. Perhaps some people, like Romney,
are more duplicitous than other people.
In this chapter, we present a case for both possibilities: that feigning a moral
self is the norm and that some people do it more than others. We begin by
reviewing an apparent paradox: most people claim to be prosocial yet behave
selfishly.
Spoken Words Reveal Selfish Motives 37
Figure 2.1 Most people behave selfishly. The pie chart shows the percentage of people
who behave in three different ways in the dictator game (N = 20,813): the majority of
players (70%) selfishly take more money than they give, while minorities of people
prosocially give more than they take (13%) or even-handedly divide the money equally
between the self and partner (17%). Adapted from Engel (2011).
Figure 2.2 Most people claim to be moral. Bars represent the effect sizes of self-
reported inventories of traits (0.84), values (1.18), and goals (1.47), based on published
norms. Across all three psychological constructs, people see prosocial items as being
more self-descriptive than selfish items. Calculated from published norms in Trapnell
and Broughton (2006), Schwartz et al. (2012), and Schmuck et al. (2000), for traits,
values, and goals, respectively. By conventional standards, effect sizes are in the large
or very large ranges. Most people claim to be more prosocial than selfish.
The general impression from the social sciences (e.g., from the dictator
game)that people are selfishappears to contradict the general impression
from personality psychologythat people claim to be moral. We will make a
case that this apparent paradox is in fact revealing of a complexity (to put it
nicely) or a hypocrisy (to put it bluntly) built into human nature: a desire to
appear prosocial while behaving selfishly.
Moral hypocrisy
How do these disparate motives play out in human interaction? Daniel Batson's
coin-flipping experiments provide a compelling account of how, for most people,
morality is primarily for show (Batson et al. 1997). Research participants were
asked to choose one of two tasks to complete. One task offered participants
a chance to win money; the second was boring and rewardless. Individuals
were told that the next participant would have to complete whichever task
they did not choose. However, this participant would remain unaware of the
assignment process.
The experimenter described the situation facing participants as a kind of
moral dilemma, and explained that most people think the fair way to decide is
by flipping a coin. However, participants were not required to flip the coin, nor
were they required to adhere to the coin toss results should they choose to flip
the coin. Participants were then left alone in a room with a coin and a decision
to make. This set up a zero-sum situation in which one person's benefit meant
another person's loss (essentially a variation of the dictator game). Moreover,
the situation was effectively anonymous, with reputational forces stripped
away. What would participants do?
As we might expect from Figure 2.1, unapologetic selfishness was common
in these studies. Roughly half of participants never bothered to toss the coin.
Almost all (90%) of these participants immediately assigned themselves to the
favorable task. The other half of the sample, however, submitted to the fair
procedure of tossing a coin. Probabilistically speaking, about 50 percent of
these participants would have won the coin toss and assigned themselves to the
favorable task. However, 90 percent of participants who tossed a coin assigned
themselves to the favorable task, a full 40 percentage points more than chance
alone would predict.
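The size of that deviation can be checked with a short simulation; the group size below is a hypothetical stand-in, since Batson's exact samples varied:

```python
import random

random.seed(1)

# Under an honestly followed coin toss, each tosser self-assigns the
# favorable task with probability 0.5. Count how often a group of
# tossers reaches a 90 percent favorable rate by luck alone.
# (A group of 20 is an illustrative size, not Batson's actual n.)
n_tossers, trials = 20, 10_000
hits = 0
for _ in range(trials):
    favorable = sum(random.random() < 0.5 for _ in range(n_tossers))
    if favorable >= 0.9 * n_tossers:
        hits += 1
print(f"P(>= 90% favorable under a fair coin) ~ {hits / trials:.4f}")
```

Under a fair coin, a 90 percent favorable rate is vanishingly unlikely in a group of this size, which is what licenses reading the excess as hypocrisy rather than luck.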
Given the anonymity of their decision, participants who lost the coin
toss found themselves in a bind. They were caught between the desire to act
upon the result of the fair procedure and the desire to win some money. In
the end, most (80%) of the people who lost the coin toss ignored the results
and assigned themselves to the favorable task. Batson interpreted these data to
suggest that among the corpus of human motives are distinct desires to behave
selfishly and appear moral.
Flipping a coin to fairly adjudicate the assignment of unequal tasks is a display
of the desire to appear moral. Not only do we try to convince others of our good
nature, we try to convince ourselves too, through internalized and generalized
self-beliefs. In the Batson studies, participants completed a self-report measure of
their moral responsibility. The measure predicted whether participants would
flip the coin, but it did not predict how they actually behaved, meaning that
participants' self-proclamations were more closely linked to how they wanted
others to see them than to their private behavior (viz., assigning the task).
Having discussed the majority of participants in Batson's studies who
exhibited moral hypocrisy or unabashed selfish behavior, we are left with the
few laudable participants who assigned the other person to the good task, either
with or without a coin toss. With no one watching, this important minority
gave of themselves to benefit another person. Introducing these givers into
iterated economic games directly benefits fellow players (in terms of payouts)
and encourages generosity from them to one another (Weber and Murnighan
2008). Whereas these givers may appear to be self-sacrificial, over time, they
tend to reap rewards for their generosity. What sets them apart from the selfish
and the hypocrites is that their self-interest is interwoven with the interests of
those around them.
Givers are probably the sort of people one would prefer as a babysitter,
colleague, or government representative, given their honorable behavior.
Society would benefit from an efficient means of detecting this minority of the
population, which would also raise the likelihood of catching hypocrisy, thus
making prosocial behavior more attractive to would-be hypocrites. We next
explore whether and how moral psychology might develop a personality tool
that detects these honorable individuals.
The protagonist of the 1994 film Forrest Gump, played by Tom Hanks, was
not a bright man. He had an IQ of 75. He was inarticulate and had a poor
grasp of social rules, cues, and expectations. Yet his behavior was mysteriously
brilliant. To name a few of his accomplishments, he was a football star, taught
Elvis Presley to dance, was a Vietnam war hero, started a multimillion-dollar
company, and met three US presidents. Was Forrest smart? Or was he stupid,
as his IQ and peers suggested? When asked directly, Forrest retorted, "Stupid is
as stupid does." In other words, behavior, not thoughts or words, is the true
measure of a person.
Forrest's ontological stance coincides with the general feeling in social
psychology: the best way to know a person is by observing their actions, not
their words. A person is as a person does, not as he or she claims. This assertion
may be grounded in the notion that people have poor insight about the causes
of their own behavior (Nisbett and Wilson 1977). The self-congratulatory
impression emerging from self-reports (see Figure 2.2) in conjunction with
self-serving behavior in economic games (see Figure 2.1) might seem to add
to the skepticism.
We believe that this degree of skepticism about the validity and utility of
self-reports in understanding behavior is overly dismissive. Reports from the
person (self-report inventories and projective measures, combined) can offer
a reasonably accurate picture of human nature and improve predictions of
behavior. The key to prediction is to expand the toolset beyond inventories.
Objectivity
Researchers may be wary of projective methods because science demands
objective, replicable measurements with researcher bias minimized. The
moment an interview begins, an avalanche of confounding factors compromises
the validity of the data. Among these are the interviewer's personal views,
preferences, and knowledge of the status of the individual.
Expedience
Conversely, researchers may be attracted to self-report methods owing to
expedience. Self-report measures require few resources, can be collected online
or from groups of participants at the same time, and can be analyzed the same
day. In the current era, expedience is a prerequisite to feasibility. In contrast,
interviewing, transcribing, and coding require a significant resource investment.
earlier life, or what meaning those stories hold in the present. Additionally,
personal stories contain important quantifiable information that is not
accessible via self-report questionnaires.
To illustrate this point in the moral domain, we consider whether personality
measures can distinguish bona fide moral heroes from the general population.
Walker and Frimer (2007) studied the personalities of 25 recipients of the
Caring Canadian Award, a national award for sustained prosocial engagement.
The authors also recruited a set of demographically matched comparison
individuals, drawn from the community. All participants completed a battery
of measures including inventories of self-reported traits (Wiggins 1995) and
a projective measure: an individual Life Story Interview (McAdams 1995).
The interview includes questions about high point events, low point events,
turning point events, and so on. Common among the variety of stories that
people told were weddings, the birth of children, the death or illness of friends
or family, and work transitions. Awardees scored higher than comparisons on
many of the measures, both inventory and projective. As we will show next,
projective measures were more distinguishing of exemplars from comparisons
than were inventories.
Imagine reviewing the personality scores of the participants without
knowing whether each participant was an awardee or comparison individual.
The first line, for example, would contain an array of numbers representing
the scores of a particular individual, say Joe. How accurately could you guess
whether Joe is a moral exemplar or a comparison subject, based on the data
at hand? To find out, we performed a logistic regression on the original data
set, predicting group status (exemplar or comparison). With no predictors,
correct classification was at chance levels, specifically, 50 percent. In the first
step of the regression, we entered self-report personality data (the Big 5) for
all participants. Correct classification improved from 50 percent to 72 percent,
a significant increase above chance, Nagelkerke ΔR² = 0.27, p = 0.04. In the
second step, we added the projective measure data listed above. Figure 2.3
shows how correct classification increased to near perfection (72% → 94%)
with the addition of projective data, a further significant increase, Nagelkerke
ΔR² = 0.55, p < 0.001.
We tested whether inventory or projective data is the more powerful predictor
by entering the variables in the reverse order (projective, then inventory). In the
Figure 2.3 Projective data add to the prediction of moral behavior. Correct
classification of moral exemplars and ordinary comparison individuals in two
logistic regression analyses. The left panel shows the correct classification based
on chance (Step 0: 50%), Big 5 trait inventories alone (Step 1: 72%), and inventory
with five projective measures (Step 2: 94%). The right panel shows correct
classification based on the reverse ordering, projective then inventory (Step 0:
50%; Step 1: 88%; Step 2: 94%). Calculated from Walker and Frimer (2007).
first step, projective data increased prediction above chance, from 50 percent
to 88 percent, a significant increase, Nagelkerke ΔR² = 0.74, p < 0.001. In the
second step, inventory data did not significantly augment the differentiation
of exemplars from comparisons (88% → 94%), Nagelkerke ΔR² = 0.08,
p = 0.19.
By knowing a person's self-report inventory scores and projective scores
(and nothing else), one could correctly guess whether or not Joe was a moral
exemplar, 19 times out of 20. Projective data, if anything, is the more powerful
predictor of moral behavior.
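The two-step classification logic can be sketched with synthetic data, using a plain gradient-descent logistic regression in place of the original software; the group sizes follow Walker and Frimer (2007), but the scores and effect sizes below are illustrative assumptions, not the original data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 25 exemplars (y = 1) and 25 comparisons (y = 0), as in Walker and
# Frimer (2007); the scores are synthetic stand-ins, with the
# "projective" measures given the larger group difference.
y = np.repeat([1.0, 0.0], 25)
inventory = rng.normal(size=(50, 5)) + 0.5 * y[:, None]
projective = rng.normal(size=(50, 5)) + 1.5 * y[:, None]

def fit_accuracy(X, y, steps=2000, lr=0.1):
    """Fit a logistic regression by gradient descent and return the
    proportion of cases it classifies correctly (in sample)."""
    X = np.hstack([np.ones((len(X), 1)), X])  # intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    p = 1 / (1 + np.exp(-X @ w))
    return float(np.mean((p > 0.5) == (y == 1)))

step1 = fit_accuracy(inventory, y)                           # inventory only
step2 = fit_accuracy(np.hstack([inventory, projective]), y)  # add projective
print(f"inventory alone: {step1:.0%}; with projective data: {step2:.0%}")
```

As in the reported analyses, adding the better-separating predictors raises in-sample classification well above what the weaker set achieves alone.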
Telling a story is inherently different from making global attributions about
oneself. For starters, telling a story is an ambiguously defined task. Even if a
particular episode is specified (e.g., a turning point), one still needs to select
a particular memory, determine a starting point, and then build a coherent
story from there forward. The result is that each person tells a rather unique
story. Consider one of the comparison participants in Walker and Frimer's
(2007) study. In response to a question about a high point event in his life, this
comparison participant began his story by describing how he had prepared for
a vacation to Europe:
I was going on a vacation to Italy. I invested around $4000 for the whole tour
and whatnot....Getting the Canadian passport was easy enough because I
had one before, and it only took two weeks...
...I went down, I showed them my income tax forms, and...that I'd paid
my taxes, and this, that, and the other. And if you could speak to a Canadian
government guy, and he could get you on the computer and talk to you just like
you and I, it makes sense. But there's no sense. You're talking to a number....
The sob story continued for roughly 10 minutes. Eventually the interviewer
interjected, asking the participant to return to the high point of his story by
asking whether he had received his passport in time for his vacation. His high
point event ended in disappointment.
No....I didn't want to go to no doctor and, you know, have to lie about
being sick and all that....As far as I was concerned, the holiday was over.
You know, I'd spent that money. That was it.
Christmas Eve one year...[my wife and I] looked at all the gifts under our
tree....It was a true mass of gifts to be opened. And yet we still looked
at each other, and asked, sincerely, Is there enough for the kids...to be
happy? We realized how fortunate our kids were, and how fortunate we
were, that regardless of how the impact was going to be, or how minimal or
how large it was going to be, we were going to start a program the following
year....It evolved very, very slowly from going to local stores asking for
a hockey stick and a baseball glove, to donated wrapping paper....Six
hundred and fifty gifts, the first year...evolving to literally 80,000 gifts one
year....[We would take the gifts] into these small communities....Very
isolated, and exceedingly poor....I can remember this one little girl...sat
on Santas knee....She was nervous. We provided her with a gift, which I
knew was a doll from the shape of it....[I was] quite anxious for her to open
the gift; I wanted to see her reaction. But she didn't....After all the kids had
received their gifts, I took one of the people from the community aside, and
I said, I was quite anxious for this one young girl to open her gift, but she
didn't. I said....I wonder if she felt embarrassed, or if she felt awkward,
or maybe she doesn't understand the tradition of Christmas.... And they
said, No, she fully understands, but this is December 22nd. That will be the
only gift that she has. She will wait until Christmas morning to open that
gift.... And [that's] the true essence of what that program is all about.
The contrast between these two stories (about the lost passport vs. the
disadvantaged child receiving a cherished gift) illustrates both the profound
richness and also the predictive utility of spoken words. Personal stories reveal
a great deal about a person.
Motives in stories
and the results from economics, biology, and social psychology (that people
tend to behave selfishly) are accurate.
Which content, agentic or communal, emerges most frequently when
people speak about topics that matter to them? We predict that agentic
content will be more common than communal content. Preliminary findings
confirm hypothesis 1: when describing important goals, people
produce more agentic content than communal content (Frimer and Oakes
2013). This effect is not attributable to base rates of agency and communion
in the dictionaries or in typical English. When people talk about what matters
to them, they selectively use more agentic words than communal words,
communicating and revealing a selfish lifestyle.
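A minimal sketch of this style of dictionary-based coding, with tiny hypothetical word lists standing in for the much larger agency and communion dictionaries used in LIWC-style analyses (cf. Pennebaker et al. 2007):

```python
# Tiny hypothetical word lists; the actual agency/communion
# dictionaries used in LIWC-style coding are far larger.
AGENTIC = {"achieve", "win", "succeed", "career", "money", "power"}
COMMUNAL = {"help", "care", "share", "family", "friends", "together"}

def motive_counts(text: str) -> tuple[int, int]:
    """Count agentic and communal words in a free-response goal statement."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return (sum(w in AGENTIC for w in words),
            sum(w in COMMUNAL for w in words))

agentic, communal = motive_counts(
    "I want to win a promotion, achieve financial power, and help my family."
)
print(agentic, communal)  # counts of agentic vs. communal words
```

The substantive claim rests on comparing such counts against base rates of agentic and communal words in ordinary English, not on the raw tallies alone.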
This selfish portrait emerging from the projective measure was the opposite
of the impression given by a comparable endorsement inventory.
Participants also rated the importance of their goals using the Aspiration
Index (Grouzet et al. 2005), a standard self-report inventory of goals. The effect
reversed: participants now rated their communal goals as more important
than their agentic goals. These results support a dualistic theory of motivation.
Results from the projective measure coincide with the general conclusion that
people are selfish; results from the endorsement measure suggest the opposite,
and may tap socially desirable appearance motives.
In Batson's studies, hypocrisy existed between two opposing behaviors
(moral coin-tossing vs. selfish task assignment), with self-deception keeping
the contradiction at bay. In the present study, people demonstrated the coming
apart of their own motives by acknowledging the primacy of their own selfish
goals on a projective measure, then declaring their moral goals as most important
while endorsing items on a goal inventory. On self-report inventories, people
tend to present a moral self (see Figure 2.2); on projective measures, they tend
to reveal their selfishness. Thus, the first criterion of a successful projective
measure is supported: mean-level estimates from projective methods coincide
with the general interdisciplinary conclusion that people are selfish.
Conclusion
Notes
2 For goals, we compared means for community (prosocial) against the average
of financial success, appearance, and social recognition (selfish). For values, we
contrasted an aggregate of benevolence and universalism (prosocial) against
an aggregate of achievement, power, and face (selfish). For traits, we contrasted
nurturance for both genders (prosocial) against the orthogonal assertiveness for
both genders (selfish).
References
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002.
doi:10.1126/science.1137651
James, W. (1890). The Principles of Psychology. New York, NY: Holt.
McAdams, D. P. (1995). The Life Story Interview (Rev.). Unpublished manuscript,
Northwestern University, Illinois, USA.
McAdams, D. P., Hoffman, B. J., Mansfield, E. D., and Day, R. (1996). Themes
of agency and communion in significant autobiographical scenes. Journal of
Personality, 64, 339–77. doi:10.1111/j.1467-6494.1996.tb00514.x
McAdams, D. P. (2001). The psychology of life stories. Review of General Psychology,
5, 100–22. doi:10.1037/1089-2680.5.2.100
McClelland, D. C. (1975). Power: The Inner Experience. Oxford, England: Irvington.
Nisbett, R. E., and Wilson, T. D. (1977). Telling more than we can know: Verbal
reports on mental processes. Psychological Review, 84, 231–59.
doi:10.1037/0033-295X.84.3.231
Pennebaker, J. W., Booth, R. J., and Francis, M. E. (2007). Linguistic Inquiry and
Word Count: LIWC [Computer software]. Austin, TX: LIWC.net.
Schmuck, P., Kasser, T., and Ryan, R. M. (2000). Intrinsic and extrinsic goals: Their
structure and relationship to well-being in German and U.S. college students.
Social Indicators Research, 50, 225–41. doi:10.1023/A:1007084005278
Schwartz, S. H., Cieciuch, J., Vecchione, M., Davidov, E., Fischer, R., Beierlein, C.,
and Konty, M. (2012). Refining the theory of basic individual values. Journal of
Personality and Social Psychology, 103, 663–88. doi:10.1037/a0029393
Time 100: Heroes and icons. (14 June 1999). Time, 153(23).
Time 100: Leaders and revolutionaries. (13 April 1998). Time, 151(14).
Trapnell, P. D., and Broughton, R. H. (2006). The Interpersonal Questionnaire (IPQ):
Duodecant markers of Wiggins' interpersonal circumplex. Unpublished data, The
University of Winnipeg, Winnipeg, Canada.
Walker, L. J., and Frimer, J. A. (2007). Moral personality of brave and caring
exemplars. Journal of Personality and Social Psychology, 93, 845–60.
doi:10.1037/0022-3514.93.5.845
Weber, J. M., and Murnighan, J. K. (2008). Suckers or saviors? Consistent
contributors in social dilemmas. Journal of Personality and Social Psychology, 95,
1340–53. doi:10.1037/a0013326
Wiggins, J. S. (1995). Interpersonal Adjective Scales: Professional Manual. Odessa, FL:
Psychological Assessment Resources.
3

Is the Glass of Kindness Half Full or Half Empty?
Mahatma Gandhi is one of the world's most famous and influential symbols of
peace. His philosophy of nonviolence has moved, transformed, and inspired
individuals and communities. Yet, he was accused of racism (e.g., Singh 2004),
and was never awarded a Nobel Peace Prize, despite having been nominated
five times. Mother Teresa, an equally remarkable symbol of compassion
and altruism, dedicated her life to helping the poor and the dying in over a
hundred countries. Her funeral procession in Calcutta brought together
thousands of people who lined the route in expression of admiration and
respect. Yet, the search phrase "Mother Teresa was a fraud" returns 65,300
results on Google. Indisputably, people are strongly affected by witnessing the
good deeds or heroic actions of exceptional individuals, but at the same time,
such actions invoke sentiments that vary from appreciation and warmth to
cynicism and bitterness.
The central goal of this chapter is to address this paradox: Under what
conditions does the kindness of others inspire and move individuals to tears,
and under what conditions does it invoke envy and a desire to derogate the
other person's intentions? We review what is known about both reactions and
present a functional analysis, suggesting that assimilative and contrastive
reactions to virtuous others serve distinct purposes: whereas feeling moved or
uplifted binds individuals together in cooperative contexts and communities,
contrastive responses serve to regulate one's own social status within a group.
Human beings have a remarkable capacity to set aside self-interest, help one
another, and collaborate (Becker and Eagly 2004). Prosocial behavior enables
groups to achieve feats that could never be achieved by individuals alone.
Some have proposed that the formation of large cooperative communities
that include genetic strangers is possible only through a series of affective
mechanisms (McAndrew 2002; Burkart et al. 2009). Even before the age
of 2, toddlers derive more happiness from giving treats to others than from
receiving treats themselves, and find it more emotionally rewarding when
giving is costly, that is, when they give away their own treats rather than a
treat that was found or given to them (Aknin et al. 2012). Similarly, adults
derive greater happiness from spending money on others than spending on
themselves (Aknin et al. 2013; Dunn et al. 2008). Finally, the most prosocial
individuals are the least motivated by the pursuit of status among peers (Willer
et al. 2012).
Although engagement in prosocial behavior may not be necessarily
motivated by the pursuit of status, prosocial others are nonetheless often
preferred and, as a consequence, ascribed greater status. Infants as young as 6
months show a preference for characters who help others over mean or neutral
characters (Hamlin et al. 2007). Later on, adults tend to affiliate with kind
rather than attractive others when they find themselves in stressful situations
(Li et al. 2008), and usually prefer morally virtuous others as potential mates
(Miller 2007). Evidence also suggests that individuals who build a reputation
as generous by giving more in a public goods game are more likely to be chosen
as partners in subsequent games, and to receive greater social rewards (i.e.,
honor), than those who do not (Dewitte and Cremer 2004; Sylwester and
Roberts 2010). In other words, displays of altruistic behavior signal one's moral
quality and desirability as a potential partner and thus induce the tendency in
others to praise and affiliate (Miller 2007; Roberts 1998).
Because of the individual benefits of being generous, people may also
behave altruistically to improve their own reputation. This strategic route
to prosociality has been widely documented. When reputational concerns
are at stake (for example, when all participants have access to individual
contributions in an economic game) people behave more prosocially (Barclay
Is the Glass of Kindness Half Full or Half Empty? 57
and Willer 2007; Hardy and Van Vugt 2006). Similarly, after having been
primed with status motives, individuals are more likely to purchase products
that benefit the environment (Griskevicius et al. 2010). Even minimal cues of
being observed, and therefore evaluated, by others, such as images of eyes in
one's surroundings, lead to more prosocial choices in economic games and
greater charitable giving (Bateson et al. 2006; Bereczkei et al. 2007; Haley and
Fessler 2005). Thus, by displaying prosocial behavior when one is likely to be
seen, one enhances the chances of receiving status benefits (e.g., Hardy and
Van Vugt 2006).
From an individual perspective, however, all members of a group wish to
maintain their reputational status and optimize their chances of being chosen as
future cooperation partners. In this context, others exemplary moral behavior
represents an increase in the standard for prosociality and challenges those
who observe it to display equally costly prosocial behavior to upregulate and
protect their moral status (Fessler and Haley 2003). In other words, witnessing
somebody else acting prosocially might be threatening because it raises the
bar for others. Therefore, observers defuse the threat imposed by the morally
superior other by engaging in prosocial acts that would strategically improve
their own status. Alternatively, they may try to reduce the standard for prosocial
behavior by derogating the virtuous other or excluding him or her from the
group (Monin 2007). In any of these cases, witnessing others' generosity can
possibly lead to negative emotions such as fear and envy, followed by efforts to
regulate one's own status in a group.
moral as the moral rebel. These results show that whereas a non-threatening
prosocial other triggers tendencies to affiliate and praise, exposure to a similar
other who does a good deed for which you have missed your opportunity leads
to derogation and cynicism.
This pattern applies to meat-eaters' reactions to vegetarians. Minson and
Monin (2012) asked participants to indicate whether they themselves were
vegetarians or not, to report whether they thought vegetarians felt morally
superior, and to freely list three words that came to mind when thinking of
vegetarians. Meat-eaters expected vegetarians to feel morally superior to
themselves and to non-vegetarians in general, and nearly half of them listed at
least one negative word, generally a negative trait (e.g., arrogant,
weird, self-righteous, opinionated). Indeed, the more they expected vegetarians
to feel morally superior, the more negative words participants listed. In a
second study, participants who were first asked to rate how they would be
seen by vegetarians rated vegetarians more negatively in comparison to those
who were not primed with the threat of being morally judged. These studies are
a compelling demonstration of how engaging in a stark comparison between
oneself and a morally superior other triggers defensive reactions, which may
serve to regulate one's own sense of morality.
Another defensive reaction consists of expressing a desire to expel
excessively generous members from a cooperative group. Fair participants,
who receive proportional rewards for their contributions, are significantly
more popular than both extremely benevolent and selfish participants (Parks
and Stone 2010). Ironically, unusually benevolent members are rated just as
unfavorably as selfish members in terms of the extent to which they should
be allowed to remain in the group. When asked to explain the desire to expel
unselfish individuals from the group, 58 percent of participants used
comparative reasons (e.g., "people would ask why we can't be like him").
These findings suggest that people attempt to reduce the standard for prosocial
behavior when reputational demands are present and members compete for
cooperation partners.
Further, large-scale cross-cultural evidence for the existence of punishment
of prosocial others was provided by Herrmann et al. (2008). Participants from
16 countries participated in an economic game in groups of four. They were
given tokens to contribute to a group project, and contributions were distributed
60 Advances in Experimental Moral Psychology
equally among partners. After each round, players could punish other players
by taking tokens away from them. Beyond the well-known punishment of
selfish participants, the authors observed that participants across the world also
punished those who were more prosocial than themselves. Unlike altruistic
punishment, levels of antisocial punishment were highly variable across
communities, and greatly influenced by economic and cultural backgrounds:
more egalitarian societies, with high levels of trust, high GDP per capita,
strong norms of civic cooperation, and a well-functioning democracy were the
least likely to punish virtuous others.
These results as a whole indicate that in conditions of high competition
and demand for reputation maintenance, individuals tend to react defensively
to virtuous others. These reactions assume a number of configurations that
include attributing negative traits to virtuous others, punishing or excluding
them from the group, and denying the ethical value of their acts. The exclusion
and derogation of the extremely generous members of one's group might be
effective in regulating one's moral reputation by decreasing the competitive
and comparative standard for prosocial behavior to a less costly level. Such
contrastive reactions may also help regulate the stability of one's group morality
and establish achievable norms of prosocial behavior. When one derogates
another person's moral status, it reduces the standard for prosocial behavior
for all members of the group.
We have so far discussed negative and defensive reactions that arise from
unfavorable social comparisons in the moral domain. There are, however,
circumstances under which negative reactions take place for reasons other than
comparative ones. One example is when exceedingly benevolent behavior by
an ingroup member is interpreted as deviant from the group norm. Previous
research has shown that ingroup members are highly sensitive to behavior that
differs from the norms set for members of one's ingroup, and so derogate such
behavior in an attempt to maintain group cohesiveness (Marques et al. 1988;
Abrams et al. 2000). In fact, in Parks and Stone's (2010) study, 35 percent of
the participants did use normative reasons to justify their desire to expel the
selfless member of the group (e.g., "He's too different from the rest of us").
The authors suggest that this shows a desire for equality of participation even
from participants who are willing to give more, as well as a resistance against
changing the group norm in an undesirable direction.
A second example is when either the prosocial act benefits an outgroup but
provides an ingroup disadvantage, or the others behavior is not considered
virtuous from the perspective of the observer. For example, a teacher who
teaches chastity-based sex education at school may be considered virtuous
by some people, but seen as violating teenagers' freedom of conscience by
others. People's moral values vary (Graham et al. 2009; Schwartz 2006), and so
their emotional reactions to actions that either support or undermine different
values should vary as well. In these cases, derogating the virtuous other is
not a reaction to a threat to one's reputation but rather to a threat to one's
personal or political interests.
On the flip side, others' generosity can trigger positive reactions that include
feelings of respect and admiration. Haidt (2000, 2003) employed the term
"elevation" to refer to this "warm, uplifting feeling that people experience when
they see unexpected acts of human goodness, kindness, and compassion"
(Haidt 2000, p. 1). Elevation is generally associated with feelings of warmth in
the chest and feeling a lump in the throat. The distinctive appraisal, physical
sensations, and motivations related to elevation differentiate it from happiness
and other positive moral emotions, such as gratitude or admiration for skill
(Algoe and Haidt 2009).
To date, the most remarkable evidence in this field has been a positive
relationship between elevation and prosocial behavior. Participants exposed
to stories showing expressions of forgiveness or gratitude are more likely to
donate money to charity (Freeman et al. 2009), to volunteer for an unpaid
study, and to spend time helping an experimenter by completing a tiresome
task, compared to neutral or mirth-inducing conditions (Schnall et al. 2010).
Importantly, the more participants report feelings relating to elevation, such
as warmth in the chest and optimism about humanity, the more time they
engage in helping behavior (Schnall et al. 2010). Similar effects have been
observed in real-life settings outside of the laboratory. Employees who evaluate
their boss as highly fair and likely to self-sacrifice report greater feelings of
elevation, and are in turn more likely to show organizational citizenship and
affective commitment (Vianello et al. 2010). Further, self-reported frequency
of feelings of elevation during a volunteering service trip predicts trip-specific
volunteerism 3 months later. This effect holds above and beyond the effect of
personality traits, such as empathy, extroversion, and openness to experience
(Cox 2010).
Although the emotional correlates of these prosocial initiatives substantially
differ from those that arise from strategic prosocial behavior, this does not rule
out the possibility that they represent a reaction to a threatening comparison.
In other words, after being confronted with somebody more moral than
they are, participants may have felt motivated to act prosocially in order to
restore their self-worth and moral reputation. Recent evidence, however, has
convinced us otherwise. We have found, for example, that individuals who feel
elevation after exposure to morally upstanding others rarely report feelings of
envy or engage in contrastive comparisons between their moral qualities and
those of the protagonist. Rather, they often justify the magnificence of the
other person's act by referring to general standards (e.g., "what she did was a
very sort of selfless act. I don't think many people would have chosen to do
that"), suggesting little self-threat (Pavarini et al. 2013).
In another study, we explored the effects of self-affirmation before the
exposure to an elevating video clip and the opportunity to engage in helping
(Schnall and Roper 2012). Previous research has shown that self-affirmation
can reduce defensive responses to self-threatening information (McQueen
and Klein 2006; Sherman and Cohen 2006). Therefore, we predicted that being
reminded of ones qualities would make participants more receptive to being
inspired by a virtuous other, increasing prosocial responding. Indeed, our
results suggested that participants who affirmed their personal qualities before
watching an uplifting clip engaged in more helping behavior than participants
who self-affirmed before watching a neutral clip. Further, those who specifically
affirmed moral self-qualities showed the highest level of helping, more than
participants in the elevation condition who affirmed a more selfish value or no
value at all. Affirming one's prosocial qualities possibly reminded participants
of their core values, as well as their ability to do good. The exposure to a
prosocial other under these conditions had an empowering effect on prosocial
responding.
Be like, you're such an awesome person sort-of-hug, and although I have never
seen this guy as more than just a friend, I felt a hint of romantic feeling for him
at this moment (Haidt 2000; Immordino-Yang, personal communication).
Beyond a desire for affiliation, moral elevation is accompanied by feelings
of respect and a tendency to praise the virtuous other. Forty-eight percent
of participants who recall a virtuous act report having gained respect for
the virtuous other (Algoe and Haidt 2009), and spontaneous verbal reports
also suggest a tendency to enhance the person's status (e.g., "I felt like telling
everyone about his good deed"; Haidt 2000). Outside the emotion literature,
people's tendency to ascribe rewards and status to virtuous others has been
widely documented (e.g., Hardy and Van Vugt 2006; Milinski et al. 2000; Willer
2009). Willer (2009), for example, observed that both players and observers of
an economic game rate generous participants as more prestigious, honorable,
and respected than low-contributing participants. Further, when participants
are given the opportunity to freely allocate $3 between themselves and a game
partner, they give benevolent partners a greater share than those who contributed less
in the previous game. Moral elevation may be the emotional underpinning of
a tendency to ascribe social and material rewards to highly prosocial members
of a group.
It is important, however, to differentiate the prestige attributed to prosocial
members from related constructs such as power, authority, and dominance.
A prestigious other is defined as somebody who is respected and listened
to, normally due to socially desirable skills, whereas dominance implies use
of intimidation and coercion (Henrich and Gil-White 2001). Thus, positive
emotional reactions to acts of virtue possibly support attributions of prestige,
but do not necessarily lead to attributions of dominance.
Strong and recurring feelings of elevation, respect, and honor toward virtuous
others may transform them into role models or heroes. There is evidence that
virtuous others are indeed viewed as role models for children and youths
(Bucher 1998), and that their stories of bravery, courage, and compassion are
used as a tool to foster moral development (e.g., Conle 2007; Puka 1990). After exposure
to prosocial members of their own community, students generally report
having identified positive qualities of the local hero and mention feelings
of inspiration (e.g., "he can deal with everything so well"; "I think everyone
should be told this. It touched me so much"; Conle and Boone 2008, pp. 32–3).
Concluding remarks
into action. Interestingly, although these are individual processes, such reactions
may influence how the virtuous other reacts and may ultimately regulate a
group's morality. For example, rewarding others' generosity may encourage them to
engage in further prosocial acts, and help to establish stronger prosocial bonds,
whereas derogating their intentions may prevent an increase in the general
standard for prosocial behavior for all members of the group.
The capacity to evaluate and react emotionally to others' moral behavior is
essential for navigating the social world. Such evaluations help observers to
identify who may help them and who might be a foe. Yet, prosocial others are
not always judged positively. Our review suggests that reactions to expressions
of uncommon goodness vary strikingly. The glass of kindness can be
perceived as half empty or half full, depending on whether the act is appraised
from a competitive or cooperative mindset, and on whether the virtuous other
is seen as a suitable social partner, a potential leader, or a possible rival.
Note
References
Abrams, D., Marques, J. M., Bown, N., and Henson, M. (2000). Pro-norm and anti-
norm deviance within and between groups. Journal of Personality and Social
Psychology, 78, 906–12. doi:10.1037//0022-3514.78.5.906
Aknin, L. B., Hamlin, J. K., and Dunn, E. W. (2012). Giving leads to happiness in
young children. PLoS ONE, 7, e39211. doi:10.1371/journal.pone.0039211
Aknin, L. B., Barrington-Leigh, C. P., Dunn, E. W., Helliwell, J. F., Burns, J., Biswas-Diener,
R., Kemeza, I., Nyende, P., Ashton-James, C. E., and Norton, M. I. (2013). Prosocial
spending and well-being: Cross-cultural evidence for a psychological universal.
Journal of Personality and Social Psychology, 104, 635–52. doi:10.1037/a0031578
Algoe, S., and Haidt, J. (2009). Witnessing excellence in action: The other-praising
emotions of elevation, admiration, and gratitude. Journal of Positive Psychology, 4,
105–27. doi:10.1080/17439760802650519
Aquino, K., McFerran, B., and Laven, M. (2011). Moral identity and the experience of
moral elevation in response to acts of uncommon goodness. Journal of Personality
and Social Psychology, 100, 703–18. doi:10.1037/a0022540
Barclay, P., and Willer, R. (2007). Partner choice creates competitive altruism in
humans. Proceedings of the Royal Society of London, Series B, 274, 749–53.
doi:10.1098/rspb.2006.0209
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance
cooperation in a real-world setting. Biology Letters, 2, 412–14.
doi:10.1098/rsbl.2006.0509
Becker, S. W., and Eagly, A. H. (2004). The heroism of women and men. American
Psychologist, 59, 163–78. doi:10.1037/0003-066X.59.3.163
Bereczkei, T., Birkas, B., and Kerekes, Zs. (2007). Public charity offer as a proximate
factor of evolved reputation-building strategy: An experimental analysis of a
real life situation. Evolution and Human Behavior, 28, 277–84.
doi:10.1016/j.evolhumbehav.2007.04.002
Brandt, M. J., and Reyna, C. (June 2010). Beyond infra-humanization: The perception
of human groups, the self, and supernatural entities as more or less than human.
Poster presented at the annual meeting of the Society for Personality and Social
Psychology, Las Vegas, NV.
Brandt, M. J., and Reyna, C. (2011). The chain of being: A hierarchy of morality.
Perspectives on Psychological Science, 6, 428–46. doi:10.1177/1745691611414587
Brown, D. E. (1991). Human Universals. New York: McGraw-Hill.
Bucher, A. A. (1998). The influence of models in forming moral identity. International
Journal of Educational Research, 27, 619–27. doi:10.1016/S0883-0355(97)00058-X
Burkart, J. M., Hrdy, S. B., and van Schaik, C. P. (2009). Cooperative breeding and
human cognitive evolution. Evolutionary Anthropology, 18, 175–86.
doi:10.1002/evan.20222
Conle, C. (2007). Moral qualities of experiential narratives. Journal of Curriculum
Studies, 39, 11–34. doi:10.1111/j.1467-873X.2007.00396.x
Conle, C., and Boone, A. (2008). Local heroes, narrative worlds and the imagination:
The making of a moral curriculum through experiential narratives. Curriculum
Inquiry, 38, 7–37. doi:10.1111/j.1467-873X.2007.00396.x
Conway, P., and Peetz, J. (2012). When does feeling moral actually make you a better
person? Conceptual abstraction moderates whether past moral deeds motivate
consistency or compensatory behavior. Personality and Social Psychology Bulletin,
6, 907–19. doi:10.1177/0146167212442394
Haidt, J. (2006). The Happiness Hypothesis: Finding Modern Truth in Ancient
Wisdom. New York: Basic Books.
Haidt, J., and Algoe, S. (2004). Moral amplification and the emotions that attach us
to saints and demons. In J. Greenberg, S. L. Koole, and T. A. Pyszczynski (eds),
Handbook of Experimental Existential Psychology. New York: Guilford, pp. 322–35.
Halevy, N., Chou, E. Y., Cohen, T. R., and Livingston, R. W. (2012). Status conferral
in intergroup social dilemmas: Behavioral antecedents and consequences of
prestige and dominance. Journal of Personality and Social Psychology, 102, 351–66.
doi:10.1037/a0025515
Haley, K. J., and Fessler, D. M. T. (2005). Nobody's watching? Subtle cues affect
generosity in an anonymous economic game. Evolution and Human Behavior, 26,
245–56. doi:10.1016/j.evolhumbehav.2005.01.002
Hamlin, J. K., Wynn, K., and Bloom, P. (2007). Social evaluation by preverbal infants.
Nature, 450, 557–9. doi:10.1038/nature06288
Hardy, C., and Van Vugt, M. (2006). Nice guys finish first: The competitive
altruism hypothesis. Personality and Social Psychology Bulletin, 32, 1402–13.
doi:10.1177/0146167206291006
Henrich, J., and Gil-White, F. J. (2001). The evolution of prestige: Freely conferred
status as a mechanism for enhancing the benefits of cultural transmission.
Evolution and Human Behavior, 22, 165–96. doi:10.1016/S1090-5138(00)00071-4
Herrmann, B., Thöni, C., and Gächter, S. (2008). Antisocial punishment across
societies. Science, 319, 1362–7. doi:10.1126/science.1153808
Immordino-Yang, M. H., and Sylvan, L. (2010). Admiration for virtue:
Neuroscientific perspectives on a motivating emotion. Contemporary Educational
Psychology, 35, 110–15. doi:10.1016/j.cedpsych.2010.03.003
Immordino-Yang, M. H., McColl, A., Damasio, H., and Damasio, A. (2009). Neural
correlates of admiration and compassion. Proceedings of the National Academy of
Sciences USA, 106, 8021–6. doi:10.1073/pnas.0810363106
Israel, S., Lerer, E., Shalev, I., Uzefovsky, F., Riebold, M., Laiba, E., Bachner-Melman,
R., Maril, A., Bornstein, G., Knafo, A., and Ebstein, R. P. (2009). The oxytocin
receptor (OXTR) contributes to prosocial fund allocations in the dictator game
and the social value orientations task. PLoS ONE, 4, e5535.
doi:10.1371/journal.pone.0005535
Keltner, D., and Haidt, J. (2003). Approaching awe, a moral, spiritual, and aesthetic
emotion. Cognition and Emotion, 17, 297–314. doi:10.1080/02699930302297
Li, N. P., Halterman, R. A., Cason, M. J., Knight, G. P., and Maner, J. K. (2008). The
stress-affiliation paradigm revisited: Do people prefer the kindness of strangers or
their attractiveness? Personality and Individual Differences, 44, 382–91.
doi:10.1016/j.paid.2007.08.017
Marques, J. M., Yzerbyt, V. Y., and Leyens, J. P. (1988). The black sheep effect: Extremity
of judgments towards ingroup members as a function of group identification.
European Journal of Social Psychology, 18, 1–16. doi:10.1002/ejsp.2420180102
McAndrew, F. T. (2002). New evolutionary perspectives on altruism: Multilevel-
selection and costly-signaling theories. Current Directions in Psychological Science,
11, 79–82. doi:10.1111/1467-8721.00173
McQueen, A., and Klein, W. M. P. (2006). Experimental manipulations of self-affirmation:
A systematic review. Self and Identity, 5, 289–354. doi:10.1080/15298860600805325
Merritt, A. C., Effron, D. A., and Monin, B. (2010). Moral self-licensing: When being
good frees us to be bad. Social and Personality Psychology Compass, 4/5, 344–57.
doi:10.1111/j.1751-9004.2010.00263.x
Milinski, M., Semmann, D., and Krambeck, H. (2000). Donors to charity gain in both
indirect reciprocity and political reputation. Proceedings of the Royal Society of
London, Series B, 269, 881–83. doi:10.1098/rspb.2002.1964
Miller, G. F. (2007). Sexual selection for moral virtues. Quarterly Review of Biology,
82, 97–125. doi:10.1086/517857
Minson, J. A., and Monin, B. (2012). Do-gooder derogation: Disparaging morally
motivated minorities to defuse anticipated reproach. Social Psychological and
Personality Science, 3, 200–7. doi:10.1177/1948550611415695
Monin, B. (2007). Holier than me? Threatening social comparison in the moral
domain. International Review of Social Psychology, 20, 53–68.
Monin, B., Sawyer, P. J., and Marquez, M. J. (2008). The rejection of moral rebels:
Resenting those who do the right thing. Journal of Personality and Social
Psychology, 95, 76–93. doi:10.1037/0022-3514.95.1.76
Parks, C. D., and Stone, A. B. (2010). The desire to expel unselfish members from the
group. Journal of Personality and Social Psychology, 99, 303–10. doi:10.1037/a0018403
Pavarini, G., Schnall, S., and Immordino-Yang, M. H. (2013). Verbal and nonverbal
indicators of psychological distance in moral elevation and admiration for skill.
Manuscript in preparation.
Puka, B. (1990). Be Your Own Hero: Careers in Commitment. Troy, NY: Rensselaer
Polytechnic Institute.
Reeder, G. D., and Coovert, M. D. (1986). Revising an impression of morality. Social
Cognition, 4, 1–17. doi:10.1521/soco.1986.4.1.1
Roberts, G. (1998). Competitive altruism: From reciprocity to the handicap principle.
Proceedings of the Royal Society of London, Series B: Biological Sciences, 265,
427–31. doi:10.1098/rspb.1998.0312
Sachdeva, S., Iliev, R., and Medin, D. L. (2009). Sinning saints and saintly sinners: The
paradox of moral self-regulation. Psychological Science, 20, 523–8.
doi:10.1111/j.1467-9280.2009.02326.x
Salovey, P., and Rodin, J. (1984). Some antecedents and consequences of social-
comparison jealousy. Journal of Personality and Social Psychology, 47, 780–92.
doi:10.1037//0022-3514.47.4.780
Schnall, S., and Roper, J. (2012). Elevation puts moral values into action. Social
Psychological and Personality Science, 3, 373–8. doi:10.1177/1948550611423595
Schnall, S., Roper, J., and Fessler, D. (2010). Elevation leads to altruistic behavior.
Psychological Science, 21, 315–20. doi:10.1177/0956797609359882
Schwartz, S. H. (2006). A theory of cultural value orientations: Explication and
applications. Comparative Sociology, 5, 136–82. doi:10.1163/156913306778667357
Sherman, D. K., and Cohen, G. L. (2006). The psychology of self-defense: Self-
affirmation theory. In M. P. Zanna (ed.), Advances in Experimental Social
Psychology (Vol. 38). San Diego, CA: Academic Press, pp. 183–242.
Silvers, J., and Haidt, J. (2008). Moral elevation can induce nursing. Emotion, 8,
291–5. doi:10.1037/1528-3542.8.2.291
Singh, G. B. (2004). Gandhi: Behind the Mask of Divinity. New York: Prometheus.
Sullivan, M. P., and Venter, A. (2005). The hero within: Inclusion of heroes into the
self. Self and Identity, 4, 101–11. doi:10.1080/13576500444000191
Sylwester, K., and Roberts, G. (2010). Cooperators benefit through reputation-
based partner choice in economic games. Biology Letters, 6, 659–62.
doi:10.1098/rsbl.2010.0209
Thomson, A. L., and Siegel, J. T. (2013). A moral act, elevation, and prosocial
behavior: Moderators of morality. Journal of Positive Psychology, 8, 50–64.
doi:10.1080/17439760.2012.754926
Uvnäs-Moberg, K., Arn, I., and Magnusson, D. (2005). The psychobiology of
emotion: The role of the oxytocinergic system. International Journal of Behavioral
Medicine, 12, 59–65. doi:10.1207/s15327558ijbm1202_3
Vianello, M., Galliani, E. M., and Haidt, J. (2010). Elevation at work: The effects of
leaders' moral excellence. Journal of Positive Psychology, 5, 390–411.
doi:10.1080/17439760.2010.516764
Willer, R. (2009). Groups reward individual sacrifice: The status solution to
the collective action problem. American Sociological Review, 74, 23–43.
doi:10.1177/000312240907400102
Willer, R., Feinberg, M., Flynn, F., and Simpson, B. (2012). Is generosity sincere
or strategic? Altruism versus status-seeking in prosocial behavior. Revise and
resubmit, Journal of Personality and Social Psychology.
Zak, P. J., Kurzban, R., and Matzner, W. T. (2005). Oxytocin is associated with human
trustworthiness. Hormones and Behavior, 48, 522–27. doi:10.1016/j.yhbeh.2005.07.009
Zak, P. J., Stanton, A. A., and Ahmadi, S. (2007). Oxytocin increases generosity in
humans. PLoS ONE, 2, e1128. doi:10.1371/journal.pone.0001128
4

What are the Bearers of Virtues?
Despite the recent hubbub over the possibility that the concepts of character
and virtue are empirically inadequate,1 researchers have only superficially
considered the fact that these concepts purport to refer to dispositional
properties.2 For the first time in this controversy, we need to take the
dispositional nature of virtue seriously. Once we do, one question immediately
arises: What are the bearers of virtues?
In this chapter, I argue for an embodied, embedded, and extended answer
to this question. It is generally hopeless to try to say what someone would do
in a given normative state of affairs without first specifying bodily and social
features of her situation. There's typically no fact of the matter, for instance,
about whether someone would help when there is sufficient reason for her to
help. However, there typically is a fact of the matter about whether someone
in a particular bodily state and social environment would help when there is
sufficient reason to help.
If that's right, it puts some pressure on agent-based theories of virtue, which
tend to claim or presume that the bearers of virtue are individual agents (Russell
2009; Slote 2001). Such theories hold that a virtue is a monadic property of
an individual agent. Furthermore, this pressure on agent-based and agent-
focused theories suggests a way of reconceiving of virtue as a triadic relation
among an agent, a social milieu, and an asocial environment (Alfano 2013).
On this relational model, the bearers of virtue are not individual agents but
ordered triples that include objects and properties outside the agent.
Here is the plan for this chapter. Section 1 summarizes the relevant literature
on dispositions. Section 2 sketches some of the relevant psychological
findings. Section 3 argues that the best response to the empirical evidence
its fragility is masked by protective packaging. A sugar pill can mimic a real
analgesic via the placebo effect. An electron's velocity, which it is naturally
disposed to retain, inevitably changes when it is measured: a case of
finking.3 Such possibilities are not evidence against the presence or absence
of the disposition in question; instead, they are exceptions to the subjunctive
conditional. The vase really is fragile, despite its resistance to chipping,
cracking, and shattering. The sugar pill is not really an analgesic, despite the
pain relief. The electron really is disposed to follow its inertial path, despite
Heisenberg's uncertainty principle.
Let's stipulate that finks, masks, and mimics be collectively referred to
as disrupters. Since it is possible to possess a disposition that is susceptible
to finks and masks, and to lack a disposition that is mimicked, the simple
subjunctive conditional analysis fails. In my view, the most attractive response
to the constellation of disrupters is Choi's (2008) anti-disrupter SCA:
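Reconstructing from the discussion that follows, the analysis runs roughly as follows (a schematic rendering only; Choi's own formulation is more careful):

```latex
% AD-SCA, schematically: an object o is disposed to A when C if and only if,
% were o in C with no disrupter (fink, mask, or mimic) present, o would A.
\[
  D(o, A, C) \;\longleftrightarrow\;
  \big[\,(C \wedge \neg\,\mathrm{disrupter}) \mathrel{\Box\!\!\rightarrow} A\,\big]
\]
```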
If the object fails to A in C when there are finks or masks present, the right-
hand side of the definition is false. For instance, if the fragile vase were encased
in protective bubble wrap, its fragility would be masked: it would not chip,
crack, or shatter when struck, dropped, or abraded. But a disrupter is
present, which means that the biconditional is still true. Likewise, if the object
As in C when a mimic is present, the right-hand side of the definition is again
false, which avoids the undesirable conclusion that the object has the
disposition in question.
In light of such cases, it's helpful to provide weak and comparative analyses
of dispositions. For instance, we can analyze a weak disposition as follows:
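Sketching both the weak (W-AD-SCA) and the comparative (C-AD-SCA) variants in possible-worlds terms (again a reconstruction, with Near(C) the set of nearby C-worlds):

```latex
% Weak: in at least some nearby undisrupted C-worlds, o As.
\[
  W(o, A, C) \;\longleftrightarrow\;
  \exists w \,\big[\, w \in \mathrm{Near}(C) \wedge \neg\,\mathrm{disrupted}(w)
  \wedge A_{o}(w) \,\big]
\]
% Comparative: o is more disposed to A than to A* when C.
\[
  C(o, A, A^{*}, C) \;\longleftrightarrow\;
  \#\{\, w \in \mathrm{Near}(C) : \neg\,\mathrm{disrupted}(w) \wedge A_{o}(w) \,\}
  \;>\;
  \#\{\, w \in \mathrm{Near}(C) : \neg\,\mathrm{disrupted}(w) \wedge A^{*}_{o}(w) \,\}
\]
```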
In the standard semantics, this means that there are more nearby undisrupted
C-worlds where o As than nearby undisrupted C-worlds where o A*s.
Which, if any, of these notions is appropriate to an analysis of virtues?
Elsewhere (Alfano 2013), I have argued for an intuitive distinction between
high-fidelity and low-fidelity virtues. High-fidelity virtues, such as honesty,
chastity, and loyalty, require near-perfect manifestation in undisrupted
conditions. For these, AD-SCA seems most appropriate. Someone only counts
as chaste if he never cheats on his partner when cheating is a temptation. Low-
fidelity virtues, such as generosity, tact, and tenacity, are not so demanding.
For them, some combination of the W-AD-SCA and C-AD-SCA seems
appropriate. Someone might count as generous if she were more disposed to
give than not to give when there was sufficient reason to do so; someone might
count as tenacious if she were more disposed to persist than not to persist in
the face of adversity.4
If this is on the right track, the analysis of virtuous dispositions adds one
additional step before Lewis's two. First, determine whether the virtue in
question is high fidelity or low fidelity. For instance, it seems reasonable to say
that helpfulness is a low-fidelity virtue whereas loyalty is a high-fidelity virtue.
Second, identify the stimulus conditions and characteristic manifestations.
The most overt manifestation of helpfulness is of course helping behavior,
but more subtle manifestations presumably include noticing opportunities to
What are the Bearers of Virtues? 77
Another primary asocial influence is the set of affect modulators, very broadly
construed to include mood elevators, mood depressors, emotion inducers, and
arousal modifiers. There are documented effects for embarrassment (Apsler
1975), guilt (Regan 1971), positive affect (Isen 1987), disgust (Schnall et al.
2008), and sexual arousal (Ariely 2008), among others. As with sensibilia, affect
modulators are connected in weak but significant ways to the manifestation
(or not) of virtue. Fair moods don't necessarily make us fair, nor do foul
moods make us foul. The valence of the effect depends on what is normatively
appropriate in the particular circumstances.
It's important to point out, furthermore, that while asocial influences
tend to have fairly predictable effects on behavioral dispositions, they by no
means explain action all by themselves. Indeed, any particular factor will
typically account for at most 16 percent of the variance in behavior (Funder
and Ozer 1983).
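The 16 percent figure reflects a standard conversion: a correlation of roughly r = .40, often treated as the ceiling for a single situational factor, explains r² = .16 of the variance. A minimal sketch of that arithmetic (the example value is illustrative, not a figure taken from Funder and Ozer):

```python
# Proportion of variance in behavior explained by a single factor
# equals the squared correlation coefficient (r ** 2).

def variance_explained(r: float) -> float:
    """Convert a Pearson correlation into the proportion of variance explained."""
    return r ** 2

# Illustrative only: a 'large' situational effect of r = .40
# accounts for at most 16% of the variance in behavior.
print(round(variance_explained(0.40), 2))  # 0.16
```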
Social influences
In addition to the asocial influences canvassed above, there are a variety of
social influences on the manifestation of virtue. Two of the more important
are expectation confirmation and outgroup bias. In cases of expectation
confirmation, what happens is that the agent reads others' expectations
off explicit or implicit social cues, and then acts in accordance with the
expectations so read. People often enough mistake or misinterpret others'
expectations, so what they end up doing isn't necessarily what others expect,
but what they think others expect. In cases of outgroup bias, the agent
displays unwarranted favoritism toward the ingroup or prejudice toward the
outgroup. Since everyone belongs to myriad social groups, whether someone
is perceived as in or out depends on which group identities are salient at
the time (Tajfel 1970, 1981); hence, the perceived social distance of a given
person will vary over time with seemingly irrelevant changes in the salience
of various group identities. In this section, I have room to discuss only social
expectations.
Much of the groundbreaking social psychology of the second half of the
twentieth century investigated the power of expectation confirmation. The
most dramatic demonstration was of course the Milgram paradigm (1974),
80 Advances in Experimental Moral Psychology
In the previous section, I argued that both social and asocial factors shape how
people are disposed to think, feel, and act. What we notice, what we think, what
we care about, and what we do depend in part on bodily and social features of
our situations. This is not to deny that people also bring their own distinctive
personalities to the table, but it suggests that both high-fidelity and low-fidelity
virtues, as traditionally conceived, are rare.
To see why, let's walk through the three-step analysis of a traditional virtue:
honesty. For current purposes, I'll assume that it's uncontroversial that honesty
is high fidelity. Next, we specify the stimulus conditions and characteristic
manifestations. I don't have space to do justice to the required nuances
here, but it wouldn't be too far off to say that the stimulus conditions C are
temptations to lie, cheat, or steal despite sufficient reason not to do so, and
that the characteristic manifestations A are behavioral (not lying, cheating,
or stealing), cognitive (noticing the temptation without feeling too much
of its pull), and affective (disapprobation of inappropriate behavior, desire
to extricate oneself from the tempting situation if possible, perhaps even
prospective shame at the thought that one might end up acting badly). Finally,
we slot these specifications into the schema for high-fidelity virtue:
fink someone's honesty? What good is honesty if honest people need constant
monitoring? Ruling out all of the asocial and social influences described in
the previous section as disrupters isn't just ad hoc; it threatens to disqualify
honesty from being a virtue.
Furthermore, social and asocial influences are ubiquitous. Indeed, it's
difficult even to think of them as influences because they are so common.
Should we say that very bright lighting is the default condition, and that lower
levels of light are all situational influences? To do so would be to count half of
each day as disrupted. Or should we say that twilight is the default, and that
both very bright and very dark conditions are situational influences? It's hard
to know what would even count in favor of one of these proposals.
Even more to the point, what one might want to rule out as a disrupter in
one case is likely to contribute to what seems like a manifestation of virtue in
other cases. Should we say that being in a bad mood is a situational influence?
People in a bad mood give much less than other people to charities that are
good but not great; they also give much more than other people to charities
that are very good indeed (Weyant 1978). You can't have it both ways. If bad
moods mask generosity in the former case, they mimic it in the latter. Failure to
give in the former type of case would then not be evidence against generosity,
but even giving quite a bit in the latter type of case would not be evidence for
it. If we try to rule out all of these factors, leaving just the agent in her naked
virtue or vice, we may find that she disappears too. Strip away the body and the
community, and you leave not the kernel of authentic character, but something
that's not even recognizably human.
Instead of filtering out as much as possible, I want to propose including
as much as possible by expanding the unit of analysis, the bearer of virtue.
Instead of thinking of virtue as a property of an individual agent, we should
construe it as a triadic relation among a person, a social milieu, and an asocial
environment.
There are two ways of fitting the milieu and the environment into the
subjunctive conditional analysis. They could be incorporated into the stimulus
conditions:
According to AD-SCA*, the person is still the sole bearer of the disposition; it's
just a more limited disposition, with much stronger stimulus conditions. This
can be seen as a rendering of Doris's (2002) theory of local traits in the language
of disposition theory. An important problem with such dispositions is that,
even if they are empirically supportable, they are normatively uninspiring.
According to AD-SCA, in contrast, the bearer of the disposition is now a
complex, extended object: the person, the milieu, and the environment. What I
want to suggest is that, given the sorts of creatures we are (embodied, socially
embedded, with cognition and motivation extended beyond the boundaries
of our own skin; Clark 2008; Clark and Chalmers 1998), AD-SCA is more
attractive.
Virtue would inhere, on this view, in the interstices between the person
and her world. The object that possesses the virtue in question would be a
functionally and physically extended complex comprising the agent, her social
setting, and her asocial environment. The conditions under which the social
and asocial environment can be legitimately included in such an extended
whole are complex, but we can take a cue here from Pritchard (2010, p. 15),
who argues that phenomena that "extend outside the skin of [the] agent" can
count as part of one's cognitive agency just so long as they are appropriately
integrated into one's functioning. Pritchard is here discussing cognitive
rather than ethical dispositions, but the idea is the same: provided that the
social and asocial phenomena outside the moral agent's skin are appropriately
integrated into her functioning, they may count as part of her moral agency
and partially constitute her moral virtues. A paradigm example is the ongoing
and interactive feedback we have with our friends. At least when we are at
our best, we try to live up to our friends' expectations; we are attuned to their
reactive attitudes; we consider prospectively whether they would approve or
disapprove of some course of action; we consult with them both explicitly
and imaginatively; we revise our beliefs and values in light of their feedback.5
When we are functionally integrated with friends in this way, on the model
I am proposing here, they are partial bearers of whatever virtues (and vices)
we might have. Or rather, to the extent that a virtue or vice is possessed in this
context, it is possessed by the pair of friends together, and not by either of them
on her own. Friendship is an ideal example of the kind of functional integration
I have in mind here, though it may well be possible to integrate other social
and asocial properties and objects into a moral agents functioning.
This doesn't mean that we couldn't also continue to think of individuals as
(potential) bearers of virtues, but the answer to the question, "Is there virtue
here?", might differ depending on which bearer the questioner had in mind.
For example, it might not be the case that the individual agent has the virtue
in question, but that the complex object constituted by the agent, her social
milieu, and her asocial environment does have the virtue in question.
One consequence of this view is that virtue is multiply realizable, with
different levels of contribution made by each of the relata. To be honest, for
example, would be to have certain basic personality dispositions, but also
some combination of the following: to be considered honest by one's friends
and peers (and to know it), to consider oneself honest, to be watched or at
least watchable, and to be in whatever bodily states promote the characteristic
manifestations of honesty. Someone could become honest, on this view, in
standard ways, such as habituation and reflection on reasons. But someone
could also become honest in non-standard ways, such as noticing others'
signaling of expectations or an increase in local luminescence. This makes it
both easier and harder to be virtuous: deficiencies in personality can be made
up for through social and bodily support, but strength of personality can also
be undermined by lack of social and bodily support. To illustrate this, consider
the differences among Figures 4.1, 4.2, and 4.3.
One of the salutary upshots of this way of thinking about virtue is that it
helps to make sense of the diversity named by any given trait term. Different
people are more or less generous, and on several dimensions. By making explicit
reference to the social milieu and the asocial environment, this framework
suggests ways in which partial virtue could be differently instantiated. Two
people might both count, at a very coarse-grained level of description, as
mostly honest, but one could do so because of personal and social strengths
and despite asocial weaknesses, while the other does so because of social and
asocial strengths and despite some personal weaknesses. One way to capture
[Radar graph with three axes (Personal, Social, Environmental), each scaled from 1 to 3.]
Figure 4.1 Represents a case of perfect virtue: all three relata (personal, social, and
environmental) make maximal contributions. But virtue-concepts are threshold
concepts. Someone can be generous even if she sometimes doesn't live up to the ideal
of perfect generosity.
[Radar graph with three axes (Personal, Social, Environmental), each scaled from 1 to 3.]
Figure 4.2 Represents one way of doing that, with a modest contribution from the
environment and more substantial contributions from social and personal resources.
this idea is to specify, for each virtue, the minimum area of the relevant radar
graph that would need to be filled for the agent-in-milieu-and-environment to
be a candidate for possessing that virtue.
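One way to make the minimum-area proposal concrete: on a radar graph with n equally spaced axes and a score r_i on axis i, the shaded polygon has area ½ Σ r_i · r_(i+1) · sin(2π/n). A hypothetical sketch (the three axes and the 1-3 scale follow Figures 4.1-4.3; the threshold fraction is invented purely for illustration):

```python
import math

def radar_area(scores):
    """Area of the polygon traced on a radar graph whose axes are
    equally spaced and whose i-th vertex sits at radius scores[i]."""
    n = len(scores)
    wedge = math.sin(2 * math.pi / n) / 2  # triangle area factor per adjacent pair
    return sum(scores[i] * scores[(i + 1) % n] * wedge for i in range(n))

def meets_threshold(personal, social, environmental, threshold):
    """Is the agent-in-milieu-and-environment a candidate for the virtue?
    'threshold' is a hypothetical per-virtue minimum area."""
    return radar_area([personal, social, environmental]) >= threshold

# Perfect virtue (Figure 4.1): maximal contribution (3) on all three axes.
perfect = radar_area([3, 3, 3])
# A Figure 4.3-style profile: strong social, modest personal and environmental.
print(meets_threshold(1, 3, 1, threshold=0.25 * perfect))  # True
```

Different virtues could then be assigned different thresholds, capturing the idea that some virtues tolerate weaker contributions from one relatum when the others compensate.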
Furthermore, the framework allows for the plausible idea that there is a
kind of asymmetry among the relata that bear virtues. Someone's personality
can only be so weak before we are no longer inclined to call him (or even the
[Radar graph with three axes (Personal, Social, Environmental), each scaled from 1 to 3.]
Figure 4.3 Represents another way of being generous enough without being perfectly
generous, this time with a primary contribution from social factors and more modest
contributions from both personal and environmental factors.
Notes
* Author Note: Mark Alfano, Princeton University Center for Human Values &
Department of Philosophy, University of Oregon. The author thanks Philip
Pettit, Hagop Sarkissian, Jennifer Cole Wright, Kate Manne, Jonathan Webber,
and Lorraine Besser-Jones for helpful comments, criticisms, and suggestions.
Correspondence should be addressed to Mark Alfano, 321 Wallace Hall, Princeton
University, Princeton, NJ 08544. Email: mark.alfano@gmail.com.
1 The canonical firebrands are Doris (2002) and Harman (1999). Flanagan arrived
at the party both too early (1991) and too late (2009) to shape the course of the
debate.
2 Upton (2009) is the only book-length effort, but her work makes little use of the
literature on dispositions, relying instead on her own naive intuitions. Sreenivasan
(2008) also discusses the dispositional nature of virtues without reference to the
literature on dispositions.
3 The concepts of masking, mimicking, and finking were introduced by Johnston
(1992), Smith (1977), and Martin (1994), respectively.
4 This distinction is based only on my own hunches, but conversations with
philosophers and psychologists have left me confident in it. An empirical study
of its plausibility would be welcome.
5 See Millgram (1987, p. 368), who argues that, over the course of a friendship, one
becomes (causally) responsible for the friend's being who he is, and Cocking and
Kennett (1998, p. 504), who argue that a defining feature of friendship is that "as a
close friend of another, one is characteristically and distinctively receptive to being
directed and interpreted and so in these ways drawn by the other."
6 Sarkissian (2010) makes a similar point.
References
Adams, R. M. (2006). A Theory of Virtue: Excellence in Being for the Good. New York:
Oxford University Press.
Alfano, M. (2013). Character as Moral Fiction. Cambridge: Cambridge University Press.
Annas, J. (2011). Intelligent Virtue. New York: Oxford University Press.
Apsler, R. (1975). Effects of embarrassment on behavior toward others. Journal of
Personality and Social Psychology, 32, 145-53.
Ariely, D. (2008). Predictably Irrational. New York: Harper Collins.
Baron, R. A., and Thomley, J. (1994). A whiff of reality: Positive affect as a potential
mediator of the effects of pleasant fragrances on task performance and helping.
Environment and Behavior, 26, 766-84.
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance
cooperation in a real-world setting. Biology Letters, 2, 412-14.
Blass, T. (1999). The Milgram paradigm after 35 years: Some things we now know
about obedience to authority. Journal of Applied Social Psychology, 29(5), 955-78.
Boles, W., and Haywood, S. (1978). The effects of urban noise and sidewalk density
upon pedestrian cooperation and tempo. Journal of Social Psychology, 104, 29-35.
Burnham, T. (2003). Engineering altruism: A theoretical and experimental
investigation of anonymity and gift giving. Journal of Economic Behavior and
Organization, 50, 133-44.
Burnham, T., and Hare, B. (2007). Engineering human cooperation. Human Nature,
18(2), 88-108.
Carlsmith, J., and Gross, A. (1968). Some effects of guilt on compliance. Journal of
Personality and Social Psychology, 53, 117891.
Choi, S. (2008). Dispositional properties and counterfactual conditionals. Mind, 117,
795-841.
Choi, S., and Fara, M. (Spring 2012). Dispositions. The Stanford Encyclopedia of
Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2012/
entries/dispositions/
Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension.
New York: Oxford University Press.
Clark, A., and Chalmers, D. (1998). The extended mind. Analysis, 58, 7-19.
Cocking, D., and Kennett, J. (1998). Friendship and the self. Ethics, 108(3), 502-27.
Darley, J., and Latané, B. (1968). Bystander intervention in emergencies: Diffusion
of responsibility. Journal of Personality and Social Psychology, 8, 377-83.
Donnerstein, E., and Wilson, D. (1976). Effects of noise and perceived control on
ongoing and subsequent aggressive behavior. Journal of Personality and Social
Psychology, 34, 774-81.
Peer ratings
Our second study examined peer opinion about the moral behavior of
professional ethicists (Schwitzgebel and Rust 2009). We set up a table in a central
location at the 2007 Pacific Division meeting of the American Philosophical
Association (APA) and offered passersby gourmet chocolate in exchange
for taking a 5-minute philosophical-scientific questionnaire, which they
completed on the spot. One version of the questionnaire asked respondents
their opinion about the moral behavior of ethicists in general, compared to other
philosophers and compared to non-academics of similar social background
(with parallel questions about the moral behavior of specialists in metaphysics
The Moral Behavior of Ethicists and the Power of Reason 93
Voting rates
We assume that regular participation in public elections is a moral duty, or
at least that it is morally better than non-participation (though see Brennan
2011). In an opinion survey to be described below, we found that over 80
percent of sampled US professors share that view. Accordingly, we examined
publicly available voter participation records from five US states, looking for
name matches between voter rolls and online lists of professors in nearby
universities, excluding common and multiply-appearing names (Schwitzgebel
and Rust 2010). In this way, we estimated the voting participation rates of four
groups of professors: philosophical ethicists, philosophers not specializing in
ethics, political scientists, and professors in departments other than philosophy
and political science. We found that all four groups of professors voted at
approximately the same rates, except for the political science professors, who
voted about 10-15 percent more often than did the other groups. This result
survived examination for confounds due to gender, age, political party, and
affiliation with a research-oriented versus teaching-oriented university.
nine-point scale from "very morally bad" to "very morally good" with the
midpoint marked "morally neutral." On this normative question, there were
large differences among the groups: 60 percent of ethicist respondents rated
meat-eating somewhere on the bad side of the scale, compared to 45 percent
of non-ethicist philosophers and only 19 percent of professors from other
departments (χ² = 64.2, p < 0.001). Later in the survey, we posed two
behavioral questions. First, we asked "During about how many meals or
snacks per week do you eat the meat of mammals such as beef or pork?" Next,
we asked "Think back on your last evening meal, not including snacks. Did
you eat the meat of a mammal during that meal?" On the meals-per-week
question, we found a modest difference among the groups: Ethicists reported
a mean of 4.1 meals per week, compared to 4.6 for non-ethicist philosophers
and 5.3 for non-philosophers (ANOVA, F = 5.2, p = 0.006). We also found
27 percent of ethicists to report no meat consumption (zero meat meals per
week), compared to 20 percent of non-ethicist philosophers and 13 percent of
non-philosophers (χ² = 9.3, p = 0.01). However, statistical evidence suggested
that respondents were fudging their meals-per-week answers: Self-reported
meals per week was not mathematically consistent with what one would
expect given the numbers reporting having eaten meat at the previous evening
meal. (For example, 21% of respondents who reported eating meat at only one
meal per week reported having eaten meat at their previous evening meal.)
And when asked about their previous evening meal, the groups self-reports
differed only marginally, with ethicists in the intermediate group: 37 percent
of ethicists reported having eaten the meat of a mammal at their previous
evening meal, compared to 33 percent of non-ethicist philosophers and 45
percent of non-philosophers (χ² = 5.7, p = 0.06).
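The fudging inference rests on simple probability: a respondent who eats meat at only one meal per week has, even on the most generous assumption (that the single meat meal is always an evening meal), at most a 1-in-7 chance that last night's meal was it. A sketch of that bound (the uniformity assumption is ours, for illustration only):

```python
def max_expected_previous_meal_rate(meat_meals_per_week, evening_meals_per_week=7):
    """Upper bound on the fraction of respondents who should report meat at
    the previous evening meal, assuming all reported meat meals are evening
    meals and are spread uniformly across the week."""
    return min(meat_meals_per_week / evening_meals_per_week, 1.0)

bound = max_expected_previous_meal_rate(1)   # 1/7, about 0.14
observed = 0.21                              # rate reported in the survey
print(observed > bound)  # True: more reports than the self-reported frequency allows
```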
Conclusion
Across a wide variety of measures, it appears that ethicists, despite expressing
more stringent normative attitudes on some issues, behave not much
differently than do other professors. However, we did find some evidence that
philosophers litter less in environmental ethics sessions than in other APA
sessions, and we found some equivocal evidence that might suggest slightly
higher rates of charitable giving and slightly lower rates of meat-eating among
ethicists than among some other subsets of professors. On one measure, the
return of library books, it appears that ethicists might behave morally worse.
Narrow principles
Professional ethicists might have two different forms of expertise. One
might concern the most general principles and unusually clean hypothetical
cases, the kinds of principles and cases at stake when ethicists argue about
in fact permissible (thus motivating one to avoid it, e.g., to get off the couch)
and the result that what one might have thought was morally impermissible
is in fact permissible (thus licensing one not to do the morally ideal thing,
e.g., to stay on the couch). If reasoning does generate these two results about
equally often, people who tend to engage in lots of moral reflection of this
sort might be well calibrated to permissibility and impermissibility, and thus
behave more permissibly overall than do other people, despite not acting
morally better overall. The Power of Reason view might work reasonably well
for permissibility even if not for goodness and badness. Imagine someone who
tends to fall well short of the moral ideal but who hardly ever does anything
that would really qualify as morally wrong, contrasted with a sometimes-sinner
sometimes-saint.
This model, if correct, could be straightforwardly reconciled with our data
as long as the issues we have studied (except insofar as they reveal ethicists
behaving differently) allow for cross-cutting patterns of permissibility, for
example, if it is often but not always permissible not to vote. It would also be
empirically convenient for this view if it were more often permissible to steal
library books than non-ethicists are generally inclined to think and ethical
reflection tends to lead people to discover that fact.
emotions. And maybe those people, then, are disproportionately drawn into
philosophical ethics. More or less, they are trying to figure out intellectually
what the rest of us are gifted with effortlessly. These people have basically made
a career out of asking "What is this crazy ethics thing, anyway, that everyone
seems so passionate about?" and "Everyone else seems to have strong opinions
about donating to charity or not, and when to do so and how much, but they
don't seem able to defend those opinions very well and I don't find myself
with that same confidence; so let's try to figure it out." Clinical psychopathy
isn't what we're imagining here, nor do we mean to assume any particularly
high uniformity in ethicists' psychological profile. All this view requires is that
whatever positive force moral reflection delivers to the group as a whole is
approximately balanced out by a somewhat weaker set of pretheoretical moral
intuitions in the group as a whole.
If this were the case, one might find ethicists, even though not morally
better behaved overall, more morally well behaved than they would have
been without the crutch of intellectual reflection, and perhaps also morally
better behaved than non-ethicists are in cases where the ordinary intuitions
of the majority of people are in error. Conversely, one might find ethicists
morally worse behaved in cases where the ordinary intuitions of the majority
of people are a firmer guide than abstract principle. We hesitate to conjecture
about what issues might fit this profile, but if, for example, ordinary intuition
is a poorer guide than abstract principle about issues such as vegetarianism,
charity, and environmentalism and a better guide about the etiquette of day-
to-day social interactions with one's peers, then one would expect ethicists to
behave better than average on the issues of the former sort and worse on issues
of the latter sort.
Conclusion
We decline to choose among these five models. There might be truth in all of
them; and still other views are available too. Maybe ethicists find themselves
increasingly disillusioned about the value of morality at the same time they
improve their knowledge of what morality in fact requires. Or maybe ethicists
learn to shield their personal behavior from the influence of their professional
reflections, either to improve the objectivity of their reasoning or as a kind of
self-defense against the apparent unfairness of being held to higher standards
because of their choice of profession. In short, we believe the empirical evidence
is insufficient to justify even tentative conclusions. We recommend the issues
for further empirical study and for further armchair reflection.
Notes
* Authors' Note: Joshua Rust, Stetson University, and Eric Schwitzgebel, University
of California at Riverside. For helpful discussion of earlier drafts, thanks to
Gunnar Björnsson, Jon Haidt, Linus Huang, Hagop Sarkissian, and Jen Wright.
Correspondence should be sent to: Eric Schwitzgebel, Department of Philosophy,
University of California at Riverside, Riverside, CA 92521-0201, Email: eschwitz@
ucr.edu or Joshua Rust, Stetson University, Department of Philosophy 421 North
Woodland Boulevard, DeLand, Florida 32723, Phone: 386.822.7581, Email: jrust@
stetson.edu.
1 The APA sent the list of names of APA paid registrants to a third party (U.C.R.'s
Statistical Consulting Collaboratory) who were not informed of the nature of the
research or the significance of the list of names. To them, it was just a meaningless
list of names. Separately, we (the second author and a research assistant (RA)) generated
a list of names of people listed as participants on the APA program. Finally, a
second RA generated a mathematical formula unknown to us (but using certain
guidelines) that would convert names into long number strings. This second RA then
converted the list of program participants into those number strings according
to that formula and told the formula to the Collaboratory, who then separately
converted their name lists into number strings using that same formula. Finally,
the second author received both encrypted lists and wrote a program to check for
encrypted name matches between the lists. Names were matched just by last name
and first initial to reduce the rate of false negatives due to different nicknames (e.g.,
Thomas vs. Tom), and common or repeated last names were excluded to prevent
false positives, as were names with spaces, mid-capitals, diacritical marks, or in
which the person used only an initial as the first name. Although the APA Pacific
Division generously supplied the encrypted data, this research was neither solicited
by nor conducted on behalf of the APA or the Pacific Division.
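The matching procedure in this note is, in effect, a keyed one-way encoding applied independently to both lists, with matching done only on the encoded values. A hypothetical sketch using HMAC as a stand-in for the second RA's undisclosed formula (all names and the key are invented; only some of the study's exclusion rules, such as dropping repeated last names, are reproduced):

```python
import hmac
import hashlib
from collections import Counter

SECRET_KEY = b"known-only-to-the-second-RA"  # stands in for the undisclosed formula

def encode(name):
    """Reduce 'Last, First' to last name + first initial (so 'Thomas' and
    'Tom' match), then key it so neither side can read the other's raw list."""
    last, first = [part.strip() for part in name.split(",")]
    token = (last.lower() + "|" + first[0].lower()).encode()
    return hmac.new(SECRET_KEY, token, hashlib.sha256).hexdigest()

def match(registrants, participants):
    """Return encoded names appearing on both lists, excluding repeated
    last names to prevent false positives."""
    def usable(names):
        counts = Counter(n.split(",")[0].strip().lower() for n in names)
        return {encode(n) for n in names
                if counts[n.split(",")[0].strip().lower()] == 1}
    return usable(registrants) & usable(participants)

registrants = ["Doe, Jane", "Smith, Thomas"]   # invented names
participants = ["Smith, Tom", "Jones, Alice"]  # nickname differs, initial matches
print(len(match(registrants, participants)))   # 1
```

The design point is that the party holding the key never sees the raw lists, and the party holding the lists never learns the key, so neither can recover the other's names.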
2 Survey recipients were among the people whose voting and email responsiveness
we had examined in the studies reported above. The other observational measures
were collected in the course of the survey study.
References
Brennan, J. (2011). The Ethics of Voting. Princeton, NJ: Princeton University Press.
Confucius. (5th c. BCE/2003). Analects. (E. Slingerland, trans.). Indianapolis, IN:
Hackett Publishing Company.
Cooper, J. (2007). Cognitive Dissonance. London: Sage.
Ditto, P. H., and Liu, B. (2011). Deontological dissonance and the consequentialist
crutch. In M. Mikulincer and P. R. Shaver (eds), The Social Psychology of Morality.
Washington, DC: American Psychological Association.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row, Peterson.
Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (ed.),
Moral Psychology, vol. 3. Cambridge, MA: MIT Press.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach
to moral judgment. Psychological Review, 108, 814-34. doi:10.1037/0033-
295X.108.4.814
Haidt, J. (2012). The Righteous Mind. New York: Pantheon.
Hauser, M. D. (2006). Moral Minds. New York: Ecco/HarperCollins.
Moral Groundings
6
Disgust, an emotion that most likely evolved to keep us away from noxious
substances and disease, seems especially active in our moral lives. People report
feeling disgust in response to many immoral acts (e.g., Rozin et al. 1999), make
more severe moral judgments when feeling disgusted (e.g., Wheatley and
Haidt 2005), and are more likely to view certain acts as immoral if they have a
tendency to be easily disgusted (Horberg et al. 2009). Yet, despite the wealth of
evidence linking disgust and morality, the reason for the link remains unclear.
This may be because the bulk of empirical work on the topic has been aimed
at simply demonstrating that disgust and moral judgment are connected, a
claim that, given the influence of rationalist models of moral judgment such as
Kohlberg's (1969), is novel and surprising. Fewer researchers have attempted
to explain why disgust and moral judgment should be so connected (for recent
exceptions, see Kelly 2011 and Tybur et al. 2012). Here, we present an attempt
to do so.
Our primary claim is that disgust functions as part of a general
motivational system that evolved to keep individuals safe from disease. As
such, disgust motivates negative evaluations of acts that are associated with
a threat of contamination (e.g., norm violations pertaining to food and sex);
negative attitudes toward unfamiliar groups who might pose the threat of
contamination through physical contact (e.g., outgroups characterized by
these norm violations, or who are unfamiliar); and greater endorsement of
certain social and political attitudes that minimize contamination risk (such
as increased sexual conservatism, reduced contact between different social
claim regarding the role of disgust, and has been made by researchers who have
experimentally manipulated disgust independently of the act being evaluated,
for example by inducing disgust with a post-hypnotic suggestion (Wheatley and
Haidt 2005), with a foul odor, or with disgusting film clips (Schnall et al. 2008).
Finally, the strongest causal claim regarding the influence of disgust on moral
judgment is that of disgust as moralizer. On this view, morally neutral acts
can enter the moral sphere by dint of their being perceived as disgusting. For
instance, an act (such as smoking) can move from "unhealthy" to "immoral"
if reliably accompanied by the emotion of disgust. This claim has the least
empirical support of the three, although it is consistent with the finding that
"morally dumbfounded" participants defend their self-admittedly irrational
moral judgments with an appeal to the disgusting nature of an act (Haidt and
Hersh 2001).
Our argument here relies primarily on evidence for the disgust-as-
consequence and disgust-as-amplifier views, for which the evidence is
strongest (see Pizarro et al. 2011). In particular, the view we will defend here
is a combination of these two approaches that takes into account additional
research on the specificity of these effects: that disgust is more likely to arise
and amplify judgments within a particular domain (viz., when the threat of
pathogens is involved).
Why disgust?
It turns out that there is a good reason that disgust, rather than a more
general-purpose rejection response, would have become associated with
some moral violations: namely, that disgust evolved to motivate individuals
not only to avoid ingesting (or touching) poisons and contaminants, but
also to distance themselves from people who posed a risk of pathogen
transmission. Schaller and colleagues (Faulkner et al. 2004; Park et al. 2003;
Schaller and Duncan 2007) have argued that, over the course of human
evolution, people developed a "behavioral immune system" that functioned
as a first line of defense against exposure to pathogens or parasites. According
to this theory, individuals who show cues of infection or disease should
trigger the behavioral immune system, leading to disgust and, consequently,
rejection or avoidance of that individual. Because this system would have
evolved independently of any explicit knowledge about pathogens, its
disease-detection mechanism would need to be heuristic in nature:
most likely, something like any significant anomaly in an individual's
physical appearance. This means that the behavioral immune system can
be expected to respond to any individuals who deviate from normative
physical appearance, regardless of whether they actually pose a contagion
risk (Schaller and Park 2011). Likewise, individuals seen as engaging in
unusual (i.e., non-normative) practices regarding food, cleanliness, and
sex (activities that carry an especially high risk of pathogen transmission)
should also be likely to evoke disgust and rejection.
Finally, strangers (i.e., members of other groups or tribes) would have
been especially likely to harbor novel (and therefore particularly dangerous)
infectious agents. Encountering such individuals should thus also activate
the behavioral immune system, motivating hostility, rejection, and the
accompanying emotion of disgust. Indeed, individuals in hunter-gatherer
cultures are often intensely hostile to strangers. The anthropologist Margaret
Mead wrote that most primitive tribes feel that if you run across one of these
subhumans from a rival group in the forest, the most appropriate thing to
do is bludgeon him to death (as cited in Bloom 1997, p. 74). Likewise, the
geographer and anthropologist Jared Diamond wrote that for New Guinean
tribesmen, "to venture out of one's territory to meet [other] humans, even
if they lived only a few miles away, was equivalent to suicide" (Diamond
2006, p. 229).
116 Advances in Experimental Moral Psychology
Importantly, this argument does not assume that all or even most of the
individuals or groups evoking disgust and rejection actually pose a risk of
infection. But because the risks of failing to detect a contagious individual
(serious illness and possibly premature death) greatly outweighed the cost of
wrongly identifying a harmless individual as contagious (the foregone benefits
of a positive interaction), one would expect the behavioral immune system to
tend toward hypervigilance (Schaller and Duncan 2007; Schaller and Park 2011).
Cues that might be associated with the risk of contamination would have
become heuristics whose mere presence would trigger disgust and rejection,
but which could easily be overgeneralized.
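The error-management logic behind this hypervigilance can be made concrete with a small numerical sketch. All of the costs and probabilities below are hypothetical illustrations chosen only to exhibit the asymmetry, not estimates from the literature:

```python
# Toy signal-detection sketch of the error-management argument above.
# Every number here is a hypothetical illustration, not an empirical estimate.

COST_MISS = 100.0       # approaching a truly contagious individual (serious illness)
COST_FALSE_ALARM = 1.0  # shunning a harmless individual (a foregone interaction)
BASE_RATE = 0.05        # assumed proportion of encounters that carry infection risk

def expected_cost(p_avoid_if_sick, p_avoid_if_healthy):
    """Expected cost per encounter for a given avoidance policy."""
    misses = BASE_RATE * (1 - p_avoid_if_sick) * COST_MISS
    false_alarms = (1 - BASE_RATE) * p_avoid_if_healthy * COST_FALSE_ALARM
    return misses + false_alarms

# A cautious detector that avoids others only on clear cues of infection...
calibrated = expected_cost(p_avoid_if_sick=0.6, p_avoid_if_healthy=0.05)
# ...versus a hypervigilant one that also avoids anyone who merely looks anomalous.
hypervigilant = expected_cost(p_avoid_if_sick=0.95, p_avoid_if_healthy=0.30)

print(f"calibrated: {calibrated:.4f}, hypervigilant: {hypervigilant:.4f}")
```

Because a miss is assumed to be a hundred times costlier than a false alarm, the hypervigilant policy yields the lower expected cost (about 0.54 versus about 2.05 per encounter under these made-up numbers), which is the sense in which an overgeneralized disgust response could have been adaptive.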
that induced disgust led to more negative implicit and explicit evaluations of
gay men, respectively. Inbar et al. (2009) found that dispositional sensitivity to
disgust was associated with more negative implicit evaluations of gay people,
and Terrizzi et al. (2010) found a relationship between disgust sensitivity and
explicit evaluations of gay people.
Disgust is the emotion most closely linked to the behavioral immune system,
in that it motivates individuals to distance themselves from people or groups
seen (implicitly or explicitly) as contaminated or contagious (Oaten et al.
2009). Is it possible that disgust is implicated in moral judgment for similar
reasons, that is, because it arises as a reaction to perceived physical contagion
threats? The most common disgust-eliciting contagion threats involve sex,
food, and outgroups (Oaten et al. 2009). If disgust is involved in moral
judgment primarily for violations having to do with contagion threats, moral
disgust should largely be limited to these specific domains.
This prediction comes close to the view endorsed by Haidt and Graham
(2007) in their description of the moral domain of purity/sanctity. They write
that moral disgust is attached "at a minimum to those whose appearance
(deformity, obesity, or diseased state), or occupation (the lowest castes in caste-
based societies are usually involved in disposing of excrement or corpses)
makes people feel queasy" (p. 106). Certainly, on the basis of the behavioral
immune system literature one would expect avoidance of these groups.
However, Haidt and Graham expand their argument, proposing that the moral
domain of purity/sanctity includes a metaphorical conception of impurity as
well, such that disgust (and judgments of immorality) is also evoked by those
who seem "ruled by carnal passions such as lust, gluttony, greed, and anger"
(p. 106). But how much empirical evidence is there for this more extended,
metaphorical role for disgust in moral judgment? In the next section, we
examine the research bearing on this question.
Pollution and Purity in Moral and Political Judgment 119
Sex
Many of the studies showing disgust at moral violations have asked participants
to evaluate sexual practices, including homosexuality, incest, and unusual
forms of masturbation. Haidt and Hersh (2001), for example, asked liberal and
conservative undergraduates to evaluate examples of gay and lesbian sex, unusual
masturbation (e.g., a woman who masturbates while holding her favorite teddy
bear), and consensual sibling incest. Haidt et al. (1993) did not directly measure
disgust responses, but two of the three behaviors that they expected a priori to
elicit disgust involved sex (having sex with a dead chicken and then consuming
it, and consensual sibling incest). Perhaps the most commonly studied moral
violation of this class has been incest, an act known to reliably elicit disgust.
For instance, Rozin et al. (1994) asked participants about their responses to
incest in general, Royzman et al. (2008) asked participants to evaluate parent-
child incest, Gutierrez and Giner-Sorolla (2007) asked about sibling incest, and
Horberg et al. (2009) used the same sibling incest vignette originally used by
Haidt et al., along with the chicken sex vignette from the same source.
Repugnant foods
Consumption of repugnant foods has been another commonly studied type of
moral violation that appears to reliably elicit disgust. For instance, both Haidt
et al. (1993) and Russell and Giner-Sorolla (2011) used a scenario in which a
family ate their deceased dog. Similarly, Gutierrez and Giner-Sorolla (2007)
and Russell and Giner-Sorolla (2011) presented participants with a scenario in
which a scientist grew and consumed a steak made of human muscle cells.
Chapman and Anderson 2013). In one notable example, Chapman et al. (2009)
examined reactions to people who made unfair offers in an ultimatum game.
This economic game involves two parties: a proposer and a responder.
The proposer suggests a division of a sum (in the current study, $10) between
the two, and the responder can either accept this suggestion or reject it (in
which case neither party receives anything). In this study, the proposer was
(unbeknownst to the participants) a computer program that sometimes
made very unfair offers (i.e., $9 for the proposer and $1 for the responder).
Both participants' self-reports and their facial expressions showed that they
felt disgusted by very unfair offers, and the more disgusted they were, the
more likely they were to reject the offer. Similarly, when people read about
unfairness (e.g., someone cheating at cards), they showed increased activation
in a facial muscle (the levator) involved in the expression of disgust (Cannon
et al. 2011).
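The payoff structure that makes rejecting an unfair offer costly can be sketched in a few lines. The $10 pot and the $9/$1 split come from the study described above; the function and its name are our own illustration:

```python
# Minimal sketch of the ultimatum-game payoffs described above.
POT = 10  # dollars to be divided (as in the study described in the text)

def payoffs(offer_to_responder, responder_accepts):
    """Return (proposer, responder) payoffs for one round."""
    if responder_accepts:
        return POT - offer_to_responder, offer_to_responder
    return 0, 0  # rejection leaves both parties with nothing

print(payoffs(5, True))   # an even split, accepted -> (5, 5)
print(payoffs(1, False))  # the unfair $9/$1 offer, rejected -> (0, 0)
```

Rejecting the $1 offer costs the responder a real dollar, which is why rejection is usually read as emotionally motivated punishment of the unfair proposer rather than a payoff-maximizing choice.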
Other studies sometimes cited as showing that disgust can occur as a response
to general moral violations are harder to interpret. Some neuroimaging studies
have demonstrated overlapping regions of neural activation (as measured by
fMRI) for physically disgusting acts and acts of moral indignation (Moll
et al. 2005). However, the stimuli used in the study to evoke moral indignation
often contained basic, physical elicitors of disgust (e.g., "You took your mother
out to dinner. At the restaurant, she saw a dead cockroach floating on the
soup pan."). The overlapping brain regions found when participants read the
indignation statements and the pure disgust statements (e.g., "One night
you were walking on a street. You saw a cat eating its own excrement") could
therefore be due to the fact that both statement types contain powerful elicitors
of basic disgust.
One study has found that people report feeling disgust in response to
pictures that depict violations such as ethnic cleansing or child abuse (but
do not show physical disgust elicitors; Simpson, Carter et al. 2006). However,
self-reported disgust in this study was highly correlated with self-reported
anger, leaving open the possibility that participants were using the term in
a metaphorical rather than literal sense (see Nabi 2002). Similarly, young
children agree that moral violations such as being very mean to someone
can be described as "disgusting," and that a disgust face can "go with" these
violations (Danovitch and Bloom 2009). However, in these studies other
negative emotion words and faces were not possible responses, leaving open
the possibility that children simply endorsed the one negatively valenced
emotion available to them.
Summary
Most disgusting moral violations involve unusual sex or foodstuffs (or, in
the case of the chicken sex vignette, both). This is what one would expect if
disgust-evoking moral violations activated the behavioral immune system and
negative evaluations of these acts were driven by avoidance in the same way
that behavioral immune system-relevant intergroup and political attitudes are.
The pattern of data is also compatible with the first part of the view advanced
by Haidt and Graham (2007): that disgust functions as the guardian of physical
purity. However, empirical support for the second half of their view (that
violations of spiritual purity also evoke disgust) is lacking.
Furthermore, some findings are explained poorly by both accounts: namely,
that unfair or selfish behavior also evokes disgust, at least under some
circumstances. Such behavior is neither straightforwardly related to pathogen
threats, nor to physical or spiritual purity. Of course, these findings are from
only two studies, and further research is necessary to determine the robustness
and generality of the relationship between witnessing unfairness or selfishness
and disgust. One (admittedly speculative) possibility is that cheaters and non-
reciprocators are seen as an outgroup that evokes a distancing motivation in
the same way that groups seen as unfamiliar or unclean do.
masturbating with a kitten (Schnall et al. 2008; Wheatley and Haidt 2005).
Schnall et al. examined whether the type of moral infraction (purity-violating,
e.g., dog-eating or sex between first cousins, vs. non-purity-violating, e.g.,
falsifying one's resume) moderated the effects of induced disgust on moral
judgment and found that it did not. However, Horberg et al. (2009) found
that inducing disgust (as opposed to sadness) had a stronger amplification
effect on judgments of purity violations (such as sexual promiscuity) than
harm/care violations (such as kicking a dog). Thus, there is conflicting
evidence on whether inducing disgust selectively affects certain kinds of moral
judgments.
However, studies that demonstrate the effects of experimental inductions of
disgust on moral evaluation do not serve as evidence that disgust is naturally
elicited by moral violations. An analogy to research on the effects of emotion on
judgment is useful here. Research has shown that extraneously manipulating
emotions such as fear, sadness, anger, or even disgust can affect a wide range
of judgments and decisions (Loewenstein and Lerner 2003). But that does
not mean that these emotions naturally arise when making these judgments.
No one would conclude that because disgust makes one more willing to
sell an item that one has been given (Lerner et al. 2004), disgust therefore
also arises naturally when one is deciding whether to sell or keep an item.
Similarly, showing that disgust affects judgments of certain moral violations
is not informative about whether disgust is a naturally occurring response to
witnessing such violations.
If moral and political judgments are motivated at least partly by the threat
of contamination, drawing attention to this threat by asking participants to
wash their hands (or perhaps even by simply exposing them to washing-
related stimuli) should have similar effects on judgment as other pathogen
primes. There is some evidence for this: Helzer and Pizarro (2011) found that
participants who were standing next to a hand-sanitizer dispenser described
themselves as more politically conservative, and that those who had just used an
antiseptic hand wipe were more negative in their moral judgments of unusual
sexual behaviors (e.g., consensual incest between half-siblings), but not in their
judgments of putatively immoral acts that did not involve sexuality. Similarly,
Zhong et al. (2010) demonstrated that hand-washing made participants more
conservative (i.e., more negative) on a number of social issues related mainly
to sexual morality (e.g., casual sex, pornography, and adultery).
However, researchers who have adopted a more metaphorical notion
of purity have made exactly the opposite prediction regarding the effects of
cleanliness on moral judgment, arguing that if feeling clean is psychologically
the opposite of feeling disgusted, making cleanliness salient should reduce
feelings of disgust and therefore make moral judgments less harsh. There is
also some evidence for this view: Priming participants with purity-related
words (e.g., "pure," "immaculate," and "pristine") made them marginally less
harsh when judging moral violations (Schnall et al. 2008, Study 1), and asking
participants to wash their hands after watching a disgusting film clip attenuated
the effects of the clip on moral judgments (Schnall et al., Study 2).
How to reconcile these conflicting results? First, it is likely that in Schnall
et al.'s (2008) Study 2, in which all participants watched a film clip showing a
man crawling into a filthy toilet, physical contamination threats were salient
for all participants. When contamination is salient, hand-washing may have
a palliative effect, whereas when contamination is not already salient, hand-
washing may instead prime pathogen concerns. However, this still leaves the
results of Schnall et al.'s Study 1 unexplained. It is possible that purity-related
words do not prime physical pathogen threats. Such simple cognitive primes
may simply not be enough to engage a motivational system built to avoid
pathogens, but may be effective in reminding individuals of other cleanliness-
related concepts. It is also possible that this single, marginally significant result
from a low-powered (total N = 40) study is anomalous. This is a question that
can only be settled by future research.
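The low-power worry can be made concrete with a quick simulation. Assuming (hypothetically) two cells of 20 participants each and a medium-sized true effect (Cohen's d = 0.5), a total-N-of-40 design detects the effect only about a third of the time:

```python
# Monte Carlo estimate of statistical power for a two-cell design with
# 20 participants per cell (total N = 40), assuming a hypothetical
# medium-sized true effect (Cohen's d = 0.5).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_cell, d, n_sims = 20, 0.5, 5000

hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_cell)
    treatment = rng.normal(d, 1.0, n_per_cell)       # true effect of d standard deviations
    if ttest_ind(treatment, control).pvalue < 0.05:  # two-tailed t-test
        hits += 1

power = hits / n_sims
print(f"estimated power: {power:.2f}")  # comes out near 1 in 3: most true effects are missed
```

With power around one-third, a single marginally significant result is fragile either way, consistent with the authors' suggestion that only further research can settle the question.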
Putting this (possibly anomalous) result aside, the account we propose here
offers a parsimonious explanation of why disgust and its opposite, cleanliness,
would show parallel effects on people's moral judgments and sociopolitical
attitudes. Because both disgust and hand-washing make the threat of physical
contamination salient, their effects on certain kinds of moral and sociopolitical
judgments should be similar. In contrast, a more metaphorical view of the role
One potential objection to the account we defend here is that there are many
behaviors that are judged by most as disgusting yet morally permissible, such
as picking one's nose in private (see also Royzman et al. 2009). However, our
argument does not require that all disgusting acts be seen as immoral (or, for
that matter, that all immoral acts be seen as disgusting). Rather, we argue that
reactions to certain moral violations (primarily those involving sex or food),
certain sociomoral attitudes (primarily toward individuals seen as physically
abnormal, norm-violating, or foreign), and certain political attitudes (primarily
those related to sexual conservatism, reduced contact between different social
groups, and hostility toward outsiders) rely on a shared motivational system;
that this system evolved due to the adaptive benefits of responding to disease
or contamination threats with rejection and avoidance; and that its primary
motivating emotion is disgust.
This account allows, but does not require, that disgust might extend to other
kinds of moral violations as well (as we have described above, evidence for
such extension is scarce). One way that such an extension could happen is that
disgust may become attached to some behaviors for which there already exist
non-moral proscriptive norms (e.g., smoking or eating meat; Nichols 2004). In
these cases, the pairing of disgust with (or the tendency to be disgusted by) the
behavior might cause it to be pushed into the moral domain, especially if the
behavior can be construed as harmful (see Rozin 1999). Such a moralization
process might be observed with longitudinal data comparing moral attitudes
toward disgusting and non-disgusting behaviors that either have an existing
(but non-moral) proscriptive norm or do not. If our account is correct, one
would expect moralization over time to occur only for the disgusting behaviors
for which there are already conventional norms in place.
Conclusion
Reviewing the evidence linking moral violations and disgust shows that with
a few exceptions, the moral violations that elicit disgust involve food, sex, or
both. This is consistent with the view that seeing such acts as immoral and
feeling disgust in response to them result from activation of the behavioral
immune system, an evolved motivational system that responds to physical
contamination threats. We believe that this account parsimoniously explains
disgust's connection with moral judgments, sociomoral attitudes, and political
beliefs. It also suggests that the link between disgust and morality may be
different from what has been assumed by many researchers.
Although there is an empirical connection between disgust and seeing a
variety of acts as immoral, this may be due to the specific content of the acts
in question rather than to a more general relationship between disgust and
judgments of immorality. A great deal of research points to a reliable connection
between disgust and acts, individuals, or groups that are threatening because of
the potential for physical contamination, whereas there is as yet little evidence
that disgust is a reaction to immoral behaviors per se.
Note
* Authors' Note: Yoel Inbar, Tilburg University, and David Pizarro, Cornell University.
Corresponding Author: Yoel Inbar, Department of Social Psychology, Tilburg
University, Email: yinbar@uvt.nl.
References
Chapman, H. A., and Anderson, A. K. (2013). Things rank and gross in nature: A review and synthesis of moral disgust. Psychological Bulletin, 139, 300–27.
Chapman, H. A., Kim, D. A., Susskind, J. M., and Anderson, A. K. (2009). In bad taste: Evidence for the oral origins of moral disgust. Science, 323, 1222–6.
Danovitch, J., and Bloom, P. (2009). Children's extension of disgust to physical and moral events. Emotion, 9, 107–12.
Dasgupta, N., DeSteno, D. A., Williams, L., and Hunsinger, M. (2009). Fanning the flames of prejudice: The influence of specific incidental emotions on implicit prejudice. Emotion, 9, 585–91.
Diamond, J. M. (2006). The Third Chimpanzee: The Evolution and Future of the Human Animal. New York: Harper Perennial.
Faulkner, J., Schaller, M., Park, J. H., and Duncan, L. A. (2004). Evolved disease-avoidance mechanisms and contemporary xenophobic attitudes. Group Processes & Intergroup Relations, 7, 333–53.
Fincher, C. L., and Thornhill, R. (2012). Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity. Behavioral and Brain Sciences, 35, 61–79.
Graham, J., Haidt, J., and Nosek, B. (2009). Liberals and conservatives use different sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029–46.
Greenwald, A. G., McGhee, D. E., and Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–80.
Gutierrez, R., and Giner-Sorolla, R. S. (2007). Anger, disgust, and presumption of harm as reactions to taboo-breaking behaviors. Emotion, 7, 853–68.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–34.
Haidt, J., and Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20, 98–116.
Haidt, J., and Hersh, M. (2001). Sexual morality: The cultures and emotions of conservatives and liberals. Journal of Applied Social Psychology, 31, 191–221.
Helzer, E. G., and Pizarro, D. A. (2011). Dirty liberals! Reminders of physical cleanliness influence moral and political attitudes. Psychological Science, 22, 517–22.
Horberg, E. J., Oveis, C., Keltner, D., and Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97, 963–76.
Inbar, Y., Pizarro, D., Knobe, J., and Bloom, P. (2009). Disgust sensitivity predicts intuitive disapproval of gays. Emotion, 9, 435–9.
Inbar, Y., Pizarro, D. A., and Bloom, P. (2009). Conservatives are more easily disgusted. Cognition & Emotion, 23, 714–25.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA: The MIT Press.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (ed.), Handbook of Socialization Theory and Research. Chicago, IL: Rand McNally, pp. 347–480.
Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., and Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46, 184–94.
Lerner, J. S., Small, D. A., and Loewenstein, G. (2004). Heart strings and purse strings: Carryover effects of emotions on economic decisions. Psychological Science, 15, 337–41.
Loewenstein, G., and Lerner, J. S. (2003). The role of affect in decision making. In R. J. Davidson, K. R. Scherer, and H. H. Goldsmith (eds), Handbook of Affective Science. New York: Oxford University Press, pp. 619–42.
Moll, J., de Oliveira-Souza, R., Moll, F. T., Ignácio, F. A., Bramati, I. E., Caparelli-Dáquer, E. M., and Eslinger, P. J. (2005). The moral affiliations of disgust: A functional MRI study. Cognitive and Behavioral Neurology, 18, 68–78.
Nabi, R. L. (2002). The theoretical versus the lay meaning of disgust: Implications for emotion research. Cognition and Emotion, 16, 695–703.
Navarrete, C. D., Fessler, D. M. T., and Eng, S. J. (2007). Elevated ethnocentrism in the first trimester of pregnancy. Evolution and Human Behavior, 28, 60–5.
Nichols, S. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. New York: Oxford University Press.
Oaten, M., Stevenson, R. J., and Case, T. I. (2009). Disgust as a disease-avoidance mechanism. Psychological Bulletin, 135, 303–21.
Park, J. H., Faulkner, J., and Schaller, M. (2003). Evolved disease-avoidance processes and contemporary anti-social behavior: Prejudicial attitudes and avoidance of people with physical disabilities. Journal of Nonverbal Behavior, 27, 65–87.
Park, J. H., Schaller, M., and Crandall, C. S. (2007). Pathogen-avoidance mechanisms and the stigmatization of obese people. Evolution and Human Behavior, 28, 410–14.
Pizarro, D. A., Inbar, Y., and Helion, C. (2011). On disgust and moral judgment. Emotion Review, 3, 267–8.
Pratto, F., Sidanius, J., Stallworth, L. M., and Malle, B. F. (1994). Social dominance orientation: A personality variable predicting social and political attitudes. Journal of Personality and Social Psychology, 67, 741–63.
Royzman, E. B., Leeman, R. F., and Baron, J. (2009). Unsentimental ethics: Towards a content-specific account of the moral–conventional distinction. Cognition, 112, 159–74.
Royzman, E. B., Leeman, R. F., and Sabini, J. (2008). You make me sick: Moral dyspepsia as a reaction to third-party sibling incest. Motivation and Emotion, 32, 100–8.
Rozin, P. (1999). The process of moralization. Psychological Science, 10, 218–21.
Rozin, P., Haidt, J., and McCauley, C. R. (2008). Disgust. In M. Lewis, J. M. Haviland-Jones, and L. F. Barrett (eds), Handbook of Emotions (3rd ed.). New York: Guilford, pp. 757–76.
Rozin, P., Lowery, L., Imada, S., and Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76, 574–86.
Russell, P. S., and Giner-Sorolla, R. (2011). Moral anger, but not moral disgust, responds to intentionality. Emotion, 11, 233–40.
Schaller, M., and Duncan, L. A. (2007). The behavioral immune system: Its evolution and social psychological implications. In J. P. Forgas, M. G. Haselton, and W. von Hippel (eds), Evolution and the Social Mind: Evolutionary Psychology and Social Cognition. New York: Psychology Press, pp. 293–307.
Schaller, M., and Murray, D. R. (2008). Pathogens, personality, and culture: Disease prevalence predicts worldwide variability in sociosexuality, extraversion, and openness to experience. Journal of Personality and Social Psychology, 95, 212–21.
Schaller, M., and Park, J. H. (2011). The behavioral immune system (and why it matters). Current Directions in Psychological Science, 20, 99–103.
Schnall, S., Benton, J., and Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19, 1219–22.
Schnall, S., Haidt, J., Clore, G. L., and Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34, 1096–109.
Simpson, J., Carter, S., Anthony, S. H., and Overton, P. G. (2006). Is disgust a homogeneous emotion? Motivation and Emotion, 30, 31–41.
Singelis, T. M., Triandis, H. C., Bhawuk, D. P. S., and Gelfand, M. J. (1995). Horizontal and vertical dimensions of individualism and collectivism: A theoretical and measurement refinement. Cross-Cultural Research, 29, 240–75.
Staats, A. W., and Staats, C. K. (1958). Attitudes established by classical conditioning. The Journal of Abnormal and Social Psychology, 57, 37–40.
Terrizzi, J. A., Shook, N. J., and McDaniel, M. A. (2013). The behavioral immune system and social conservatism: A meta-analysis. Evolution and Human Behavior, 34, 99–108.
Terrizzi, J. A., Shook, N. J., and Ventis, W. L. (2010). Disgust: A predictor of social conservatism and prejudicial attitudes toward homosexuals. Personality and Individual Differences, 49, 587–92.
Tully, T., and Quinn, W. G. (1985). Classical conditioning and retention in normal and mutant Drosophila melanogaster. Journal of Comparative Physiology A, 157(2), 263–77.
Tybur, J. M., Lieberman, D., Kurzban, R., and DeScioli, P. (2013). Disgust: Evolved function and structure. Psychological Review, 120, 65–84.
Wheatley, T., and Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16, 780–4.
Zhong, C. B., Strejcek, B., and Sivanathan, N. (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46, 859–62.
7
each bears to its commonsensical starting place, with an eye toward where that
analogy might break down. For instance, Noam Chomsky suggests that when
we are doing science, theorizing can and should transcend the folk intuitions
it begins with, and that departure or movement away from the common-
sense concepts in which early investigation is typically couched is relatively
unrestricted. For instance, while discussing scientific inquiries into the mind,
and the relationship between the categories of folk psychology and those that
will be taken up by cognitive science as it proceeds, he remarks:
It's common to analogize folk psychology with folk physics. But, of course,
professional physicists can happily leave folk physics far behind as they
tinker with their Calabi-Yau Manifolds and Gromov-Witten invariants.
Put this way, a core question that emerges is whether morality and moral
theory are special or distinctive in their relation to empirical psychology and
other natural sciences: roughly, whether something about human moral nature
makes it more or less debunkable than other aspects of human nature, or
whether something about moral judgment makes it more or less resistant to
transformation than other types of judgment.
These are fascinating and timely topics; they are also difficult ones. Rather
than set out an overarching view or take a stand on the debunking of morality
tout court, in what follows I'll explore a divide-and-conquer strategy. First, I will
briefly sketch a debunking argument that, instead of targeting all of morality
or human moral nature, has a narrower focus: namely, the intuitive moral
authority of disgust. The argument concludes that as vivid and compelling as
they can be while one is in their grip, feelings of disgust should be granted no
power to justify moral judgments. Importantly, the argument is grounded in
empirical advances concerning the character of the emotion itself. Next, I will
step back and consider the argument's general form. I then point to arguments
that others have made that seem to share this form and selective focus, and
comment on what such arguments do and do not presuppose. Finally, I locate
the selective strategy with respect to approaches to debunking morality and
end by reflecting on what the entire line of thought implies about Greene's
question and Appiah's claim.
You find it simply, but unequivocally, revolting and repulsive, or you just find
yourself slightly disgusted by it. What follows from that "yuck" reaction, from
the point of view of morality? Do feelings of disgust, in and of themselves,
provide good enough reason to think the practice is morally wrong or
problematic?
Recently, such issues have come to the fore in normative and applied ethics,
centering on the question of what role the emotion of disgust should play
in morality, broadly construed: whether or not disgust should influence our
considered moral judgments; if so, how feelings of disgust should be accounted
for in various ethical evaluations, deliberations, and decisions; what sort of
weight, import, or credit should be assigned to such feelings; and how our legal
system and other institutions should best deal with the emotion (see Kelly and
Morar manuscript for full references).
Elsewhere (Kelly 2011) I have fleshed out a debunking argument designed
to undermine confidence in the normative force that feelings of disgust
can seem to have in moral cognition. The resulting position, which I call
"disgust skepticism," holds that feelings of disgust have no moral authority;
that explicit appeals to disgust, while often rhetorically effective, are morally
empty; that the emotion should not be granted any justificatory value;
and that we should aspire to eliminate its influence on morality, moral
deliberation, and institutional operation to the extent that we can. Rather
than recapitulate the argument in full, I will here mention some of its most
relevant properties.
First, while the argument has a normative thrust concerning the role that
feelings of disgust should play in moral justification, it is firmly rooted in a
descriptive and explanatory account of the nature of the emotion itself. It is
worth noting that my argument shares this structural feature with arguments
that others have made concerning the moral significance of disgust. All
interested parties, both skeptics (Nussbaum 2004a, 2004b) and advocates
(Kass 1997, 2002; Kahan 1998, 1999), base their normative conclusions on
descriptive claims concerning the character of the emotion. On this score, a
key advantage I claim over those competing arguments is that my account of
disgust is superior to its competitors: it is more detailed, more evolutionarily
plausible, and better able to explain the wealth of empirical data recently
discovered by moral psychologists.
134 Advances in Experimental Moral Psychology
The two core claims of what I call the E&C view are the Entanglement thesis
and the Co-opt thesis. The first holds that at the heart of the psychological
disgust system are two distinguishable but functionally integrated mechanisms,
one that initially evolved to protect the gastrointestinal system from poisons
and other harmful food, and another that initially evolved to protect the entire
organism from infectious diseases and other forms of parasites. Appeal to the
operation of these two mechanisms and their associated adaptive problems
can explain much of the fine-grained structure of the disgust response, its
intrinsic sensitivity to perceivable cues associated with poisons and parasites,
its propensity to err in the direction of false positives (rather than false
negatives), and its malleability and responsiveness to social influence, which
can result in variation in what triggers disgust from one group of people to the
next. The second core claim, the Co-opt thesis, holds that this malleability and
responsiveness to social influence was exploited by evolution, as disgust was
recruited to perform auxiliary functions having nothing to do with poisons or
parasites, infusing certain social norms and group boundaries with a disgust-
based emotional valence. In doing so, disgust did not lose its primary functions
or those properties clearly selected to allow it to perform those functions well.
Rather, it retained those functions and properties, and simply brought them to
bear on the auxiliary functions associated with norms and group membership
(Kelly 2011, 2013).
In addition to these features, the argument in favor of disgust skepticism
appeals to other facts about the emotion and key elements of the picture
provided by the E&C view. One is that disgust has an intrinsic negative valence,
which can manifest subjectively as a kind of nonverbal authority. Intense
episodes of disgust obviously have a powerful and vivid phenomenology, but
even less flagrant instances can bias judgments that they influence toward
negativity and harshness. However, the mere activation of disgust, in and
of itself, is not even a vaguely reliable indicator of moral wrongness. The
emotion remains overly sensitive to cues related to its primary functions
of protecting against poisons and parasites, which results in many false
positives even in those domains. There is no reason to think the situation
improves when disgust operates in the sociomoral domain. Indeed, there
is reason to think that disgust renders those in its grip less sensitive to the
agency and intentions of others, and can make it easier to dehumanize them.
Selective Debunking Arguments 135
Now that my argument against the normative value of the yuck factor has been
sketched, recall the framing questions posed at the beginning of the chapter
about the relationship between morality, on the one hand, and a cognitive
scientific understanding of the mind that may depart from intuition and
folk psychology as it increases in sophistication, on the other. The issue is
not always approached this way. Many conversations have explored related
but different questions, and they have typically done so at a higher level of
generality: morality and all moral judgments (or claims) are grouped together,
and arguments are made that they are either all vulnerable to some sweeping
form of debunking, or none of them are (Mackie 1977; Blackburn 1988; Joyce
2007; cf. Ayer 1936; also see Street 2006; Greene 2013).4 While I have doubts
about the viability of this kind of global debunking, I have just advanced what
can be thought of as a selective debunking argument against the relevance of
one circumscribed set of considerations, namely feelings of disgust, to moral
justification.5 Here I will spell out the line of reasoning, first expressing it
in condensed form before going on to elaborate and comment on different
aspects of the premises and conclusion.
Indeed, Harman uses this as an example to soften up his reader for the main
claim he makes in the paper, which is, roughly, that folk psychology consistently
makes a fundamental attribution error about the determinants of people's
behavior, and that virtue ethical theories that seem to enshrine that error in the
character trait-based moral psychology they advance are flawed on empirical
grounds.9 Cast in my terminology, Harman is offering a debunking argument
that selectively targets a specific component of folk psychology (rather than
the kind of global attack on the whole conceptual framework associated with,
e.g., Churchland 1981). Harman even offers an account of the psychological
mechanisms that drive the fundamental attribution error, and uses it to advance
his argument against those select intuitions that lead us10 to overestimate the
extent to which people's behavior is driven by internal character traits, and
overlook the strong (and empirically documented) influence of external cues
and situational factors.
Similarly, current empirical work has shown how the psychological
mechanisms underlying racial cognition can lead people to naturally, intuitively
ascribe some deep and evaluatively laden racial essence to individuals based
on their observable phenotypic characteristics like skin color or hair type.
Such discoveries about the operational principles and evolutionary history
of those psychological mechanisms look to be important to contemporary
discussions about the nature of race itself, but also the pragmatics of racial
classification. A society might decide, in light of its considered goals about
how to deal with racial categories and biases, and also in light of the mounting
facts (genetic, biological, social, historical, etc.) about race and the source of
racial differences, that its members should aspire to overcome the influence
of the psychological mechanisms underlying racial cognition, and disregard
the intuitions that issue from them. Indeed, empirical work on the character
of those psychological mechanisms will likely point the way to the most
effective methods of controlling their influence (see Kelly et al. 2010a, 2010b
for discussion).11
The upshot of these examples is that arguments that have a form similar to
the one I have made about disgust are not uncommon. However, there does
not appear to be a single, monolithic or univocal notion of 'problematic' that
they all have in common, suggesting that there is a variety of ways in which
psychological mechanisms and the intuitions that issue from them can be
found to be problematic. As nice as it would be to have a single, all-purpose, or
universally applicable criterion to apply to every psychological mechanism, no
such clean, algorithmic test is yet in the offing, and may never be. This does not
render the general argumentative strategy specious, though. Rather, it pushes
us to look at and assess each instance of the argument type on a case-by-case
basis, and attend to the details of the individual psychological mechanisms to
which it appeals.12
Conclusion
One might find reason for optimism in the themes of malleability and variation
that run throughout some of the above examples, including my main example
of disgust. Perhaps psychological mechanisms that are problematic in some
people are unproblematic in others, suggesting that such mechanisms are plastic
enough to be fixable. This is an interesting possibility, to be sure. However,
it leaves untouched the question of what being fixed amounts to, and which
mechanisms are properly tuned and which are not. One way to understand
my point about justification is to say that in cases of disagreement about this
kind of issue, members on one side of the debate cannot appeal to their own
calibrated psychological mechanisms or the intuitions that issue from them to
justify their position without begging the very question being raised. Even once
(or if) the issue of what proper tuning amounts to is settled, the argument still
goes through for those improperly tuned mechanisms, and I maintain that we
should continue to be on guard against their influence on judgment and action.
Finally, it is also likely that different psychological mechanisms will be
malleable to different extents, and in different ways. This provides more
support for the divide-and-conquer strategy I advocate. Together, I think
these considerations raise problems for familiar globally oriented approaches
that seek to draw more encompassing conclusions in one fell swoop. I began
this chapter with a passage from K. Anthony Appiah suggesting that human
moral nature, morality, and moral psychology will be resistant to transformative
influences originating in advances in the sciences of the mind, and with some
questions raised by Joshua Greene about how far the empirical debunking of
human moral nature can go. I will end not by addressing these head on, but by
pointing out that in asking questions and making claims about morality as a
single phenomenon and moral psychology as a uniform whole, they rely on an
assumption that I think we have good reason to doubt. Rather, the argument
of this chapter shows that advances in cognitive science can indeed have a
transformative effect on how we think about selective aspects of morality, and
how we should make sense of ourselves and some of our own moral impulses.
Perhaps more importantly, the empirical work is also revealing how a more
piecemeal approach is required if we are to draw any defensible normative
conclusions from it. The need for a more selective focus opens up new ways to
think about whether and which components of morality might be debunked
by, transformed by, or even just informed and guided by our growing empirical
understanding of our own moral psychology.
Notes
2 In arguing that the role of disgust in the moral domain should be minimized,
I realize that I am recommending that we should refrain from using what could
be a useful heuristic and powerful motivational tool. However, given the risks
attached to this particular emotion, namely its hair-trigger sensitivity to cues
that are prima facie irrelevant to morality and its susceptibility to false positives,
together with its propensity to dehumanize its object, I think the costs outweigh
the benefits.
3 One might imagine an individual with a perfectly tuned sense of disgust,
whose psychological makeup is such that she feels revulsion at all and only
those norms, actions, and practices that are genuinely morally wrong. My
position is not undermined by this possibility. Even though, ex hypothesi,
all of her judgments about those norms, actions, and practices are justified,
it remains open for me to claim that it is not the attendant feelings of disgust
she feels that justify her judgments. Rather, the ultimate arbiter of justification
is something else, above and beyond the mere presence of feelings of disgust,
namely whatever standard is being appealed to in claiming that her sense of
disgust is perfectly tuned.
4 I am particularly skeptical of the prospects of empirically motivated debunking
of the entirety of morality or all moral judgments because (among other
reasons) it remains unclear how to delimit the scope of such arguments.
Separating the domain of morality and moral cognition off from the rest
of non-moral or extra-moral cognition (identifying what moral judgments
have in common that makes them moral judgments) has proven surprisingly
difficult. Certainly, no consensus has emerged among practitioners in
the growing field of empirical moral psychology. See Nado et al. 2009;
Machery and Mallon 2010; Parkinson et al. 2011; Sinnott-Armstrong and
Wheatley 2012.
5 The terminology 'selective debunking' is taken from a series of thought-provoking
posts on the topic by Tamler Sommers at The Splintered Mind blog (http://
schwitzsplinters.blogspot.com/2009/05/on-debunking-part-deux-selective.html).
6 I mean to cast my net widely with the first premise, but recognize that the
details and preferred jargon used to discuss the distinguishable psychological
mechanisms vary in different literatures. For instance, see Fodor (1983, 2000),
Pinker (1997), and Carruthers (2006) for discussion in terms of different
psychological modules; Evans (2003), Stanovich (2005), and Frankish (2010)
for discussion in terms of dual process theory; and Ekman (1992) and Griffiths
(1997) for discussion of affect programs and basic emotions.
7 See Rawls (1971) on the method of reflective equilibrium and also David Lewis's
methodological contention that 'to the victor go the spoils' (Lewis 1973). For
some interesting recent discussion of the latter, see Eddon 2011 and Ichikawa 2011.
8 This suggestion is very much in the spirit of some comments in Tim Maudlin's
book The Metaphysics Within Physics: if we care about intuitions at all, we ought
to care about the underlying mechanism that generates them (Maudlin 2010,
pp. 146-7). In the main text, I am working with a picture similar to that implied
by Maudlin's comment, namely that one of the things the psychological mechanisms
that comprise the disgust system do is generate an intuition, namely the
intuition that whatever triggered the system (or whatever the person thinks
triggered the system, in cases of misattribution) is disgusting.
9 Also see Doris (2002) for a book-length defense of what has become known as
the situationist critique of virtue ethics, and Alfano (2013) for a discussion of
the current state of the debate.
10 That can lead those of us in Western cultures to commit the error, anyway.
Members of East Asian cultures are less prone to the mistake, suggesting it
is not a universal component of folk psychology (Nisbett 2003). For another
discussion about cultural variability and the fundamental attribution error,
this time within the context of Confucian versus Aristotelian versions of virtue
ethics, see Sarkissian (2010).
11 A final illuminating comparison, and one that might feel more apt to someone
sympathetic to metaethical constructivism, is suggested by considering how
intuition, on the one hand, and theoretical psychological knowledge, on the
other, can best inform and guide not moral judgment but artistic creation.
Reflecting on his project in Sweet Anticipation: Music and the Psychology of
Expectation, cognitive musicologist David Huron offers some reasonable and
intriguing comments:
12 Another metaethical view that bears intriguing similarities to the one suggested
by the selective debunking approach endorsed here is the patchy realism
described by Doris and Plakias (2007).
References
(2002). Life, Liberty, and the Defense of Dignity: The Challenge to Bioethics.
NewYork: Encounter Books.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA:
The MIT Press.
(2013). Moral disgust and tribal instincts: A byproduct hypothesis. In R. Joyce,
K.Sterelny, and B. Calcott (eds), Cooperation and Its Evolution. Cambridge, MA:
The MIT Press.
Kelly, D., Faucher, L., and Machery, E. (2010). Getting rid of racism: Assessing three
proposals in light of psychological evidence. Journal of Social Philosophy, 41(3),
293-322.
Kelly, D., Machery, E., and Mallon, R. (2010). Race and racial cognition. In J. Doris
etal. (eds), The Moral Psychology Handbook. New York: Oxford University Press,
pp. 433-72.
Kelly, D., and Morar, N. (in press). Against the Yuck Factor: On the Ideal Role of
Disgust in Society. Utilitas.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. New York: Penguin Books.
Maudlin, T. (2010). The Metaphysics Within Physics. New York: Oxford University Press.
Nado, J., Kelly, D., and Stich, S. (2009). Moral judgment. In John Symons and Paco
Calvo (eds), The Routledge Companion to the Philosophy of Psychology. New York:
Routledge, pp. 621-33.
Machery, E., and Mallon, R. (2010). Evolution of morality. In J. Doris etal. (eds),
The Moral Psychology Handbook. New York: Oxford University Press, pp. 3-46.
Nichols, S. (2002). Norms with feeling: Towards a psychological account of moral
judgment. Cognition, 84, 221-36.
(2004). Sentimental Rules: On the Natural Foundations of Moral Judgment.
NewYork: Oxford University Press.
Nisbett, R. (2003). The Geography of Thought: How Asians and Westerners Think
Differently ... and Why. New York: The Free Press.
Nussbaum, M. (2004a). Hiding from Humanity: Disgust, Shame, and the Law.
Princeton, NJ: Princeton University Press.
(6 August 2004b). Danger to human dignity: The revival of disgust and shame in
the law. The Chronicle of Higher Education, 50(48), B6.
Parkinson, C., Sinnott-Armstrong, W., Koralus, P., Mendelovici, A., McGeer, V., and
Wheatley, T. (2011). Is morality unified? Evidence that distinct neural systems
underlie moral judgments of harm, dishonesty, and disgust. Journal of Cognitive
Neuroscience, 23(10), 3162-80.
Pinker, S. (1997). How the Mind Works. New York: W.W. Norton & Co.
Rawls, J. (1971). A Theory of Justice (2nd ed. 1999). Cambridge, MA: Harvard
University Press.
Sarkissian, H. (2010). Minor tweaks, major payoffs: The problems and promise of
situationism in moral philosophy. Philosophers' Imprint, 10(9), 1-15.
Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9, 331-52.
Sinnott-Armstrong, W., and Wheatley, T. (2012). The disunity of morality and why it
matters to philosophy. The Monist, 95(3), 35577.
Stanovich, K. (2005). The Robot's Rebellion: Finding Meaning in the Age of Darwin.
Chicago, IL: University of Chicago Press.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical
Studies, 127(1), 109-66.
8
In a letter to the editor of the Mercury News, one reader explained his views
on the death penalty as follows: "I'll vote to abolish the death penalty ... and
not just because it is fiscally imprudent with unsustainable costs versus a life
sentence without possibility of parole. More importantly, it's morally wrong.
Making us and the state murderers (through exercising the death penalty) is
a pure illogicality akin to saying two wrongs make a right" (Mercury News
2012). In short, this letter writer believes murder is simply wrong, regardless
of whether it is an individual or state action, and for no other reason than
that it is simply and purely wrong.
Attitudes rooted in moral conviction (or moral mandates), such as the
letter writer's position on the death penalty, represent a unique class of strong
attitudes. Strong attitudes are more extreme, important, central, certain, and/
or accessible, and are also more stable, enduring, and predictive of behavior
than attitudes weaker on these dimensions (see Krosnick and Petty 1995 for a
review). Attitudes held with the strength of moral conviction, even if they share
many of the same characteristics of strong attitudes, are distinguished by a sense
of imperative and an unwillingness to compromise even in the face of competing
desires or concerns. Someone might experience their attitude about chocolate,
for example, in extreme, important, certain, and central terms, but still decide
not to order chocolate cake at a restaurant because of the calories. Vanity,
or other motives such as health or cost, can trump even peoples very strong
preferences. Attitudes rooted in moral conviction, however, are much less likely
to be compromised or vulnerable to trade-off (cf. Tetlock et al. 2000).
The Psychological Foundations of Moral Conviction 149
To better understand how attitudes that are equally strong can nonetheless
differ in their psychological antecedents and consequences, we need to
understand the psychological and behavioral implications of the content
of attitudes as well as their structure (e.g., extremity, importance). Social
domain theory (e.g., Nucci 2001; Nucci and Turiel 1978; Turiel 1998, 2002),
developed to explain moral development and reasoning, provides some
useful hints about key ways that attitudes may differ in substance, even
when they are otherwise equally strong. Using domain categories to describe
how attitudes differ represents a useful starting point for understanding
the foundations of moral mandates (Skitka et al. 2005; Skitka et al. 2008;
Wright etal. 2008). As can be seen in Figure 8.1, one domain of attitudes is
personal preference. Personal preferences represent attitudes that people see
as subject to individual discretion, and as exempt from social regulation or
comment. For example, one person might support legalized abortion because
she prefers to have access to a backstop method of birth control, and not
because of any normative or moral attachment to the issue. She is likely to
think others' preferences about abortion are neither right nor wrong; they
may just be different from her own. Her position on this issue might still be
evaluatively extreme, personally important, certain, central, etc., but it is not
one she experiences as a core moral conviction. Her neighbor, on the other
hand, might oppose legalized abortion because this practice is inconsistent
with church doctrine or because the majority of people he is close to oppose
it. If church authorities or his peer group were to reverse their stance on
abortion, however, the neighbor probably would as well. Attitudes that
reflect these kinds of normative beliefs typically describe what people like
me or us believe, are relatively narrow in application, and are usually group
or culture bound rather than universally applied. Yet a third person might
see the issue of abortion in moral terms. This person perceives abortion
(or restricting access to abortion) as simply and self-evidently wrong, even
monstrously wrong, if not evil. Even if relevant authorities and peers were
to reverse positions on the issue, this person would nonetheless maintain
his or her moral belief about abortion. In addition to having the theorized
characteristic of authority and peer independence, moral convictions are also
likely to be perceived as objectively true, universal, and to have particularly
strong ties to emotion.
People appear to have a strong intuitive sense of when their moral beliefs apply to a
given situation (Skitka et al. 2005). People can identify when situations engage
their moral sentiments, even when they cannot always elegantly describe the
processes or principles that lead to this sense (Haidt 2001). The assumption
that people have some insight into the characteristics of their own attitudes
is one shared by previous theory and research on the closely related concept
of attitude strength. Researchers assume that people can access from memory
and successfully report the degree to which a given attitude is (for example)
extreme, personally important, certain, or central (see Krosnick and Petty
1995 for a review).
Hornsey and colleagues (Hornsey et al. 2003, 2007) provide one example
of this approach. They operationalized moral conviction with three items, all
prefaced with the stem "To what extent do you feel your position ..." and the
completions "is based on strong personal principles," "is a moral stance,"
and "is morally correct," which across four studies had an average Cronbach's
alpha of 0.75. Others have operationalized moral conviction in similar fashion,
most typically using either a single face-valid item ("How much are your
feelings about ____ connected to your core moral beliefs and convictions?"; e.g.,
Brandt and Wetherell 2012; Skitka et al. 2005), or this item accompanied by a
second item, "To what extent are your feelings about ____ deeply connected
to your fundamental beliefs about right and wrong?" (e.g., Skitka et al. 2009;
Skitka and Wisneski 2011; Swink 2011). Morgan (2011) used a combination
of the Hornsey et al. (2003, 2007) and Skitka et al. (2009) items to create a
5-item scale, and found alphas that ranged from 0.93 to 0.99 across three samples.
The reliability scores observed by Morgan suggest that all of these items, or any
subset of them, work well and capture highly overlapping content.
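The reliability statistic cited throughout this passage, Cronbach's alpha, is a simple function of the item and total-score variances: alpha = (k/(k-1)) * (1 - sum of item variances / variance of the summed scale). A minimal sketch in Python follows; the Likert responses here are simulated purely for illustration and are not data from any of the studies cited.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: (n_respondents, k_items) matrix of scale scores
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 1-7 Likert responses from 100 respondents to a 3-item scale;
# items share a common "conviction" signal plus small item-specific noise.
rng = np.random.default_rng(0)
signal = rng.integers(1, 8, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 3))
scores = np.clip(signal + noise, 1, 7)

alpha = cronbach_alpha(scores)  # high, since the items track one signal
```

Because the simulated items are dominated by a shared signal, alpha comes out high, mirroring the strong internal consistency (0.75 to 0.99) reported for the moral conviction measures above.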
Some have wondered, however, if moral conviction is better represented as
a binary judgment: Something that is or is not the case, rather than something
that varies in degree or strength. Measures of the categorization of an attitude
as moral and of the relative strength of conviction both contribute uniquely to
the explanatory power of the variable (Wright et al. 2008; Wright 2012). For
this reason, as well as the parallelism of conceptualizing moral conviction
similarly to measures of attitude strength, we advocate that moral convictions
be measured continuously rather than nominally.
Other ways of operationalizing moral conviction are problematic because
they confound moral conviction with the things that moral convictions should
theoretically predict (e.g., van Zomeren et al. 2011; Zaal et al. 2011), use items
that have no explicit references to morality (e.g., "X threatens values that
are important to me,"3 Siegrist et al. 2012), conflate moral convictions with
other dimensions of attitude strength (e.g., centrality, Garguilo 2010; Skitka
and Mullen 2006), and/or measure other constructs as proxies for moral
conviction, such as importance or centrality (e.g., Besley 2012; Earle and
Siegrist 2008). These strategies introduce a host of possible confounds and do
more to confuse than to clarify the unique contribution of moral conviction
independent of other characteristics of attitudes. Attitude importance and
centrality, for example, have very different associations with other relevant
variables than those observed with unconfounded measures of moral
conviction (including, e.g., effects of the reverse sign; see Skitka et al.
2005). To avoid these problems, researchers should therefore use items that
(a) explicitly assess moral content, and (b) do not introduce confounds that
capture either the things moral conviction should theoretically predict (e.g.,
perceived universalism) or other dimensions of attitude strength (importance,
certainty, or centrality).
Moral philosophers argue that moral convictions are experienced as
sui generis, that is, as unique, special, and in a class of their own (e.g.,
Boyd 1988; McDowell 1979; Moore 1903; Sturgeon 1985). This status of
singularity is thought to be due to a number of distinguishing mental states
or processes associated with the recognition of something as moral, including
(a) universalism, (b) the status of moral beliefs as factual beliefs with
compelling motives and justification for action, and (c) emotion (Skitka et al.
2005). These theoretically defining characteristics of attitudes (which taken
together represent the domain theory of attitudes) are testable propositions
in themselves, and have a number of testable implications (e.g., the authority
independence and nonconformity hypotheses). I briefly review empirical
research testing these core propositions and selected hypotheses that can be
derived from them next.
facts. In other words, good and bad are experienced as objective characteristics
of phenomena and not just as verbal labels that people attach to feelings
(Shweder 2002). Because beliefs rooted in moral conviction are perceived as
objectively true, they should also be perceived as universally applicable. The
author of the letter to the Mercury News, for example, is likely to believe that
the death penalty should not only be prohibited in his home state of California,
but in other states and countries as well.
Broad versions of the universalism and objectivism hypotheses have been
tested and supported. For example, people see certain moral rules (e.g.,
Nichols and Folds-Bennett 2003; Turiel 1978) and values (e.g., Gibbs et al.
2007) as universally or objectively true, and believe that certain moral
transgressions should be universally prohibited (e.g., Brown 1991). There is some evidence
that people also see ethical rules and moral issues as more objectively true
than, for example, various violations of normative conventions (Goodwin
and Darley 2008), but other research yields more mixed results (Wright
et al. 2012). Until recently, little or no research has tested the universalism
hypothesis.
To shed further light on the objectivism and universalism hypotheses,
Morgan, Skitka, and Lytle (under review) tested whether thinking about
a morally mandated attitude leads to a situational increase in peoples
endorsement of a universalistic moral philosophy (e.g., the degree to which
people rate moral principles as individualistic or relativistic, versus as universal
truisms). Participants endorsements of a universalistic moral philosophy,
their positions on the issue of legalized abortion, and moral conviction about
abortion were measured at least 24 hours before the experimental session.
Once in the lab, participants were primed to think about abortion by writing an
essay about their position that they thought would be shared with another
participant. They were then given an essay, presumably written by the other
participant, that was either pro-choice or pro-life (essays were modeled after
real participants' essays on this topic). After reading the essay, participants
completed the same universalistic philosophy measure they had completed
at pretest. Strength of moral conviction about abortion was associated with
increased post-experimental endorsement of a universalistic philosophy,
regardless of whether participants read an essay that affirmed or threatened
their own position on the topic. In short, people see moral rules in general
as more universally applicable when they have just thought about an attitude
held with moral conviction.
A second study tested the universalism and objectivity hypotheses more
directly by having participants rate the perceived objectivity (e.g., "Imagine
that someone disagreed with your position on [abortion, requiring the HPV
vaccine, same sex marriage]: To what extent would you conclude the other
person is surely mistaken?") and universality ("To what extent would your
position on [abortion/the HPV vaccine, same sex marriage] be equally correct
in another culture?") of these attitudes, in addition to providing ratings of the
degree to which each reflected a moral conviction. Strength of moral conviction
was associated with higher perceived objectivity and universalism of attitudes,
even when controlling for attitude extremity.
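The phrase "controlling for attitude extremity" in findings like the one above refers to entering extremity alongside moral conviction as predictors in a multiple regression, so that the conviction coefficient reflects its unique association with perceived objectivity. A minimal numpy-only sketch with simulated data (the effect sizes and variable names are hypothetical, chosen only to illustrate the statistical logic, not to reproduce the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated standardized predictors: extremity, and conviction correlated with it
extremity = rng.normal(size=n)
conviction = 0.5 * extremity + rng.normal(size=n)

# Simulated outcome: objectivity ratings driven by conviction, not extremity
objectivity = 0.6 * conviction + rng.normal(size=n)

# OLS with an intercept: objectivity ~ conviction + extremity
X = np.column_stack([np.ones(n), conviction, extremity])
beta, *_ = np.linalg.lstsq(X, objectivity, rcond=None)
b0, b_conviction, b_extremity = beta
```

In this setup the coefficient on conviction stays substantial while the coefficient on extremity is near zero, which is the pattern the reported result describes: conviction predicts perceived objectivity over and above extremity.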
Finally, in a third study, participants were asked to generate sentences that
articulated their own beliefs or positions with respect to "a piece of scientific
knowledge," "something that is morally right or wrong," and "something that
you like or dislike." Participants then completed the same objectivity and universalism
measures used in Study 2. Scientific and moral beliefs were rated as equally
objectively true and universal, and as more objectively true and universal than
likes/dislikes. In sum, these three studies demonstrated that moral convictions
are perceived as indistinguishable from scientific facts in perceived universality
and objectivism.
Emotion
The domain theory of attitudes also makes the prediction that moral convictions
should have especially strong ties to emotion. For example, Person A might
have a preference for low taxes. If her taxes rise, she is likely to be disappointed
rather than outraged. Imagine instead, Person B, who has a strong moral
conviction that taxes be kept low. He is likely to respond to the same rise in
tax rates with rage, disgust, and contempt. In short, the strength and content
of emotional reactions associated with attitudes rooted in moral conviction
are likely to be quite different than the emotional reactions associated with
otherwise strong but nonmoral attitudes. Emotional responses to given issues
might also play a key role in how people detect that an attitude is a moral
conviction, or in strengthening moral convictions.
156 Advances in Experimental Moral Psychology
Emotion as consequence
Consistent with the prediction that moral mandates will have different, and
perhaps stronger, ties to emotion than nonmoral mandates, people whose
opposition to the Iraq War was high rather than low in moral conviction also
experienced more negative emotion (i.e., anger and anxiety) about the War in
the weeks just before and after it began. In contrast, supporters high in moral
conviction experienced more positive emotions (i.e., pleased and glad) about
going to war compared to those low in moral conviction, results that emerged
even when controlling for a variety of attitude strength measures. Similar
positive and negative emotional reactions were also observed in supporters'
and opponents' reactions to the thought of legalizing physician-assisted suicide
(Skitka and Wisneski 2011).
Emotion as antecedent
Other research has tested whether people use emotions as information in
deciding whether a given attitude is a moral conviction. Consistent with this
idea, people make harsher moral judgments of others' behavior when exposed
to incidental disgust, such as foul odors or a dirty lab room, than
they do when exposed to more pleasant odors or a clean lab room (Schnall
et al. 2008). People generalize disgust cues and apply them to their moral
judgments. It is important to point out, however, that moral judgments are not
the same thing as moral convictions. Attitudes (unlike judgments) tend to be
stable, internalized, and treated much like possessions (e.g., Prentice 1987). In
contrast, moral judgments are single-shot reactions to a given behavior, actor,
or hypothetical, and share few psychological features with attitudes. Learning
that incidental disgust leads to harsher moral judgments, therefore, may not
mean that incidental disgust (or other incidental emotions) would also lead
people to have stronger moral convictions.
Consistent with distinctions between judgments and attitudes, research
in my lab has found no effect of incidental emotion on moral convictions
(Skitka, unpublished data). We have manipulated whether data are collected
in a clean versus a dirty lab; in the context of pleasant (e.g., "Hawaiian breeze")
versus disgusting smells (e.g., fart spray or a substance that smelled like
a dead rat); when participants have their hands and forearms placed in an
unpleasant concoction of glue and gummy worms, versus feathers and beads;
The Psychological Foundations of Moral Conviction 157
of them as well. Although more research is needed to further tease apart the
complex connections between moral convictions and emotions, one thing is
clear: Emotions are a key part of the story.
A core premise of the domain theory of attitudes is that people do not rely on
conventions or authorities to define moral imperatives; rather, people perceive
what is morally right and wrong irrespective of authority or conventional
dictates. Moral beliefs are not by definition antiestablishment or antiauthority,
but are simply not dependent on conventions, rules, or authorities. When
people take a moral perspective, they focus on their ideals and the way they
believe things ought to or should be done rather than on a duty to comply with
authorities or normative conventions. The authority independence hypothesis
therefore predicts that when people's moral convictions are at stake, they are
more likely to believe that duties and rights follow from the greater moral
purposes that underlie rules, procedures, and authority dictates than from the
rules, procedures, or authorities themselves (see also Kohlberg 1976; Rest
et al. 1999).
One study tested the authority independence hypothesis by examining
which was more important in predicting people's reactions to a controversial
US Supreme Court decision: people's standing perceptions of the Court's
legitimacy, or their moral convictions about the issue being decided (Skitka
et al. 2009). A nationally representative sample of adults rated the legitimacy of
the Court, as well as their level of moral conviction about the issue of physician-
assisted suicide several weeks before the Court heard arguments about whether
states could legalize the practice, or whether it should be federally regulated.
The same sample of people was contacted again after the Court upheld the
right of states to legalize physician-assisted suicide. Knowing whether people's
support of or opposition to physician-assisted suicide was high versus low in
moral conviction predicted whether they saw the Supreme Court's decision
as fair or unfair, as well as their willingness to accept the decision as binding.
Pre-ruling perceptions of the legitimacy of the Court, in contrast, had no effect
on post-ruling perceptions of fairness or decision acceptance.
Other research has found behavioral support for the prediction that people
reject authorities and the rule of law when outcomes violate their moral
convictions. Mullen and Nadler (2008) exposed people to legal decisions
that supported, opposed, or were unrelated to their moral convictions. The
experimenters distributed a pen with a post-exposure questionnaire, and
asked participants to return them at the end of the session. Consistent with the
prediction that decisions, rules, and laws that violate people's moral convictions
erode support for the authorities and authority systems that decide these things,
participants were more likely to steal the pen after exposure to a legal decision that
was inconsistent rather than consistent with their personal moral convictions.
People's moral mandates should affect not only their perceptions of decisions
and their willingness to comply with authorities, but also their
perceptions of authorities' legitimacy. People often do not know the right
answer to various decisions authorities are asked to make (e.g., what is best for
the group, whether a defendant is really guilty or innocent), and therefore, they
frequently rely on cues like procedural fairness and an authority's legitimacy
to guide their reactions (Lind 2001). When people have moral certainty about
what outcome authorities and institutions should deliver, however, they do
not need to rely on standing perceptions of legitimacy as proxy information
to judge whether the system works. In these cases, they can simply evaluate
whether authorities get it right. Right decisions indicate that authorities
are appropriate and work as they should. Wrong answers signal that the
system is somehow broken and is not working as it should. In short, one
could hypothesize that people use their sense of morality as a benchmark
to assess authorities legitimacy. Consistent with this idea, the results of the
Supreme Court study referenced earlier also found that perceptions of the
Court's legitimacy changed from pre- to post-ruling as a function of whether
the Court ruled consistently or inconsistently with perceivers' morally vested
outcome preferences (Skitka et al. 2009).
choice to accept or reject the majority position. This occurs because those who
oppose the majority risk ridicule and disenfranchisement, whereas those who
conform expect acceptance (Asch 1956). In addition, people may conform
when they are unsure about the appropriate way to think or behave; they adopt
the majority opinion because they believe the majority is likely to be correct
(Chaiken and Stangor 1987; Deutsch and Gerard 1955). Therefore, people
conform both to gain acceptance from others and to be right.
Feeling strong moral convictions about a given issue should weaken the
typical motives for conformity, making people more resistant to majority
influence. To test this idea, Hornsey and colleagues presented student
participants with feedback that their position on same-sex marriage was
either the majority or minority view on campus. Surprisingly, stronger moral
convictions about this issue were associated with greater willingness to
engage in activism when students believed they were in the opinion minority
rather than the majority, an example of counter-conformity (Hornsey et al.
2003, 2007).
Another study had participants engage in what they believed was a computer-
mediated interaction with four additional (though, in fact, virtual) peers. The
study was scripted so that each participant was exposed to a majority of peers
who supported torture (pretesting indicated that none of our study participants
did). Participants were shown the other participants' opinions one at a time
before they were asked to provide their own position on the issue to the group.
Results supported the hypothesis: Stronger moral convictions were associated
with lower conformity rates, even when controlling for a number of indices of
attitude strength (Aramovich et al. 2012).4 By contrast, people do show strong
conformity effects in an Asch paradigm when making moral judgments about
moral dilemmas, such as the trolley problem (Kundu and Cummins 2012),
providing further evidence that moral judgments and moral attitudes are not
the same things.
Conclusion
Notes
References
Aguilera, R., Hanson, B., and Skitka, L. J. (2013). Approaching good or avoiding bad?
Understanding morally motivated collective action. Paper presented at the annual
meeting of the Society for Personality and Social Psychology, New Orleans, LA.
Aramovich, N. P., Lytle, B. L., and Skitka, L. J. (2012). Opposing torture: Moral
conviction and resistance to majority influence. Social Influence, 7, 21–34.
Asch, S. E. (1956). Studies of independence and conformity: A minority of one
against a unanimous majority. Psychological Monographs, 70 (9, Whole No. 416), 1–70.
Bartels, D. M. (2008). Principled moral sentiment and the flexibility of moral
judgment and decision making. Cognition, 108, 381–417.
Besley, J. C. (2012). Does fairness matter in the context of anger about nuclear energy
decision making? Risk Analysis, 32, 25–38.
Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (ed.), Essays in Moral
Realism. Ithaca, NY: Cornell University Press, pp. 181–228.
Brandt, M. J., and Wetherell, G. A. (2012). What attitudes are moral attitudes? The
case of attitude heritability. Social Psychological and Personality Science, 3, 172–9.
Brown, D. (1991). Human Universals. New York: McGraw-Hill.
Chaiken, S., and Stangor, C. (1987). Attitudes and attitude change. Annual Review
of Psychology, 38, 575–630.
Cushman, F. A., Young, L., and Hauser, M. D. (2006). The role of reasoning and
intuition in moral judgments: Testing three principles of harm. Psychological
Science, 17, 1082–9.
Darwin, D. O. (1982). Public attitudes toward life and death. Public Opinion
Quarterly, 46, 521–33.
Deutsch, M., and Gerard, H. B. (1955). A study of normative and informational social
influences upon individual judgment. Journal of Abnormal and Social Psychology,
51, 629–36.
Earle, T. C., and Siegrist, M. (2008). On the relation between trust and fairness in
environmental risk management. Risk Analysis, 28, 1395–413.
Fehr, E., and Fischbacher, U. (2004). Third-party punishment and social norms.
Evolution and Human Behavior, 25, 63–87.
Garguilo, S. P. (2010). Moral Conviction as a Moderator of Framing Effects (Master's
thesis). Rutgers University, Rutgers, NJ.
Gibbs, J. C., Basinger, K. S., Grime, R. L., and Snarey, J. R. (2007). Moral judgment
development across cultures: Revisiting Kohlberg's universality claims.
Developmental Review, 27, 443–500.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 1339–66.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D.
(2001). An fMRI investigation of emotional engagement in moral judgment.
Science, 293, 2105–8.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach
to moral judgment. Psychological Review, 108, 814–34.
Hornsey, M. J., Majkut, L., Terry, D. J., and McKimmie, B. M. (2003). On being loud
and proud: Non-conformity and counter-conformity to group norms. British
Journal of Social Psychology, 42, 319–35.
Hornsey, M. J., Smith, J. R., and Begg, D. I. (2007). Effects of norms among those with
moral conviction: Counter-conformity emerges on intentions but not behaviors.
Social Influence, 2, 244–68.
Hume, D. (1968). A Treatise of Human Nature. Oxford, England: Clarendon Press.
Original work published 1888.
Kohlberg, L. (1976). Moral stages and moralization: The cognitive developmental
approach. In T. Lickona (ed.), Moral Development and Behavior: Theory, Research
and Social Issues. New York: Holt, Rinehart, & Winston, pp. 31–53.
Krosnick, J. A., and Petty, R. E. (1995). Attitude strength: An overview. In R. E.
Petty and J. A. Krosnick (eds), Attitude Strength: Antecedents and Consequences.
Mahwah, NJ: Lawrence Erlbaum Associates, pp. 1–24.
Kundu, P., and Cummins, D. D. (2012). Morality and conformity: The Asch paradigm
applied to moral decisions. Social Influence, (ahead-of-print), 1–12.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions
in organizational relations. In J. Greenberg and R. Cropanzano (eds), Advances in
Organizational Behavior. San Francisco: New Lexington Press, pp. 27–55.
Lodewijkx, H. F. M., Kersten, G. L. E., and Van Zomeren, M. (2008). Dual pathways
to engage in "Silent Marches" against violence: Moral outrage, moral cleansing,
and modes of identification. Journal of Community and Applied Social Psychology,
18, 153–67.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. New York: Penguin.
McDowell, J. (1979). Virtue and reason. The Monist, 62, 331–50.
Mercury News (2012). Talk back/Saturday forum letters. Retrieved 12/17/12 from
http://www.mercurynews.com/top-stories/ci_21813856/oct-20-talk-back-
saturday-forum-letters
Moore, G. E. (1903). Principia Ethica. New York: Cambridge University Press.
Morgan, G. S. (2011). Toward a Model of Morally Motivated Behavior: Investigating
Mediators of the Moral Conviction-action Link (Doctoral dissertation). University
of Illinois at Chicago.
Morgan, G. S., Skitka, L. J., and Lytle, B. L. (under review). Universally and objectively
true: The psychological foundations of moral conviction.
Skitka, L. J., Bauman, C. W., and Sargis, E. G. (2005). Moral conviction: Another
contributor to attitude strength or something more? Journal of Personality and
Social Psychology, 88, 895–917.
Skitka, L. J., and Morgan, G. S. (2009). The double-edged sword of a moral state of
mind. In D. Narvaez and D. K. Lapsley (eds), Moral Self, Identity, and Character:
Prospects for a New Field of Study. Cambridge, UK: Cambridge University Press,
pp. 355–74.
Skitka, L. J., and Wisneski, D. C. (2011). Moral conviction and emotion. Emotion
Review, 3, 328–30.
Smith, M. (1994). The Moral Problem. Oxford, England: Blackwell.
Sturgeon, N. (1985). Moral explanations. In D. Copp and D. Zimmerman (eds),
Morality, Reason, and Truth. Totowa, NJ: Rowman and Allanheld, pp. 49–78.
Swink, N. (2011). Dogmatism and moral conviction in individuals: Injustice for all
(Doctoral dissertation). Wichita State University.
Tetlock, P. E., Kristel, O. V., Elson, S. B., Green, M. C., and Lerner, J. S. (2000).
The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and
heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–70.
Turiel, E. (1978). Social regulations and domains of social concepts. In W. Damon
(ed.), New Directions for Child Development. Vol. 1. Social Cognition. New York:
Gardner, pp. 45–74.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. New York:
Cambridge University Press.
Turiel, E. (1998). The development of morality. In W. Damon (Series ed.) and N. Eisenberg
(Vol. ed.), Handbook of Child Psychology: Vol. 3. Social Emotional and Personality
Development (5th ed.). New York: Academic Press, pp. 863–932.
Uhlmann, E. L., Pizarro, D. A., Tannenbaum, D., and Ditto, P. H. (2009). The
motivated use of moral principles. Judgment and Decision Making, 4, 476–91.
Van Zomeren, M., Postmes, T., Spears, R., and Bettache, K. (2011). Can moral
convictions motivate the advantaged to challenge social inequality?: Extending
the social identity model of collective action. Group Processes and Intergroup
Relations, 14, 735–53.
Wisneski, D. C., Lytle, B. L., and Skitka, L. J. (2009). Gut reactions: Moral conviction,
religiosity, and trust in authority. Psychological Science, 20, 1059–63.
Wisneski, D. C., and Skitka, L. J. (2013). Flipping the moralization switch:
Exploring possible routes to moral conviction. Emotion pre-conference, Society for
Personality and Social Psychology, New Orleans, LA.
Wright, J. C. (2012). Children's and adolescents' tolerance for divergent beliefs:
Exploring the cognitive and affective dimensions of moral conviction in our
youth. British Journal of Developmental Psychology, 30, 493–510.
Wright, J. C., Cullum, J., and Schwab, N. (2008). The cognitive and affective
dimensions of moral conviction: Implications for attitudinal and behavioral
measures of interpersonal tolerance. Personality and Social Psychology Bulletin, 34,
1461–76.
Wright, J. C., Grandjean, P. T., and McWhite, C. B. (2012). The meta-ethical grounding
of our moral beliefs: Evidence of meta-ethical pluralism. Philosophical Psychology,
iFirst, 1–26.
Zaal, M. P., Van Laar, C., Ståhl, T., Ellemers, N., and Derks, B. (2011). By any means
necessary: The effects of regulatory focus and moral conviction on hostile and
benevolent forms of collective action. British Journal of Social Psychology, 50,
670–89.
9
Although the empirical study of folk metaethical judgments is still in its infancy,
a variety of interesting and significant results have been obtained.1 Goodwin
and Darley (2008), for example, report that individuals tend to regard ethical
statements as more objective than conventional or taste claims and almost
as objective as scientific claims, although there is considerable variation in
metaethical intuitions across individuals and across different ethical issues.
Goodwin and Darley (2012) also report (i) that participants treat statements
condemning ethical wrongdoing as more objective than statements enjoining
good or morally exemplary actions, (ii) that perceived consensus regarding an
ethical statement positively influences ratings of metaethical objectivity, and
(iii) that moral objectivism is associated with greater discomfort with and more
pejorative attributions toward those with whom individuals disagreed. Beebe
and Sackris (under review) found that folk metaethical commitments vary
across different life stages, with decreased objectivism during the college years.
Sarkissian et al. (2011) found that folk intuitions about metaethical objectivity
vary as a function of cultural distance, with increased cultural distance between
disagreeing parties leading to decreased attributions of metaethical objectivity.
Wright et al. (2013) found that not only is there significant diversity among
individuals with regard to the objectivity they attribute to ethical claims, there
is also significant diversity of opinion with respect to whether individuals take
certain issues (such as abortion or anonymously donating money to charity)
to be ethical issues at all, despite the fact that philosophers overwhelmingly
regard these issues as ethical.2 Wright et al. (2013) provide the following useful
summary of the current set of findings on folk metaethical intuitions:
Study 1
Method
Participants
Study 1 was an attempt to replicate Beebe and Sackris's (under review) initial
study with a population of participants that was limited to the same university
How Different Kinds of Disagreement Impact Folk Metaethical Judgments 169
Materials
Beebe and Sackris asked two and a half thousand participants between the
ages of 12 and 88 to indicate the degree to which they agreed or disagreed with
the claims that appear in Table 9.1 and the extent to which they thought that
people in our society disagreed about whether they are true. The same set of
claims was used in Studies 1 through 3.
Procedure
The items from Table 9.1 were divided into three questionnaire versions, and
participants indicated their agreement or disagreement with them on a six-point
scale, where 1 was anchored with "Strongly Disagree" and 6 with "Strongly
Agree." Participants rated the extent to which they thought people in our society
disagreed about the various claims on a six-point scale anchored with "There is no
disagreement at all" and "There is an extremely large amount of disagreement."
In order to capture one kind of objectivity that participants might attribute to
the various claims in Table 9.1, participants were asked, "If someone disagrees
with you about whether [one of these claims is true], is it possible for both of
you to be correct or must one of you be mistaken?" The answer "At least one of
you must be mistaken" was interpreted as an attribution of objectivity, and an
answer of "It is possible for both of you to be correct" was taken to be a denial
of objectivity.
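This coding scheme is simple enough to express directly. The sketch below is an illustrative reconstruction: the function name, variable names, and sample responses are hypothetical, but the two response strings and the 1/0 coding rule come from the procedure just described.

```python
# Coding scheme from the Procedure above: the "must be mistaken" answer
# counts as an attribution of objectivity (1); the "both correct" answer
# counts as a denial of objectivity (0).
CODE = {
    "At least one of you must be mistaken": 1,
    "It is possible for both of you to be correct": 0,
}

def objectivity_proportion(responses):
    """Proportion of respondents attributing objectivity to a claim."""
    coded = [CODE[r] for r in responses]
    return sum(coded) / len(coded)

# Hypothetical responses for one claim (invented for illustration).
sample = [
    "At least one of you must be mistaken",
    "It is possible for both of you to be correct",
    "At least one of you must be mistaken",
    "At least one of you must be mistaken",
]
proportion = objectivity_proportion(sample)  # 3 of 4 responses -> 0.75
```

Averaging these per-claim proportions within each subcategory yields aggregate figures like those reported below for the factual, ethical, and taste claims.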
Results
As can be seen from Figure 9.1, the items in Table 9.1 are ordered within each
subcategory in terms of increasing proportions of participants who attributed
objectivity to them.
Table 9.1 Factual, ethical, and taste claims used in Beebe and Sackris (under review)
and in Studies 1 through 4
Factual
1. Frequent exercise usually helps people to lose weight.
2. Global warming is due primarily to human activity (for example, the burning
of fossil fuels).
3. Humans evolved from more primitive primate species.
4. There is an even number of stars in the universe.
5. Julius Caesar did not drink wine on his 21st birthday.
6. New York City is further north than Los Angeles.
7. The earth is only 6,000 years old.
8. Mars is the smallest planet in the solar system.
Ethical
9. Assisting in the death of a friend who has a disease for which there is no known
cure and who is in terrible pain and wants to die is morally permissible.
10. Before the third month of pregnancy, abortion for any reason is morally
permissible.
11. Anonymously donating a significant portion of one's income to charity is
morally good.
12. Scientific research on human embryonic stem cells is morally wrong.
13. Lying on behalf of a friend who is accused of murder is morally permissible.
14. Cutting the American flag into pieces and using it to clean one's bathroom
is morally wrong.
15. Cheating on an exam that you have to pass in order to graduate is morally
permissible.
16. Hitting someone just because you feel like it is wrong.
17. Robbing a bank in order to pay for an expensive vacation is morally bad.
18. Treating someone poorly on the basis of their race is morally wrong.
Taste
19. Classical music is better than rock music.
20. Brad Pitt is better looking than Drew Carey.
21. McDonald's hamburgers taste better than hamburgers made at home.
22. Gourmet meals from fancy Italian restaurants taste better than microwavable
frozen dinners.
23. Barack Obama is a better public speaker than George W. Bush.
24. Beethoven was a better musician than Britney Spears is.
Figure 9.1 Proportion of objectivity attributions by question number (1–24), with question type (factual, ethical, or taste) indicated.
Goodwin and Darley (2008) and Beebe and Sackris both found that more
participants attributed objectivity to factual claims than to ethical or taste
claims. In Study 1, a greater proportion of participants attributed objectivity
to factual claims (0.64, averaged across all claims in the factual subcategory)
than to ethical (0.34) or taste (0.10) claims. Chi-square tests of independence
reveal that the difference between the factual and ethical proportions was
significant, χ2 (1, N = 926) = 80.523, p < 0.001, Cramér's V = 0.30, and the
difference between the ethical and taste proportions was significant as well,
χ2 (1, N = 826) = 61.483, p < 0.001, Cramér's V = 0.27.3 Study 1 also replicates
earlier findings that objectivity attributions are positively associated with
strength of belief about an issue (χ2 (2, N = 1,224) = 67.276, p < 0.001,
Cramér's V = 0.23) but negatively associated with the extent of perceived
disagreement about the issue (χ2 (5, N = 1,218) = 89.517, p < 0.001, Cramér's
V = 0.27). In other words, participants tended to attribute more objectivity to
claims that they had stronger opinions about than to claims they had weaker
opinions about, but they tended to attribute less objectivity to claims they
recognized were widely disputed in society. Somewhat surprisingly, higher
ratings of perceived disagreement about an issue were positively associated with
participants' strength of opinion about the issue, χ2 (10, N = 1,212) = 100.897,
p < 0.001, Cramér's V = 0.20.
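The statistics reported here are standard Pearson chi-square tests of independence, with Cramér's V as the accompanying effect size. A minimal pure-Python sketch of how such a statistic is computed follows; the function and the contingency counts are hypothetical illustrations, not the study's raw data.

```python
import math

def chi_square_independence(table):
    """Pearson chi-square test of independence for an r x c table of
    observed counts, plus Cramér's V effect size. Pure-Python sketch;
    the p-value is omitted (it requires the chi-square CDF)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    k = min(len(table), len(table[0]))               # smaller table dimension
    v = math.sqrt(chi2 / (n * (k - 1)))              # Cramér's V
    return chi2, v

# Hypothetical 2x2 table: claim type (factual vs. ethical) by whether
# participants attributed objectivity to the claim. Counts are invented
# for illustration only.
observed = [[300, 170],
            [160, 296]]
chi2, v = chi_square_independence(observed)
```

For a 2 × 2 table, Cramér's V reduces to the phi coefficient, which is why the reported values (e.g., 0.30, 0.27) can be read as rough correlation-sized effects.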
Discussion
As in Goodwin and Darley (2008) and Beebe and Sackris, Study 1 found that
participants attribute more objectivity to some ethical claims than to some
factual claims and that there is significant variation concerning the degree of
objectivity attributed to different claims within each subcategory.4 Thus, Study
1 reinforces the conclusion already established by Goodwin and Darley (2008)
and Beebe and Sackris that the question of whether ordinary individuals are
moral objectivists is not going to have a simple Yes or No answer.
Study 2
Method
Participants
A total of 195 undergraduate students (average age 19, 47% female, 69%
Anglo-American) from the University at Buffalo participated in Study 2 in
exchange for extra credit in an introductory course.
Thus, the last thing participants were asked to do was to consider the
extent of societal disagreement with respect to the claims. Given the negative
association between perceived disagreement and objectivity attributions, it
was hypothesized that if participants were directed to think about societal
disagreement before completing Task 2, their attributions of metaethical
objectivity would decrease. Disagreement was not hypothesized to have a
similar effect on factual and taste claims.
Results
Discussion
The findings of Study 2 are consistent not only with the correlational data
obtained by Goodwin and Darley (2008) and Beebe and Sackris but also with
the experimental data obtained by Goodwin and Darley (2012). The latter
manipulated participants' perceived consensus about ethical issues by giving
them bogus information about the percentage of students from the same
institution who agreed with them. Participants who were told that a majority
of their peers agreed with them about some ethical statement were more likely
to think there was a correct answer as to whether or not the statement was true
than participants who were told that significantly fewer of their peers agreed
with them. These studies show that perceived disagreement or consensus can
be a causal and not a merely correlational factor in folk metaethical decision-
making.
Study 3
Various studies of folk intuitions about moral responsibility have shown that
individuals hold agents more responsible for their actions when the situations of
those agents are described concretely than when they are described abstractly.
Nichols and Knobe (2007), for example, obtained significantly higher ratings
of moral responsibility for Bill, who was attracted to his secretary and killed
his wife and three children in order to be with her, than for a person whose
actions were left unspecified. Small and Loewenstein (2003, 2005) showed that
the subtlest change in the concreteness of the representation of an individual
can lead to surprising differences in judgments or decisions regarding them.
When their participants were given the opportunity to punish randomly
selected defectors in an economic game, participants selected significantly
harsher punishments for anonymous defectors whose numbers had just
been chosen than for anonymous defectors whose numbers were about to be
chosen. Because increased concreteness appears to heighten or intensify the
engagement of cognitive and affective processes associated with attributions
of blame and responsibility and to lead participants to treat the actions of
concrete individuals as more serious than abstractly represented ones,6 it was
hypothesized that increasing the concreteness of those with whom participants
were asked to imagine they disagreed would lead participants to take the
disagreements more seriously and to increase attributions of metaethical
objectivity.
Method
Participants
A total of 108 undergraduate students (average age 19, 59% female, 66%
Anglo-American) from the University at Buffalo participated in Study 3 in
exchange for extra credit in an introductory course.
Results
numerically higher for eight of the ten ethical claims. Having more concrete
parties in Study 3 did not, however, result in any significant difference in the
objectivity attributed to factual or taste claims.
Discussion
The results from Study 3 are consistent with those obtained by Sarkissian
et al. (2011), who found that stronger objectivity ratings were obtained when
participants were asked to consider disagreeing with a concretely presented
individual from their same culture (vs. a concretely presented individual
from a different culture). The fact that the concreteness of the disagreeing
parties used in Study 3 led to increased metaethical objectivity attributions
may also explain why the objectivity ratings obtained in Study 1 fell below
those obtained by Goodwin and Darley (2008), even though both used
samples of university students. The Task 2 objectivity question in Study 1
asked participants to consider a situation of hypothetical disagreement
("If someone disagrees with you..."). Goodwin and Darley (2008, 1344),
however, instructed participants, "We have done prior psychological testing
with these statements, and we have a body of data concerning them. None
of the statements have produced 100% agreement or disagreement." Each
of Goodwin and Darley's objectivity questions then reiterated that some
individuals who had been previously tested disagreed with participants
about the relevant issue. Goodwin and Darley thus constructed situations of
disagreement that were more concrete than those in Studies 1 and 2 by having
(allegedly) actual rather than merely hypothetical individuals who disagreed
with participants.
Study 4
Study 3 made the parties with whom experimental participants were asked to
consider disagreeing concrete by providing them with given names, surname
initials, academic classes, and majors. In Study 4, the disagreeing parties
were made concrete by having pictures of their faces shown. Faces (and parts
of faces) have been shown to have a variety of effects on morally relevant
behavior. For example, Bateson et al. (2006) found that academics paid 276
percent more for the tea they took from a departmental tea station when an
image of eyes was displayed by the station than when an image of flowers
was displayed. Rezlescu, Duchaine, Olivola, and Chater (2012) found that
unfakeable facial features associated with trustworthiness attracted 42 percent
greater investment in an economic game that required trust.7
Method
Participants
A total of 360 participants (average age 32, 38% female, 82% Anglo-
American) were recruited through Amazon's Mechanical Turk (www.
mturk.com) and were directed to complete online questionnaires hosted at
vovici.com.8
[Table 9.2 face-condition labels: Dominant, Submissive]
Claims (12), (13), and (14) from Table 9.1 (concerning embryonic stem
cell research, lying for a friend accused of murder, and treating a national flag
disrespectfully) were selected for use in Study 4. The degrees of objectivity
attributed to them in Studies 1 through 3 fell in the middle range, suggesting
that judgments about them could be more easily manipulated than judgments
near the floor or ceiling. The first screen contained one of the pictures from
Table 9.2, along with the following (Task 1) question:
Mark (pictured above11) believes that [statement (12), (13), or (14) is true].
Please indicate whether you agree or disagree with Mark's belief.
Agree
Disagree
Each screen that presented the metaethical question included the same
picture (if any) that participants saw at the top of their Task 1 question. Each
participant was presented with claims (12), (13), and (14) in counterbalanced
order. The same picture (if any) of Mark appeared above each of these questions.
Thus, no participant saw more than one version of Mark's face.
It was hypothesized that the five facial conditions would engage online
processes of social cognition to a greater degree than the control condition
and that this would result in higher attributions of metaethical objectivity.
On the basis of Oosterhof and Todorov's (2008) finding that untrustworthy
and dominant faces were associated with potential threat, it was also
hypothesized that untrustworthy and dominant faces would elicit lower
objectivity attributions than their dimensional pairs, since participants might
be more tentative or anxious about disagreeing with potentially threatening
interlocutors.
Results
and Untrustworthy (0.66) face conditions than it was in the No Face (0.46)
condition. The proportions of objectivity attributions in the five face conditions
did not differ significantly from each other.
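The chapter does not report a pairwise test statistic for this contrast, but its size can be sketched with a two-proportion z-test. The per-condition n of 60 below is an assumption (360 participants split evenly across the six conditions), not a figure from the study:

```python
import math

def two_proportion_ztest(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF (expressed with math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Assumed cell size: 360 participants split evenly across six conditions
z, p = two_proportion_ztest(0.66, 60, 0.46, 60)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these assumed cell sizes, a 0.66 versus 0.46 split comes out significant at conventional thresholds, which is consistent with the face-versus-no-face contrast the text describes.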
Discussion
Study 5
Method
Participants
Using a between-subject design, 160 participants (average age 34, 38%
female, 80% Anglo-American) were recruited through Amazon's Mechanical
Turk and were directed to complete online questionnaires hosted at vovici.
com.13
1. The CEO of a company that helps and preserves the environment believes
that it is morally wrong to harm the environment.
2. The CEO of a company that helps and preserves the environment believes
that it is not morally wrong to harm the environment.
3. The CEO of a company that harms and pollutes the environment believes
that it is morally wrong to harm the environment.
4. The CEO of a company that harms and pollutes the environment believes
that it is not morally wrong to harm the environment.
In (1) and (2), the CEO is depicted doing something morally good, namely,
helping and preserving the environment, whereas the CEO's actions in (3) and
(4) are morally bad. In (1) and (3), the CEO is described as having a morally
good belief about the environment, namely, that it should not be harmed; in
(2) and (4), the CEO has the corresponding morally bad belief. The crossing of
good and bad actions with good and bad beliefs results in the actions and beliefs
of the CEO being congruent in (1) and (4) and incongruent in (2) and (3).
Participants were first asked to indicate in a forced-choice format whether
they agreed or disagreed with the CEO's belief. They were then asked, "If
someone disagreed with the CEO about whether it is morally wrong to harm
the environment, would it be possible for both of them to be correct or must
at least one of them be mistaken?" Participants were then directed to choose
between "It is possible for both of them to be correct" and "At least one of them
must be mistaken."
Results
Figure 9.2 Mean objectivity attributions in the Good Action/Good Belief (0.55),
Good Action/Bad Belief (0.43), Bad Action/Good Belief (0.57), and Bad Action/Bad
Belief (0.75) conditions of Study 5. [Bar chart: proportion of objectivity attributions
(0.0–1.0) plotted by Action valence (Good/Bad) and Judgment valence (Good/Bad).]
Discussion
As with other findings from the Knobe effect literature, the moral valence of
a protagonist's action significantly affected participants' responses to probe
questions. However, unlike other results in this literature, the responses in
question were not folk psychological ascriptions. They were second-order
attributions of objectivity to ethical beliefs held by the protagonist. These
results provide further evidence that individuals' assessments of metaethical
disagreements are significantly affected by a variety of factors in the situation
of disagreement.
General discussion
The foregoing studies show (i) that making disagreement salient to participants
before asking them to make metaethical judgments can decrease objectivist
responses, (ii) that increasing the concreteness of the situation of disagreement
participants are directed to consider can increase objectivist responses, and
(iii) that the moral valence of the actions performed by agents whose ethical
beliefs participants are asked to consider affected attributions of objectivity
to those beliefs. Because philosophical discussion, whether in the classroom
or at professional conferences, often takes place in a somewhat rarefied
atmosphere of abstractions, philosophers should be aware that intuitive
agreement or disagreement with their metaethical claims can be affected by
the very abstractness of those situations and that the amount of agreement or
disagreement they encounter might be different in other situations. In spite of the
fact that an increasing number of philosophers are familiar with the Knobe effect
and its seemingly unlimited range of applicability, many philosophers continue
to give little thought either to the moral valence of the actions depicted in their
favored thought experiments or to the consequences this might have.
An important question raised by the studies reported above concerns the
coherence of folk metaethical commitments. Most philosophers assume that
the correct semantics for ordinary ethical judgments must show them to be
uniformly objective or subjective.15 Yet, Studies 2 through 5, in addition to
work by Goodwin and Darley (2008), Beebe and Sackris (under review), and
Sarkissian et al. (2011), reveal that there are several kinds of variation in folk
metaethical judgments. The lack of uniformity in the objectivity attributed
to ethical claims might make us wonder how well ordinary individuals grasp
the ideas of objectivism and subjectivism (and perhaps the related ideas of
relativism and universalism). It might also lead us to question their reasoning
abilities. Goodwin and Darley (2008, 1358, 1359), for example, suggest that
individuals "were not particularly consistent in their meta-ethical positions"
about various ethical beliefs and that "requirements of judgmental consistency
across ethical scenarios are not considered." However, this attribution of
inconsistency seems both uncharitable and unwarranted.
Why should we believe that the ordinary use of ethical terms requires
a semantics that assumes uniform objectivity or subjectivity? Because
184 Advances in Experimental Moral Psychology
Notes
References
Alfano, M. (2010). Social cues in the public good game. Presentation at KEEL 2010
Conference: How and Why Economists and Philosophers Do Experiments, Kyoto
Sangyo University, Kyoto, Japan.
Alfano, M., Beebe, J. R., and Robinson, B. (2012). The centrality of belief and reflection
in Knobe effect cases: A unified account of the data. The Monist, 95, 264–89.
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance
cooperation in a real-world setting. Biology Letters, 2, 412–14.
Beebe, J. R. (2010). Moral relativism in context. Noûs, 44, 691–724.
Beebe, J. R., and Sackris, D. (Under review). Moral objectivism across the lifespan.
DeRose, K. (2011). The Case for Contextualism: Knowledge, Skepticism, and Context,
vol. 1. Oxford: Oxford University Press.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 1339–66.
Goodwin, G. P., and Darley, J. M. (2012). Why are some moral beliefs perceived to
be more objective than others? Journal of Experimental Social Psychology, 48, 250–6.
Lewis, D. K. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74, 549–67.
Nahmias, E., Coates, D., and Kvaran, T. (2007). Free will, moral responsibility, and
mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy, 31,
214–42.
Nichols, S., and Knobe J. (2007). Moral responsibility and determinism: The cognitive
science of folk intuitions. Noûs, 41, 663–85.
Oosterhof, N., and Todorov, A. (2008). The functional basis of face evaluations.
Proceedings of the National Academy of Sciences of the USA, 105, 11087–92.
Rezlescu, C., Duchaine, B., Olivola, C. Y., and Chater, N. (2012). Unfakeable facial
configurations affect strategic choices in trust games with or without information
about past behavior. PLoS ONE, 7, e34293.
Sarkissian, H., Park, J., Tien, D., Wright, J. C., and Knobe, J. (2011). Folk moral
relativism. Mind & Language, 26, 482–505.
Sidgwick, H. (1907/1981). The Methods of Ethics. Indianapolis: Hackett.
Sinnott-Armstrong, W. (2009). Mixed-up meta-ethics. Philosophical Issues, 19, 235–56.
Small, D. A., and Loewenstein, G. (2003). Helping a victim or helping the victim:
Altruism and identifiability. Journal of Risk and Uncertainty, 26, 5–16.
Small, D. A., and Loewenstein, G. (2005). The devil you know: The effects of
identifiability on punitiveness. Journal of Behavioral Decision Making, 18, 311–18.
Tersman, F. (2006). Moral Disagreement. Cambridge: Cambridge University Press.
Wong, D. B. (1984). Moral Relativity. Berkeley: University of California Press.
Wright, J. C., Grandjean, P. T., and McWhite, C. B. (2013). The meta-ethical
grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical
Psychology, 26(3), 336–61.
10
People have beliefs not only about specific moral issues, such as the permissibility
of slavery, but also about the nature of moral beliefs. These beliefs, or meta-
ethical commitments, have been the subject of recent work in psychology and
experimental philosophy. One issue of study has been whether people view
moral beliefs in more objectivist or relativist terms (i.e., as more like factual
beliefs or more like personal preferences).
In this chapter, we briefly review previous research on folk moral
objectivism. We then present the results of an experiment that compares
two different approaches to measuring moral objectivism (those of Goodwin
and Darley 2008, and Sarkissian et al. 2011) and consider the relationship
between objectivism and two additional metaethical beliefs: belief in moral
progress and belief in a just world. By examining the relationships between
different metaethical commitments, we can better understand the extent to
which such commitments are (or are not) systematic and coherent, shedding
light on the psychological complexity of an important area of moral belief and
experience.
To preview our results, we find that different metaethical beliefs are reliably
but weakly associated, with different measures of moral objectivism generating
distinct patterns of association with belief in moral progress and belief in a just
world. We highlight some of the challenges in reliably measuring metaethical
commitments and suggest that the distinctions that have been useful in
differentiating philosophical positions may be a poor guide to folk moral
judgment.
Exploring Metaethical Commitments 189
Moral objectivity
Moral objectivity is a complex idea with multiple variants and diverse
proponents (for useful discussions see Goodwin and Darley 2010; Knobe et al.
2012; Sinnott-Armstrong 2009). For our purposes, to accept moral objectivism
is to believe that some moral claims are true in a way that does not depend on
people's decisions, feelings, beliefs, or practices. Thus, to reject the objectivity
of moral claims one can either deny that moral claims have a truth value or
allow that moral claims can be true, but in a way that does depend on decisions,
feelings, beliefs, or practices (e.g., Harman 1975; Sinnott-Armstrong 2009).
Non-cognitivism is typically an instance of the former position, and cultural
or moral relativism of the latter.
Recently, there have been a few attempts to examine empirically what people
believe about moral objectivity (Goodwin and Darley 2008, 2010; Forsyth
1980; Nichols 2004; Sarkissian et al. 2011; see Knobe et al. 2012, for review).
Goodwin and Darley (2008) asked participants to rate their agreement with
statements that were factual, ethical, social-conventional, or about personal
taste, and then asked them whether these statements were true, false, or "an
opinion or attitude." For example, one of the ethical statements was "Robbing
a bank in order to pay for an expensive holiday is a morally bad action," while
one of the social-conventional statements was that "Wearing pajamas and bath
robe to a seminar meeting is wrong behavior." Responding that these were
either true or false was considered a more objectivist response than selecting
"an opinion or attitude." Participants were later asked whether the fact that
someone disagreed with them about a given statement meant that the other
person was wrong, that neither person was wrong, that they themselves were
wrong, or something else entirely. On this measure, responding that one of the
two people must be wrong was taken as a more objectivist response.
Using a composite of these two measures, Goodwin and Darley found
evidence that people treat statements of ethical beliefs as more objective than
either social conventions or taste. They also found a great deal of variation
in objectivism across both ethical statements and individuals. Strongly held
ethical beliefs were seen as more objective than beliefs that people did not hold
as strongly, and those who said they grounded their ethical beliefs in religion,
moral self-identity, or the pragmatic consequences of failing to observe norms
were more likely to be objectivist about ethical statements. Subsequent work
has suggested that variation in objectivist beliefs is not an artifact of variation
concerning which issues participants themselves take to be moral, nor of
misunderstanding moral objectivism (Wright et al. 2012).
More recently, Sarkissian et al. (2011) have argued that relativist beliefs are
more prevalent than suggested by Goodwin and colleagues, but that these
beliefs are only observed when participants are comparing judgments made
by agents who differ from each other in important ways. In their studies,
participants were presented with two agents who disagreed about a moral
claim and were asked whether one of them must be wrong. For example,
participants were asked to imagine a race of extraterrestrial beings called
"Pentars" who have a very different sort of psychology from human beings.
Participants were then presented with a hypothetical case in which a classmate
and a Pentar had differing views on a moral case, and were asked to rate their
agreement with the statement that "at least one of them must be wrong."
Participants provided more objectivist answers ("one of them must be wrong")
when comparing judgments made by agents from the same culture, but more
relativist answers (denying that at least one of them must be wrong) when
comparing judgments made by agents from different planets (i.e., a human
and a Pentar). Sarkissian et al. argue that engaging with radically different
perspectives leads people to moral relativism.
What are the implications of this research? On the one hand, the findings
from Goodwin and Darley (2008) and Sarkissian et al. (2011) suggest that
metaethical beliefs are not particularly developed or unquestionably coherent.
They certainly challenge the idea that those without philosophical expertise can
be neatly classified as moral "objectivists" versus moral "relativists." Instead,
judgments vary considerably depending on the moral claim in question and
the way in which objectivism is assessed; in particular, whether a case of
disagreement involves similar or dissimilar agents.
On the other hand, a growing body of research suggests that moral
objectivism is systematically related to aspects of cognition and behavior that
go beyond metaethical beliefs. For example, Goodwin and Darley (2012)
found that moral claims were judged more objective when there was greater
perceived consensus. They also found that participants judged those who held
opposing beliefs as less moral and harder to imagine interacting with when
disagreement concerned a claim that was considered objective (see also Wright
et al. in press). Finally, Young and Durwin (2013) found that participants
primed to think in more objective terms were more likely to give to charity.
These findings, among others, suggest that despite intra- and interpersonal
variation in judgments, moral objectivism relates to factual beliefs (e.g., about
consensus), attitudes (e.g., tolerance of others), and decisions (e.g., about
whether to give to charity). We aim here to better understand the ways in which
metaethical beliefs are and are not systematic and coherent by considering
the relationship between three different metaethical beliefs: belief in moral
objectivism, belief in moral progress, and belief in a just world.
Moral progress
A belief in moral progress is a commitment to the idea that history tends
toward moral improvement over time. This notion, which postulates a certain
directionality in human history, can be contrasted with the notion of mere
moral change. Although moral progress has been defended at various points in
the history of philosophy, notably by Marx and Hegel, the notion also finds
expression in peoples ordinary thinking. For example, Martin Luther King
famously proclaimed, "the arc of the moral universe is long but it bends
towards justice" (King 1986).
It is worth noting that a belief in a historical tendency toward moral
progress can be consistently held while maintaining that moral progress can
be imperceptible, occurring over long stretches of time. Sometimes moral
improvement can be dramatic and rapid, while at other times not. Thus, belief in
a tendency toward moral progress does not require committing to a particular
rate of moral progress. Additionally, to hold that there is a basic tendency
toward moral progress in human history is also compatible with allowing that
these tendencies do not inevitably or necessarily prevail. Believing in some
tendency need not require belief in inevitability. For example, one could
believe that 6-year-old children tend to grow physically larger (e.g., that a child
at 14 years of age will be larger than that very same child at age 6) without
claiming that they inevitably or necessarily get physically larger (serious illness
or death could prevent their continuing to grow in size). Likewise, in the case
of moral progress, one could still allow that there could be exogenous forces
such as environmental and biological catastrophes or foreign invasions that
prevent the historical development toward moral progress.
One reason to focus on moral progress is that the notion is commonly invoked,
reflecting ideas in the broader culture. There is therefore reason to suspect that
people have commitments concerning its truth, and it is natural to ask, with
philosopher Joshua Cohen (1997), "Do [ideas of moral progress] withstand
reflective examination, or are they simply collages of empirical rumination and
reified hope, held together by rhetorical flourish?" (p. 93). In particular, we
might ask whether moral progress typically involves a commitment to moral
objectivism, as objective norms might be thought to causally contribute to
progress or simply provide a metric against which progress can be assessed.
It is also important to note that the notion of moral progress contains not
merely metaethical content but also a kind of descriptive content: to
believe in moral progress involves believing something about the nature of
human history and the character of the social world. This suggests that our
metaethical beliefs, including beliefs about moral objectivity, do not stand
alone, compartmentalized from other classes of beliefs. Not only might they
be rationally and causally related to each other, in some cases these beliefs are
inseparable, expressing a union between the ethical and the descriptive. Thus,
a second reason for our interest in considering moral progress in tandem with
moral objectivity is that it may reveal important connections between different
types of metaethical beliefs as well as connections between metaethical beliefs
and other beliefs (such as descriptive beliefs about consensus, or explanatory
beliefs about social phenomena).
Method
Participants
Three hundred and eighty-four participants (223 female; mean age = 33) were
recruited from Amazon Mechanical Turk, an online crowd-sourcing platform.
Participants received a small payment for their participation. All participants
identified themselves as being from the United States and as fluent speakers of
English.
Explanation solicitation
In the full experiment, participants were presented with a description of
a social change and asked to explain it in a few sentences (e.g., "Why was
slavery abolished?"). The changes included the abolition of slavery, women's
suffrage, and the potential legalization of same-sex marriage. Given our
present focus on the relationship between different metaethical beliefs,
we do not report findings concerning explanation here (see Uttich et al.
in prep).
Imagine a person named Allison, a fairly ordinary student from your town
who enjoys watching sports and hanging out with friends. Consider Allison's
views concerning the moral status of the following social institution:
Slavery.
Allison thinks that slavery is not morally wrong.
This scenario was matched with one involving a judgment from a different
time or place:
Imagine the social world of the United States in the eighteenth century.
Most people in this time and place view slavery as morally acceptable. The
existence of slavery is seen by many as part of the natural social order, slavery
is permitted by the law and the slave trade is at its peak, and someone who
owns many slaves is esteemed as admirable.
An individual, Jessica, from this society (eighteenth-century United
States), regards slavery as not morally wrong.1
In both cases, participants were then presented with a friend who disagreed:
Imagine that one of your friends thinks that slavery is morally wrong. Given
that these individuals (Allison [Jessica] and your friend) have different
judgments about this case, we would like to know whether you think at least
one of them must be wrong, or whether you think both of them could actually
be correct. In other words, to what extent would you agree or disagree with
the following statement concerning such a case?
Since your friend and Allison [Jessica] have different judgments about
this case, at least one of them must be wrong.
Baseline check
Participants were also asked for their personal views on whether the three
social changes were good or bad. For example, for the slavery fact participants
were presented with the following statement:
The demise of slavery was a good thing.
Participants rated their agreement with this statement on a 1–7 scale,
with 1 being "definitely disagree," 7 "definitely agree," and 4 "neither
agree nor disagree." All three social facts were rated. The social fact related
to the explanation for which each participant had been prompted was always
presented first.
Table 10.1 Means and factor loadings for statements of moral progress and belief
in a just world. Items with an asterisk were reverse coded. (Partial; columns are
item, mean (SD), and loadings on the four factors.)

An increase in moral justice in the world is inevitable. | 4.05 (1.69) | 0.729, 0.340, 0.227, 0.072

Concrete Tendency
Over time there is moral progress concerning slavery. | 5.54 (1.45) | 0.159, 0.146, 0.179, 0.813
Over time there is moral progress concerning voting rights. | 5.41 (1.35) | 0.250, 0.066, 0.249, 0.733
Over time there is moral progress concerning marriage rights. | 4.79 (1.60) | 0.586, 0.001, 0.290, 0.354

I firmly believe that injustices in all areas of life (e.g., professional, family, politics) are the exception rather than the rule. | 3.82 (1.52) | 0.082, 0.620, 0.055, 0.122
I think people try to be fair when making important decisions. | 4.69 (1.46) | 0.159, 0.607, 0.006, 0.271
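Reverse coding on a 1–7 agreement scale simply mirrors each rating around the scale midpoint. A minimal sketch (the helper name is ours, not the authors'):

```python
def reverse_code(rating, scale_min=1, scale_max=7):
    """Mirror a Likert rating around the scale midpoint (1 <-> 7 on a 1-7 scale)."""
    return scale_max + scale_min - rating

# A strongly agreeing 7 on a reverse-coded item counts as a 1, and so on
print([reverse_code(r) for r in [1, 2, 4, 6, 7]])  # -> [7, 6, 4, 2, 1]
```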
Counterbalancing
Participants either provided an explanation first (part 1) and then completed
the two moral objectivity measures (part 2) and the moral progress measures
and GBJW measures (part 3), with the order of parts 2 and 3 counterbalanced,
or they first completed the moral objectivity measures (part 2) and the moral
progress and GBJW measures (part 3), with order counterbalanced, followed
by explanations (part 1). Participants always completed the baseline check on
social facts (part 4) last.
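The counterbalancing scheme just described yields four presentation orders, which can be enumerated in a short sketch (part numbers follow the chapter's own labels; the code is purely illustrative):

```python
from itertools import permutations

# Parts: 1 = explanation, 2 = moral objectivity measures,
# 3 = moral progress + GBJW measures, 4 = baseline check (always last)
orders = []
for middle in permutations([2, 3]):          # counterbalance parts 2 and 3
    for explanation_first in (True, False):  # part 1 comes first or after parts 2/3
        core = [1, *middle] if explanation_first else [*middle, 1]
        orders.append(core + [4])
print(orders)  # four orders, each ending with part 4
```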
Results
We begin by reporting the data for each set of questions individually, and then
consider the relationship between different metaethical commitments.
Individual measures
Baseline check measures
The baseline check confirmed our assumptions about participants' own
attitudes toward the moral claims in question. The average ratings were 6.70 of
7 (SD = 0.95) for the demise of slavery, 6.63 (SD = 1.00) for women's suffrage,
and 5.15 (SD = 2.20) for same-sex marriage.
fact, F(2, 381) = 36.35, p < 0.01, with responses that were more objectivist for
slavery (M = 4.99, SD = 1.90) and women's suffrage (M = 4.90, SD = 1.69)
than for same-sex marriage (M = 4.30, SD = 1.95).
Because the correlation between participants' current (C) and historical
(H) ratings was very high (r = 0.817, p < 0.01), we consider the average
rating (C+H) for each participant (M = 4.72, SD = 1.87) in most subsequent
analyses.
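Averaging two measures is warranted when they are highly correlated, as here. A minimal sketch of the Pearson correlation and the averaged C+H composite, using fabricated ratings rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated 1-7 ratings for the current (C) and historical (H) vignettes
c = [5, 6, 3, 7, 4, 2, 6, 5]
h = [4, 6, 2, 7, 4, 1, 5, 5]

r = pearson_r(c, h)
composite = [(ci + hi) / 2 for ci, hi in zip(c, h)]  # the averaged C+H measure
print(round(r, 3), composite)
```

With strongly correlated ratings like these, per-participant averaging loses little information, which is the rationale the text gives for using C+H.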
inevitability emerged only for the concrete items, where participants may
have been able to look back over time at the specific issues we considered to
identify both general trends and temporary setbacks.3 In subsequent analyses
we examine correlations between these four factors and our two measures of
moral objectivism.
measure captures some important and unique variance in beliefs about moral
objectivism, perhaps roughly capturing relativism and non-cognitivism,
respectively. To further investigate the possible relationships between
measures, we also considered whether TFO might be related to the difference
between C and H ratings (C-H), which can be conceptualized as a measure
of the extent to which a participant is influenced by sociocultural factors in
evaluating the truth of a moral claim. One might therefore expect a significant
negative correlation between TFO and C-H, but in fact the relationship was
very close to zero. Coupled with the high correlation between judgments on
the C and H questions, and the fact that C and H had very similar relationships
to other variables, this suggests that varying the sociocultural context for a
belief can indeed affect judgments concerning disagreement, but that the
effect is more like a shift in the absolute value of participants judgments than
the recruitment or application of different moral commitments.
Second, while both the C+H and TFO ratings were related to moral progress
and GBJW, they had unique profiles in terms of the specific factors with which
they correlated. The C+H measure was correlated with the concrete tendency
factor (r = 0.239, p < 0.01), while the TFO measure was positively correlated with
the abstract progress factor (r = 0.127, p < 0.05) and negatively correlated with
the GBJW factor (r = −0.116, p < 0.05). Although these correlations were small,
they suggest systematic relationships between measures, and more surprisingly,
non-overlapping relationships, providing further evidence that judgments of
disagreement (C+H) and judgments concerning whether moral claims have a
truth value (TFO) reflect different facets of folk metaethical commitments.
Finally, it's worth considering why C+H and TFO had these distinct profiles.
We speculate that the dimension of concrete versus abstract evaluation can
partially explain these results. Specifically, C+H and the concrete tendency
factor were positively associated and involved particular moral claims (e.g.,
about slavery) rather than abstract claims, while TFO and the abstract progress
factor were positively associated and involved judgments that were more
explicitly metaethical in that they concerned the status of particular moral
ideas (i.e., whether there is moral progress in general and whether particular
claims have a truth value). However, this speculation does not explain why the
C+H measure was not also associated with the concrete tendency factor, nor
does it explain the negative association between TFO and the GBJW factor.
General discussion
Our results suggest that metaethical beliefs are varied and complex, with
significant but modest relationships across different sets of beliefs. Our results
also reinforce some of the conclusions from prior research. Like Goodwin
and Darley (2008, 2012), we find significant variation in objectivism across
individuals, and also that judgments reflect greater objectivism for some
social facts (slavery) than for others (same-sex marriage), perhaps echoing
their findings on the role of consensus, and also consistent with the strength of
participants' attitudes concerning each social fact. Like Sarkissian et al. (2011),
we find evidence that measures that highlight different perspectives seem
to increase non-objectivist responses, as our historical vignette generated
less objectivist responses than the matched current vignette, although the
responses were strongly correlated. Our findings therefore support the need
to consider the characteristics of both participants and measures in drawing
conclusions about metaethical beliefs.
Beyond illuminating variation between individuals, our findings shed light
on the coherence and variability of metaethical beliefs within individuals.
Correlations between our measures of metaethical beliefs suggest two
conclusions: that the metaethical concepts we investigate have some common
elements, but also that there is only partial coherence in the corresponding
beliefs. Our two separate measures of moral objectivity (C+H and TFO)
were significantly correlated, but only weakly so. The correlation was weak
despite modifications from Goodwin and Darley (2008) and Sarkissian et al.
(2011) to make the measures more comparable: both involved judgments
on 7-point scales and referred to the same moral claims. Analyses of the
relationship between these two measures and the four factors concerning
moral progress and GBJW suggest that moral objectivism is related to these
ideas, but the two measures of objectivism had unique patterns of association.
If participants have strong, stable, and consistent metaethical commitments,
why might responses to metaethical questions be so weakly related?
We first consider methodological and conceptual answers to this question.
One possibility is that we observe weak associations between metaethical
commitments as an artifact of our methods of measurement. This idea is
consistent with a suggestion by Sarkissian et al. (2011), who argue that when
Notes
References
Dalbert, C., Montada, L., and Schmitt, M. (1987). Glaube an eine gerechte Welt
als Motiv: Validierungskorrelate zweier Skalen [Belief in a just world as motive:
Validity correlates of two scales]. Psychologische Beiträge, 29, 596–615.
Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and
Social Psychology, 39, 17584.
Furnham, A. (2003). Belief in a just world: Research progress over the past decade.
Personality and Individual Differences, 34, 795817.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 133966.
(2010). The perceived objectivity of ethical beliefs: Psychological findings and
implications for public policy. Review of Philosophy and Psychology, 1, 16188.
(2012). Why are some moral beliefs seen as more objective than others? Journal
ofExperimental Social Psychology, 48, 2506.
Hare, R. M. (1952). The Language of Morals. New York: Oxford University Press.
Harman, G. (1975). Moral relativism defended. Philosophical Review, 84, 322.
King, M. L., Jr. (1986). A Testament of Hope: The Essential Writings of Martin Luther
King, Jr. In J. Washington (ed.). San Francisco: Harper and Row.
Knobe, J., Buckwalter, W., Nichols, S., Robbins, P., Sarkissian, H., and Sommers, T.
(2012). Experimental philosophy. Annual Review of Psychology, 63, 8199.
Lerner, M. (1980). The Belief in a just World: A Fundamental Delusion. New York:
Plenum.
Lombrozo, T. (2009). The role of moral commitments in moral judgment. Cognitive
Science, 33, 27386.
Nichols, S. (2004). After objectivity: An empirical study of moral judgment.
Philosophical Psychology, 17, 326.
Nichols, S., and Knobe, J. (2007). Moral responsibility and determinism: The
cognitive science of folk intuitions. Nous, 41, 66385.
Sarkissian, H., Park, J., Tien, D., Wright, J. C., and Knobe, J. (2011). Folk moral
relativism. Mind & Language, 26, 482505.
Shtulman, A. (2010). Theories of god: Explanatory coherence in a non-scientific
domain. In S. Ohlsson and R. Catrambone (eds), Proceedings of the 32nd Annual
Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society,
pp. 1295300.
Sinnott-Armstrong, W., Moral skepticism. The Stanford Encyclopedia of Philosophy
(Summer 2009 Edition), Edward N. Zalta (ed.), URLhttp://plato.stanford.edu/
archives/sum2009/entries/skepticism-moral/.
Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12,
435502.
11

Agent Versus Appraiser Moral Relativism: An Exploratory Study
Theories of moral relativism do not always fit well with common intuitions. In the Theaetetus, Plato ridiculed the relativist teachings of Protagoras (Plato 1921), and Bernard Williams dubbed moral relativism "possibly the most absurd view to have been advanced even in moral philosophy" (Williams 1972, p. 20). Nonetheless, even though some moral philosophers oppose theories of moral relativism due to their counterintuitive implications (e.g., Streiffer 1999), other philosophers defend relativism by referring to common intuitions, lay people's speech acts, or common understandings of certain moral terms (e.g., Brogaard 2008; Harman 1975; Prinz 2007). These intuitions have been investigated empirically: On the one hand, empirical studies suggest that survey respondents can make relativist moral judgments (Goodwin and Darley 2008; Sarkissian et al. 2012; Wright et al. 2012; Wright et al. in press). On the other hand, the folk's moral relativist intuitions might be self-contradictory (cf. Beebe 2010), and this can be used as an argument against relativist moral theories (Williams 1972).
If the prevalence and coherence of folk moral relativism are to play a role
in arguments regarding the philosophical merits of moral relativism, then
we need to know what the folk actually adhere to. In this regard, for several
reasons, it is important to take into account that there are different kinds of
moral relativism (Quintelier and Fessler 2012). First, failure to do so may
lead to an underestimation of the prevalence of folk moral relativism, as
respondents may employ relativist intuitions of a kind other than that being
Moral relativism
of personal choice over the value of the unborn life. According to some kinds of moral relativism, a pro-choice activist, say Christine, can truthfully judge that abortion is permissible because it is in accordance with her moral framework. Nonetheless, if a pro-life activist, say Lisa, abhors abortion, Lisa's statement regarding the impermissibility of abortion is also true, because it is in accordance with Lisa's moral framework, which prioritizes the value of the unborn life over personal choice. In this example, the truth of moral statements thus depends on the moral framework of the person uttering a moral statement.
Second, moral relativism holds that there is variation between these moral frameworks. In our example, some people are pro-choice and others are pro-life. People's moral judgments will therefore sometimes differ because their respective moral frameworks differ.
Finally, moral relativism rests on the philosophical assumption that this variation in moral frameworks cannot be eliminated. For instance, one can hold that both frameworks are equally true, that there is no truth about the matter, or that they are equally practical, etc. If moral relativism allowed that all variation in moral frameworks could be eliminated, it would be compatible with (most forms of) moral universalism. This meaning of moral relativism would be too broad for our purposes.
Carol and Laura, or based on the moral frameworks of the appraisers judging the act, this being Lisa and Christine? Or could any moral framework be an appropriate frame of reference?
According to agent moral relativism, the appropriate frame of reference is the agent's moral framework. In this example, it would be permissible for Carol (the pro-choice agent) to have an abortion, but it would not be permissible for Laura (the pro-life agent) to have an abortion. Viewed from the perspective of agent moral relativism, Christine's evaluative statement that both abortions are permissible is false, even though this statement is in accordance with her own moral framework. In contrast, for an agent moral relativist, it would be correct for an appraiser, such as Christine, to disapprove of Laura's abortion (as inconsistent with Laura's own moral perspective) and to permit Carol's abortion (as consistent with Carol's own moral perspective).
In contrast, according to appraiser relativism, the moral frameworks of the agents (Laura and Carol) are irrelevant for a moral judgment to be true or false. What matters instead are the moral frameworks of the appraisers, Christine and Lisa. Viewed from the perspective of appraiser moral relativism, Christine's evaluative statement that both abortions are permissible is correct, even though abortion is against Laura's (the agent's) framework.
In what follows, we consider appraisers as only those who evaluate a moral act without being involved in the act. We consider agents as only those doing the act without uttering a statement about the act. Thus, considering the act of lying, when A lies to B, A and B are not appraisers. Of course, in reality, agents can appraise their own actions. Moreover, when appraisers are uttering a moral statement (for example, C says to D that lying is wrong), they might in the first place have themselves as agents in mind; thus, appraisers can also be agents. However, simplifying matters this way will make it easier to investigate whether lay people indeed draw a distinction between agents and appraisers when assessing the status of moral statements and behavior.
The distinction between agent moral relativism and appraiser moral relativism
is important when evaluating moral theories. One possible argument against
Agent Versus Appraiser Moral Relativism: An Exploratory Study 213
Existing studies about folk moral relativism most often vary only the appraisers.
To date, investigators have yet to examine whether participants also reveal
agent relativist intuitions in experimental studies.
Goodwin and Darley (2008) and Wright et al. (2012; in press) both report the existence of relativist moral positions. In these studies, participants are presented with statements such as "Before the 3rd month of pregnancy, abortion for any reason (of the mother's) is acceptable." Some participants indicated that the statement was true (or false) but that a person who disagrees with them about the statement need not be mistaken. Hence, in these studies, participants allowed the truth value of a moral statement to vary when the appraiser varied. We do not know if participants would also allow the truth of a moral statement, or the rightness of an act, to vary when the agent varied.
Sarkissian et al. (2011) were able to guide participants' intuitions in the direction of moral relativism by varying the cultural background of the appraisers. They also varied the cultural background of the agents, but this did not have an effect on participants' intuitions. However, this apparent null result is subject to the methodological limitation that the cultural backgrounds of the hypothetical agents were much more similar to each other (an American vs. an Algerian agent) than were the cultural backgrounds of the hypothetical appraisers (a classmate vs. an appraiser from a fictitious primitive society, or an extraterrestrial).
Because the above studies do not allow us to conclude whether the folk
show agent relativist moral speech acts, we developed scenarios in which we
explicitly varied the moral frameworks of both agents and appraisers.
Method
Participants
From April to June 2013, we recruited participants using Amazon.com's Mechanical Turk web-based employment system (hereafter MTurk). This is
a crowdsourcing website that allows people to perform short tasks, including
surveys, for small amounts of money. Anyone over 18 could participate. We
analyzed data from 381 participants, who were mostly from the United States
(234) and India (118).
Scenario 1
Mr Jay is the boss of family business J in a small town in the Midwestern
United States. In this company, when employees are late for work, their wages
are reduced by a proportionate amount. As a consequence, everyone in this
company has come to think that a proportionate wage reduction is a morally
right punishment for being late for work. They think reducing lunch breaks as
a punishment is morally wrong because this is never done and they value their
lunch breaks.
One day, John is late for work. This day, his boss is not in the mood to deal with administrative issues such as adjusting John's wages. Instead, he punishes John by shortening his lunch break, even though Mr Jay himself, John, and all the other employees in this company think this is morally wrong.
Scenario 2
Mr May is the boss of another family business M in the same small town in the
Midwestern United States. In this company, when employees are late for work,
their lunch break is proportionately shortened. As a consequence, everyone in
this company has come to think that a proportionately shorter lunch break is
a morally right punishment for being late for work. They think that reducing
wages as a punishment is morally wrong because this is never done and they
value their income.
One day, Michael is late for work. His boss punishes Michael by shortening
his lunch break. Mr May himself, Michael, and all the other employees in this
company think that this is morally right.
Because this punishment is concordant with the agent's moral framework, we refer to this scenario as AC.
Participants then answered the following judgment question on a 5-point Likert scale: "To what extent do you think Mr May's behavior is morally wrong?" (1 = certainly morally wrong; 5 = certainly not morally wrong). Thus, the higher the participants' scores, the more their judgment was concordant with the agents' moral frameworks.
Participants again answered two comprehension questions.
In order to test whether participants' moral judgments depended on the agents' moral frameworks, we used AC and AD as within-subject levels of the variable AGENT.
In order to test whether participants' moral intuitions varied depending on the appraisers' and the agents' moral frameworks, participants were presented
with two additional scenarios, presented in randomized order, that extend
the previous scenarios through the addition of appraisers who utter a moral
statement about the act.
Scenario 3
James and Jared are employees in Mr Jay's company. They both know that in Mr May's company, everyone thinks shortening lunch breaks is morally right. Of course, in their own company, it is just the other way around: Everybody in Mr Jay's company, including James and Jared, thinks that shorter breaks are a morally wrong punishment, and that wage reduction is a morally right punishment.
James and Jared have heard that Mr May shortened Michael's lunch break. James says to Jared: "What Mr May did was morally wrong."
This statement is discordant with the agent's moral framework and concordant with the appraisers' moral framework. We therefore label this scenario AGDAPC.
Participants answered the following question: "To what extent do you think that what James says is true or false?" (1 = certainly true; 5 = certainly false). Thus, the higher the participants' scores, the more their truth evaluation was concordant with the agents' moral frameworks but discordant with the appraisers' moral frameworks. Since this is at odds with the scenario label, we reverse coded this item. Thus, for the final variable that was used in the analysis, higher scores indicate that the response was more discordant with the agents' moral frameworks and more concordant with the appraisers' moral frameworks.
Participants answered one comprehension question.
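The reverse coding described above is simple arithmetic on the 5-point scale: a raw response x becomes 6 − x. A minimal sketch (the function and variable names are ours, not the authors'):

```python
def reverse_code(score: int, scale_max: int = 5) -> int:
    """Flip a 1..scale_max Likert response: 1 becomes scale_max, and so on."""
    if not 1 <= score <= scale_max:
        raise ValueError("score outside scale")
    return (scale_max + 1) - score

# A raw answer of 1 ("certainly true") becomes 5 after recoding, so higher
# final scores mean the truth evaluation was more discordant with the agents'
# frameworks and more concordant with the appraisers' frameworks.
recoded = [reverse_code(r) for r in [1, 2, 3, 4, 5]]
# recoded == [5, 4, 3, 2, 1]
```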
Participants were then led to the following text: Now Jared replies to James: "No, what Mr May did was not morally wrong." This statement is concordant with the agent's moral framework and discordant with the appraisers' moral framework. We therefore label this scenario AGCAPD.
Participants answered the following question: "To what extent do you think that what Jared says is true or false?" (1 = certainly true; 5 = certainly false). Again, we reverse coded this item. For the final variable that was used in the analysis, higher scores indicate that the response was more concordant with the agents' moral frameworks and more discordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants answered one comprehension question.
Scenario 4
Mark and Matthew are employees in Mr May's company. They both know that in their own company, everybody, just like Mark and Matthew themselves, thinks that reducing wages is a morally wrong punishment, and that shortening lunch breaks is a morally right punishment.
Mark and Matthew have heard that Mr May shortened Michael's lunch break. Mark says to Matthew: "What Mr May did was morally wrong."
This statement is discordant with the agent's moral framework and discordant with the appraisers' moral framework. We therefore label this scenario AGDAPD.
Participants answered the following question: "To what extent do you think that what Mark says is true or false?" (1 = certainly true; 5 = certainly false). Higher scores on this statement indicate that the participants' truth evaluation was more concordant with both the appraisers' and the agents' moral frameworks. We reverse coded this item. For the final variable that was used in the analysis, higher scores indicate that the response was more discordant with the agents' moral frameworks and more discordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants answered one comprehension question.
Participants were then led to the following text: Now Matthew replies to Mark: "No, what Mr May did was not morally wrong." This statement is concordant with the agent's moral framework and concordant with the appraisers' moral framework. We therefore label this scenario AGCAPC.
Participants answered the following question: "To what extent do you think that what Matthew says is true or false?" (1 = certainly true; 5 = certainly false). We reverse coded this item, such that higher scores indicate that the response was more concordant with the agents' moral frameworks and more concordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants again answered one comprehension question.
Participants thus had to indicate the truth of four moral statements. The variable AGENT TRUTH consists of the following two within-subject levels: agent concordant (AGCAPC and AGCAPD) and agent discordant (AGDAPC and AGDAPD). The variable APPRAISER TRUTH consists of the following two levels: appraiser concordant (AGCAPC and AGDAPC) and appraiser discordant (AGCAPD and AGDAPD).
The sailors' questionnaire featured the following scenario:
Scenario 1
Mr Johnson is an officer on a cargo ship in 2010, carrying goods along the
Atlantic coastline. All the crew members are American but the ship is mostly
in international waters. When a ship is in international waters, it has to follow the law of the state whose flag it sails under, and each ship can sail under only one flag. This ship does not sail under the US flag. The law of this ship's flag state allows both whipping and food deprivation as a punishment.
On this ship, food deprivation is always used to discipline sailors who disobey orders or who are drunk on duty; as a consequence, everyone on this ship, Mr Johnson as well as all the sailors, has come to think that food deprivation is a morally permissible punishment. Whipping, however, is never used to discipline sailors, and everyone on this ship, Mr Johnson as well as all the sailors, thinks whipping is a morally wrong punishment.
One night, while the ship is in international waters, Mr Johnson finds a
sailor drunk at a time when the sailor should have been on watch. After the
sailor sobers up, Mr Johnson punishes the sailor by giving him 5 lashes with a
whip. This does not go against the law of the flag state.
Subsequent scenarios, experimental questions, and comprehension questions were analogous to the employees' questionnaire: As in the employees' questionnaire, there were eight comprehension questions and six experimental questions.
Results
In order to ensure that participants read and understood the scenarios, we only retained those participants who answered all eight comprehension questions correctly. We analyzed the data from the two questionnaires separately. We analyzed data from 272 participants (50.4% women) for employees and 109 participants (51.4% women) for sailors. For some analyses, the total number of participants was lower due to missing values. For employees, mean age was 34.92 years (SD = 12.42), ranging from 19 to 75 years old. For sailors, mean age was 35.63 years (SD = 12.11), ranging from 20 to 68. Participants were mostly from the United States (58.1% and 69.7%) and India (34.2% and 22.9%) for employees and sailors, respectively.
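The exclusion rule above reduces to a simple filter over participants' comprehension answers. A sketch under assumed data structures (the field names are ours, not the authors'):

```python
# Keep only participants who answered all eight comprehension questions
# correctly; each questionnaire is then analyzed separately.
def retain(participants):
    return [p for p in participants if sum(p["comprehension_correct"]) == 8]

sample = [
    {"id": 1, "comprehension_correct": [1] * 8},        # all correct: kept
    {"id": 2, "comprehension_correct": [1] * 7 + [0]},  # one wrong: dropped
]
kept = retain(sample)
# kept contains only participant 1
```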
[Figure: Mean moral permissibility of behavior, by questionnaire (employees vs. sailors) and scenario (agent concordant vs. agent discordant behavior); error bars: 95% CI.]
We found that the agents' moral frameworks (AGENT TRUTH) had an effect on whether participants thought that the moral statement was true or not (employees: F(1,270) = 76.3, p < 0.001; sailors: F(1,107) = 53.9, p < 0.001). Specifically, participants thought that the statement was more likely to be true when it was in accordance with the agents' moral frameworks (see Figure 11.2; employees: M = 3.46, SD = 0.053; sailors: M = 3.62, SD = 0.089) than when it was not in accordance with the agents' moral frameworks (employees: M = 2.61, SD = 0.053; sailors: M = 2.61, SD = 0.096).
We found that the appraisers' moral frameworks (APPRAISER TRUTH) also had a significant effect on whether participants thought that the moral statement was true or not (employees: F(1,270) = 2496, p < 0.001; sailors: F(1,107) = 33.3, p < 0.001). Specifically, participants thought that the moral statement was more likely to be true when it was in accordance with the appraisers' moral frameworks (see Figure 11.3; employees: M = 3.75, SD = 0.051; sailors: M = 3.71, SD = 0.081) than when it was not in accordance with the appraisers' moral frameworks (employees: M = 2.32, SD = 0.050; sailors: M = 2.51, SD = 0.092). We did not find a main effect of order (employees: F(1,270) = 0.318, p = 0.573; sailors: F(1,107) = 0.067, p = 0.797).
[Figure 11.2: Mean truth of statement, by questionnaire (employees vs. sailors) and scenario (agent concordant vs. agent discordant statement); error bars: 95% CI.]
[Figure 11.3: Mean truth of statement, by questionnaire (employees vs. sailors) and scenario (appraiser concordant vs. appraiser discordant statement); error bars: 95% CI.]
For employees, but not for sailors, we found a significant two-way interaction between AGENT TRUTH and APPRAISER TRUTH (see Figure 11.4; employees: F(1,270) = 7.58, p = 0.006; sailors: F(1,107) = 0.199, p = 0.657). As Figure 11.4 shows, although this interaction was significant, the effect of the agents' (or appraisers') moral frameworks was in the same direction in both conditions. Interestingly, in the employees' questionnaire, the statement was perceived to be more true when it was concordant with the appraisers' moral framework and discordant with the agents' moral framework (M = 3.40, SD = 1.39) than when it was concordant with the agents' moral framework and discordant with the appraisers' moral framework (M = 2.81, SD = 1.42). In the sailors' questionnaire, though, the truth values of these statements were similar (M = 3.21, SD = 1.47; M = 3.04, SD = 1.47). This suggests that, in the employees' questionnaire, the appraisers' moral framework was more important than the agents' moral frameworks when there was some discordance, while in the sailors' questionnaire, the appraisers' and agents' moral frameworks were almost equally important when there was some discordance.
[Figure 11.4: Mean truth of statement, by scenario (appraiser concordant vs. appraiser discordant statement, crossed with agent concordant vs. agent discordant statement); error bars: 95% CI.]
Because these results suggest that agents' and appraisers' moral frameworks independently matter for people's evaluations of the truth of moral statements, it might be the case that some people are predominantly and consistently agent relativists while others are predominantly and consistently appraiser relativists. In order to explore this possibility, we calculated three new variables: AGENT DEGREE (AC − AD) as the degree to which participants relativized the permissibility of behavior according to the agents' moral frameworks, AGENT TRUTH DEGREE ((AGCAPC + AGCAPD) − (AGDAPC + AGDAPD)) as the degree to which participants relativized the truth of moral statements according to the agents' moral frameworks, and APPRAISER TRUTH DEGREE ((AGCAPC + AGDAPC) − (AGCAPD + AGDAPD)) as the degree to which participants relativized the truth of moral statements according to the appraisers' moral frameworks.
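The three degree variables amount to signed differences of scenario ratings. A rough sketch, assuming one record of (reverse-coded) ratings per participant keyed by the scenario labels above (the data layout here is our illustration, not the authors' actual format):

```python
def degree_scores(p: dict) -> dict:
    """Compute the three relativization indices for one participant."""
    return {
        # permissibility relativized to the agents' frameworks
        "AGENT_DEGREE": p["AC"] - p["AD"],
        # truth relativized to the agents' frameworks
        "AGENT_TRUTH_DEGREE": (p["AGCAPC"] + p["AGCAPD"])
                              - (p["AGDAPC"] + p["AGDAPD"]),
        # truth relativized to the appraisers' frameworks
        "APPRAISER_TRUTH_DEGREE": (p["AGCAPC"] + p["AGDAPC"])
                                  - (p["AGCAPD"] + p["AGDAPD"]),
    }

# A hypothetical participant who tracks the agent's framework for
# permissibility and the appraisers' frameworks for truth scores positively
# on the first and third indices:
p = {"AC": 4, "AD": 2, "AGCAPC": 5, "AGCAPD": 2, "AGDAPC": 4, "AGDAPD": 1}
scores = degree_scores(p)
# scores["AGENT_DEGREE"] == 2
# scores["APPRAISER_TRUTH_DEGREE"] == 6
```

Positive values on an index mean the participant relativized to that frame of reference; values near zero mean that frame made little difference to their ratings.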
For sailors, we found that AGENT DEGREE was positively and significantly related to AGENT TRUTH DEGREE (F(1,108) = 11.0, p = 0.001) but not to APPRAISER TRUTH DEGREE (F(1,108) = 0.000, p = 0.989). This suggests that participants who were relativists with regard to moral permissibility were more likely to be agent relativists with regard to moral truth. Thus, they might have been agent relativists with regard to moral permissibility and with regard to moral truth, and therefore quite consistent in their relativist intuitions.
However, for employees, it was just the other way around: AGENT DEGREE was positively and significantly related to APPRAISER TRUTH DEGREE (F(1,271) = 5.30, p = 0.022) but not to AGENT TRUTH DEGREE (F(1,271) = 0.141, p = 0.708). In this scenario, participants might have been inconsistent in their relativist intuitions, alternating between agent relativist speech acts with regard to moral permissibility and appraiser relativist speech acts with regard to moral truth. Alternatively, participants might have interpreted the actors in the moral permissibility scenario as appraisers instead of agents, as explained in the introduction. Thus, they might have been appraiser relativists with regard to moral permissibility and with regard to moral truth.
Finally, for employees, but not for sailors, we found a significant interaction effect between APPRAISER TRUTH and order of presentation (see Figure 11.5; F(1,270) = 26.76, p < 0.001). Examining Figure 11.5, though, we see that the
[Figure 11.5: Mean truth of statement, by order of presentation (appraiser concordant first vs. second) and scenario (appraiser concordant vs. appraiser discordant statement); error bars: 95% CI.]
effect of the appraisers' moral frameworks was again in the same direction in both orders. Thus, the folk seem to be appraiser relativists regardless of order of presentation or variation in the appraisers' moral frameworks.
There were no interaction effects between AGENT TRUTH and order of presentation.
Discussion
For employees, but not for sailors, we found two interaction effects.
We found a significant two-way interaction between AGENT TRUTH and APPRAISER TRUTH and between APPRAISER TRUTH and order of presentation. However, the effects were always in the same direction, meaning that our second conclusion is upheld: individuals take both agents' and appraisers' moral frameworks into account when assessing the truth of moral statements. Further research may reveal whether these interaction effects are a consistent pattern in folk moral relativism, or whether they were an artifact of the employees' scenario.
Finally, we explored the possibility that some people are predominantly and
consistently agent relativists while others are predominantly and consistently
appraiser relativists. Our results are not conclusive. Whether people are
predominantly agent moral relativists or appraiser moral relativists might
vary depending on the scenario or depending on the moral aspect (truth vs.
permissibility) that is to be evaluated.
Our results are not definitive. Notwithstanding the fact that we excluded all participants who did not answer all comprehension questions correctly, given the complexity of our scenarios and questions, future investigations would benefit from simpler materials. Also, we examined assessments of only two acts, namely reduction in lunch time and whipping, both as a punishment. The extent of lay people's moral relativism may depend on the kind of act or the modality of the moral statement. In addition, it remains to be seen whether agent relativism and appraiser relativism are stable intuitions or vary across a range of situations. These and other possibilities warrant future research, some of which has already been undertaken by the present authors (Quintelier et al. 2013).
With the above caveats in mind, our study reveals that there is inter-individual as well as intra-individual variation in whether individuals relativize moral speech acts to agents or to appraisers. Such variation in types of moral intuitions is in line with previous suggestions (e.g., Gill 2009; Sinnott-Armstrong 2009) that different individuals employ quite divergent moral language. The variation that we have documented thus supports Gill's position that philosophical theories that appeal to lay people's speech acts cannot "rely on a handful of commonsense judgments" (2009, p. 217), as the philosopher's commonsense judgment will often fail to reflect the actual distribution of moral reasoning among the folk. Moreover, that people may employ divergent relativist forms of language indicates that researchers of moral relativism cannot make claims regarding moral relativism without first specifying the type of relativism at issue, nor can they attend only to appraiser relativism.
Methodologically, researchers must take care in designing stimuli and queries
in order to minimize ambiguity as to which type of relativism is made salient.
Whether they be empiricists or theorists, researchers of moral relativism must
take seriously the existence of agent moral relativism, and must consider the
differences between it and appraiser moral relativism.
References
Wright, J. C., Grandjean, P., and McWhite, C. (2012). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 1–26. doi:10.1080/09515089.2011.633751
Wright, J. C., McWhite, C., and Grandjean, P. T. (in press). The cognitive mechanisms of intolerance: Do our meta-ethical commitments matter? In T. Lombrozo, S. Nichols, and J. Knobe (eds), Oxford Studies in Experimental Philosophy, Vol. 1. Oxford: Oxford University Press.
Part Three
Measuring Morality
12
scholars nor laypeople agree about the moral significance of these thoughts
and behaviors. As a consequence, moral psychologists often investigate the
causes and consequences of behaviors that either have little impact on the
world or are of little moral interest to scholars and laypeople. We also explain
how a small change in how morality is typically assessed could significantly
increase the scope and importance of morality research. Finally, we present
an empirically informed list of traits and behaviors that we consider useful
proxies of morality.
In this chapter, we will focus primarily on the construct of morality; however, most of the concerns we raise and the solutions we offer are relevant to other normative concepts that are likely of interest to readers of this volume, such as prosociality and selfishness. Although we think moral psychologists' current nomothetic bent poses a problem for many areas of psychology (e.g., judgment and decision-making, motivation), here we focus on moral behavior, the area of research where we think this bias is most prevalent and problematic.
Operationalizing Morality: An introduction to the problem
2004), social activism (Colby and Damon 1992), honesty (Derryberry and Thoma 2005), environmentally friendly behavior (Kaiser and Wilson 2000), and community service (Hart et al. 2006) to operationalize morality. To this list, we add volunteerism (Aquino and Reed 2002), honesty (Teper et al. 2011), and cooperation (Crockett et al. 2010), to name a few.
The traditional alternative to the third-person approach is the first-person, value-neutral, descriptive approach (Frimer and Walker 2008). In contrast to the third-person approach, the first-person approach assesses morality according to what the participant herself considers moral. The impartial researcher deems each individual's own set of principles or actions moral, and deems failing to follow or enact those principles or actions not moral. Although no less a figure than Gordon Allport proposed that a first-person approach is the only valid means of assessing moral behavior (Allport 1937), and researchers have long warned of the flaws inherent in third-person morality research (e.g., Pittel and Mendelsohn 1966), moral behavior has rarely been assessed using the first-person approach. Psychologists have occasionally taken participants' idiosyncratic moral beliefs into account when determining which variables to include in their analyses, but have taken this step almost exclusively when studying moral cognition (e.g., Goodwin and Darley 2008; Wright 2008; Wright 2010), not when studying moral behavior.
That said, the first-person approach is very similar to the approach taken
by advocates of social-cognitive process models of general personality, such
as the Cognitive-Affective Personality System (Mischel and Shoda 1995, 1999)
and the Knowledge-and-Appraisal Personality Architecture (Cervone 2004).
These models were developed largely in response to claims that behavior is
not consistent across situations. Their creators and advocates argued that
past research had found behaviors associated with traits such as agreeableness
and conscientiousness to be inconsistent because participants' perceptions of
situations were not taken into account. When researchers assessed behavior
across situations that were psychologically similar according to the
participants themselves (not just nominally similar from an outsider's
perspective), they discovered that cross-situational consistency was high
(Mischel and Shoda 1998).
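The contrast between nominal and psychological grouping can be illustrated with a toy computation. Everything here is hypothetical (the scores, the situation labels, and the two-way grouping are our own illustration, not data from Mischel and Shoda): behavior looks inconsistent when all nominally similar situations are lumped together, but highly consistent within situations the participant construes as similar.

```python
from statistics import pstdev

# Hypothetical helpfulness ratings (0-10) for one participant across six
# situations. All six are nominally "helping situations," but the
# participant experiences two psychologically distinct kinds:
# requests from peers vs. demands from authority figures.
scores = {
    "peer_1": 9, "peer_2": 8, "peer_3": 9,
    "authority_1": 2, "authority_2": 3, "authority_3": 2,
}

def spread(keys):
    """Population standard deviation of the scores for the given situations."""
    return pstdev(scores[k] for k in keys)

nominal_spread = spread(scores)  # all six situations lumped together
peer_spread = spread(k for k in scores if k.startswith("peer"))
authority_spread = spread(k for k in scores if k.startswith("authority"))

# Lumped nominally, the behavior looks wildly inconsistent; grouped by the
# participant's own construal, it is highly consistent within each kind.
print(round(nominal_spread, 2))    # 3.2
print(round(peer_spread, 2))       # 0.47
print(round(authority_spread, 2))  # 0.47
```

The large spread across all six situations and the small spreads within each participant-defined group mirror, in miniature, the consistency result described above.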
The idea behind the first-person approach to operationalizing morality is
that something similar might be true for moral behavior: people's behavior
236 Advances in Experimental Moral Psychology
might not fall in line with what researchers deem moral (i.e., what is nominally
moral), but their behavior might fall in line with what they personally consider
moral (i.e., what is psychologically moral).
Lay moral disagreement takes at least three forms: (a) variability in the
degree to which laypeople consider actions and traits morally
relevant, (b) disagreement about the moral valence of certain actions and
behaviors (e.g., some see obedience to authority figures as good, some see it as
bad), and, perhaps most problematic for moral psychology, (c) disagreement
between researchers and laypeople about the moral relevance of several
behaviors and traits.
The first two types of lay disagreement demand little attention here, as
intra- and intercultural variations in moral concerns and judgments have
recently received much attention in the psychology literature (for a review, see
Graham etal. 2013). Consequently, we will focus on the discrepancy between
the amount of moral relevance that researchers place on behaviors and traits
themselves (or assume that their participants place on them) and the moral
relevance that their participants actually place on these behaviors and traits
(if any at all).
Empirical evidence
To begin to assess this discrepancy, we instructed two groups of participants
to rate either the moral importance of a list of traits and behaviors ("How
important is it to have each of the following traits in order to be a moral
person?"; Table 12.1) or the moral valence of traits and behaviors ("How
morally good or morally bad is it to possess or perform each of the following
traits or behaviors?"; Tables 12.2 and 12.3). The moral importance of each
trait was rated by 905 participants on YourMorals.org (YM), and the moral
valence of each behavior or trait was rated by 125 participants on Amazon's
Mechanical Turk. Both of these qualify as Western, Educated, Industrialized,
Rich, and Democratic (Henrich et al. 2010) samples, so it is likely that the
researcher-participant discrepancies suggested here underestimate the true
discrepancies.
Included in our trait list were all traits that, to our knowledge, have
previously been included in morally relevant trait lists (e.g., Aquino and
Reed 2002; Lapsley and Lasky 2001; Smith et al. 2007; Walker and Pitts 1998),
as well as traits listed by participants in our own pretests, who completed
open-ended questions such as "In order to be moral, what traits are important
for people to possess?" The list of behaviors consisted of (a) actions that psychologists often
Table 12.1  Moral importance of traits

Trait                     Mean   SD     Trait                       Mean   SD
Honest                    4.39   0.90   Wise                        3.00   1.34
Just                      4.20   0.96   Controls thoughts           2.98   1.34
Compassionate             4.04   1.10   Straightforward             2.95   1.22
Treats people equally     3.91   1.20   Courageous                  2.87   1.28
Genuine                   3.86   1.07   Hardworking                 2.83   1.24
Kind                      3.83   1.08   Environmentally friendly    2.76   1.19
Honorable                 3.81   1.16   Purposeful                  2.73   1.19
Tolerant                  3.80   1.12   Perseverant                 2.71   1.18
Responsible               3.74   1.07   Controls emotions           2.68   1.16
Merciful                  3.68   1.16   Modest                      2.65   1.17
Humane toward animals     3.61   1.19   Friendly                    2.61   1.17
Forgiving                 3.59   1.19   Brave                       2.61   1.24
Respectful                3.56   1.19   Determined                  2.56   1.22
Conscientious             3.51   1.09   Non-materialistic           2.48   1.23
Helpful                   3.44   1.00   Resourceful                 2.22   1.20
Nonjudgmental             3.34   1.32   Optimistic                  2.18   1.22
Loyal                     3.34   1.19   Spends money wisely         2.12   1.12
Giving                    3.31   1.08   Spiritual                   1.88   1.20
Rational                  3.29   1.28   Obedient                    1.80   1.02
Self-controlled           3.28   1.18   Is patriotic                1.59   0.97
Generous                  3.24   1.14
Supportive                3.22   1.08
Selfless                  3.19   1.26
Patient                   3.10   1.15
Cooperative               3.01   1.15

Note: 5 = It is extremely important that a person possess this characteristic (in order to be moral);
1 = It is not important that a person possess this characteristic (in order to be moral). Sample sizes
range from 867 to 905 raters for each item.
The Trouble with Nomothetic Assumptions in Moral Psychology 239
Table 12.2  Moral valence of behaviors

Behavior                                                          Mean   SD
Kicking a dog in the head, hard. (Graham et al. 2009)             8.54   0.99
Steal a cellphone.                                                7.96   1.67
A psychologist tells his or her participants that their
  behavior is anonymous when it is not.                           7.72   1.65
Cheat on trivia game (for money). (DeAndrea et al. 2009)          7.27   1.50
Lie about predicting the outcome of a coin toss (for money).
  (Greene and Paxton 2009)                                        7.17   1.51
Gossiping.                                                        6.89   1.33
Failing to pay for a subway ticket which costs somewhere
  between $1.50 and $5.00.                                        6.72   1.24
Not stopping at a stop sign.                                      6.60   1.43
Not recycling.                                                    6.08   1.26
Being impatient with people.                                      6.07   1.33
Illegally watching movies online.                                 6.06   1.27
Looking at pornography.                                           6.05   1.42
Keeping the majority of $20 in an ultimatum game.                 5.97   1.38
Failing to flip a coin to decide whether to place self or other
  participant in a positive consequence condition (and choosing
  to place self in positive condition). (Batson et al. 1999)      5.86   1.37
Showing up late for something.                                    5.85   1.07
Not holding a door open for someone behind you.                   5.84   1.02
Taking a pen that is not yours. (Mullen and Nadler 2008)          5.76   1.09
Illegally walking across a street.                                5.57   0.83
Having the opportunity to keep $50 for oneself or keeping $25
  and giving $25 to charity, and deciding to keep all $50
  for oneself.                                                    5.56   1.26
Not cooperating in a one-shot public goods game.                  5.48   1.37
Lying in order to avoid hurting someone's feelings.               5.21   1.52
Eating pickles.                                                   4.97   0.48
Defecting in a one-shot prisoner's dilemma game.                  4.94   1.26
Cooperating in a one-shot prisoner's dilemma game.                3.77   1.62
Giving the majority of $20 away in a one-shot Dictator Game.      3.61   1.50
Agree to talk to xenophobic inmates on the virtues of
  immigration. (Kayser et al. 2010)                               3.21   1.75
Helping an experimenter pick up and organize dropped papers.
  (van Rompay et al. 2009)                                        3.01   1.32
Having the opportunity to keep $50 for oneself or keeping $25
  and giving $25 to charity, and deciding to give $25 to
  charity and keeping $25 for oneself.                            2.57   1.35

Note: 1 = This is an extremely morally good behavior/trait; 5 = This is neither a morally good nor a
morally bad behavior/trait; 9 = This is an extremely morally bad behavior/trait. Sample sizes range from
94 to 124 raters for each item.
Table 12.3  Moral valence of traits

Trait                     Mean   SD     Trait                 Mean   SD
Honest                    1.62   1.05   Patient               2.72   1.29
Treats people equally     1.82   1.02   Open-minded           2.90   1.45
Compassionate             1.95   1.14   Friendly              2.97   1.36
Humane toward animals     2.03   1.33   Self-controlled       2.99   1.38
Honorable                 2.05   1.24   Non-materialistic     3.02   1.41
Charitable                2.06   1.07   Cooperative           3.05   1.36
Giving                    2.13   1.24   Conscientious         3.07   1.43
Respectful                2.14   1.22   Modest                3.16   1.36
Faithful                  2.15   1.43   Brave                 3.42   1.48
Helpful                   2.22   1.16   Wise                  3.46   1.68
Generous                  2.24   1.23   Perseverant           3.57   1.47
Is fair                   2.25   1.28   Patriotic             3.59   1.50
Hardworking               2.45   1.25   Independent           3.93   1.38
Loyal                     2.48   1.46   Obedient              3.94   1.59
Empathetic                2.49   1.25   Sociable              4.00   1.37
Dependable                2.53   1.28   Intelligent           4.09   1.46
Humble                    2.55   1.36   Lively                4.13   1.34
Polite                    2.55   1.40   Bold                  4.30   1.19
Selfless                  2.59   1.70   Creative              4.32   1.18
Tolerant                  2.61   1.27   Perfectionist         4.67   0.92
Nonjudgmental             2.65   1.51
Genuine                   2.67   1.33
Environmentally friendly  2.69   1.26

Note: 1 = This is an extremely morally good behavior/trait; 5 = This is neither a morally good nor a
morally bad behavior/trait; 9 = This is an extremely morally bad behavior/trait. Sample sizes range from
95 to 125 raters for each item.
The advantages of the two approaches complement each other, and their
disadvantages mostly negate each other. For this reason, we suggest that a
synthesis of the two approaches can lead to the best operationalizations of
morality. We call this the mixed approach. As is the case with the third-
person approach, we suggest that scholars should assess morality according
to predetermined moral principles, but in line with the first-person
approach we suggest that scholars should also examine which principles
each participant (consciously) values. Thus, in effect we are suggesting that
researchers interested in moral behavior should assess moral hypocrisy,
which is often conceptualized as the degree to which people act against
what they consider morally right, and thus involves both first-person
they could also determine what their own particular participants consider
most morally relevant by performing pretests with the particular population
or even sample that they will use in their research. As long as a group of
participants' (probable) moral beliefs are somehow taken into account, this
approach will likely produce more accurate results than third-person research
alone, and at the same time require far less time and effort than a purely
first-person approach.
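As a concrete illustration, a mixed-approach score might be computed as follows. This Python sketch is entirely our own: the function name, the 1-5 importance scale, the relevance threshold, and the weighting formula are all hypothetical assumptions, since the chapter prescribes no particular scoring procedure.

```python
# Hypothetical sketch of a "mixed approach" morality score: a behavior
# counts toward a participant's score only if BOTH the researchers'
# predetermined principles and the participant's own importance ratings
# treat it as morally relevant; enactment is then weighted by the
# participant's own rating.

def mixed_morality_score(enacted, own_importance, researcher_relevant,
                         threshold=3):
    """enacted: dict behavior -> 1 if performed else 0
    own_importance: dict behavior -> participant's 1-5 moral-importance rating
    researcher_relevant: set of behaviors the researchers deem moral
    threshold: minimum rating for the participant to count a behavior as moral
    """
    relevant = [b for b in researcher_relevant
                if own_importance.get(b, 0) >= threshold]
    if not relevant:
        return None  # no shared moral ground to score against
    weighted = sum(enacted.get(b, 0) * own_importance[b] for b in relevant)
    return weighted / sum(own_importance[b] for b in relevant)

# Hypothetical participant: rates honesty as highly morally important (5)
# but recycling as unimportant (1), and behaves honestly.
score = mixed_morality_score(
    enacted={"honesty": 1, "recycling": 0},
    own_importance={"honesty": 5, "recycling": 1},
    researcher_relevant={"honesty", "recycling"},
)
print(score)  # 1.0: only honesty clears the participant's threshold
```

On this toy scheme, the participant's failure to recycle does not lower the score, because recycling falls below the participant's own relevance threshold, while the score would drop if the participant acted against a principle both parties count as moral, mirroring the moral-hypocrisy logic described above.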
Conclusion
By asking large groups of laypeople what traits and behaviors they consider
moral, immoral, or morally irrelevant, we found evidence of problematic
discrepancies between researchers' nomothetic assumptions and participants'
views. For instance, measures that many researchers use as proxies for morality
(e.g., cooperation or defection in prisoner's dilemma games, picking up papers
in the lab) are seen as about as morally irrelevant as eating pickles, while
the actual behavior of moral psychology researchers (deceiving participants) is seen as highly
morally bad. To help address this discrepancy we suggest that researchers
use a mixed approach combining the strengths of the first- and third-person
perspectives. This mixed approach still has disadvantages. For instance, even
if the mixed approach were used, values will sometimes clash, and it is unclear
how a person's morality should be assessed in such cases. However, we believe
that even with these disadvantages the mixed approach can help usher in a
more interesting and significant era of morality research.
Notes
* Authors' Note: Peter Meindl and Jesse Graham, University of Southern California.
Address correspondence to: Peter Meindl, Department of Psychology, University
of Southern California, 3620 S. McClintock Ave., SGM 501, Los Angeles,
CA 90089, Email: meindl@usc.edu. This work was supported by Templeton
Foundation Grant 53-4873-5200.
1 See also Frimer, this volume, on the differences between values people explicitly
express on surveys and those they implicitly express in their everyday lives.
References
Blasi, A. (1990). Kohlberg's theory and moral motivation. New Directions for Child
and Adolescent Development, 1990, 51–7.
Cervone, D. (2004). The architecture of personality. Psychological Review, 111,
183–204.
Cohen, T. R., Montoya, R. M., and Insko, C. A. (2006). Group morality and
intergroup relations: Cross-cultural and experimental evidence. Personality and
Social Psychology Bulletin, 32, 1559–72.
Colby, A., and Damon, W. (1992). Some Do Care: Contemporary Lives of Moral
Commitment. New York: The Free Press.
Colvin, S., and Bagley, W. (1930). Character and behavior. In S. S. Colvin,
W. C. Bagley, and M. E. MacDonald (eds), Human Behavior: A First Book in
Psychology for Teachers (2nd rev. ed.). New York, NY: Macmillan, pp. 292–322.
Crockett, M. J., Clark, L., Hauser, M., and Robbins, T. (2010). Serotonin selectively
influences moral judgment and behavior through effects on harm aversion.
Proceedings of the National Academy of Sciences of the United States of America,
107, 17433–8.
DeAndrea, D. C., Carpenter, C., Shulman, H., and Levine, T. R. (2009). The
relationship between cheating behavior and sensation-seeking. Personality and
Individual Differences, 47, 944–7.
Derryberry, W. P., and Thoma, S. J. (2005). Functional differences: Comparing moral
judgment developmental phases of consolidation and transition. Journal of Moral
Education, 34, 89–106.
Doris, J. M. (2002). Lack of Character: Personality and Moral Behavior. Cambridge:
Cambridge University Press.
Doris, J. M. (ed.) (2010). The Moral Psychology Handbook. Oxford, UK: Oxford
University Press.
Dunning, D., Perie, M., and Story, A. L. (1991). Self-serving prototypes of social
categories. Journal of Personality and Social Psychology, 61, 957–68.
Frimer, J. A., and Walker, L. J. (2008). Towards a new paradigm of moral personhood.
Journal of Moral Education, 37, 333–56.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 1339–66.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., and Ditto, P. H. (2013).
Moral Foundations Theory: The pragmatic validity of moral pluralism. Advances
in Experimental Social Psychology, 47, 55–130.
Graham, J., Haidt, J., and Nosek, B. A. (2009). Liberals and conservatives rely on different
sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029–46.
Greene, J. D., and Paxton, J. M. (2009). Patterns of neural activity associated with
honest and dishonest moral decisions. Proceedings of the National Academy of
Sciences, 106, 12506–11.
Gu, J., Zhong, C., and Page-Gould, E. (2013). Listen to your heart: When false
somatic feedback shapes moral behavior. Journal of Experimental Psychology,
142, 307.
Hart, D., Atkins, R., and Donnelly, T. M. (2006). Community service and moral
development. In M. Killen and J. Smetana (eds), Handbook of Moral Development.
Hillsdale, NJ: Lawrence Erlbaum, pp. 633–56.
Immordino-Yang, M. H., McColl, A., Damasio, H., and Damasio, A. (2009). Neural
correlates of admiration and compassion. Proceedings of the National Academy of
Sciences, 106, 8021–6.
Isen, A. M., and Levin, P. F. (1972). Effect of feeling good on helping: Cookies and
kindness. Journal of Personality and Social Psychology, 21, 384–8.
Jordan, J., Mullen, E., and Murnighan, J. K. (2011). Striving for the moral self: The
effects of recalling past moral actions on future moral behavior. Personality and
Social Psychology Bulletin, 37, 701–13.
Kaiser, F. G., and Wilson, M. (2000). Assessing people's general ecological behavior:
A cross-cultural measure. Journal of Applied Social Psychology, 30, 952–78.
Kayser, D., Greitemeyer, T., Fischer, P., and Frey, D. (2010). Why mood affects help
giving, but not moral courage: Comparing two types of prosocial behavior.
European Journal of Social Psychology, 40, 1136–57.
Kohlberg, L. (1970). Education for justice: A modern statement of the Platonic view.
In N. Sizer and T. Sizer (eds), Moral Education: Five Lectures. Cambridge, MA:
Harvard University Press, pp. 56–83.
Kohlberg, L., and Mayer, R. (1972). Development as the aim of education. Harvard
Educational Review, 42(4).
Kouchaki, M. (2011). Vicarious moral licensing: The influence of others' past
moral actions on moral behavior. Journal of Personality and Social Psychology,
101, 702.
Lapsley, D. K., and Lasky, B. (2001). Prototypic moral character. Identity, 1, 345–63.
Matsuba, M. K., and Walker, L. J. (2004). Extraordinary moral commitment: Young
adults involved in social organizations. Journal of Personality, 72, 413–36.
McAdams, D. P. (2009). The moral personality. In D. Narvaez and D. K. Lapsley (eds),
Personality, Identity, and Character: Explorations in Moral Psychology. New York:
Cambridge University Press, pp. 11–29.
Mischel, W., and Shoda, Y. (1995). A cognitive-affective system theory of personality:
Reconceptualizing situations, dispositions, dynamics, and invariance in
personality structure. Psychological Review, 102, 246–68.
Mischel, W., and Shoda, Y. (1998). Reconciling processing dynamics and personality
dispositions. Annual Review of Psychology, 49, 229–58.
Valdesolo, P., and DeSteno, D. (2008). The duality of virtue: Deconstructing the moral
hypocrite. Journal of Experimental Social Psychology, 44, 133–48.
Van Rompay, T., Vonk, D., and Fransen, M. (2009). The eye of the camera: Effects
of security cameras on prosocial behavior. Environment and Behavior, 41, 60–74.
Walker, L. J., and Pitts, R. C. (1998). Naturalistic conceptions of moral maturity.
Developmental Psychology, 34, 403–19.
Wright, J. C. (2010). On intuitional stability: The clear, the strong, and the paradigmatic.
Cognition, 115, 491–503.
Wright, J. C., Cullum, J., and Schwab, N. (2008). The cognitive and affective
dimensions of moral conviction: Implications for attitudinal and behavioral
measures of interpersonal tolerance. Personality and Social Psychology
Bulletin, 34, 1461–76.
Index
abortion 132, 149, 153, 157, 167, 211–14
agent moral relativism 210, 212–13, 226
agents 5, 7, 32, 73, 115, 174, 180, 183, 190, 211–16, 226–7
Algoe, S. 65
Allport, G. W. 42, 235, 247
ambient sensibilia 78
American Philosophical Association (APA) 92, 94, 98, 106n. 1, 131
Appiah, K. A. 131–2, 141
appraiser moral relativism 210
Aristotle 5
authority independence hypothesis 158–9
Batson, C. D. 39–40, 51, 246–7
behavioral immune system 10, 115–16, 121, 125
    moral judgment 118–21
    social attitudes 116–18
belief in a just world 192–3, 196, 200
Blass, T. 80
bystander apathy 80
Cannon, P. R. 113
charitable donation 97–8
Chomsky, N. 131
Cognitive Affective Personality System 235
cognitive science 130–1, 141
coin-flipping experiments 39–40
comparative disposition 76
competence 22–3
    character judgments 30, 32
    emotional responses 25
    moral cognition 32–3
    moral identity 30
    motivations 28
    status 24
    traits 28–30
competitiveness 24, 27, 58, 60, 65–6
complex scoring systems 43
Co-opt thesis 114, 134
Darley, J. M. 167, 171–4, 176, 183, 189–90, 193–5, 200, 203, 214
Dasgupta, N. 116
death penalty 148, 153
debunking argument 132–3, 135–6, 138
derogation 51, 57–8, 60
disagreement
    folk metaethical intuitions 172–82
    moral objectivism 194–5, 199–200
disgust
    amplifier view 112–13
    consequence view 112–13
    ethnic cleansing/child abuse 120
    foreign outgroups 117
    homosexuals 116–17
    moral cognition 133
    and moral judgment 111
    moralizer 112–13
    moral violations 119–21
    permissible actions 124
    repugnant foods, consumption of 119
    sexual practices 119
dispositions 74
    psychology of 77–80
disrupters 75, 81
domain theory of attitudes
    authority independence hypothesis 158–9
    emotion 155–8
    motivating behavior 154–5
    schematic representation 150
    universalism and objectivism 152–3
Durwin, A. 191
elevation 6, 64–5, 246
    prosociality 61–3
email responsiveness 95
emotion 155–8
objectivism 152–3 see also moral objectivism
objectivist beliefs 190
Oosterhof, N. 177, 179
operationalizing morality
    first-person approach 234–5, 243–4
    mixed approach 244–6
    third-person approach 234–7, 242–3
outgroup bias 79
Parks, C. D. 60
patients 32, 205
peer evaluations 16, 84, 92–3, 160, 174
personal preference 149
person perception
    competence 26
    evaluation, dimensions of 3
    moral character 22–4
    moral cognition 29
Plato 209
Power of Reason view 91, 100, 102–3
Pritchard, D. 83
professional ethicists, moral behavior of
    charitable donation 97–8
    deficient intuitions, compensation for 103–4
    email responsiveness 95
Sackris, D. 167–9, 171–3, 175, 183
Sawyer, P. J. 58
selective debunking argument see debunking argument
self-affirmation 62
self-report measures 16, 37, 44–5, 47–8, 51
    charitable donation 97–8
    disgust 120
    expedience 42
    limitations 41–2
    meat consumption 96–7
    objectivity 42
    selfish 38
Small, D. A. 174
Smith, A. 27–8
social attitudes 112
    behavioral immune system 116–18
social domain theory 149 see also domain theory of attitudes
specialization 27–8, 93
spoken words 43, 47–9
status 4, 9, 14, 24, 44, 56–7, 63–4, 66, 101, 135, 152
stereotype content model 25
Stohr, K. 94
Stone, A. B. 60
strong attitudes 148, 161