
Morality in a Nutshell

Introduction
My purpose here is to sketch an outline of what I take to be the nature of morality, or at least moral
wrongness. Positive moral obligations might not be captured here, but that could be a different project.
I believe that the following outline, short and simple though it may be, renders in high fidelity an
explanation of what counts as wrong and why it is that way. Each aspect of what appears below will
need elaboration if I even decide to detail this, but I feel like I've thought about this enough and have
enough of the pieces in place to make a satisfactory stab at a fairly convincing theory, or at least one
that will serve as fuel for further discussion.

Neurology and Moral Intuition/Motivation


This part especially will require some more research on my part since I know almost nothing about it,
but it's still pretty cool and provides a plausible empirical starting point for the rest. Based on the Radiolab episode "Morality" that I listened to, featuring professor whoever-the-fuck who does philosophy and neuroscience, there are two, let's say, dominant sources of moral intuition, both rooted in evolutionary biology. The episode describes putting people in an fMRI scanner and presenting them with the Trolley Problem. In the standard case (the lever case), a certain part of the brain associated with
number crunching lights up and starts firing, and survey results show that 90% of the people confronted
with the standard case say that you should pull the lever and save five lives at the cost of one. But for
the footbridge case, where 90% say you shouldn't push the guy into the trolley's path even if it results in saving five lives at the cost of one, a different section of the brain lights up. Now the hypothesis is that there is an evolutionary explanation for this. Humans are cooperators; that's how we survive and how
we conquered the world, and it makes sense that neurological wiring would have evolved that disinclines us to hurt one another, like, say, by pushing someone off a bridge. But we've also evolved to
have concern for the welfare of the group, so we sometimes crunch the numbers when deciding how to
distribute harms. A plausible explanation for differences in reaction to these cases is the personalness
of pushing the guy off the footbridge. It's our primitive primate brains screaming "No!" at us when it
comes to harming other individuals in such a direct way. The lever case is less apt to trigger our
disinclination to harm another individual, probably because it involves redirecting an existing threat
away from several individuals, rather than introducing a new threat to someone, instrumental though it
may be.

The important thing to recognize is that these are separate adaptations and they can conflict.
The Radiolab episode discusses how, when confronted with a last-episode-of-MASH-style scenario where you have to smother your baby in order to save everyone's lives, both parts of the brain light up, and subjects will say things like, "Well, probably I should smother my baby, but I don't think I'd be able to actually carry it out." The brain stuff also explains why people with certain kinds of brain damage (so I've
been told) deviate from the majority when confronted with the footbridge case, answering that they
would as readily push the guy into the trolley's path as they would pull a lever in order to save five lives. I.e., their dominant concern is the number of people that would be saved; their concern for the
individual seems to get drowned out by the mathematically positive results.
I believe this is important because it suggests that this neurological wiring largely underlies and
determines our moral motivations and intuitions. I would speculate that while these two neurological
centers of moral thought might not be the only ones operating in our heads, they are probably the most
important. Speculating further, I would suggest that the differences and conflicts among the intuitions
generated by these neurological centers are writ large in the debate between deontology and
consequentialism: "May justice be done though the heavens fall" vs. "You should kill your child if it would save the lives of many others."
There is one more important thing to notice here, specifically that the intuitions generated by both of these centers of moral thought share something in common: they are both, in
different ways, concerned with and triggered by considerations or situational features related to what
benefits and harms human beings. This is also reflected/writ large in the deontology vs.
consequentialism debate; both, in different ways, concern human welfare. It is also reflected by the fact
that when asked why something is the right or wrong thing to do, people will typically cite
considerations involving benefits and harms to human beings. That is the stuff of moral thought.[1]

[1] We also care about animals and other potential sentient creatures, but I would suggest this is the case only insofar as those creatures are relevantly similar to us. We care more about apes than mosquitos, for example, because they are subject to more of the same harms that we are. They have more to lose, so to speak.

Moral Language
There is a simple way to move from the above hypothesis about the nature of our moral intuitions and
motivations to a thesis about the meanings of moral terms, and ultimately to an explanation of what is right
and wrong, morally speaking. The move depends on Wittgenstein's view in Philosophical Investigations that meaning is use (PI §43). Duncan Richter phrases it this way: "The meaning of any word is a matter of what we do with our language."[2] If we
use moral terms like "good," "bad," "right," and "wrong" to express and represent our
intuitions/conclusions/assessments (neurologically derived) associated with what benefits and harms
human beings, then since meaning is use, the meanings of these terms are themselves bound up and
associated with what benefits and harms human beings. Again, this is borne out by observations re: how
people attempt to explain why something is (e.g.) wrong: they typically reach for explanations involving
harms and threats to human beings.

[2] This is found in Richter's entry on Wittgenstein in the Internet Encyclopedia of Philosophy: http://www.iep.utm.edu/wittgens/#H5. [RMT: Add something from chick's book]
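
Laid out as a bare argument, the move is just this (my own compression of the above, offered for clarity):

P1. Moral terms ("good," "bad," "right," "wrong") are used to express assessments triggered by what benefits and harms human beings.
P2. The meaning of a word is a matter of how it is used (PI §43).
C. So the meanings of moral terms are bound up with what benefits and harms human beings.

Note that C is a claim about meaning, not a prescription; that will matter when the is-ought problem comes up below.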
Think also about theories of value like Scanlon's and Zimmerman's, i.e., buck-passing accounts.
Consider a state of affairs involving the useless suffering of innocents. If we think such a state is
intrinsically morally bad or has moral disvalue, this is because the fact that it has the useless-suffering-of-innocents property is a (moral) reason to disfavor it (Scanlon), or because it is morally fitting/required to disfavor it
(Zimmerman). Such states invite assessments about the nature and degree of their moral value, and
this is because they trigger what I suppose might be called our "moral sense." The meanings of words
used to describe facts or states involving human welfare are explicable (per Wittgenstein) in virtue of
how those words are used. I find this entirely plausible. Again, that moral language is used in contexts
related to benefits/harms/welfare of people illuminates its meaning; the meaning of "wrong," for example, is going to have something to do with harms or threats to humans.
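
To make the buck-passing structure explicit, here is a rough schema of my own (a compression for illustration, not Scanlon's or Zimmerman's exact formulations):

Bad(S) ≡ (∃P)(P(S) & the fact that S has P is a reason to disfavor S)   [Scanlon]
Bad(S) ≡ (∃P)(P(S) & disfavoring S in virtue of P is fitting/required)   [Zimmerman]

On either rendering, the badness itself does no justificatory work; the buck is passed to the underlying property, e.g., the useless-suffering-of-innocents property.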

Moral Rules
So, reiterating, because moral language is used to express or represent assessments in contexts
involving human welfare, and meaning is use, the meaning of moral language itself relates to human
welfare. Now, consider the following two statements:
(1) It is morally wrong to deliberately subject people to harm or inferior treatment without
justification.
(2) Justification, morally speaking, involves resisting harm to/inferior treatment of human
beings.
Note that "deliberately" in (1) should be construed as a synecdochic stand-in for whatever motivational/psychological conditions might apply to or be necessary for wrongful action. It seems
wrong to say that someone with a computer chip in their brain, being remotely controlled by another actor, is doing something wrong if they are manipulated into causing harm. Certainly we would
not blame them for doing so, and I am suspicious of completely severing wrongness from
blameworthiness. The act must originate from the offender in the right sort of way in order to be wrong,
and I use "deliberately" to capture that notion. I leave the details for others to hash out.
In (2), "resist" is likewise meant as a synecdochic stand-in for things like preventing, denouncing, deterring, discouraging, or avoiding harm. Maybe promoting welfare in a positive sense can justify causing some measure of harm as well, but I'll leave that aside. My purpose is to establish that some
moral rules like (1) are true.
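
For clarity, (1) and (2) can be compressed into a rough schema (my own notation, with "Deliberate" glossing over the synecdoche noted above, and "Harms" covering both harm and inferior treatment):

Wrong(a) ≡ Deliberate(a) & Harms(a) & ¬Justified(a)
Justified(a) ≡ a resists harm to, or inferior treatment of, human beings

On this rendering, a moral dispute can target either the harm conjunct or the justification conjunct; as I argue below, most cross-cultural disagreement targets the latter.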
Now, it strikes me that competent speakers of the English language with typical experience in
applying the term "morally wrong" will find it very difficult to find a counterexample to (1). I invite
people to try. There are, however, those who may not buy the simple statement in (1) because they have adopted some sort of idiosyncratic metaethical position; e.g., moral skeptics or nihilists
might deny (1) on the view that nothing is right or wrong and that moral concepts are mere fantasies.
Moral subjectivists or cultural relativists may be unsettled by the categorical nature of (1), insisting that
moral claims only make sense when tied to the moral practice or beliefs of an individual or culture. But
setting these people aside briefly, let me elaborate on the prima facie truth of (1). Consider one form a
counterexample might take:
(a) Deliberate, unjustified actions that cause significant harm to thousands of people are morally obligatory rather than morally wrong.
Observe how characteristically odd (a) sounds. There is, conceptually speaking, something off about
asserting that a deliberate and unjustified act causing harm to thousands is morally obligatory. Going a
bit further, I would suggest that there is something characteristically odd about this sentence as well:
(b) Deliberate, unjustified actions that cause significant harm to thousands of people are morally permissible.
Within the moral domain, if there is such a thing (more on this below), deliberately harming thousands
without justification will not come out as obligatory or permissible given the meanings of moral terms
and the nature of their association with what benefits and harms human beings.
This actually provides a neat way to answer cultural relativists and moral subjectivists. The
relativist and subjectivist positions arise from the observation that cultures or individuals disagree over
what particular acts count as right/wrong/permissible, and that these disagreements are not resolvable

on an objective basis. But I believe the apparent disagreements evaporate under a little interrogation.
All cultures and all individuals concerned with moral wrongness will typically assent to (1). They share
that in common. What they actually disagree about are particular justifications of the kind roughly
expressed by (2). For example, when discussing homosexuality and gay marriage with my class full of
religiously conservative Muslims in Qatar (in a discussion of relativism), they explained that the reason homosexuality was wrong (and why it's divinely proscribed) was the associated risk of contracting and spreading AIDS. Even though the particular moral conventions and beliefs in our cultures differ substantially, their answer ("Because AIDS!") was offered as justification for the anti-gay
position (which suggests they recognize that homosexuals are subject to inferior treatment), and is
indeed one that nominally relates to human welfare. As my students saw it, the injunctions against
homosexuality are justified by considerations of resisting harm to human beings represented by the
threat of AIDS. Whether that (or anything else) actually serves as a cogent justification is open to being settled by empirical or other rational means.[3] The important thing to acknowledge through examples
like this is that the nature of the disagreement is not over whether (1) is true, but rather lies in the
different justifications cultures and individuals accept, and these disagreements are in turn
characterized by differences in opinion over what frustrates or promotes human welfare. They are
disputes about justification, not wrongness. Almost all cultures and individuals will agree that (1) is true,
which makes perfect sense given our evolutionary history, neurological wiring, and standard use of
moral language across the world.

[3] Observe that AIDS isn't localized to gay people, AIDS might be cured, etc.

Fuck the is-ought problem


A moral skeptic or nihilist or error theorist might complain that I am running up against some variety of
the is-ought problem here. I won't try to detail Hume's Guillotine, the naturalistic fallacy, or the open-question argument, but the idea would be that the substance of my argument involves empirical observations (or speculative assumptions) about neurology, how people use moral language, and what people agree on, etc., and that none of that can support a conclusion about what people ought or ought
not do. In other words, I am cheating or erring by suggesting that a statement like (1) can be established
by the maneuvers I've employed so far. The explanation for this, the story goes, is that you can't get an "ought" from an "is"; just because it is true that our moral intuitions arise from neurology, and that these in
turn give sense to our moral concepts and language, and that people broadly speaking agree about these concepts, we cannot move to conclusions about how people ought to act, and so cannot move to
conclusions about what is morally right or wrong.
I had the opportunity to argue a prototype version of my thesis with an anonymous science-minded guy on the Internet one evening. He produced a version of the above objection that is perfectly
satisfactory for the sake of illustration. Favoring subjectivism, he wrote the following:
-Science Guy: I'm not saying it's not wrong to hurt other people, just that it's still
subjective, even if most humans agree with that statement.
Here are two statements I believe are true:
A. Human biological and social evolution make most people feel that hurting others is
wrong, and/or that hurting others is not socially acceptable, or, in general, any of the
middle three stages of Kohlberg's moral development (see
http://www.haverford.edu/psychology/ddavis/p109g/kohlberg.stages.html if you're not
familiar with them)
B. Stage 5 of Kohlberg is very rare because evolution would tend not to make perfect
altruists, who are too vulnerable to the small percentage of psychopaths in the
population (of course, psychopathy itself is a good evolutionary strategy but only as long
as most of the population are not, so they can be taken advantage of but that's another
topic so I won't go into it any further here)
So these are descriptive, and they agree with the principle that aspects of morality are
shared and not arbitrarily individual.
None of the above, however, can lead one to conclude C:
C. Hurting others is wrong.
A, B, or any other statement that you can derive from science, can never be used to
back up C! This is because A, B, etc. just lead to the conclusion that hurting others is
often avoided by combinations of natural and socially acceptable behavior, but that is
descriptive. C is a prescriptive statement, and thus completely disconnected in terms of
a logical justification from the others.
My reply to objections like this basically boils down to one simple point: I'm not trying to derive an "ought" from an "is," and my argument, properly construed, doesn't involve such a move. The conversation
with Science Guy proceeded this way:
-Me: You're making a mistake. I do not intend C to be a prescriptive statement; C
can only involve a prescription re: what one ought to do if [moral] wrongness is
an intrinsically normative concept. I'm taking it as a matter of the meaning of the
word "wrong" that it is morally wrong to hurt others(...), but I mean that as an

entirely descriptive statement. In and of itself it isn't normatively binding in any


serious way, i.e., someone who doesn't care about morality may not be making
any kind of mistake of rationality or understanding if they do not behave in a
moral way.
Think of it this way: in chess you can't take your opponent's king on the first
move. But if I'm not trying to play chess, I can easily knock a king off the board
and put a pawn there. But, Chessally speaking, doing that is against the rules.
Morally speaking, hurting others without justification is similarly against the rules.
-Science Guy: >I mean that as an entirely descriptive statement.
No, you don't; you're just utilizing a sleight of hand:
A. Doing X is wrong.
B. You shouldn't do X.
A and B are idempotent, by definition of the term "wrong". Arguing against this is
redefining the semantics of the English language and akin to Clinton telling us
what "is" really means.
-Me: >A. Doing X is wrong.
>B. You shouldn't do X.
>A and B are idempotent, by definition of the term "wrong".
That's the assertion I'm denying.
Let me explain this way. I am suggesting that there is a difference between
1. Doing X is morally wrong.
and
2. You should/ought not do X.
The reason this is counterintuitive is that we internalize morality to such a degree
that moral statements like A just appear idempotent (nice one) with statements
about what one ought to do. But just like the rules of chess don't by themselves
determine what I should do with my chess pieces (I can throw them all over the
table if I like), moral rules like A [or (1) above] don't by themselves determine
how you ought to behave. It might determine that if you have a conscience and
care about being moral, you shouldn't do X, but that is not the same as saying
that A is a categorically binding statement about what one should do. It's a
statement about what one should do morally speaking, or within the context or
domain of morality.
Allow me to elaborate on what is going on here. As suggested above, my response to is-ought objections
like Science Guy's is that it is a mistake to construe "X is wrong" and "One ought not do X" as

idempotent. The proposition expressed by (1) is not meant to be a categorical claim about what people
should (not) do; rather I am trying to tease out what would be morally wrong to do, or to use the
language in the above exchange, what would violate moral rules. There are many ways to establish the
conclusion that a sentence like (1) is not definitionally prescriptive (or, I guess, proscriptive). While
surely sentences like (1) have been used by people to try to express categorically binding prescriptions on everyone, i.e., to tell people what they ought to do (Science Guy seems to think that's the only way they're used), I would argue that such uses are actually rather uncommon, and that they are deeply problematic for the very reasons that Science Guy and other skeptics suggest: namely, that you cannot get an "ought" from an "is" by reference to facts and speculative assumptions like those I've advanced. In other words, I agree that it is a mistake to construe (1) as definitionally prescriptive, partly
because doing so invites is-ought worries. If there is a population of English speakers who construe (1) in that way, then they will owe an explanation of how moral wrongness and rightness are intrinsically
normative concepts.
The existence of such a population does not, however, vitiate my thesis. This is because there
still is an alternative, perfectly mundane, common use of sentences like (1) which does not presume the
intrinsic normativity of morality and so does not run afoul of the is-ought problem. Given the premise
that moral language is used to express and represent our assessments of actions and value bearers that
have dimensions involving human welfare, it will still be true that the meanings of moral terms are
bound up and associated with human welfare. But the alternative variety of the use and meaning of
moral language dodges the is-ought problem by eschewing the assumption that considerations of
human welfare are normatively binding on their own.
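
The chess analogy from the exchange above can be made concrete with a toy model. The following Python sketch is purely illustrative (the predicates, the dictionary keys, and the example act are all invented for the illustration): one function plays the rule-checker, classifying acts as wrong or not, and a separate function plays an agent's policy, which treats that verdict as action-guiding only if the agent happens to care about morality.

# Toy model of the distinction between "X is morally wrong" (a descriptive
# classification relative to the rules of the moral domain) and "you ought
# not do X" (a prescription that binds only agents who care about morality).
# The predicates below are illustrative placeholders, not a real theory.

def violates_rule_1(act):
    """Descriptive rule-checker for statement (1): deliberate, harmful,
    unjustified acts count as wrong. Says nothing about what anyone will do."""
    return act["deliberate"] and act["harmful"] and not act["justified"]

def agents_choice(act, cares_about_morality):
    """An agent's policy. The rule-checker's verdict becomes action-guiding
    only for an agent who cares about being moral."""
    if cares_about_morality and violates_rule_1(act):
        return "refrain"
    return "proceed"

footbridge_push = {"deliberate": True, "harmful": True, "justified": False}

print(violates_rule_1(footbridge_push))                           # True: wrong, morally speaking
print(agents_choice(footbridge_push, cares_about_morality=True))  # refrain
print(agents_choice(footbridge_push, cares_about_morality=False)) # proceed: no "ought" falls out

Just as the rules of chess classify moves as legal or illegal without obliging anyone to play chess, violates_rule_1 classifies acts without prescribing anything to agents who opt out of the game.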
This alternative is illuminated by the fact that it is perfectly sensible to ask the question "Why be moral?" The question assumes that rules like (1) are knowable and understandable, but that some
additional ingredient is necessary to get them to operate as prescriptions or "should" statements. A
religious person who advances the threat of God's wrath as the reason to be moral is failing to construe sentences like (1) as definitionally prescriptive; if (1) were definitionally prescriptive, there would be no need to answer the question "Why be moral?" because morality would have an in-built normativity that
applied to everyone without the need for further inquiry. God could close up the door to Hell if he liked,
and we would still all have reasons to behave morally. The fact that "Why be moral?" invites responses like this, and that our religious believer seems to be making no error in offering one, suggests that people like Science Guy are asking for too much: I don't need an "ought" to get my argument to work; I just need a theory of wrongness. Why we should care about wrongness or regulate our behavior so as to
avoid doing wrong things is an entirely separate question.
Of course, religious responses to "Why be moral?" are not the only ones. There is a long tradition in moral philosophy, sometimes called moral rationalism, which has been deployed in order
to answer the question. Moral rationalists argue that what one should morally do is a function of a set of
constraints on how we all ought to behave, constraints which are in some way identifiable by the
exercise of reason.[4] Behaving in ways that violate these constraints is supposed to be irrational, and
since these constraints can be discovered through some rational process, in theory they should be
justifiable to and for (i.e., binding on) everyone. In effect, at least some versions of rationalism can be
viewed as attempts to secure a basis for objectivity in moral judgment. The modern tradition of moral
rationalism originated with Kant and his Groundwork of the Metaphysics of Morals, wherein he famously
argued for the Categorical Imperative, which represents moral requirements as being principles of
action which must be accepted by any rational, autonomous will.[5] In Kant's view, being immoral
amounts to violating the Categorical Imperative, and as such doing so is irrational. Other varieties of
moral rationalism come courtesy of Richard Hare (1963, 1981), Thomas Nagel (1970, 1986), Alan
Gewirth (1978, 1988), David Gauthier (1986), Michael Smith (1994), David Schmidtz (1995), Christine
Korsgaard (1986, 1996), and Thomas Scanlon (1998). For all of these writers the connecting thread is
that we ought to be moral because it is rational (on some theory of rationality) to do so.
I bring up rationalism not because I favor it (though I have sometimes in the past), but only to illustrate that the "oughtness" of a sentence like (1) is not self-evident, and can be, and has been, denied.

[4] Alan Donagan puts it this way: "Any rationalist theory presents morality as a code of precepts, [the violation of which] can be shown by a rational process to be contrary to practical reason" (1982, 3).

[5] The Categorical Imperative (on one of its formulations; the other main formulations are in terms of respect for persons) states that you should "act only in accordance with that maxim through which you can at the same time will that it become a universal law" (Groundwork 4:421). The principle is supreme in the sense that it is supposed to take precedence over all other possible moral or practical principles of conduct.
