
Knowledge of our Own Thoughts is just as Interpretive as Knowledge of the Thoughts of Others
Peter Carruthers, Professor of Philosophy, University of Maryland

Philosophers have traditionally assumed that knowledge of our own thoughts is special.
Descartes famously believed that knowledge of our current thoughts is infallible. He also
believed that those thoughts themselves are self-presenting, so that whenever one entertains
a thought, one is capable of infallible knowledge of it. Many figures in the history of
philosophy have shared these beliefs (including Aristotle, Augustine, and Locke). This is no
longer true today. Most now accept that it is possible to be mistaken about one's current thoughts, and that many such thoughts occur unconsciously, in ways that aren't available to
one. Nevertheless, almost everyone in the field believes that our knowledge of some subset
of each of the main kinds of thought (judgment, decision, desire, and so on) is both
authoritative and available to us through some form of privileged access. It is believed that
our knowledge of these thoughts is much more certain than the knowledge that we have of
the thoughts of other people, and normally cannot be challenged from a third-person
perspective. Moreover, the mode in which we acquire this knowledge is unavailable to others,
even in principle. Philosophical discussions of self-knowledge typically start from a statement
of these assumptions, and proceed to develop theories that purport to explain them.

Carruthers (2011) argues that these assumptions, and the philosophical theories based on
them, are directly challenged by an extensive range of evidence from across cognitive
science. In their place is proposed the Interpretive Sensory-Access (ISA) theory of self-
knowledge. This holds that there is just a single mental faculty (the mindreading faculty) that
is responsible for all our knowledge of propositional attitudes, whether those thoughts are our
own or other people's; it claims that the faculty in question has only sensory access to its
domain, utilizing globally broadcast attended sensory outputs (including inner speech and
visual and other forms of imagery); and it claims that our access to all attitudes (whether our
own or other people's) is equally interpretive in character. One outcome, it is argued, is that
there are hardly any kinds of conscious propositional attitude; another is that there is no such
thing as conscious agency.

Notice that the philosophical assumptions described above are inherently contrastive in
nature. They presume that one's knowledge of one's own thoughts is very different in kind from one's knowledge of the thoughts of other people. (The ISA theory, in contrast, claims
that these are very similar, differing only in that there are some forms of sensory information
available for interpretation in the first person that are not available in the third, such as our
own inner speech and visual imagery.) Those assumptions therefore cannot be defended
from empirical attack in the way that philosophers of perception defend their direct-
perception accounts from the findings of vision science. The latter group say that they are
only intending to make claims about the personal level, and make no commitments regarding
the subpersonal processes that support our direct perceptual access to the world. A similar
move is not available in the domain of self-knowledge, since what the data show is that
knowledge of our own thoughts is not different in kind from knowledge of the thoughts of
other people.
The relevant evidence is of many different forms, ranging from experimental studies in social
psychology demonstrating the ease with which people's reports of their current attitudes can be pushed around by minor contextual modifications, through introspection-sampling studies, studies of people's metacognitive abilities to monitor and control their own learning and reasoning, and studies of systematic failures of self-knowledge and/or other-knowledge in autism and schizophrenia, to brain-imaging data for self versus other tasks. Some of
these forms of evidence count directly against some philosophical theories, some count
against all or almost all. The data are reviewed in detail and their significance discussed in
Carruthers (2011).

Perhaps the most directly relevant set of data consists of numerous psychological studies
demonstrating people's willingness to confabulate about their own current or very recent
thoughts, attributing thoughts to themselves that we have every reason to believe they never
entertained, and making errors in self-attribution that directly parallel the errors that we make
in attributing thoughts to other people. These studies show that, at least in these cases,
people are using the same mindreading faculty that they employ when attributing thoughts to
other people, relying on sensory forms of evidence that stand in need of highly fallible
interpretation.

Recent defenders of the philosophical status quo who know about some of these data admit that they are forced to become "dual method" theorists as a result (Nichols and Stich, 2003; Goldman, 2006). That is, they are forced to admit that sometimes people employ self-
directed mindreading when attributing thoughts to themselves (hence the instances of
confabulation), while on other occasions they have knowledge of their own thoughts that is
authoritative and privileged. The main problem for dual method theories, however, is to
explain the patterning in the data. For this, they need to provide some principled account of
the circumstances in which people access their thoughts directly and the circumstances in
which they rely on self-directed mindreading. No such account has yet been provided that
can accommodate all the data. Indeed, many instances of confabulation concern perfectly
ordinary everyday thoughts occurring in circumstances where people should have been
paying attention to their thoughts. In these cases, one would expect people to have had authoritative access to their thoughts, if such a thing is ever possible.

Let me illustrate these points by discussing one body of data deriving from the dissonance
tradition in social psychology, where hundreds of supporting references could be provided. In
a typical experiment, subjects will be induced to write an essay arguing for a conclusion that
is the contrary of what they believe. In one condition, subjects may be led to think that they
have little choice about doing so (for example, the experimenter might emphasize that they
have previously agreed to participate in the experiment). In the other condition, subjects are
led to think that they have freely chosen to write the essay (perhaps by signing a consent
form on top of the essay-sheet that reads, "I freely agree to participate in this experiment.")

The normal finding in such experiments is that subjects in the free-choice condition (and only
in the free-choice condition) change their reported attitudes on the subject-matter of the
essay. And this happens although there are typically no differences in the quality of the
arguments produced in the two conditions.
If subjects in the free-choice condition have previously been strongly opposed to a rise in
university tuition costs, for example (either measured in an unrelated survey some weeks
before the experiment, or by assumption, since almost all people in the subject pool have
similar attitudes), then following the experiment they might express only weak opposition or
perhaps even positive support for the proposed increase. Such effects are generally robust
and highly significant, even on matters that the subjects rate as important to them, and the
changes in reported attitude are often quite large.

We know that freely undertaken counter-attitudinal advocacy gives rise to negatively valenced
states of arousal, which dissipate as soon as subjects express an attitude that is more
consistent with their advocacy (Elliot and Devine, 1994). Indeed, even pro-attitudinal
advocacy will give rise to changes in expressed attitude in circumstances where subjects are
induced to believe that their honest advocacy will turn out to have bad consequences (Scher
and Cooper, 1989). And in circumstances where subjects are offered a variety of methods for
making themselves feel better about what they have done (an attitude questionnaire, a
question about their degree of responsibility, and a question about the importance of the
topic), they will use whatever method is offered to them first (Simon et al., 1995; Gosling et
al., 2006). For example, if asked first about the importance of the question of tuition raises,
they will say that it is of little importance (even though in questionnaires administered a few
weeks previously they rated it as of high importance), thereafter going on to express an
unchanged degree of opposition to the change and rating themselves as highly responsible
for what they did.

The best explanation of these patterns of results is that subjects' mindreading systems
automatically appraise them as having freely chosen to do something bad, resulting in
negative affect. Then when confronted with the attitude questionnaire they rehearse various
possible responses, responding affectively to each in the manner of Damasio (1994). They
select the one that "feels right" in the circumstances, which is one that provides an appraisal
of their actions as being significantly less bad. And as a result of making that selection, their
bad feelings go away. For example, saying (and hearing themselves say) that they do not
oppose a raise in tuition (contrary to what they believe) enables their earlier actions to be
appraised as not bad, and as a result they cease to feel bad. In contrast, it seems quite
unlikely that subjects should really be changing their minds prior to selecting an answer on
the questionnaire, with their novel belief then being available to be authoritatively reported.
For we know for sure that they do not change their beliefs unless offered the chance to
express them, and there is no plausible mechanism via which a question about one's beliefs
should lead to the formation of a new belief in these circumstances (which can then be
veridically reported).

Such phenomena are fully consistent with the ISA theory of self-knowledge. Indeed, they are
predictable from it when combined with independently warranted psychological theories
(such as the use of mental rehearsal and prospective affect in action selection; Damasio,
1994; Gilbert and Wilson, 2007). But they are deeply problematic for most standard
philosophical accounts. For one would think that a direct question about one's beliefs (e.g.
about the badness of a tuition raise, or about the importance of the issue) would have the
effect of activating the relevant belief from memory. And there seems no reason why a
judgment of this sort should remain unconscious or be otherwise inaccessible to the subject.
But if subjects had authoritative access to this activated belief, then it would be mysterious
how they could at the same time express an inconsistent belief and make themselves feel
better by doing so. For if they say one thing while being aware that they think something else,
then they should be aware of themselves as lying. And that ought to make them feel worse,
not better.

These counter-attitudinal essay-writing data can be combined with many other studies of
confabulation to support the ISA account: subjects' mindreading systems monitor and
interpret their own behavior, both overt (such as an episode of essay writing) and covert (such
as sentences rehearsed in inner speech), much as the overt behavior of others is monitored
and interpreted. And this is the only mode of access that people have to their own thoughts.
For if they also had privileged and authoritative access to some of their own thoughts, then
the data would not display the patterning that they do.

Why, then, do people across time and place have such a powerful intuition of infallible (or at
least authoritative) access to their own thoughts? And why are they inclined to believe that
their thoughts are self-presenting (or at least accessible in a privileged way)? One answer
would be that people have these intuitions because they are true. Compare the universality of
believing that water is wet: people believe this because water is wet, and because everyone
has access to plenty of data to indicate that it is. Likewise, then, there might be voluminous
and easily available evidence that supports the existence of direct access to our own
attitudes. But the only such evidence (to the extent that it exists at all) is the general reliability
of people's reports of their own attitudes, which often turn out to be consistent with our observations of their behavior. Yet this can't begin to support the claims that error and ignorance with respect to one's own mental states are impossible. Nor does it close off the
possibility of skepticism about self-knowledge. (Compare the fact that visual perception, too,
is generally reliable; yet skepticism in this domain has been common, whereas no
philosophers have ever been skeptics about knowledge of their own thoughts.) And neither,
even, does it support the idea that our access to our own mental states is somehow
privileged and especially authoritative. All it supports is general reliability.

A better explanation of the universality of our intuitions about self-knowledge is that they
derive from a pair of inference rules that are built into the structure of the mindreading faculty
itself (whether innately or by learning):

1. One thinks that one is in mental state M → One is in mental state M.

2. One thinks that one is not in mental state M → One is not in mental state M.
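
Rendered in a simple formal notation (a sketch only; the belief operator B, read "one thinks that", is introduced here for illustration and is not notation from Carruthers, 2011), the two rules are:

\[
(1)\quad B(M) \Rightarrow M
\qquad\qquad
(2)\quad B(\lnot M) \Rightarrow \lnot M
\]

where M abbreviates "one is in mental state M". Note that both rules license a direct step from a self-attribution to the attributed state itself, with no intermediate evidential premise; this is what leaves no room for error or ignorance in the reasoning they support.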

Carruthers (2011) argues on reverse-engineering grounds that just these rules are likely to be
built into the mindreading system, providing heuristic short-cuts in the process of behavior
interpretation (especially behavior in which subjects ascribe mental attitudes to themselves).
As a result, if a question is raised about the provenance of a belief about one's own attitudes,
or about the possibilities of mistake or ignorance, then one will initially be baffled. For an
application of the inference rules (1) and (2) with oneself as subject leaves no room for such
possibilities. It will require systematic reflection on the significance of such phenomena as
self-deception, or the findings of cognitive science, for one to realize that mistakes about
one's own attitudes are possible, and that some of one's attitudes might be inaccessible to
one.
In addition, these rules function to short-circuit processes that might otherwise lead one to be aware of ambiguities in one's own inner speech. This means that we are never
confronted by the manifestly interpretive character of our access to the thoughts that underlie
our own speech. Or so I will now suggest.

When interpreting the speech of another person, the mindreading system is likely to work in
conjunction with the language faculty to arrive at a swift "first pass" representation of the
attitude expressed, relying on syntax, prosody, and salient features of the conversational
context. But it is part of the mindreading system's working model of the mind and its
relationship to behavior that people can be overtly deceptive, and that their actions can in
various ways disguise their real motives and intentions. One would expect, then, that
whenever the degree of support for the initial interpretation is lower than normal, or there is a
competing interpretation in play that has at least some degree of support, or the potential
costs of misunderstanding are much higher than normal, a signal would be sent to executive
systems to slow down and issue inquiries more widely before a conclusion is reached. In
these cases we become aware of ourselves as interpreting the speech of others.

When interpreting ones own speech, however, the mindreading system is likely to operate
rather differently. For possession of inference rules (1) and (2) means that it implicitly models
itself as having direct access to the mind within which it is lodged. Moreover, even among
people who know about cognitive science, and/or who believe that self-deception sometimes
occurs, such ideas will rarely be active and salient in most normal contexts. Hence it is likely
that once an initial "first pass" interpretation of one's own speech has been reached, no further inquiries are undertaken, and no signals are sent to executive systems triggering a "stop and reflect" mode of processing. As a result, the attitude that initially seems to be
expressed is the attitude that one attributes to oneself, not only by default but almost
invariably. So although the process of extracting attitudes from speech is just as interpretive in
one's own case as it is in connection with other people, it is rarely if ever consciously
interpretive.

Carruthers (2011) argues that when the full range of evidence is considered, the Interpretive
Sensory-Access (ISA) theory emerges as significantly better than any of its more traditional
philosophical rivals. If so, then philosophers and others need to begin exploring what this
should mean, and how related topics might be impacted. As noted earlier, one outcome is
said to be that there are hardly any types of conscious attitude, and another is that there is no
such thing as conscious agency. What this means for our conception of ourselves as
subjects, and for our beliefs about our moral responsibility, are matters requiring urgent
attention.

References
Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford University Press.
Damasio, A. (1994). Descartes' Error. Papermac.
Elliot, A. and Devine, P. (1994). On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort. Journal of
Personality and Social Psychology, 67, 382-394.
Gilbert, D. and Wilson, T. (2007). Prospection: Experiencing the future. Science, 317, 1351-1354.
Goldman, A. (2006). Simulating Minds. Oxford University Press.
Gosling, P., Denizeau, M., and Oberlé, D. (2006). Denial of responsibility: A new mode of dissonance reduction. Journal of Personality and Social Psychology, 90, 722-733.
Nichols, S. and Stich, S. (2003). Mindreading. Oxford University Press.
Scher, S. and Cooper, J. (1989). Motivational basis of dissonance: The singular role of behavioral consequences. Journal of Personality
and Social Psychology, 56, 899-906.
Simon, L., Greenberg, J., and Brehm, J. (1995). Trivialization: The forgotten mode of dissonance reduction. Journal of Personality and
Social Psychology, 68, 247-260.
