Emotions
Ana P. Pinheiro
Faculty of Psychology,
University of Lisbon
Expression and Recognition of Emotions
https://vimeo.com/97449717
What do we know?
– prosody (Bryant & Clark Barrett, 2008; Scherer, Banse, & Wallbott, 2001; van Bezooijen, Otto, & Heenan, 1983);
• Happiness appears to be recognised with near-perfect accuracy when conveyed by the face (96% accuracy in Western cultures – Russell, 1994), but it is difficult to recognise this emotion unambiguously when conveyed by the voice (Johnstone & Scherer, 2000).
Contempt 46.9
Disgust 62.1
https://vimeo.com/97449717
Cognition and Emotion, 2017. https://doi.org/10.1080/02699931.2017.1320978

Can perceivers recognise emotions from spontaneous expressions?
Disa A. Sauter and Agneta H. Fischer
Department of Social Psychology, University of Amsterdam, Amsterdam, Netherlands
Article history: received 31 August 2016; revised 6 April 2017; accepted 6 April 2017.
Keywords: nonverbal communication; genuine; vocal expressions; vocalisations; facial expressions.

ABSTRACT. Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that, depending on the comparison set, recognition levels can even be equivalent to that of posed stimulus sets.

The vast majority of research into nonverbal communication of emotions uses posed stimuli, because of the high degree of experimental control that they afford researchers. However, critics have argued that the use of posed expressions inflates recognition accuracy relative to spontaneous expressions (e.g. Nelson & Russell, 2013), and concerns have been raised over whether observers can in fact reliably recognise emotions from spontaneous expressions at all (Russell, 1994). Posed stimuli have also been criticised for being artificial and consequently not representative of expressions that occur in real life (see Scherer, Clark-Polner, & Mortillaro, 2011 for a discussion). But although some studies have examined the recognition of individual emotions from spontaneous expressions (e.g. Fernandez-Dols, Carrera, & Crivelli, 2011; Tracy & Matsumoto, 2008; Wagner, 1990), surprisingly few studies have directly compared recognition of emotions from spontaneous and posed stimuli within a single paradigm. But given the wealth of research into nonverbal emotional expressions that uses posed expressions, it is important to establish whether it is scientifically sound to generalise from findings using posed expressions to real-life situations involving spontaneous emotional expressions. The current study aimed to contribute to addressing the question of how spontaneous emotional expressions are perceived compared to the typical stimuli used in the field of emotion research, that is, posed expressions.

Studies comparing recognition of posed and spontaneous expressions
As noted, only a handful of studies have directly compared the perception of spontaneous and posed facial expressions, and they have generally lent support to the proposal that recognition is more accurate for posed than for spontaneous expressions (Russell, 1994). In an early study, Zuckerman and colleagues examined whether viewers could judge valence and intensity from spontaneous facial expressions of positive …

Performance on the emotion recognition task
… [0.03, 0.11]); (AudioVisual: posed: t(39) = 9.53, p < .001, Cohen's d: 1.51, 95% CI [0.19, 0.30]; spontaneous: t(39) = 9.57, p < .001, Cohen's d: 1.51, 95% CI [0.20, 0.31]). Thus, naïve observers reliably recognised emotional states from both posed and spontaneous emotional expressions. The results are displayed in Figure 2 (see Table 3 for a breakdown of results per …). … (Visual mean difference 0.026, p > .2; AudioVisual mean difference 0.09, p > .6). These results fail to support the …

Figure 2. Emotion recognition in Experiment 2 (arcsine Hu scores) for posed (dark boxes) and spontaneous (light boxes) emotional expressions. Lines through the boxes are the medians, box edges are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points excluding outliers. The dashed line represents chance (calculated as 1/4 correct, as there were four options of each valence).

Table 3. Mean recognition rates (raw Hu scores) in Experiment 2 in each modality, separately for spontaneous (top) and posed (bottom) expressions. Means as arcsine transformed Hu scores (used in the statistical analyses) can be found in the Supplementary Materials.

Emotion       Audio (n = 42)   Visual (n = 40)   AudioVisual (n = 42)
Spontaneous
  Triumph     .16 (.17)        .12 (.13)         .24 (.20)
  Amusement   .54 (.16)        .35 (.16)         .47 (.20)
  Anger       .47 (.27)        .21 (.15)         .40 (.23)
  Disgust     .34 (.25)        .45 (.23)         .58 (.27)
  Fear        .25 (.13)        .25 (.21)         .42 (.20)
  Relief      .26 (.20)        .14 (.13)         .37 (.22)
  Sadness     .59 (.28)        .63 (.18)         .89 (.19)
  Pleasure    .58 (.20)        .63 (.32)         .72 (.25)
  Surprise    .09 (.11)        .26 (.15)         .21 (.15)
  Total       .36 (.20)        .34 (.18)         .48 (.21)
Posed
  Triumph     .21 (.21)        .10 (.16)         .18 (.18)
  Amusement   .55 (.12)        .36 (.13)         .52 (.20)
  Anger       .74 (.19)        .76 (.25)         .84 (.22)
  Disgust     .28 (.21)        .22 (.16)         .39 (.14)
  Fear        .51 (.20)        .67 (.19)         .71 (.25)
  Relief      .45 (.22)        .30 (.20)         .44 (.21)
  Sadness     .07 (.11)        .37 (.17)         .22 (.21)
  Pleasure    .39 (.19)        .12 (.13)         .46 (.31)
  Surprise    .35 (.18)        .35 (.17)         .55 (.23)
  Total       .40 (.18)        .36 (.18)         .48 (.22)
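As a cross-check on the statistics quoted in this excerpt: for a one-sample t-test, Cohen's d can be recovered as t/√n, and each Total row of Table 3 is the mean of the nine emotion rows (to within rounding of the published two-decimal values). A minimal sketch in Python; the d = t/√n conversion is the standard one-sample formula, and whether the authors computed d exactly this way is an assumption:

```python
import math

# Cohen's d for a one-sample t-test: d = t / sqrt(n).
# AudioVisual condition, df = 39 so n = 40; t values from the excerpt.
for t in (9.53, 9.57):
    d = t / math.sqrt(40)
    print(f"t = {t}: d = {d:.2f}")  # both round to 1.51, as reported

# The "Total" rows of Table 3 should match the mean raw Hu score
# across the nine emotions (small deviations reflect 2-dp rounding).
spontaneous = {
    "Audio":       [.16, .54, .47, .34, .25, .26, .59, .58, .09],
    "Visual":      [.12, .35, .21, .45, .25, .14, .63, .63, .26],
    "AudioVisual": [.24, .47, .40, .58, .42, .37, .89, .72, .21],
}
reported_totals = {"Audio": .36, "Visual": .34, "AudioVisual": .48}
for modality, scores in spontaneous.items():
    mean = sum(scores) / len(scores)
    print(f"{modality}: mean = {mean:.3f}, reported Total = {reported_totals[modality]}")

# Chance in the forced-choice task: 1 of 4 options per valence.
chance = 1 / 4
```

Running this reproduces d = 1.51 for both t values and column means within .01 of the published Totals.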
https://vimeo.com/97449717
[Face images: DISGUST · HAPPY · FEAR · SAD]
https://vimeo.com/97449717
Correspondence: Thomas Ethofer, Department of General Psychiatry, University of Tübingen, Osianderstraße 24, 72076 Tübingen, Germany; e-mail: thomas.ethofer@med.uni-tuebingen.de

It was the aim of this study to investigate the impact of major depressive disorder (MDD) on judgment of emotions expressed at the verbal (semantic content) and non-verbal (prosody) level, and to assess whether evaluation of verbal content correlates with self-ratings of depression-related symptoms as assessed by the Beck Depression Inventory (BDI). We presented positive, neutral, and negative words spoken in happy, neutral, and angry prosody to 23 MDD patients and 22 healthy controls (HC) matched for age, sex, and education. Participants rated the valence of semantic content or prosody on a 9-point scale. MDD patients attributed significantly less intense ratings to positive words and happy prosody than HC. For judgment of words, this difference correlated significantly with BDI scores. No such correlation was found for prosody perception. MDD patients exhibited attenuated processing of positive information, which generalized across verbal and non-verbal channels. These findings indicate that MDD is characterized by impairments of positive rather than negative emotional processing, a finding which could influence future psychotherapeutic strategies as well as provide straightforward hypotheses for neuroimaging studies investigating the neurobiological correlates of impaired emotional perception in MDD.
Altered attentional processing of happy prosody in schizophrenia
Ana P. Pinheiro and Margaret Niznikiewicz
Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School, Brockton, MA, United States; Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
Schizophrenia Research. https://doi.org/10.1016/j.schres.2018.11.024
Article history: received 4 June 2018; revised 17 November 2018; accepted 19 November 2018.
Keywords: voice; emotion; attention; event-related potentials; P300; schizophrenia.
Corresponding author: A. P. Pinheiro, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; e-mail: appinheiro@psicologia.ulisboa.pt

Abstract. Background: Abnormalities in emotional prosody processing have been consistently reported in schizophrenia. Emotionally salient changes in vocal expressions attract attention in social interactions. However, it remains to be clarified how attention and emotion interact during voice processing in schizophrenia. The current study addressed this question by examining the P3b event-related potential (ERP) component. Method: The P3b was elicited with a modified oddball task, in which frequent (p = .84) neutral stimuli were intermixed with infrequent (p = .16) task-relevant emotional (happy or angry) targets. Prosodic speech was presented in two conditions: with intelligible (semantic content condition – SCC) or unintelligible semantic content (prosody-only condition – POC). Fifteen chronic schizophrenia patients and 15 healthy controls were instructed to silently count the target vocal sounds. Results: Compared to controls, P3b amplitude was specifically reduced for happy prosodic stimuli in schizophrenia, irrespective of semantic status. Groups did not differ in the processing of neutral standards or angry targets. Discussion: The selectively reduced P3b for happy prosody in schizophrenia suggests that top-down attentional resources were less strongly engaged by positive relative to negative prosody, reflecting alterations in the evaluation of the emotional salience of the voice. These results highlight the role played by higher-order processes in emotional prosody dysfunction in schizophrenia.

Introduction (excerpt). Abnormalities in the perception and recognition of emotional prosody have been increasingly recognized as a core feature of schizophrenia (Couture et al., 2006). Deficits in emotional perception seem to be independent of antipsychotic medication and to represent a trait deficit (Edwards et al., 2001; Kucharska-Pietura et al., 2005). Further, they predict functional outcome and quality of life (Kee et al., 2003). Emotional prosody, the non-verbal vocal expression of emotion (Kotz and Paulmann, 2011), is a cornerstone of adaptive functioning in a social environment (Schirmer and Kotz, 2006). At perceptual and physical levels, vocal emotions are primarily communicated by means of pitch (fundamental frequency – F0), intensity, and duration (e.g., duration of syllables and pauses) (Banse and Scherer, 1996; Juslin and Laukka, 2003). Perceiving the emotional quality of the voice is a multi-stage process that includes: 1) decoding the acoustic properties of the voice; 2) detecting emotionally salient acoustic cues; and 3) cognitively evaluating the emotional significance of the voice (Paulmann et al., 2010; Paulmann and Kotz, 2008a, 2008b; Pinheiro et al., 2014, 2013; Wildgruber et al., 2006).
Relative to the study of facial affect processing, fewer studies have examined emotional prosody dysfunction in schizophrenia. The existing studies revealed alterations in emotional prosody processing in schizophrenia using behavioral (Edwards et al., 2001; Kucharska-Pietura et al., 2005; Leitman et al., 2010a; Pawełczyk et al., 2018; Shaw et al., 1999; Shea et al., 2007; Vaskinn et al., 2007), functional magnetic resonance imaging (fMRI – Leitman et al., 2011; Mitchell et al., 2004), and event-related potential (ERP – Kantrowitz et al., 2015; Leitman et al., 2010b; Pinheiro et al., 2014, 2013) measures. Impaired recognition of emotion from a tone of voice (e.g., Dondaine et al., 2014) may lead to dysfunctional social interactions (e.g., Hooker and Park, 2002) and contribute to positive symptoms such as auditory verbal hallucinations (Alba-Ferrara et al., 2012; Shea et al., 2007). There is some evidence that impairments in emotional vocal recognition in schizophrenia are enhanced when stimuli have a negative valence (Bozikas et al., 2006; Edwards et al., 2001; Huang et al., 2009; Ito et al., 2013; Pinheiro et al., 2013), even though other studies found worse performance in the recognition of more complex vocal stimuli with a positive valence, such as alluring voices (Vogel et al., 2016). Notwithstanding, alterations in ERP responses of the electroencephalogram to vocal emotional information occurring before a response is made (i.e., deciding whether the voice …

Abnormalities in the processing of emotional prosody from single words in schizophrenia
Ana P. Pinheiro, Neguine Rezaii, Andréia Rauber, Taosheng Liu, Paul G. Nestor, Robert W. McCarley, Óscar F. Gonçalves, and Margaret A. Niznikiewicz
Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School, Brockton, MA, United States; Catholic University of Pelotas, Pelotas, Brazil; Department of Psychology, Second Military Medical University (SMMU), Shanghai, China; University of Massachusetts, Boston, MA, United States
Schizophrenia Research. http://dx.doi.org/10.1016/j.schres.2013.10.042
Article history: received 4 July 2013; revised 25 October 2013; accepted 28 October 2013; available online 14 December 2013.
Keywords: schizophrenia; emotional prosody; language; event-related potentials.

Abstract. Background: Abnormalities in emotional prosody processing have been consistently reported in schizophrenia and are related to poor social outcomes. However, the role of stimulus complexity in abnormal emotional prosody processing is still unclear. Method: We recorded event-related potentials in 16 patients with chronic schizophrenia and 16 healthy controls to investigate: 1) the temporal course of emotional prosody processing; and 2) the relative contribution of prosodic and semantic cues in emotional prosody processing. Stimuli were prosodic single words presented in two conditions: with intelligible (semantic content condition—SCC) and unintelligible semantic content (pure prosody condition—PPC). Results: Relative to healthy controls, schizophrenia patients showed reduced P50 for happy PPC words, and reduced N100 for both neutral and emotional SCC words and for neutral PPC stimuli. Also, increased P200 was observed in schizophrenia for happy prosody in SCC only. Behavioral results revealed higher error rates in schizophrenia for angry prosody in SCC and for happy prosody in PPC. Conclusions: Together, these data further demonstrate the interactions between abnormal sensory processes and higher-order processes in bringing about emotional prosody processing dysfunction in schizophrenia. They further suggest that impaired emotional prosody processing is dependent on stimulus complexity.

Introduction (excerpt). Among the most significant predictors of long-term disability in schizophrenia (e.g., Couture et al., 2006) is impaired detection and recognition of emotions from voice, i.e., emotional prosody (EP). Affect recognition from both voice and face is an aspect of social cognition, which has been recently recognized as an important predictor of functional outcomes at all stages of schizophrenia pathology: clinical high risk (Addington et al., 2008; Green et al., 2012), first episode (Horan et al., 2012), and chronic schizophrenia (Kee et al., 2003; Kucharska-Pietura et al., 2005; Green et al., 2012). While face processing abnormality in schizophrenia has been well characterized (e.g., Li et al., 2010), voice and prosody processing have been understudied, especially using event-related potential (ERP) approaches, which remain the only tool to examine temporal changes in neurophysiological events that correspond to early stages of analysis of a speech signal. The existing studies on vocal emotional processing include just a handful of behavioral (e.g., Edwards et al., 2001), functional magnetic resonance imaging (fMRI—e.g., Mitchell et al., 2004; Leitman et al., 2011), and ERP investigations (Pinheiro et al., 2012).
In healthy subjects, perception of emotional prosody is thought to reflect three interacting stages: 1) sensory processing of a speech signal; 2) implicit categorization of salient acoustic features into emotional and non-emotional features; and 3) explicit evaluation and assignment of emotional meaning to a speech signal (Schirmer and Kotz, 2006; Paulmann and Kotz, 2008; Paulmann et al., 2010). Event-related potential (ERP) studies demonstrated that the first two stages are indexed by N100 and P200, respectively (Paulmann and Kotz, 2008; Paulmann et al., 2010; Pinheiro et al., 2012). Despite the importance of a detailed understanding of emotional prosody processing deficits in schizophrenia, few studies have examined these abnormalities, and their underlying neural mechanisms are not well understood. Recent studies suggested that sensory-based dysfunction might not exclusively account for abnormal prosody processing in schizophrenia. Instead, an interaction between dysfunctional sensory and higher-order cognitive processes may better explain it (Leitman et al., 2010, 2011; Pinheiro et al., 2012). A recent ERP study provided further evidence for these abnormalities (Pinheiro et al., 2012). This study investigated prosody processing in 15 chronic schizophrenia patients and 15 healthy controls (HC). Additionally, it explored …

A. P. Pinheiro, E. del Re, J. Mezin, P. G. Nestor, A. Rauber, R. W. McCarley, Ó. F. Gonçalves, and M. A. Niznikiewicz (2013). Sensory-based and higher-order operations contribute to abnormal emotional prosody processing in schizophrenia: an electrophysiological investigation. Psychological Medicine, 43(3), 603–618. doi:10.1017/S003329171200133X
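The oddball design described above (84% frequent neutral standards, 16% task-relevant happy or angry targets) can be sketched as a stimulus-sequence generator. This is a hypothetical sketch under assumed parameters: the trial count, the no-adjacent-targets constraint, and all names are illustrative choices, not the authors' implementation:

```python
import random

def make_oddball_sequence(n_trials=200, p_target=0.16, seed=1):
    """Pseudo-random oddball sequence: frequent 'neutral' standards
    interspersed with infrequent emotional targets ('happy'/'angry'),
    with no two targets back to back (an assumed constraint)."""
    rng = random.Random(seed)
    n_targets = round(n_trials * p_target)  # 32 targets in 200 trials
    while True:
        slots = sorted(rng.sample(range(n_trials), n_targets))
        # retry until no two target slots are adjacent
        if all(b - a > 1 for a, b in zip(slots, slots[1:])):
            break
    sequence = ["neutral"] * n_trials
    for i in slots:
        sequence[i] = rng.choice(["happy", "angry"])
    return sequence

seq = make_oddball_sequence()
n_targets = sum(s != "neutral" for s in seq)
print(n_targets / len(seq))  # prints 0.16, the target probability
```

A participant's task in the study was simply to count the targets, so the ground truth for scoring is `n_targets`.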
Social Cognitive and Affective Neuroscience, 2018, 1–13
doi: 10.1093/scan/nsx142
Advance Access Publication Date: 23 November 2017
Abstract
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition.
Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion
processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus- and goal-driven mechanisms, respectively.
Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced,
modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with
significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex.
Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not
faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest
that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the
visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices
emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization).
These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective
when modeling emotion processing in the brain.
Fig. 2. Summary of brain regions highlighted in this meta-analysis. Lateral and medial areas are marked in nontransparent and transparent color, respectively. Early modality-specific processing is indicated for faces in red and for voices in green. Later, potential modality convergence is indicated in violet. Arrows illustrate hypothesized up- and downstream modulations (not tested in the present study). Modality effects with strong evidence are marked by solid lines and those with weak evidence or with evidence from previous studies are marked by dashed lines. Although dmPFC failed to show for faces in this meta-analysis, there is other work implicating this region when the analysis of facial expressions is challenging (e.g. reading the mind in the eyes test). Left IFG activity was found in the emotion contrast of the full voice and face data sets. However, its exact functionality and activation conditions in the context of emotion perception remain to be determined. Amy, amygdala; dmPFC, dorso-medial prefrontal cortex; PHG, parahippocampal gyrus; IFG, inferior frontal gyrus; STG, superior temporal gyrus.

Introduction
[…] an innate feeling must have told him that the pretended crying of his nurse expressed grief … (Darwin, 1872, concluding remarks, italics added for emphasis)

In his book The expression of emotions in man and animals, Darwin recognized emotions as nonprivate experiences and described their characteristic displays. Whereas Darwin could only speculate about the 'innate feeling' by which these displays are perceived and understood, modern science helped … other expressive channels and … dominate our thinking about social perception more generally (Belin et al., 2011, 2000; Schirmer and Adolphs, 2017). Taking issue with this situation, this article compares and contrasts the brain structures and mechanisms underpinning face perception with those underpinning voice perception.

Why emotion processing may compare for faces and voices
… emotion modulation [26] appear preferentially lateralized to the right hemisphere, the lateralization of subcortical emotion effects remains unclear. There is evidence that expressions of different emotions recruit somewhat segregated face-processing circuits. This was demonstrated with multivoxel pattern analysis (MVPA) within the FFA and primary visual cortex [27], the medial prefrontal cortex (mPFC) [28], and the …

The importance of sensory modality
Review
Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence
Annett Schirmer and Ralph Adolphs
Chinese University of Hong Kong, Hong Kong; Max Planck Institute for Human Cognitive and Brain Sciences, Germany; National University of Singapore, Singapore; California Institute of Technology, Pasadena, CA, USA
Correspondence: schirmer@cuhk.edu.hk (A. Schirmer)

Historically, research on emotion perception has focused on facial expressions, and findings from this modality have come to dominate our thinking about other modalities. Here we examine emotion perception through a wider lens by comparing facial with vocal and tactile processing. We review stimulus characteristics and ensuing behavioral and brain responses and show that audition and touch do not simply duplicate visual mechanisms. Each modality provides a distinct input channel and engages partly nonoverlapping neuroanatomical systems with different processing specializations (e.g., specific emotions versus affect). Moreover, processing of signals across the different modalities converges, first into multi- and later into amodal representations that enable holistic emotion judgments.

Trends
Facial expression and perception have long been the primary emphasis in research. However, there is a growing interest in other channels like the voice and touch.
Facial, vocal, and tactile emotion processing have been explored with a range of techniques including behavioral judgments, electroencephalography/event-related potentials, fMRI contrast studies, and multivoxel pattern analyses.
Results point to similarities (e.g., increased responses to social and emotional signals) as well as differences (e.g., differentiation of individual emotions versus encoding of affect) between communication channels.
Channel similarities and differences enable holistic emotion recognition, a process that depends on multisensory integration during early, perceptual and later, conceptual stages.

Nonverbal Emotions: Moving from a Unimodal to a Multimodal Perspective
Emotion (see Glossary) perception plays a ubiquitous role in human interactions and is hence of interest to a range of disciplines including psychology, psychiatry, and social neuroscience. However, its study has been dominated by facial emotions, with other modalities explored less frequently and often anchored within a framework derived from what we know about vision. Here we take a step back and put facial emotions on a par with vocal and tactile emotions. Like a frightened face, a shaking voice or a cold grip can be meaningfully interpreted as emotional. We first explore these three modalities in terms of signal properties and brain processes underpinning unimodal perception. We then examine how signals from different channels converge into a holistic understanding of another's feelings. Throughout we address the question of whether emotion perception is the same or different across modalities.

Sensory Modalities for Emotion Expression
There is vigorous debate about what exactly individuals can express nonverbally. There are open questions about whether people are in an objective sense 'expressing emotions' as opposed to engaging in more strategic social communication, whether they express discrete emotions and which ones, and whether their expressions are culturally universal. For practical reasons we ignore these debates here and simply assume that people do express and perceive what we usually call emotions (Box 1). Facial expressions have been studied in the most detail, possibly because they seem most apparent in everyday life and their controlled presentation in an experimental setting is relatively easy. Facial expressions in humans depend on 17 facial muscle pairs that we share fully with …

Figure 2. Key Brain Regions Involved in Nonverbal Emotion Processing. Regions typically more active for emotional than for neutral stimuli are marked in green, blue, and red for voice, face, and touch, respectively. Regions typically more active for emotional multimodal than for unimodal stimulation are marked in beige.
How effectively do we recognise emotions conveyed through the face?
23
Expression and Recognition of Emotions

Accuracy in Facial Emotion Recognition

Palermo & Coltheart (2004). Behavior Research Methods, Instruments, & Computers, 36 (4), 634–638.

Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and we measured both accuracy and response times. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

The information conveyed by facial expressions is so biologically and socially important that humans have evolved complex neural systems that rapidly and accurately decode facial expressions displayed by friends and foes (Rolls, 2000). Over the past few decades, numerous […]

METHOD. Participants. Twelve females (M = 24 years, SD = 8.7) and 12 males (M = 25 years, SD = 6.6) participated in the experiment in return for either […]

Figure 1. Percentage of expressions displayed by female and male models correctly judged to be the intended expression, plotted separately for female and male face sex (percentage accuracy). Standard error bars are shown.
15 participants rated the affective valence and intensity of either the full image (face + body), the body alone, or the face alone.

Science, Vol. 338, 30 November 2012 (http://www.sciencemag.org/content/338/6111/1225.full.html)

Jennifer checks the numbers in her lottery […] and is hit by a passing car. In a split second, Jennifer and Michael experience the most intense emotions of their lives. Intuitively, their emotional expressions should differ vastly, an assumption shared by leading models of emotion. For example, basic emotion models, which posit distinctive categories of emotions such as anger and fear, predict that intense emotions activate maximally distinct facial muscles, which increase discrimination (1, 2). Similarly, dimensional emotion […] positions on the pleasure-displeasure axis and thus their positivity or negativity should be easier to decipher (3).

The question of affective valence discrimination is theoretically important for the structure of emotion models and is central for understanding how social communication takes place in highly intense and potentially dangerous situations. Yet, although it is commonly assumed that facial expressions convey positive and negative affective valence in a highly distinct manner, there is still room for question on both methodological and theoretical grounds. From a methodological standpoint, most studies to date have used posed prototypical facial expressions (1) that have been carefully designed to signal clear and distinct emotions (4–7), and indeed the higher their in- […]

1 Department of Psychology, Princeton University, Princeton, NJ 08540, USA. 2 Department of Psychology, New York University, NY 10003, USA. 3 Behavioral Science Institute, Radboud University, Nijmegen, Netherlands. *To whom correspondence should be addressed. E-mail: haviezer@mail.huji.ac.il. †Present address: Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel.
Fig. 1. Experiment 1. (A) Examples of reactions to (1) winning and (2) losing a point. (B) Examples of isolated faces (1, 4, 6 = losing point; 2, 3, 5 = winning point). [All photos in Fig. 1 credited to a.s.a.p. Creative/Reuters] (C) Mean valence ratings for face + body, body, and face. Results are converted from the original scale, which ranged from 1 (most negative) to 9 (most positive), with 5 serving as a neutral midpoint. (D) Mean intensity ratings for face + body, body, and face. Asterisks indicate significant differences between the ratings of winners and losers. Error bars throughout all figures represent SEM. ns, not significant.
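The scale conversion described in the caption (ratings on a 1–9 scale re-centred on the neutral midpoint of 5) can be sketched as follows. This is an illustrative sketch only, not the authors' analysis code; the function name and the mapping onto a [-1, 1] range are assumptions for the example.

```python
def to_signed_valence(rating: float) -> float:
    """Map a rating on a 1 (most negative) to 9 (most positive) scale
    onto [-1, 1], with the neutral midpoint 5 mapping to 0."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must lie between 1 and 9")
    # Subtracting the midpoint makes negative ratings negative and
    # positive ratings positive; dividing by 4 normalises the range.
    return (rating - 5) / 4

print(to_signed_valence(5))  # 0.0 (neutral midpoint)
print(to_signed_valence(9))  # 1.0 (most positive)
print(to_signed_valence(1))  # -1.0 (most negative)
```

Re-centring in this way makes the sign of a mean rating directly interpretable as positive or negative valence, which is how winners' and losers' ratings are compared in panels (C) and (D).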